Aria Valuspa

Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects

The Unseen Arena: How Community Sports Debate Workshops Forge Stronger Towns and Sharper Minds


You know, folks, I’ve spent decades reading tells across a poker table—the micro-expressions, the chip shuffles, the subtle shifts in breathing that reveal an opponent’s hand. But let me tell you, the most revealing tells I’ve ever witnessed aren’t in a casino; they’re in community centers, libraries, and school gyms where everyday people gather not to gamble, but to talk. Specifically, I’m talking about community sports debate workshops.

The Heart’s Call to the Sea: How Ferry Rides to Island Sanctuaries Nourish Your Deepest Well-Being


There’s a quiet magic that unfolds the moment you step onto a sun-warmed ferry deck, the mainland shrinking behind you as the water stretches wide and welcoming. In our bustling lives, where screens glow late into the night and the rhythm of urgency pulses through every hour, these simple journeys across the water offer something profoundly restorative—not just for the mind, but for the very core of our being.

The Rise of the Embodied Chatbot

Chatbots are popping up everywhere. They are being used on websites across a huge range of industries, providing automated customer support and saving businesses time, money, and resources. But as more and more people use chatbots, they are beginning to expect more of them: more in-depth answers, a personalised service, and an emotional response, almost as though they were talking to a real person.
One application of the ARIA-VALUSPA project is creating embodied chatbots. These are much more than simple virtual assistants: they also incorporate social aspects into their interaction with customers, offering a tailored shopping experience. In essence, they can reproduce the characteristics that a conversation with another human being would have.
For some people this may seem a bit futuristic, but it is already happening. Today, chatbots can do a lot more than answer simple questions with set answers. Nordstrom introduced a bot into their customer service department at the end of 2016, and it has been very well received by their clientele. An intelligent bot can interact proactively and change the way it interacts depending on the person it is talking with. It is clear that this technology has developed into a fully fledged marketing tool that can help to represent a brand.

 


Although it is common knowledge that this is a growing market, how big is it really? In terms of worldwide market size, chatbots were worth around $113 million in 2015; just one year later, this had risen to $703.3 million. By 2025, the global chatbot market is expected to reach $1.23 billion, an annual growth rate of 24.3%.
It is therefore not only the intelligence and capabilities of chatbots that are growing, but also their worldwide reach. It is predicted that more than 25% of the world’s population will be using mobile messaging apps by 2019. The inventor Ray Kurzweil claims that chatbots will be indistinguishable from humans by 2029!

Link to the infographic: https://www.16best.net/blog/chatbots-gone-wild/


The ARIA-VALUSPA Platform in Unity3D

As part of the demonstrators built for ARIA-VALUSPA’s final review by the European Commission, we developed the capability of running the ARIA-VALUSPA Platform in Unity3D. This allows you to create and place AVP characters in any game environment, and to use appearances and backgrounds from huge existing libraries. Here are two example videos of how you could use this:

Unity is an industry-standard 3D development engine from Unity Technologies. The most recently released figures, from 2015, suggested 4.5 million registered developers, with 1 million monthly active users. While there are other well-known engines, such as Unreal, Unity is considered the de facto standard, especially amongst academics and indie developers, due to its low cost and significant community support. It also accounts for some 47% of all mobile game development. As a cross-platform development environment for 3D work, Unity is currently peerless. It supports native builds for: iOS, Android, Tizen, Windows, Universal Windows Platform, Mac, Linux, WebGL, PlayStation 4, PlayStation Vita, Xbox One, Wii U, 3DS, Oculus Rift, Google Cardboard, Steam VR, PlayStation VR, Gear VR, Windows Mixed Reality, Daydream, Android TV, Samsung Smart TV, tvOS, Nintendo Switch, Fire OS, Facebook Gameroom, Apple ARKit, Google ARCore, and Vuforia. Unity allows developers to create AAA-standard environments and interactions: from large, complex 3D environments with casts of thousands, to the simplest 2D games. Unity includes built-in pipeline support for a range of interaction modalities, including AR and VR.


One relevant feature is the facility to create, animate and give life to virtual characters; however, the realism of these characters is limited by the standard methods and tools currently available. GRETA, by allowing remote manipulation of character animation (including lip syncing), allows far more complex agents to be delivered into the Unity system. A key challenge was integrating GRETA’s animation system with Unity’s own built-in Mecanim system. Doing this allows characters to be animated ‘normally’ using well-established methods, and then have these animations overridden by GRETA when necessary. The separation of FAP (facial animation parameters) and BAP (body animation parameters) makes this comparatively straightforward, allowing choices to be made about how to animate bodies while letting GRETA handle facial animations. For gestures, it is possible to choose between GRETA and Mecanim control, if developers wish to implement their own gestures based on incoming FML data. This is particularly preferable if the character is ‘doing’ something and we still wish to interact with it. Currently the platform only handles one specific skeleton structure (those generated by Autodesk Character Creator), though the intention is to release an updated version which will allow bone mapping to handle any “humanoid” bone setup, in line with Unity’s current Mecanim model.
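The FAP/BAP control split described above can be pictured as a small routing rule: the face channel always belongs to GRETA, while the body channel can be handed to either GRETA or Mecanim. The following Python sketch is purely illustrative; the class and channel names are hypothetical, not the actual AVP API.

```python
# Illustrative sketch (not the real AVP API) of the animation control split:
# GRETA always drives facial animation (FAP), while body animation (BAP)
# can be routed to GRETA or to Unity's Mecanim, e.g. while the character
# is 'doing' something and should keep its own body animation.

class AnimationRouter:
    def __init__(self, body_source: str = "greta"):
        if body_source not in ("greta", "mecanim"):
            raise ValueError(body_source)
        self.body_source = body_source

    def route(self, channel: str) -> str:
        """Return which system controls the given animation channel."""
        if channel == "fap":           # facial animation: always GRETA
            return "greta"
        if channel == "bap":           # body animation: developer's choice
            return self.body_source
        raise ValueError(channel)

router = AnimationRouter(body_source="mecanim")
print(router.route("fap"), router.route("bap"))  # greta mecanim
```

A developer who wants Mecanim-driven gestures would construct the router with `body_source="mecanim"`; faces stay with GRETA either way.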

The pipeline is as follows: Unity picks up FML from any source (e.g. AVP). An instance of GRETA, running either on the same machine or over a network, hosts a series of Thrift servers. Unity passes the FML to GRETA and receives FAP/BAP and audio in return, which are then used to drive the character’s animation and speech.
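The round trip above can be sketched in a few lines of Python. Note that this is a mock of the data flow only: the real system uses Apache Thrift RPC between Unity and GRETA, and the function names and placeholder payloads here are assumptions for illustration.

```python
# Mock of the Unity <-> GRETA round trip described above. The real AVP
# transport is Apache Thrift; here it is replaced by a plain function
# call so the data flow is easy to follow. All names are illustrative.

def greta_realise(fml: str) -> dict:
    """Stand-in for the remote GRETA instance: turns FML into FAP
    (facial) and BAP (body) animation frames plus synthesised audio."""
    # A real server would parse the FML and run behaviour planning;
    # this stub just returns fixed placeholder frames.
    return {
        "fap": [{"frame": 0, "jaw_open": 0.3}],   # facial animation params
        "bap": [{"frame": 0, "arm_raise": 0.1}],  # body animation params
        "audio": b"\x00\x01",                     # speech audio bytes
    }

def unity_step(fml: str) -> dict:
    """What the Unity side does each turn: send FML, receive animation."""
    result = greta_realise(fml)  # in reality: a Thrift RPC call
    # Unity would now feed result["fap"]/result["bap"] into the character
    # rig and play result["audio"] on an AudioSource.
    return result

out = unity_step('<fml-apml><performative type="greet"/></fml-apml>')
print(sorted(out.keys()))  # ['audio', 'bap', 'fap']
```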

The support for Unity3D is scheduled for the 3.0.1 release of AVP, due by the end of April 2018.

Adapting a Robot’s linguistic style based on socially-aware reinforcement learning using the ARIA-Platform

A new use-case of the ARIA-Valuspa Platform:

The core of the ARIA platform (the SSI recognition pipeline, including Emax, openSMILE, and CereVoice) is used on a social robot to replicate the ARIA scenario (Alice in Wonderland). In addition, the demo features reinforcement learning and natural language generation to adapt the robot’s linguistic style to the user’s personality preferences. For more details, see: Hannes Ritschel, Tobias Baur, Elisabeth André: Adapting a Robot’s linguistic style based on socially-aware reinforcement learning. RO-MAN 2017: 378-384. Want to create your own human-agent scenarios? Check out the ARIA-VALUSPA Platform at https://github.com/ARIA-VALUSPA/AVP
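The adaptation loop described above can be sketched as a simple bandit-style reinforcement learner: pick a linguistic style for each utterance, observe a social reward (e.g. engagement signals from the SSI pipeline), and update the style's value estimate. This is a minimal epsilon-greedy sketch under assumed style names and a simulated reward, not the algorithm from the paper.

```python
import random

# Minimal sketch of socially-aware style adaptation: an epsilon-greedy
# bandit chooses a linguistic style per utterance and learns from a
# social reward signal. Styles and the reward model are illustrative.

STYLES = ["neutral", "humorous", "empathic"]

class StyleBandit:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.values = {s: 0.0 for s in STYLES}  # estimated reward per style
        self.counts = {s: 0 for s in STYLES}

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(STYLES)              # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, style: str, reward: float) -> None:
        self.counts[style] += 1
        n = self.counts[style]
        # incremental mean of the observed social reward
        self.values[style] += (reward - self.values[style]) / n

random.seed(0)
bandit = StyleBandit()
for _ in range(200):
    s = bandit.choose()
    # simulated user who responds best to humorous remarks
    reward = 1.0 if s == "humorous" else 0.2
    bandit.update(s, reward)
print(max(bandit.values, key=bandit.values.get))
```

After a couple of hundred interactions the bandit converges on the style this simulated user prefers; in the live demo the reward would instead come from recognised social signals.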

 

An Embodied Chatbot as a Marketing Tool

Chatbots are everywhere. They offer automated customer support, provide advice and information to your staff, explain processes to new users and more. At the same time, online interactions are being transformed through technologies like augmented reality and virtual reality and by the explosion in the use of mobile devices. But how can these trends be combined to create the kind of highly personalized user experience your clients and coworkers are looking for? With a new-generation virtual assistant, that’s how!

The return of the avatar

This year, avatars are making a big comeback. The proof? The major players are using them! Facebook is developing a tool that can automatically generate a custom avatar from a photo, while Apple has already introduced avatars in its new iPhone X.

Avatars like these allow users to personify their moods and their identity, enabling them to react quickly to current events or a particular situation. For a company, using an avatar reinforces its brand image and humanizes its interactions.

Incidentally, it’s one of the challenges addressed by the ARIA-VALUSPA (Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects) project: this new-generation conversational agent boasts an expanded evaluation grid and more finely tuned perceptions. The chatbot thus becomes capable of expressing emotions, adopting non-verbal behaviors and, in some situations, even interrupting the person it’s talking to!

In short, ARIAs are much more than simply virtual assistants: they incorporate social aspects into their interactions with users. The chatbot is imbued with increased sensitivity and can reproduce the characteristics of a genuine interaction between two human beings.

A virtual assistant with proactive behaviors

As a direct consequence of the development of avatars, chatbots are no longer satisfied with merely answering questions, but instead have become bona fide tools in the hands of marketing departments everywhere. An intelligent animated avatar can interact proactively and adopt the appropriate behavior depending on the person it’s speaking with.

For example, this type of virtual assistant is central to the marketing strategy of the Canadian insurance company Ignite. A chatbot in the form of an animated robot intervenes in accordance with the various stages of the user’s search for auto or property insurance on the website. In an indispensable preliminary step, the Ignite marketing team programs all of the interventions of the chatbot, named “Igins.” With a hand gesture, it lets the user know that it may be of assistance.

What’s next?

Just as in our day-to-day interactions, it’s important for the exchange to be unobtrusive. As a result, it’s left up to the user to make the decision to interact with the chatbot, in which case Igins knows exactly what to say, based on the user’s queries and browsing behavior.

Scripting the chatbot’s dialog

If Igins intervened at the wrong time or provided the wrong responses, the chatbot wouldn’t be fulfilling its purpose. Its effectiveness rests on the quality of the interaction scenarios that are established upstream, depending not only on the intentions of the user, but also on the objectives of the marketing team.

This scripting consists of creating a type of role-play in which the avatar interrupts or responds to the user, depending on the situation. The interactions may be either verbal or non-verbal: for example, during a waiting period, Igins might do exercises or start dancing in order to distract the user, while also establishing a friendly rapport. It’s a great strategy for preventing the loss of a potential client during an important phase of the transaction!
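The role-play scripting described above amounts to a mapping from user situations to scripted verbal or non-verbal behaviours. The sketch below is a hypothetical illustration of that idea; the situation names and behaviours are invented for the example and are not Ignite's actual implementation.

```python
# Hypothetical interaction script: each rule maps a situation on the
# site to a scripted behaviour for the avatar. Unlisted situations get
# no reaction, keeping the exchange unobtrusive.

SCRIPT = [
    # (situation,          modality,     behaviour)
    ("quote_form_opened",  "non-verbal", "wave to signal availability"),
    ("user_idle_30s",      "verbal",     "offer help with the form"),
    ("processing_wait",    "non-verbal", "dance to keep the user engaged"),
    ("quote_ready",        "verbal",     "explain the quote step by step"),
]

def react(situation: str):
    """Return the scripted (modality, behaviour) for a situation,
    or None to stay quiet by default."""
    for s, modality, behaviour in SCRIPT:
        if s == situation:
            return modality, behaviour
    return None  # no unscripted interruptions

print(react("processing_wait"))
# ('non-verbal', 'dance to keep the user engaged')
```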

Empathy and expressiveness for an enhanced user experience

Through its mischievous, playful interactions, Igins adapts to the users, who in the case of the Ignite insurance company consist primarily of Gen Xers. An empathic avatar will therefore play upon all of its human aspects, both verbal and non-verbal, in order to personalize the interaction.

In a different context, Marlie, the virtual collector for CenturyLink, a US telecommunications company, uses intonation and facial expression to reassure clients as they pay their bill. Each phase of the transaction is therefore scripted to provide the appropriate explanations calmly and amicably.

While this expressiveness may take numerous guises, it doesn’t necessarily have to extend to a humanized character. In fact, some companies prefer not to “incarnate” their conversational robot. In order to respond to these varying needs, the company Living Actor works on all aspects of expressiveness:
• The personality of the virtual assistant, conveyed by its style, tone of voice and expressions
• Or a more abstract, but equally expressive, representation, as illustrated by Chad, a conversational robot recently developed by Living Actor.

It’s obvious that chatbots are no longer simple software programs designed to understand a question: they have become full-fledged marketing tools capable of providing a premium service to the user and representing your company, respecting its identity and unique characteristics. This combination of language and behavioral scripting technologies makes it possible to forge a genuine relationship between a company and its clients or employees.

What’s the future of online interactions? Restoring humanity and empathy to their rightful place at the center of the exchange for a more personalized user experience!

ARIA-VALUSPA at the DRONGO festival

Last weekend (September 29-30) the DRONGO language festival was held in Utrecht, The Netherlands (https://www.drongotalenfestival.nl/). It’s a Belgian/Dutch festival that has everything to do with language: from bilingualism to dialects, from language learning to spoken dialogue systems. It’s meant for all sorts of people: researchers, professionals who work with language, and hobbyists. The University of Twente was also present at this festival, with three demos to show, one of which was the ARIA-VALUSPA Platform (AVP). We showed our virtual agent Alice to the public, in the wild. We were a bit worried whether the system would survive a whole day in such a busy place, and fortunately it did! Despite the noise and the many people in the room, several visitors held decent and interesting conversations with Alice. Visitors could ask questions about Wonderland or listen to Alice cracking jokes and commenting on their reactions. It was a great way to show our platform to a broader audience. Alice did an amazing job, and we’re looking forward to showcasing more capabilities in the future.