
Rasa's conversational AI can selectively ignore dialogue to improve responses



What could be the key to chatbots and voice assistants that respond more naturally and more like humans? Researchers at Rasa, a Berlin-based startup developing a standard infrastructure layer for conversational AI, believe that selective attention may play an outsized role. In a preprint paper published this week on Arxiv.org, they describe a system that can selectively ignore or attend to the dialogue history, allowing it to skip over turns in multi-turn conversations that aren't directly relevant to the most recent utterance.

"AI Conversation Assistants promise to help users fulfill a natural language mission. Interpreting simple instructions, such as: B. turning on the light, is relatively easy. However, to handle more complex tasks, these systems need to be able to talk on multiple rounds, "the co-authors write. "Any statement in a conversation does not necessarily have to be a response to the other party's last statement."

The team proposes what it calls the Transformer Embedding Dialogue (TED) policy, which uses transformers to determine which parts of the dialogue to skip over. For the uninitiated, transformers are a novel kind of neural architecture introduced in a 2017 paper co-authored by scientists at Google Brain, Google's AI research division. Like all deep neural networks, they contain neurons (mathematical functions) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection; that is how all AI models extract features and learn to make predictions. But transformers uniquely have attention, such that every output element is connected to every input element and the weightings between them are calculated dynamically.
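To make the attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. The function name, matrix sizes, and random toy inputs are illustrative assumptions, not code from Rasa or the paper.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
# Each output is a weighted mix of every input, with the weights computed
# dynamically from the inputs themselves.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) input embeddings; w_*: learned projection matrices."""
    q = x @ w_q          # queries
    k = x @ w_k          # keys
    v = x @ w_v          # values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # every output attends to every input
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the inputs
    return weights @ v, weights                      # weighted sum of values + attention map

# Toy usage: 4 token embeddings of size 8
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
print(attn.round(2))  # each row sums to 1: how strongly each position attends to every other
```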

[Image: Rasa AI]

Importantly, the TED policy, which can be applied in either a modular or an end-to-end fashion, does not assume that the entire dialogue sequence is relevant when choosing a response to an utterance. Instead, it picks out on the fly which historical turns are relevant, which lets it better recover from non sequiturs and other unexpected inputs.
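As a rough illustration of what "selecting relevant history" could look like, the sketch below scores each past turn against the current one and down-weights off-topic turns before summarizing the history. The turn embeddings, scoring rule, and toy numbers are hypothetical stand-ins, not Rasa's actual TED implementation.

```python
# Illustrative sketch: attention-style relevance weighting over dialogue history,
# so that turns unrelated to the current utterance contribute little to the
# response decision. Embeddings and scoring here are hypothetical.
import numpy as np

def relevance_weights(history_embeddings, current_embedding):
    """Softmax over scaled similarity between the current turn and each past turn."""
    scores = history_embeddings @ current_embedding / np.sqrt(current_embedding.shape[-1])
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()

# Toy dialogue history: 5 past turns embedded as 16-d vectors (random stand-ins)
rng = np.random.default_rng(1)
history = rng.normal(size=(5, 16))
current = history[2] + 0.1 * rng.normal(size=16)  # current turn resembles past turn 2

w = relevance_weights(history, current)
print(w.round(2))         # turn 2 dominates; unrelated turns get small weight
context = w @ history     # weighted history summary used to choose the next action
```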

In a series of experiments, the team drew on a freely available dataset (MultiWOZ) of 10,438 human-human dialogues covering tasks in seven domains: hotel, restaurant, train, taxi, attraction, hospital, and police. After training the model on 740 dialogues and assembling a corpus of 185 for testing, they performed a detailed analysis. Although the dataset wasn't ideal for supervised learning of dialogue policies, the researchers report that the model successfully recovered from "non-cooperative" user behavior at each dialogue turn and outperformed the baseline approaches (apart from a few errors).

Rasa has not yet integrated the model into production systems, but it could enhance the company's suite of conversational AI tools, Rasa Stack, which targets industries such as sales and marketing as well as advanced customer service in health care, insurance, telecommunications, banking, and others. Adobe recently built an AI assistant with Rasa's tools that lets users search Adobe Stock using natural language commands. And Rasa says that "thousands" of developers have downloaded Rasa Stack over half a million times.

