Amazon tests AI-powered customer support agents on Amazon.com
Might AI help improve customer service for the millions of people who shop on Amazon.com? Amazon intends to find out. In a blog post, the Seattle tech giant revealed that it’s testing two AI-based systems to handle incoming shopper inquiries. One fields requests from customers automatically and without human intervention, while the other helps human service agents respond more quickly and easily.
As Jared Kramer, an applied-science manager in Amazon.com's customer service organization, explained in the post, the automated agents use machine learning rather than hand-coded rules and refer requests they can't handle to human representatives. That lets them tackle a broader range of interactions than Amazon.com's old flowchart system, which mapped particular inputs to fixed responses.
“It is difficult to determine what types of conversational models other customer service systems are running, but we are unaware of any announced deployments of end-to-end, neural-network-based dialogue models like ours,” wrote Kramer. “And we are working continually to expand the breadth and complexity of the conversations our models can engage in, to make customer service queries as efficient as possible for our customers.”
Above: An example from the Amazon blog post of how raw interaction transcripts were converted into training examples, which include relevant information from the customer's account profile.
Amazon says that in the customer-facing system, it's using a template ranker, in which an AI model chooses among hand-authored response templates, allowing it to control the automated agent's vocabulary. (It plans to soon begin testing a generative model that crafts responses on the fly.) The templates are general sentence forms with variables for things like product names, dates, delivery times, and prices, and the model can incorporate new templates with little additional work because it's pretrained on a data set of interactions between customers and representatives. In essence, because the template ranker has seen many responses that don't fit its templates, it has learned general principles for ranking arbitrary sentences.
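The idea of a template ranker can be sketched in a few lines. The templates, slot names, and word-overlap scoring function below are all hypothetical stand-ins; Amazon's actual ranker is a trained neural model, not shown in the post.

```python
from dataclasses import dataclass

@dataclass
class Template:
    text: str         # sentence form with variable slots, e.g. "{product}"
    variables: tuple  # slot names the template expects

# Hand-authored templates (invented for illustration).
TEMPLATES = [
    Template("Your refund for {product} was issued on {date}.", ("product", "date")),
    Template("Your order of {product} will arrive by {delivery_time}.", ("product", "delivery_time")),
    Template("I've cancelled your order for {product}.", ("product",)),
]

def score(template: Template, context: str) -> float:
    """Toy relevance score: word overlap between template and dialogue context.
    A production ranker would replace this with a learned model."""
    ctx_words = set(context.lower().split())
    tpl_words = set(template.text.lower().replace("{", " ").replace("}", " ").split())
    return len(ctx_words & tpl_words) / max(len(tpl_words), 1)

def respond(context: str, slots: dict) -> str:
    """Pick the highest-scoring template whose variables we can fill."""
    candidates = [t for t in TEMPLATES if all(v in slots for v in t.variables)]
    best = max(candidates, key=lambda t: score(t, context))
    return best.text.format(**slots)

print(respond("when will my refund be issued",
              {"product": "headphones", "date": "May 3"}))
# → Your refund for headphones was issued on May 3.
```

Because responses are assembled only from approved templates, the agent's vocabulary stays fully under the operator's control, which is the trade-off the article describes relative to a free-form generative model.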
Amazon researchers trained separate versions of each model for two types of interactions: return refund-status requests and order cancellations. As input, the order-cancellation model receives not only the dialogue context but also information from the customer's account profile. The response ranker additionally receives a candidate response as input, and it uses what's known as an attention mechanism to identify which words in previous messages are particularly useful for ranking that response.
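The attention step can be illustrated with scaled dot-product attention, a standard formulation; the 2-d "embeddings" and example words below are invented for the sketch and are not from Amazon's system.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention: score each key against the query,
    normalize with softmax, and return the weighted sum of values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

# Candidate-response vector, plus vectors for two prior-message words:
# "cancel" (aligned with the candidate) and "hello" (irrelevant filler).
candidate = [1.0, 0.0]
word_vecs = [[0.9, 0.1], [0.0, 1.0]]
context, weights = attend(candidate, word_vecs, word_vecs)
# "cancel" receives the larger attention weight.
```

The weights show which prior words the ranker leans on when scoring a given candidate, which is the behavior the article attributes to the attention mechanism.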
In randomized trials that compared the new agents to the existing rule-based systems using a metric called automation rate, the new agents came out ahead, according to Kramer. “Automation rate combines two factors: whether the automated agent successfully completes a transaction (without referring it to a customer service representative) and whether the customer contacts customer service a second time within 24 hours,” he said. “According to that metric, the new agents significantly outperform the old ones.”
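Kramer's description of the metric leaves the exact formula unstated; one plausible reading, sketched below, is that a contact counts as automated only when both conditions hold: the agent completed the transaction without a human referral, and the customer did not contact service again within 24 hours. The data structure and example data are invented.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    completed_without_referral: bool  # agent finished without a human handoff
    recontact_within_24h: bool        # customer came back within 24 hours

def automation_rate(contacts):
    """Fraction of contacts fully automated: completed without referral
    AND no repeat contact within 24 hours (assumed interpretation)."""
    if not contacts:
        return 0.0
    automated = sum(1 for c in contacts
                    if c.completed_without_referral and not c.recontact_within_24h)
    return automated / len(contacts)

contacts = [
    Contact(True, False),   # automated
    Contact(True, True),    # completed, but the customer came back
    Contact(False, False),  # referred to a human
    Contact(True, False),   # automated
]
print(automation_rate(contacts))  # → 0.5
```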