During May, Google’s I/O 2018 conference was held to show the latest in Google’s offerings to developers around the globe. While Google demonstrated a lot of different new tech at the conference, it was their keynote demonstration of its latest “Duplex” technology which has lit up the internet.
Duplex uses Google Assistant to call companies on a user’s behalf to perform simple, structured tasks, such as booking a haircut or scheduling a restaurant reservation. While voice synthesis isn’t exactly new, it was the humanlike inflections and natural conversational flow in these calls that many found to be jaw-dropping (or, alternately, terrifying).
If you haven’t yet seen Google’s demo, click through to watch it now, and prepare for your mind to be blown. (Skip to 43 seconds to get straight into the demo.)
Although this technology isn’t yet consumer-grade, Google says it will start to test Duplex within Google Assistant as early as this summer. How then should our customer service operations handle this upcoming customer-side automation in voice calls?
Identity Verification & Trust
Part of the reason Duplex has caused so many ripples is that it gives a glimpse into a rather dystopian future – one where humans can’t tell whether they’re talking to an AI (causing many to wonder if the Turing test has been passed by this new tech).
Before now, voice assistants haven’t been capable of holding natural-sounding conversations. But the calls demoed by Google, complete with inflections such as “Mm-hmm” or “Ah, gotcha”, sounded so lifelike that it’s clear the human operators on the other end had no idea they were speaking to an AI.
That in itself has caused outrage – with commentators pointing out that ethical problems occur when service workers and call center staff are unsuspectingly experimented on by Google’s human-sounding AIs. Google reacted to this outcry by asserting that working versions of Duplex should have the ability to identify themselves built in.
But whether AIs self-identify or not, the cat’s already out of the bag. For anyone wondering whether their identity verification processes will need to change as a result of this technology, the answer is undoubtedly yes. Exactly how those processes will need to change depends on whether AIs are required to self-identify – whether by Google itself, governments or any other regulatory body.
If AIs are required to identify themselves as such and state that they’re acting on behalf of a human, should agents respond to their wishes as if they were that human? I can easily envisage scenarios where AIs eventually make payments, change data or perform any other process that affects the customer or the company – only for the customer to respond that the AI’s actions were a mistake and not authorized by them. How, then, can we determine the human intent behind the actions of an AI?
If AIs are not required to self-identify, issues emerge around trust and standards. As it stands, technology like Duplex is only effective in a limited range of scenarios, making it easy to ask a question that sits outside of the AI’s programming to test whether it’s a robot or not (for example, “Who is the president of the United States?”).
Having agents ask these types of questions to try to weed out the “robots” from the humans is reasonably straightforward. But how will those questions evolve as AIs get smarter? Will they constitute a new, more intrusive layer of data protection processes that we have to subject unsuspecting customers to? What happens then when we speak to human customers who cannot answer these questions – through health issues, a lack of shared cultural understanding, or anything else? Could we be dooming them to be treated like little more than unfeeling robots?
Emotion & Empathy
Speaking of feelings, Duplex brings big questions as to what will constitute effective customer service in the future. Our current, human-focused model of optimal customer experience runs on the premise that if you focus on solving problems quickly, accurately and in a friendly manner, you’re likely to achieve good customer outcomes.
But AIs don’t feel. All the niceties and small talk in the world don’t matter to them. Considering that humans and AIs have different needs and priorities during issue resolution, we could see two distinct sets of standards emerge.
The first relates to service standards for humans – and as beings who have thought and felt in much the same way for thousands of years, I can’t see these undergoing any huge revolution in the future.
But a second set of service standards relates to how we can provide optimal service to AIs. I can see these standards relating to focusing on clear language, accurately clarifying intent, and decreasing emotionality in speech which could cause confusion to an AI – quite the opposite of the emotion-centered training we’ve been giving to front-line agents for decades.
Taking Humans Out of Interactions
Thinking about the role of our front-line customer service agents in the potential applications of this technology, we must consider the messages that Google is implicitly sending about the service employees customers speak to every day to get things done.
PC Magazine sums this up deftly: the implicit message embedded within Duplex is that there’s no need for customers to ‘suffer’ through speaking to service employees to get things done. In one of Duplex’s demonstrations, the lady taking the call has a thick accent that is a little difficult to understand. The AI copes with little awkwardness, making it clear that even in service situations that can be tricky for customers, machines can take over instead, removing all of the ‘bother’.
I still believe that human interaction and emotion are what humanize our brands, making them friendly and accessible. And putting myself in the shoes of my agents, there’s something that stings about the implicit message within AI-driven voice calls – that other people see talking to them as a waste of their time.
But I do believe that the best kind of customer service is invisible, that is, mediated through access to a range of easy self-service and digital options available to prevent customers from needing to make inconvenient phone calls. Maybe then we need to focus less on the perceived value of individual interactions, and think instead about downsides of the phone as a communication channel that have caused Duplex to become a customer need.
Phone Calls as Inconvenience
The development of Duplex points to an issue innate in customer service operations – and that is, while phone calls are often the best way for a customer to accomplish a goal, they aren’t always convenient. The rise of live chat, self-service and social messaging channel options has happened as a result of this issue. These channels allow more customers to connect with organizations in ways that don’t take up all of their time or attention, require them to take time out of their day, or prevent them from multitasking while they solve problems with organizations.
The necessity of Duplex (and its positive reception by many) shows that while many organizations see cost or effort barriers to providing service over non-voice channels, for some customers that clearly isn’t good enough. Given that organizations such as Deloitte predict the volume of voice interactions with businesses will fall from 64% of all channel communications in 2017 to 47% in 2019, organizations need to consider better ways to connect with their customers than by relying on voice-centric service models.
Automation promises to hold the key to dismantling these cost and effort barriers to multichannel service, as we’re now seeing with chatbot uptake by firms big and small all over the world. While we’ve been exploring Duplex as a tool for customers to take advantage of automation in their own lives, let’s look at the impacts when the tables are turned and organizations can use tools like Duplex to evolve and improve their service offerings in a multichannel climate.
What if Duplex Could Help Organizations?
In the spirit of Moore’s law, it’s feasible to assume that, given the current pace of technological advancement, and as a for-profit company, Google will be looking for other ways to apply this technology, helping it to profit from Duplex and secure its future development.
Because of this, I predict that it won’t be long at all until AIs like Duplex are pitched as a replacement for customer service agents on voice channels.
We can already see the evolution from human-led to AI-led service within other channels. Chatbot services are now handling a good percentage of everyday organizational queries over live chat. Considering that studies show it’s realistic to aim to deflect between 40% and 80% of common customer service inquiries to chatbots, the same deflection principles could help technology like Duplex drive the same change for voice.
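The deflection principle described above can be sketched in a few lines. This is a hypothetical illustration only – the intent names, confidence threshold, and routing function are all assumptions, not any real chatbot platform’s API – but it shows the core idea: send common, confidently recognized inquiries to a bot, and escalate everything else to a human agent.

```python
# Minimal sketch of chatbot deflection routing (all names hypothetical).
# Common, well-understood inquiry types are handled by a bot; anything
# unusual, or recognized with low confidence, goes to a human agent.

COMMON_INTENTS = {"order_status", "opening_hours", "password_reset"}

def route_inquiry(intent: str, confidence: float, threshold: float = 0.8) -> str:
    """Deflect to a bot only for a common intent recognized with high
    confidence; otherwise escalate to a human agent."""
    if intent in COMMON_INTENTS and confidence >= threshold:
        return "bot"
    return "human"

print(route_inquiry("order_status", 0.95))  # bot
print(route_inquiry("complaint", 0.99))     # human: uncommon intent
print(route_inquiry("order_status", 0.50))  # human: low confidence
```

The same gatekeeping logic would apply to a Duplex-style voice channel: only queries the AI can reliably understand are automated, which is how deflection rates in the 40–80% range become achievable without degrading service on the hard cases.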
For voice as a channel, the closest thing we have to this right now is the dreaded IVR. The difference between IVR and AIs, however, is in the promise of service that truly helps, rather than hinders. While IVR is almost universally viewed as an unwelcome hurdle to jump on the way to service from a human agent, chatbots are proving that for certain service scenarios, AI can be as efficient as humans – if not more so, due to their speed, constant availability and scalability.
Projecting the development of this technology for voice interactions within the contact center, we’re faced with some questions. What types of voice queries are ripe for automation, and how can we channel these to AIs in a way that doesn’t add more options to a traditional IVR? What happens when customers can’t tell whether a voice agent is human or an AI? Whether that AI self-identifies or not, how does that reflect on our companies? Could we even be ushered into an age of universal mistrust in customer service where our human agents are treated badly by customers, as if they were robots, because our customers just can’t tell the difference?
Perhaps exploring automation within live chat can throw some light on these questions. I’ve seen many organizations who are meeting these issues head-on within chat – and many are digging deep into customer needs and preferences to harness this technology in ways that are both comfortable for their customers, and effective for their businesses.
A Values-Centered Approach to Automation in Customer Service
Now is the time to reflect on how our businesses will handle customer-side automation coming this year, and how more organizations can handle automation-related issues generally as technology develops.
We can take the lead from design ethicists such as Joe Edelman to consider how best to work with this technology in a way that doesn’t result in negative outcomes for our businesses, our agents or our customers.
Edelman proposes a values-centered approach to the design of social spaces online, and by using this same philosophy, we can consider how AI voice assistants detract from or complement the values of customers and other stakeholders interacting with them. Whether it’s us or the customer who’s automating, great service design will come from a consideration of not only what each party aims to achieve but also how their service preferences are denied or accommodated.
When we can consider the values of our customers and our employees, and how those interface with the needs of our businesses, we can start to use this technology in ways that are helpful and useful to them, morally sound, and which deliver the time and resource benefits that both businesses and customers want.
Originally published here.