Recently I wrote about Google's impressive-seeming Duplex AI voice assistant demonstration[1]. I said that although it seems magical, the trick depends on constraining the robot within very narrow parameters. Duplex is designed for "scheduling certain types of appointments" and that's it. By closing down the range of possible responses, the task for the AI is a lot simpler.
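To see why narrowing the task helps, here is a deliberately simplified sketch (in Python, and nothing to do with Google's actual implementation - the phrases and state names are made up for illustration). The point is that a tightly scoped appointment bot only needs to recognise a handful of expected replies; anything outside that narrow script can simply be handed off.

# Toy illustration only: when the assistant only has to handle a few
# appointment-booking intents, "understanding" the other speaker reduces
# to matching against a small, fixed set of expected replies.

EXPECTED_REPLIES = {
    "what time": "ASK_TIME",              # e.g. "What time would you like?"
    "how many people": "ASK_PARTY_SIZE",
    "fully booked": "NO_AVAILABILITY",
    "see you then": "CONFIRMED",
}

def classify_reply(utterance: str) -> str:
    """Map a human reply onto one of a few expected conversation states."""
    text = utterance.lower()
    for phrase, state in EXPECTED_REPLIES.items():
        if phrase in text:
            return state
    # Anything outside the narrow script is escalated to a person,
    # which is far easier than genuinely understanding open-ended speech.
    return "HAND_OFF_TO_HUMAN"

if __name__ == "__main__":
    print(classify_reply("Sure, what time were you thinking?"))    # ASK_TIME
    print(classify_reply("Can I ask why you're calling, exactly?")) # HAND_OFF_TO_HUMAN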
But there's another aspect of the demonstration that has ruffled feathers, put cats among pigeons and twisted knickers across the world of media, and that's the question of ethics. Is it ethical for Google's AI to call a human to book an appointment while pretending to be another human? My answer is no, I don't think it is.
When I say that Google's AI pretended to be another human, I mean that it hesitated and added "umm" and "err" noises so that it sounded more natural and less robotic. We're still not that used to talking to machines, so perhaps Google feared that people would hang up on obvious robo-calls. Whatever the reason, Google has decided that to achieve its goal of handling customers' tedious phone calls, the AI needs to engage in some trickery.
Once you begin tricking people, you have crossed what Google used to call "the creepy line"[2] and are into unethical territory. As people become aware of this kind of trickery, it is likely to erode their trust in any unsolicited calls. (Which might turn out not to be such a bad thing, but that's for another post.)
We respond very strongly to voice as a cue, and it isn't hard to imagine how this could be used maliciously. For example, criminals could use this technology to automate calls asking for bank account details. That would allow them to call many more people - increasing their chance of convincing some gullible, distracted or otherwise unlucky person to part with personal information.
It's also not hard to imagine businesses profiling their customers to determine which accent they are most likely to respond to, then ensuring that they receive calls from an AI with that accent. It's not feasible to do that with a call centre staffed by humans, but AI will make it easy.
That's not to say that AI calls are bad. And, of course, I wouldn't say that. My company makes AI calls. The bad bit is tricking users.
For example, at ContactEngine we know that we could get better response rates for automated calls if we used certain techniques, such as adding background noise so that it sounds like the call is coming from a call centre. So, what do we do? The opposite: we deliberately make our calls sound less human, so that people know they are receiving a call from a machine.
Another thing we do differently is that we don't make unsolicited calls. Our clients are doing things like contacting customers to arrange an appointment to repair a washing machine - something the customer will have requested and be expecting.
There are many occasions when human-to-human engagement is a vital first step. For example, if someone is calling their insurer to report a car accident, they want to talk to a human, not a machine. They will be stressed and anxious, and genuine reassurance from another person will be meaningful. Scripted reassurance from a machine pretending to be a person is pointless at best and insulting at worst.
A subsequent message - to let the person know when a tow truck is coming, for example - can be automated because it's just a confirmation message. Automating that gives call centre staff more time to deal with people who need to talk to a person.
So, one difference is whether the call is solicited or not, and the other concerns who benefits. The calls we make are for the benefit of the recipient: we are helping them to get something done. The calls in Google's examples are for the benefit of the caller. I wouldn't advocate tricking anyone, but the trick is even worse when it isn't for the recipient's benefit.
Put it this way: a magician does tricks for the benefit of the audience - it's about entertaining them. If the trick involves stealing your watch, you can be sure you'll get it back and enjoy the process of being fooled. A magician who does the same trick for their own benefit is just somebody stealing your watch. We call that a con.
_______________________________________________________________________________________
[1] https://www.contactengine.com/blog/ai-for-the-practically-minded/
[2] http://thehill.com/policy/technology/122121-schmidt-google-gets-right-up-to-the-creepy-line