Google's AI sounds like a human on the phone — should we be worried?


It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this phone call was different. It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing about the call was that the person on the other end didn’t seem to suspect they were talking to an AI at all. This is a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.

For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear? And is this another example of tech privilege, where those in the know can offload the boring conversations they don’t want to have onto a machine, while those receiving the calls (most likely low-paid service workers) have to deal with some idiot robot?

In other words, this was a typical Google demo: equal parts wonder and worry.

But let’s start with the basics. Onstage, Google didn’t talk much about the details of how the feature, called Duplex, works, but an accompanying blog post adds some important information. First, Duplex isn’t some futuristic AI chatterbox, capable of open-ended conversation. As Google’s researchers write, it can only converse in “closed domains” — exchanges that are functional, with strict limits on what is going to be said. You want a table? For how many? On what day? And what time? Okay, thanks, bye. Easy!

Mark Riedl, an associate professor of AI and storytelling at Georgia Tech, told The Verge that he thought Google’s Assistant would probably work “reasonably well,” but only in these formulaic situations. “Handling out-of-context language dialogue is a really hard problem,” Riedl said. “But there are also a lot of tricks to disguise when the AI doesn’t understand or to bring the conversation back on track.”

One of Google’s demos showed perfectly how these tricks work. The AI was able to navigate a series of misunderstandings but did so by rephrasing and repeating questions. This sort of thing is common with computer programs designed to talk to humans. Snippets of their conversation seem to show real intelligence, but when you analyze what’s actually being said, it turns out they’re just preprogrammed gambits. Google’s blog post offers some fascinating details on this, spelling out some of the tricks Duplex will use. These include elaborations (“for next Friday” “for when?” “for Friday next week, the 18th.”), syncs (“can you hear me?”), and interruptions (“the number is 212-” “sorry, can you start over?”).
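To see why these gambits look smarter than they are, consider a toy slot-filling agent. This is emphatically not Google’s implementation — just a minimal sketch of how a closed-domain system can hold a restaurant-booking conversation by tracking which pieces of information it still needs, rephrasing a question when it fails, and handing off to a human after too many retries (the slot names and phrasings here are invented for illustration):

```python
# Toy closed-domain, slot-filling dialogue agent -- an illustration of
# preprogrammed "gambits," not Google's actual Duplex system.

REQUIRED_SLOTS = ["day", "time", "party_size"]

# Each slot has multiple phrasings, so a retry sounds like a rephrase
# rather than a robotic repeat.
QUESTIONS = {
    "day": ["What day would you like the reservation?",
            "Which day works for you?"],
    "time": ["What time would you like?",
             "Around what time should I book it?"],
    "party_size": ["For how many people?",
                   "How many people will be joining?"],
}

def next_utterance(slots, attempts):
    """Pick the next thing to say, given filled slots and per-slot retry counts."""
    for slot in REQUIRED_SLOTS:
        if slot not in slots:
            tries = attempts.get(slot, 0)
            if tries >= 2:
                # Self-monitoring: too many failures, signal a human operator.
                return "HANDOFF"
            # Rephrase on retry instead of repeating verbatim.
            return QUESTIONS[slot][min(tries, len(QUESTIONS[slot]) - 1)]
    return "Great, you're all booked. Thanks, bye!"
```

Run against a few states, `next_utterance({}, {})` asks the first day question, `next_utterance({"day": "Friday"}, {})` moves on to the time, and once every slot is filled the agent closes the call — there is no understanding anywhere, only a checklist and canned lines.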

It’s important to note that Google is calling Duplex an “experiment.” It’s not a finished product, and there’s no guarantee it’ll be widely available in this form, or widely available at all. (See also: the real-time translation feature Google showed off for its Pixel Buds last year. It worked flawlessly onstage, but it was hit-and-miss in real life and only available to Pixel phone owners.) Duplex works in just three scenarios at the moment: making reservations at a restaurant; scheduling haircuts; and asking businesses for their holiday hours. It will only be available to a limited (and unknown) number of users sometime this summer.

One more big caveat: if the call goes wrong, a human takes over. In its blog post, Google says Duplex has a “self-monitoring capability” that allows it to recognize when the conversation has moved beyond its capabilities. “In these cases, it signals to a human operator, who can complete the task,” says Google. This is similar to Facebook’s personal assistant M, which promised to use AI to deal with similar customer service scenarios but ended up outsourcing an unknown amount of this work to humans. (Facebook closed this part of the service in January.)

All this gives us a clearer picture of what Duplex can do, but it doesn’t begin to answer the questions of what effects Duplex will have. And as the first company to demo this tech, Google has a responsibility to face these issues head-on.

The obvious question is, should the company notify people that they’re talking to a robot? Google’s vice president of engineering, Yossi Matias, told CNET it was “likely” this would happen. Speaking to The Verge, Google said it definitely believes it has a responsibility to inform individuals.

Many experts in this domain agree, although how to notify the human on the call is tricky. If Assistant starts its calls by saying “hello, you’re speaking to a robot” then the receiver is likely to just hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls. Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Joanna Bryson, an associate professor at the University of Bath who studies AI ethics, told The Verge that Google has an obvious obligation to disclose this information. If robots can freely pose as humans, says Bryson, the scope for mischief is incredible, ranging from scam calls to automated hoaxes. (Imagine getting a panicked phone call from someone saying there was a shooting nearby. You ask them a few questions, they answer — enough to convince you it’s a real person — then hang up, saying they got the wrong number.)

But Bryson says letting companies manage this themselves won’t be enough, and there will need to be new laws introduced to protect the public. “Unless we regulate it, some company in a less conspicuous position than Google will take advantage of this technology,” says Bryson. “Google may do the right thing but not everyone is going to.”

And if this technology becomes widespread, it will have other, more subtle effects, the type which can’t be legislated against. Writing for The Atlantic, Alexis Madrigal suggests that small talk — either during phone calls or conversations on the street — has an intangible social value. He quotes urbanist Jane Jacobs, who says “casual, public contact at a local level” creates a “web of public respect and trust.” What do we lose, then, if we give people another option to avoid social interactions, no matter how minor? If these calls disappear altogether, as AI starts placing them and receiving them, do we lose anything important?

One effect might be making us all a little bit ruder. If we can’t tell the difference between humans and AI on the phone, will we treat all phone calls more suspiciously? We might start cutting off real people, telling them: “Just shut up and let me speak to a human.” And if it becomes easier for us to book reservations at a restaurant, might we take advantage of that fact and book them more often, then care less when we can’t show up? (Google told The Verge it would limit the number of daily calls a business could receive from Assistant, and the number of calls Assistant could place, in order to stop people from using the service for spam.)

There are no obvious answers to these questions, but as Bryson points out, Google is at least doing the world a service by bringing attention to this technology. It’s not the only company developing these services, and it certainly won’t be the only one to use them. “It’s a huge deal that they’re showcasing it,” says Bryson. “It’s important that they keep doing demos and videos so people can see this stuff is happening […] What we really need is an informed citizenry.”

In other words, we need to have a conversation about all this before the robots start doing the talking for us.
