At its I/O conference on Tuesday, Google previewed Duplex, an experimental service that lets its voice-based digital assistant book appointments on its own. It was part of a slate of features, such as automated writing in emails, where Google touted how its artificial intelligence technology saves people time and effort. In a demonstration on stage, the Google Assistant spoke with a hair salon receptionist, mimicking the "ums" and "hmms" of human speech. In another demo, it chatted with a restaurant employee to book a table. The audience of software coders cheered.
Outside the Google technology bubble, critics pounced. The company is placing robots in conversations with humans without those people realising it. The obvious question soon followed: Should AI software that's smart enough to trick humans be forced to disclose itself? Google executives don't have a clear answer yet. Duplex emerged at a sensitive time for technology companies, and the feature hasn't helped alleviate questions about their growing power over data, automation software and the consequences for privacy and work.
As in previous years, the company unveiled a feature before it was ready. Google is still debating how to unleash it, and how human to make the technology, several employees said during the conference. That debate touches on a far bigger dilemma for Google: As the company races to build uncanny, human-like intelligence, it is wary of any missteps that cause people to lose trust in using its services.
Scott Huffman, an executive on Google's Assistant team, said the response to Duplex was mixed. Some people were blown away by the technical demos, while others were concerned about the implications. Huffman said he understands the concerns. He doesn't, however, endorse one proposed solution to the creepy factor: giving the assistant an obviously robotic voice when it calls. "People will probably hang up," he said.
In an interview on Wednesday, Huffman suggested the machine could say something like, "I'm the Google assistant and I'm calling for a client." More experiments are planned for this summer, he noted.
Another Google employee working on the assistant seemed to disagree. "We don't want to pretend to be a human," designer Ryan Germick said when discussing the digital assistant at a developer session earlier on Wednesday.
Germick did agree, however, that Google's aim was to make the assistant human enough to keep users engaged. The unspoken goal: keep users asking questions and sharing information with the company, which gives it more data to improve its answers and services.
There's a thin line between Google's aim of making its assistant like a human and not deceiving real humans with software like Duplex. Google consciously decided against giving the assistant a real human background. When it's asked how old it is, or where it was born, it either avoids the question or says clever things like "I was born in a meeting."
Duplex has been designed to perform a limited range of very specific tasks. Google's AI technology isn't smart enough to learn to do many other things quickly. If the human on the other end of the line asks questions about something other than hair or restaurants, Duplex won't have a human answer and may well end the call, making it clear it is software. One Googler compared it to OpenTable's online restaurant reservation system, which automates the process online. No one worries that system will dupe humans by learning to do other tasks, the employee noted.
The predicament didn't end with realistic robo-calling. Douglas Eck is a scientist at Magenta, a Google AI project researching the use of machine learning to create music, video, images and text. He was asked about his vision of the future in front of a packed audience of developers at I/O on Wednesday.
Eck said machine learning, a powerful form of AI, will be integrated into how humans communicate with each other. He raised the idea of "assistive writing" in the future with Google Docs, the company's online word processing software. This may be based on Google's upcoming Smart Compose technology that suggests words and phrases based on what's being typed. Teachers used to worry about whether students used Wikipedia for their homework. Now they may wonder what part of the work the students wrote themselves, Eck said.
This could be a dystopian vision, but it doesn't have to be that way, the Google scientist concluded.
© 2018 Bloomberg LP