Google is beginning limited public testing of its Google Duplex software.
Google debuted the artificial intelligence phone call system in May. The system is designed to make calls to businesses and book appointments.
According to CNET, the search giant is testing Duplex with a small group of trusted testers and businesses. The businesses have opted into receiving calls from Duplex. Additionally, the software can only call to confirm business and holiday hours for now. Later this summer, people will be able to start booking reservations at restaurants and hair salons.
Google held an event yesterday to help clear the air on Duplex. Since its debut, people have criticized the AI for being too realistic. The realism freaked people out, and at times the system could be hard to differentiate from a real person.
To address the issue, Google adjusted Duplex to disclose itself to users. Now, the AI greets the people it calls with “Hi, I’m the Google Assistant. I’m calling to make a reservation for a client. This automated call will be recorded.” The exact language of this greeting varies, but the key points are the same.
Reportedly, people can also ask Duplex to let them speak to a human. Duplex allegedly transfers those calls to a human at one of Google's call centers. When a human takes over, they have the call logs and are able to pick up where Duplex left off.
Google’s vice president of product and design, Nick Fox, told CNET that the company thinks it’s important to set a standard with this technology.
“With things like the disclosure, it’s important that we do take a stand there, so that others can follow as well,” said Fox.
What Google does with Duplex will set the tone of future AI endeavors. It needs to do Duplex right.
Additionally, Google is trying to think more broadly about the scope of AI and its effects on people. Earlier this month, CEO Sundar Pichai released a manifesto regarding ethics and AI. The manifesto outlines a moral compass of sorts that Google is setting for itself.
However, the company will continue to pursue military contracts, despite outlining in its manifesto that it would not apply AI to weapons.
These new guidelines are related to the controversy surrounding Google's involvement in Project Maven, a military initiative aiming to use AI to analyze drone footage. Employees protested the project; Google, however, claims the effort was meant to save lives by identifying low-resolution objects.
Google — and other companies working with AI — has to be careful with what it does. Moral responsibility is paramount. However, Google recently shifted away from its old "don't be evil" mantra, leaving a question of just how responsible the company will be.