Google has a plan to stop its new AI from being dirty and rude

When announcing their company’s next big thing, Silicon Valley CEOs usually focus on the positive. In 2007, Apple’s Steve Jobs praised the first iPhone’s “revolutionary user interface” and “breakthrough software.” Google CEO Sundar Pichai took a different tack at the company’s annual developer conference on Wednesday, announcing a beta test of Google’s “most advanced conversational artificial intelligence to date.”

Pichai said the chatbot, called LaMDA 2, can converse on any topic and performed well in tests with Google employees. He announced the imminent launch of an app called AI Test Kitchen, which will make the chatbot available for outsiders to try. But Pichai added a stern warning. “While we have improved safety, the model may still produce inaccurate, inappropriate or offensive responses,” he said.

Pichai’s hedged tone illustrates the mix of excitement, confusion, and worry swirling around a series of recent breakthroughs in the capabilities of machine-learning software that processes language.

The technology has already improved autocomplete and web search. It has also spawned new categories of productivity apps that help workers by generating fluent text or programming code. When Pichai first revealed the LaMDA project last year, he said it could eventually power Google’s search engine, virtual assistant, and workplace applications. Yet despite those dizzying promises, it remains unclear how to reliably control these new AI wordsmiths.

Google’s LaMDA, or Language Model for Dialogue Applications, is an example of what machine-learning researchers call a large language model. The term describes software that builds up a statistical sense of the patterns of language by processing huge amounts of text, usually scraped from online sources. LaMDA, for example, was initially trained on more than a trillion words drawn from online forums, question-and-answer sites, Wikipedia, and other web pages. That vast pool of data helps the algorithm perform tasks such as generating text in different styles, interpreting new text, or functioning as a chatbot. If these systems work as promised, they will be nothing like today’s frustrating chatbots. Right now, Google Assistant and Amazon’s Alexa can perform only certain pre-programmed tasks and deflect when presented with something they don’t understand. What Google is now proposing is a computer you can actually talk to.
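
To make “a statistical sense of the patterns of language” concrete, here is a deliberately tiny sketch of the underlying idea: predicting the next word from counts observed in training text. This is not Google’s method (LaMDA uses a massive neural network, not a lookup table), and the toy corpus and function names are illustrative only.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the trillion-plus words LaMDA saw in training.
corpus = "the cat sat on the mat . the cat saw a dog . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' -- it followed 'the' most often here
print(predict_next("sat"))  # 'on'
```

Large language models replace these raw counts with billions of learned parameters, which is what lets them generalize to sentences they have never seen, but the task (predict plausible next words) is the same.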

Chat transcripts released by Google show that LaMDA can be, at least sometimes, informative, thought-provoking, and even funny. Testing the chatbot prompted Google vice president and AI researcher Blaise Agüera y Arcas to write a personal essay in December arguing that the technology could provide new insights into the nature of language and intelligence. “The idea of a ‘who’ rather than an ‘it’ on the other side of the screen can be hard to shake,” he wrote.

When Pichai announced the first version of LaMDA last year, and again on Wednesday, he made clear that he sees it as a possible path toward voice interfaces far broader than the frustratingly limited capabilities of services like Alexa, Google Assistant, and Apple’s Siri. Google’s leaders now appear convinced that they may have finally found a way to build computers people can genuinely converse with.

Meanwhile, large language models have proven themselves fluent in dirty, nasty, and plainly racist language. Scraping billions of words of text from the web inevitably sweeps up plenty of unsavory content. OpenAI, the company behind the language generator GPT-3, has reported that its creation can perpetuate stereotypes about gender and race, and it asks customers to implement filters to screen out objectionable content.
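
For a sense of what such an output filter does, here is a minimal sketch. Real moderation systems, including OpenAI’s, rely on trained classifiers rather than a hand-written blocklist; the function name and placeholder terms below are assumptions for illustration only.

```python
# Placeholder entries, not real terms; a production system would use a
# trained classifier, not a static set like this.
BLOCKED_TERMS = {"badword1", "badword2"}

def screen(text: str) -> str:
    """Withhold model output that contains any blocked term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKED_TERMS:
        return "[response withheld by content filter]"
    return text

print(screen("a perfectly harmless sentence"))
```

The hard part, and the reason Pichai’s warning still stands, is that offensive output rarely announces itself with a single flaggable word.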
