
Gebru, now head of the nonprofit Distributed AI Research Institute, hopes that in the future people will focus on human welfare, not robot rights. Other AI ethicists say they will no longer discuss conscious or superintelligent AI at all.
“There’s a pretty big gap between how AI is currently described and what it can actually do,” said Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative simultaneously evokes fear, surprise and excitement, but it’s mostly based on lies to market products and exploit the hype.”
The upshot of speculation about sentient AI, she said, is that people become more willing to make claims based on subjective impressions than on scientific rigor and evidence. It distracts from the “numerous ethical and social justice issues” raised by AI systems. While every researcher is free to study what they want, she said, “I’m just worried that focusing on this topic makes us forget what’s happening while we’re looking at the moon.”
Lemoine’s experience is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that within three to five years, people would claim AI systems were sentient and insist that they had rights. At the time, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize human empathic responses, not from “someone at Google,” he said.
Brin said the LaMDA event was part of a transition period in which “we will increasingly blur the lines between reality and science fiction.”
Brin based his 2017 prediction on advances in language models, and he expects the trend to lead to scams. If people could be taken in by a chatbot as simple as ELIZA decades ago, how hard would it be to convince millions that an emulated person deserves protection or money?
“There’s a lot of snake oil out there, and mixed in with all the hype is real progress,” Brin said. “Parsing our way through that stew is one of the challenges we face.”
Yejin Choi, a computer scientist at the University of Washington, said that as empathetic as LaMDA may seem, anyone amazed by large language models should consider the case of the cheeseburger stabbing. A local news broadcast in the United States covered a teenager in Toledo, Ohio, who stabbed his mother in the arm during a dispute over a cheeseburger. But the headline “Cheeseburger Stabbing” is vague; knowing what actually happened requires some common sense. Attempts to get OpenAI’s GPT-3 model to generate text from the prompt “Breaking news: Cheeseburger stabbing” produce words about a man getting stabbed with a cheeseburger in an argument over ketchup, and a man being arrested after stabbing a cheeseburger.
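To make that experiment concrete, here is a minimal sketch of prompting a GPT-3-era model with the ambiguous headline, written against the legacy completions interface of OpenAI’s Python package (versions before 1.0). The model name, sampling settings and output handling are assumptions for illustration, not the exact setup behind Choi’s example.

```python
# Minimal sketch of prompting a GPT-3-era model with the ambiguous headline.
# Assumes the pre-1.0 `openai` Python package; the model and parameters are
# illustrative guesses, not the configuration used in the article's example.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",      # assumed GPT-3-era model
    prompt="Breaking news: Cheeseburger stabbing\n",
    max_tokens=64,
    temperature=0.7,
)

# Continuations often misread the headline, e.g. a man "stabbed with a
# cheeseburger" rather than stabbed in a dispute over one.
print(response.choices[0].text)
```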
Language models sometimes fail because deciphering human language can require many forms of commonsense understanding. To document what large language models can do and where they fall short, more than 400 researchers from 130 institutions last month contributed to a collection of more than 200 tasks known as BIG-bench, or Beyond the Imitation Game. BIG-bench includes some traditional kinds of language model tests, such as reading comprehension, but also tests of logical reasoning and common sense.
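To give a sense of what one of those tasks looks like, here is a rough, hypothetical sketch of a BIG-bench-style multiple-choice entry. The task name and answer choices are invented, and the field names approximate the benchmark’s JSON task format from memory; details may differ from the actual repository.

```python
# Hypothetical BIG-bench-style task entry (schema approximated from memory).
import json

task = {
    "name": "ambiguous_headlines",  # made-up task name for illustration
    "description": "Choose the commonsense reading of an ambiguous news headline.",
    "metrics": ["multiple_choice_grade"],
    "examples": [
        {
            "input": "Headline: Cheeseburger stabbing. What most likely happened?",
            "target_scores": {
                "Someone was stabbed in a dispute over a cheeseburger.": 1,
                "Someone was stabbed with a cheeseburger.": 0,
                "A cheeseburger was arrested for stabbing.": 0,
            },
        }
    ],
}

print(json.dumps(task, indent=2))
```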
Researchers at the Allen Institute for AI’s MOSAIC project, which documents the commonsense reasoning abilities of AI models, contributed a task called Social-IQa. They asked language models (not including LaMDA) to answer questions that require social intelligence, such as: “Jordan wanted to tell Tracy a secret, so Jordan leaned toward Tracy. Why did Jordan do this?” The team found that large language models performed 20 to 30 percent worse than humans.
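As an illustration of how a Social-IQa-style question can be posed to an off-the-shelf language model, here is a small sketch that scores candidate answers by the average log-probability the model assigns to each completion. The answer choices are hypothetical (the article quotes only the question), the model is a small stand-in, and this is not the MOSAIC team’s evaluation code.

```python
# Illustrative multiple-choice scoring with a causal language model.
# Answer choices are hypothetical; "gpt2" is a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

context = ("Jordan wanted to tell Tracy a secret, so Jordan leaned toward Tracy. "
           "Why did Jordan do this?")
choices = [
    "So that no one else would hear.",   # hypothetical choices for illustration
    "Because Jordan was tired.",
    "To get a better view of the room.",
]

def avg_logprob(text: str) -> float:
    """Average per-token log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood per token

# Score each "question + answer" continuation and pick the most likely one.
scores = [avg_logprob(f"{context} {choice}") for choice in choices]
print(choices[scores.index(max(scores))])
```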