
Hi, folks. Interesting that the Jan. 6 congressional hearings drew an NFL-size audience. Can't wait for the Peyton and Eli version!
The Plain View
This week, the AI world was rocked by a report in The Washington Post that a Google engineer had run into trouble at the company after insisting that a conversational system called LaMDA was actually a person. The subject of the story, Blake Lemoine, asked his bosses to recognize, or at least consider, that the computer system its engineers created was sentient, and that it had a soul. He knew this because LaMDA, whom Lemoine regards as a friend, told him so.
Google disagrees, and Lemoine is currently on paid administrative leave. In a statement, company spokesperson Brian Gabriel said, "Some researchers are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient."
Anthropomorphism, the misattribution of human characteristics to an object or animal, is the term the AI community has seized on to describe Lemoine's behavior, characterizing him as overly credulous or unhinged, or perhaps a religious zealot (he describes himself as a mystic Christian priest). The argument goes that when faced with credible responses from large language models like LaMDA or OpenAI's verbally adept GPT-3, it is tempting to conclude that someone, not something, created them. People name their cars and hire therapists for their pets, so it's no surprise that some will mistake a coherent bot for a person. The community believes, however, that a Googler with a computer science degree should know better than to be taken in by what is essentially a linguistic parlor trick. As Gary Marcus, a well-known AI scientist, told me after studying the transcript of Lemoine's heart-to-heart with his soulmate, "It's basically like autocomplete. There are no ideas there. When it says, 'I love my family and my friends,' it has no friends, no people in mind, and no concept of kinship. It knows that the words son and daughter get used in the same contexts. But that's not the same as knowing what a son and a daughter are." Or, as a recent WIRED report put it, "There was no spark of consciousness there, just little magic tricks that paper over the cracks."
My own feelings are more complicated. Even knowing how the sausage is made in these systems, I am struck by the output of recent LLMs. So is Blaise Aguera y Arcas, a vice president at Google, who wrote in The Economist earlier this month, after his own conversations with LaMDA, "I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent." Even though these models make bizarre mistakes at times, at other moments they seem to burst with brilliance, and creative human writers have managed inspired collaborations with them. Something is happening here. As a writer, I ponder whether my ilk, flesh-and-blood wordsmiths who amass piles of scrapped drafts, might one day be relegated, like a losing football team sent down to a less prestigious league.
"These systems have dramatically changed my personal view of the nature of intelligence and creativity," said Sam Altman, cofounder of OpenAI, which developed GPT-3 and a graphics remixer called DALL-E that might put a lot of illustrators on the unemployment line. "The first time you use these systems, you think, Wow, I really didn't think a computer could do this. By some definition, we've figured out how to make computer programs intelligent, able to learn and to understand concepts. That is a remarkable achievement of human progress." While taking pains to distinguish his position from Lemoine's, Altman agrees with his AI colleagues that current systems are nowhere near sentience. "But I believe researchers should be able to think about whatever questions they're interested in," he said. "Long-term questions are fine. Sentience is worth thinking about, in the very long term."