The uproar caused by Blake Lemoine, a Google engineer who believes that LaMDA (Language Model for Dialogue Applications), one of the company's most sophisticated chat programs, is sentient, has had a curious element: actual AI ethics experts are all but renouncing further discussion of the AI sentience question, or deeming it a distraction. They are right to do so.
Reading the edited transcript Lemoine released, it was clear that LaMDA was pulling text from any number of websites to generate its responses; its interpretation of a Zen koan could have come from anywhere, and its fable read like an auto-generated story (though its depiction of the monster as "wearing human skin" was a delightfully HAL-9000 touch). There is no spark of consciousness there, just little magic tricks that paper over the cracks. But it is easy to see how someone might be fooled, judging by the social media responses to the transcript, with even some educated people expressing amazement and a willingness to believe. The risk here is not that the AI is truly sentient; it is that we are well poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them, and that large tech companies can exploit this in deeply unethical ways.
From how we treat our pets, or talk to our Tamagotchis, or reload a save in a video game because we accidentally made an NPC cry, it should be clear that we are remarkably capable of empathizing with the nonhuman. Imagine what such an AI could do if it were acting as, say, a therapist. What would you be willing to say to it, even if you "knew" it wasn't human? And what would that precious data be worth to the company that programmed the therapy bot?
It gets creepier still. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata — the metadata you leave behind online that illustrates how you *think* — is vulnerable to exploitation in the near future. Imagine a world where a company creates a bot based on you and owns your digital "ghost" after you have died. There would be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as trusted loved ones (or people with whom we had already developed a parasocial relationship), they would serve to elicit yet more data from us. It gives a whole new meaning to the idea of "necropolitics." The afterlife can be real, and Google can own it.
Just as Tesla is careful about how it markets its "Autopilot" — never quite claiming it can drive the car by itself in true futuristic fashion, while still inducing consumers to behave as if it does (with deadly consequences) — it is not inconceivable that companies could market the realism and humanness of an AI like LaMDA in a way that never makes any truly wild claims, while still encouraging us to anthropomorphize it just enough to let our guard down. None of this requires the AI to be sentient, and it all predates that singularity. Instead, it leads us to the murkier sociological question of how we treat our technology, and of what happens when people act as if their AIs are sentient.
In "Making Kin with the Machines," scholars Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we are modeling or play-acting something truly awful with them — as some people are wont to do when they are sexist or otherwise abusive toward their largely feminine-coded virtual assistants. In her section of the essay, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sentience does not define the bounds of who (or what) is worthy of respect.
This is the flip side of the real AI ethics dilemma already upon us: companies can prey on us if we treat their chatbots as our best friends, but it is equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our technology may simply reinforce an exploitative approach to one another and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest its very simulacrum of humanity habituate us to cruelty toward actual humans.
Kite's ideal is just that: a reciprocal and humble relationship between yourself and your environment, one that recognizes mutual dependence and connectivity. She argues further: "Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth." It is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.
What is the upshot of such a perspective? Sci-fi author Liz Henry offers one: "We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own lives, perspectives, needs, emotions, goals, and place in the world."
This is the AI ethical dilemma that stands before us here and now: the need to make kin of our machines, weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messier reality is what demands our attention. After all, there can be a robot uprising without sentient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.