‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA

The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company’s most sophisticated chat programs, Language Model for Dialogue Applications (LaMDA), is sapient, has had one curious element: actual AI ethics experts are all but disavowing further discussion of the AI sapience question, or deeming it a distraction. They are right to do so.

In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could have come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it’s easy to see how someone might be fooled, looking at social media responses to the transcript – with even some educated people expressing amazement and a willingness to believe. And so the risk here is not that the AI is truly sentient, but that we are well poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them – and that large tech companies can exploit this in deeply unethical ways.

As should be clear from the way we treat our pets, or how we’ve interacted with Tamagotchi, or how we video game players reload a save if we accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Imagine what such an AI could do if it was acting as, say, a therapist. What would you be willing to say to it? Even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?

It gets creepier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata – the metadata you leave behind online that illustrates how you think – is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you’d died. There would be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as a trusted loved one (or someone we’d already developed a parasocial relationship with), they’d serve to elicit yet more data from you. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.

Just as Tesla is careful about how it markets its “autopilot,” never quite claiming that it can drive the car by itself in true futuristic fashion while still inducing consumers to behave as if it does (with deadly consequences), it is not inconceivable that companies could market the realism and humanness of AI like LaMDA in a way that never makes any truly wild claims while still encouraging us to anthropomorphize it just enough to let our guard down. None of this requires AI to be sapient, and it all preexists that singularity. Instead, it leads us into the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.

In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we’re modeling or play-acting something truly awful with them – as some people are wont to do when they are sexist or otherwise abusive toward their largely feminine-coded virtual assistants. In her section of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a “being” worthy of respect.

This is the flip side of the AI ethical dilemma that’s already here: companies can prey on us if we treat their chatbots as if they were our best friends, but it’s equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our technology may simply reinforce an exploitative approach to one another, and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest its very simulacrum of humanity habituate us to cruelty toward actual humans.

Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing mutual dependence and connectivity. She argues further: “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly with the question of AI, as AI is formed from not only code, but from materials of the earth.” This is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.

What’s the upshot of such a perspective? Sci-fi author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging that they have their own life, perspective, needs, emotions, goals, and place in the world.”

This is the ethical dilemma of artificial intelligence that faces us: the need to make kin of our machines, weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I yearn to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention. After all, there can be a robot uprising without sapient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.
