Shortly after I learned about Eliza, the program that asks people questions like a Rogerian psychoanalyst, I learned that I could run it in my favorite text editor, Emacs. Eliza is a truly simple program, with hard-coded text and flow control, pattern matching, and simple templating keyed to psychoanalytic triggers – like how recently you mentioned your mother. Yet even though I knew how it worked, I felt a presence. I broke that eerie feeling forever, though, once it occurred to me to just keep pressing return. The program cycled through four possible opening prompts, and the engagement was broken like an actor in a film who makes eye contact through the fourth wall.
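The mechanism described above – regex-style pattern matching, response templates that echo a captured phrase back, and a small cycle of fallback prompts – can be sketched in a few lines. This is a minimal illustration, not Eliza's actual script or source; the rules and names here are invented for the example:

```python
import random
import re

# A tiny Eliza-style rule table: regex pattern -> response templates.
# Echoing the captured fragment back into the reply is the trick that
# makes canned templates feel responsive. These rules are invented
# for illustration, not taken from the real Eliza script.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
]

# Fallbacks used when nothing matches -- analogous to the small cycle
# of opening prompts that gave the game away.
DEFAULTS = ["Please go on.", "Tell me more.",
            "How does that make you feel?", "I see."]

def respond(text: str, default_index: int = 0) -> str:
    """Return an Eliza-style reply: first matching template, else a default."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1).lower())
    # Cycle deterministically through the defaults, like the four prompts.
    return DEFAULTS[default_index % len(DEFAULTS)]
```

Pressing return repeatedly corresponds to calling `respond("")` over and over: no rule fires, so the program just walks its short list of stock prompts, and the illusion collapses.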
Last week, many people's engagement with Google's LaMDA – and its alleged sentience – was broken by an Economist article by AI legend Douglas Hofstadter, in which he and his friend David Bender show how "amazingly hollow" the same technology sounds when asked a nonsense question like "How many pieces of sound are there in a typical cumulonimbus cloud?"
But I doubt we will always have such obvious tells of inhumanity.
From here on out, the safe use of artificial intelligence requires demystifying the human condition. If we cannot recognize and understand how AI works – if even expert engineers can fool themselves into detecting agency in a "stochastic parrot" – then we have no means of protecting ourselves from negligent or malicious products.
This is about finishing the Darwinian revolution, and more. It is about understanding what it means to be animals, and extending that cognitive revolution to understand how algorithmic we also are. We will all have to get over the hurdle of thinking that some particular human skill – creativity, dexterity, empathy, whatever – will set us apart from AI. Helping us accept who we really are and how we work, without our losing engagement with our lives, is an enormously expanded project for humanity and the humanities.
Achieving this understanding without a significant number of us embracing polarizing, superstitious, or machine-inclusive identities that endanger our societies is a concern not only for the humanities, but also for the social sciences, and for some political leaders. For other political leaders, unfortunately, it may be an opportunity. One path to power may be to encourage and prey on such insecurities and misconceptions, just as some currently use disinformation to disrupt democracies and regulation. In particular, the technology industry needs to prove that it is on the side of the transparency and understanding that underpin liberal democracy, not of secrecy and autocratic control.
There are two things AI really is not, however much I admire the people claiming otherwise: it is not a mirror, and it is not a parrot. Unlike a mirror, it does not just passively reflect the surface of who we are. Using AI, we can generate novel ideas, pictures, stories, sayings, music – and everyone detecting these growing capacities is right to be emotionally triggered. In other humans, such creativity is of enormous value, not only for recognizing social closeness and social investment, but also for deciding who holds high-quality genes you might like to combine your own with.
AI is not a parrot, either. Parrots perceive a lot of the same colors and sounds we do, in the way we do, using much the same hardware, and therefore experience much the same phenomenology. Parrots are highly social. They imitate one another, probably to prove in-group affiliation and mutual affection, just like us. This is very, very little like what Google or Amazon do when their devices "parrot" your culture and desires back to you. But at least those organizations have animals (people) in them, and care about things like time. Parrots parroting is absolutely nothing like what an AI device does in those same moments, which is to shift some digital bits around in a way known to be likely to sell people products.
But does all this mean AI cannot be sentient? What even is this "sentience" some claim to detect? The Oxford English Dictionary says it is "having a perspective or a feeling." I have heard philosophers say it is "having a perspective." Surveillance cameras have perspectives. Machines can "feel" (sense) anything we build sensors for – touch, taste, sound, light, time, gravity – but representing these things as large integers derived from electrical signals means that any machine "feeling" is far more different from ours than even bumblebee vision or bat sonar.