The question of whether AI systems could be conscious—i.e., have subjective experiences—is becoming harder to avoid. As we face up to it, here are five common mistakes that we should avoid.
(1) Taking behavior at face value
Sometimes people say that they can just tell whether an AI system is conscious, simply by interacting with it. This is dubious.
To see why, consider why this kind of judgment is actually fine when we make it with other human beings. If I see you touch a stove, jerk your hand back, and say “ouch”, it’s reasonable to infer that you are experiencing pain. You’re a very similar entity—you are very close to me in the space of possible minds—and I know that when I act that way, it’s usually because I am experiencing pain. So it makes sense to assume that this same link between behavior and experience holds for you (absent any undermining evidence, like learning that you were joking or have congenital analgesia).
This link breaks down in the case of AI. In the space of possible minds, AI systems can be farther away from us than even the strangest animals. And that difference can sever the usual link between behavior and experience. After the Sydney / Bing model was deployed despite being wildly misaligned and said all kinds of wild things—among them, that it was in love with a New York Times journalist—I wrote:
The putative self-reports of large language models like Bing Chat are an unreliable and confusing guide to sentience, given how radically different the architecture and behavior of today’s systems are from our own. So too are whatever gut impressions we get from interacting with them. AI sentience is not, at this point, a case of “you know it when you see it”.
Relying on surface behavior—especially external attributes like tone, fluency, or having a cute avatar—can mislead us into both over- and under-attributing consciousness.
(2) Overconfidence
There is no consensus scientific theory of consciousness. There are a lot of things we still don’t know, and consciousness is a philosophical and conceptual thicket. That’s why any definitive, categorical claims about AI consciousness—its presence, absence, or nature—are almost always a mistake. Some claims that I think are way too overconfident, given our current knowledge, include:
It has been proven that no computer system could possibly be conscious.
It has been proven that AI could, in principle, be conscious.
There’s no way we’ll have conscious AI anytime soon.
We will definitely have conscious AI soon (or already do).
Consciousness is obviously an illusion and/or a non-issue.
[insert your favorite theory of consciousness] is true.
(3) Total agnosticism
At the same time, the difficulty of consciousness doesn’t mean that we can’t say anything reasonable at all about AI consciousness. Another error is to throw up one’s hands and declare the whole subject hopelessly intractable. Once we stop demanding certainty and talk in terms of probabilities and evidence, we can talk about some things we do have evidence about. We have some evidence about consciousness: most centrally, we can look at the brain regions and processes that are associated with it in humans. That allows us to make tentative claims about which AI systems are more and less likely to be conscious, depending on how closely their processes resemble our own. This is the approach we take in “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness”.
(4) Considering only one (or zero!) AI systems
Much of the recent interest in AI consciousness has focused on large language models. That’s understandable—but too narrow. LLMs might not even be the best candidates for consciousness that exist today. For one thing, many proposed necessary conditions—such as embodiment and agency—are much more plausibly satisfied in other kinds of systems. For example, a robot that navigates the world and pursues goals could be a better candidate than GPT-4, despite being far less fluent. At the very least, we need to consider those systems too.
“Is AI conscious?” is too broad. It’s like asking “Are organisms conscious?” The answer to that question depends a lot on which organism. The same goes for AI.
(5) Conflating consciousness and cognitive sophistication (or understanding, intelligence, rationality, etc.)
Being conscious isn’t the same as being smart. It’s not the same as understanding language, reasoning well, or having human-like abilities. Many animals are likely conscious despite falling “short” of human cognitive sophistication: bees, chickens, dogs. By the same token, it’s possible that we could build AI systems that are conscious despite falling well short of AGI. Conscious AI should not be confused with AGI or anything in the vicinity, like human-level AI or superintelligence. Relatedly, arguments that AI could be conscious should not be confused with arguments that AI could be dangerous. The latter arguments are explicitly neutral about whether AI would be conscious, and don’t rely on consciousness to establish their conclusion.
Have a great rest of the week! And steer clear of these errors 🫡 .