AI systems are not p-zombies
In a boring but nonetheless important terminological sense
tl;dr: AI systems cannot be “p-zombies” in the original philosophical sense of that term, because:
P-zombies are atom-for-atom physical duplicates of humans. AI systems are not.
P-zombies are behaviorally indistinguishable from humans. AI systems are not.
This terminology difference can be confusing, so heads up!
—
In discussions about (potential) AI consciousness, people sometimes call AI systems “p-zombies”. They mean something like “they can behave in a way that appears conscious/human-like, while actually lacking consciousness.” But the term “p-zombie” has an importantly different meaning in the philosophical literature from which it comes.
This mismatch is worth flagging: not because it matters whether people use terms the way analytic philosophers do (it doesn’t), but because the terminology mismatch might trip you up if you’re trying to link debates about AI consciousness with the philosophical literature.
In philosophy of mind, a p-zombie (“philosophical zombie”) is a thought-experimental entity that lacks consciousness—just as in the (more-)common parlance. But in the relevant thought experiments, p-zombies are atom-for-atom physically identical to conscious human beings, and also (relatedly) they behave exactly the same as humans. So academic p-zombies are extremely different from common parlance AI p-zombies. If you want to picture how an academic p-zombie looks and acts, you can’t imagine any AI system at all. You have to imagine1 something that looks and talks and acts…well, exactly like you, or your friend, or any other human:
“These systems will look identical to a normal conscious being from the third-person perspective: in particular, their brain processes will be molecule-for-molecule identical with the original, and their behavior will be indistinguishable.” -David Chalmers, noted p-zombiephile
Note also that p-zombies are not creatures that David Chalmers or anyone else expects to meet while out and about: “There is little reason to believe that zombies exist in the actual world. But many hold that they are at least conceivable”. P-zombies exist only in thought experiments.
So AI systems are decidedly not p-zombies in the original sense of the term, because:
They’re not physically identical to humans—not even close: they run on silicon, and they don’t have human bodies or brains.
They’re not behaviorally indistinguishable from humans (cf. this kind of thing).2
AI systems simply don’t fall into the category the philosophical concept was designed for, which is quite narrow and specific.
Again, it’s totally fine that the term has drifted in meaning; terms are allowed to do that! But using “p-zombie” in the broader way can cause confusion, so I myself try to avoid it. And I’ve written this note to help you mind the gap if you’re trying to connect up everyday AI discussions with the philosophical literature on consciousness.
Perhaps you’re wondering: is this some kind of trick? Does it even make sense to say we can imagine such a being? And even if we can, what does that tell us? You’re not alone, and there’s an intricate philosophical literature debating such issues, but that is not my subject in this post.

I feel like the fact that it's used this way indicates a need for a catchy term that refers to what people want it to mean: a system which is not conscious, but is indistinguishable from a conscious system on the basis of outputs/behaviour alone. Behavioural zombies/b-zombies?
This scenario also seems like it's going to be 10x more discussed than the original metaphysical one
I think there’s a case to be made that microphysical duplication is not an absolute requirement—that a conscious being’s zombie doppelgänger need only be indistinguishable in some physical respect or other. Which respect matters depends on what the zombie invoker is trying to argue for. If antiphysicalism, then microphysical indistinguishability is required. If, instead, antifunctionalism or antibehaviorism is the target, then the zombies in question need only be indistinguishable in some coarser-grained respect. Anyway, that’s what I’ve been telling everybody for the past few decades.