Three axes of consciousness
or, why dualists can believe in conscious robots
Scott Alexander has a nice review of the recent Trends in Cognitive Sciences paper “Identifying indicators of consciousness in AI systems”. (This paper is by Patrick Butlin, me, and many co-authors across neuroscience, AI, and philosophy). In Scott’s post, he divides theories of consciousness into three buckets:
Physical: whether a system is conscious depends on its substance or structure.
Supernatural: whether a system is conscious depends on something outside the realm of science, perhaps coming directly from God.
Computational: whether a system is conscious depends on how it does cognitive work.
This post is about how this division is both useful and not quite right (for example, the first category shouldn’t be called “physical”). I think the limitations of Scott’s division point toward a better way of carving up different views of consciousness.
Let’s start with how Scott’s three-way division—physical, supernatural, and computational—is useful. It does lead to a clear and correct explanation of our paper’s methodology: namely, looking at computational theories of consciousness and using them to derive indicators of consciousness that we can look for in AI systems.
On what views of consciousness does this methodology make sense? Scott’s categories help us see the answer: views on which (a) consciousness isn’t inherently tied to biological substance [in contrast with the view that Scott calls “physical”] and on which (b) there are some lawlike principles governing consciousness [in contrast with what Scott calls “supernatural”]. If either of those conditions fails, AI consciousness is either impossible or unknowable. As Scott puts it: “If consciousness depends on something about cells (what might this be?), then AI doesn’t have it. If consciousness comes from God, then God only knows whether AIs have it.”
Scott gets something very important correct about the indicators paper: we don’t derive computational indicators because we think we have proven that consciousness is computational and lawlike. Rather, we think that consciousness has a decent chance of being computational and lawlike, and we think that we can say useful things within that region of credence space. We “assume” computational functionalism only in the sense that we are making a conditional claim: IF computational functionalism is true, then X and Y are computational indicators of consciousness. We aren’t begging the question against biological views, because we don’t take ourselves to be providing arguments for or against those views.
With that said, I think Scott’s division has some confusing consequences. By conflating “physical” and “biological” in the first category, and by contrasting “computational” with “physical”, he obscures some important axes that we should actually separate out. Here’s how I think we should divide things:
Axis 1: Metaphysics. Is everything, including consciousness, fundamentally material? Or is there something non-material? Materialists say yes, everything is fundamentally material. Dualists say no—reality fundamentally includes matter but also some non-material properties (property dualism) or some non-material stuff (substance dualism). Panpsychists also say no, but in a more exotic way—the fundamental stuff is somehow conscious-y.
Axis 2: Lawlikeness. Are there lawlike regularities between states of matter and states of consciousness, such that a science of consciousness is possible? Importantly, all of the positions on axis 1 can answer “yes” to this. Dualists can think that (metaphorically) when God created all the matter and wrote the laws of physics, he also wrote some further laws that link matter with consciousness. Materialists think that God was able to rest after only doing the matter bit. But they can all agree that we can look for the laws that govern which material configurations are the correlates of consciousness.
Axis 3: Level of description. At what level are the correlates of consciousness specified—biological implementation, or something more abstract like computation/function? This is where the real action is for AI consciousness. A “biological chauvinist” thinks that the laws linking matter to consciousness are specified at the level of neurons, carbon-based chemistry, or some other feature of our particular biological substrate. A computational functionalist thinks the laws are specified at a more abstract level—the level of algorithms and information processing—such that consciousness can be multiply realized in different substrates: living neurons but also computer chips.
The key point is that these axes are independent. Scott’s division conflates axis 1 and axis 3 by calling the first category “physical”—but computations can certainly be construed as physical! They’re realized in physical substrates, and one way of being a physicalist about consciousness is to explain it in terms of computations. Computational functionalism pairs naturally with physicalism; it just denies that consciousness requires any particular physical substrate. So the real contrast isn’t physical vs. computational, it’s biological vs. computational.
And this independence is why David Chalmers—the philosopher most associated with the “hard problem” and property dualism—is a co-author on this paper. People sometimes find this puzzling. Shouldn’t a property dualist be skeptical of projects like this? If consciousness is something over and above the physical facts, why would examining AI’s computational structure tell us anything?
But this conflates axis 1 with axis 3. Chalmers talks about this in his paper “The Singularity: A Philosophical Analysis”:
I have occasionally encountered puzzlement that someone with my own property dualist views (or even that someone who thinks that there is a significant hard problem of consciousness) should be sympathetic to machine consciousness. But the question of whether the physical correlates of consciousness are biological or functional is largely orthogonal to the question of whether consciousness is identical to or distinct from its physical correlates. It is hard to see why the view that consciousness is restricted to creatures with our biology should be more in the spirit of property dualism!
In other words: your position on axis 1 (the metaphysical question) doesn’t determine your position on axis 3 (the substrate question). A property dualist can perfectly well think that consciousness lawfully accompanies certain computational structures regardless of whether they’re implemented in neurons or silicon.
So the paper’s methodology is informative for anyone who thinks there are lawlike regularities governing consciousness (axis 2: yes) and thinks those regularities are specified at the computational level rather than the biological level (axis 3: computational)—regardless of whether they think consciousness is identical to or distinct from its physical correlates (axis 1: either way).
That’s the region of credence space where looking at AI architectures for computational indicators is exactly what we should be doing.


Love this!
Appreciate your posts on this, it's important!
Two more things which are important to flag here. (Possibly the common theme is a strawmanning of physicalist/biologicalist views?)
First, many (all?) computationalist views are in fact non-physicalist, because *computation is an abstract, non-physical concept*. How so? 'What is this chunk of physics computing?' is an observer-relative question, which inherently requires the answerer to bring in a particular correspondence between the mechanics of the system and the computational abstractions. *Because computation is important and useful*, we 21st-century humans have built up so much shared linguistic context and capital infrastructure around it that this fact often fades into the background.
I've not come across a good response to this concern, though you're certainly more widely read than me, so I'd be interested if you have any. Most discussions appear not to notice this.
Second (and related), computationalism vs. biologicalism is a false dichotomy (the text here can be read as presenting it as a true dichotomy, especially 'the real contrast isn't...'). Uncharitably, this is misleading, as it erects a weakman of non-computationalism. Importantly, among non-computationalist views are many which grant that nonbiological systems could absolutely be a substrate for consciousness, but not necessarily by virtue of a posited computational property (e.g. it might be about particular energy or field configurations, particular cybernetic properties, chemistry, quantum effects, ...).