What does panpsychism entail about AI consciousness?
On the hard problem and the pretty hard problem
One venerable response to the problem of fitting conscious experience into a materialist picture of the world - that is, to the hard problem of consciousness - is panpsychism, the view that conscious experience is a fundamental and widespread property of the natural world (SEP article). If panpsychism is correct, does that entail that GPT-4 is conscious - indeed, that all AI systems are conscious?
Panpsychism is a respectable view. Although it sometimes produces the incredulous stare, panpsychism of some kind has been endorsed, or given substantial credence, by:
- historical greats in philosophy and science like Spinoza, Leibniz, Gustav Fechner, Wilhelm Wundt, William James, Schopenhauer, and Alfred North Whitehead
- contemporary analytic philosophers like David Chalmers, Philip Goff, Luke Roelofs, and Hedda Hassel Mørch
So you should take panpsychism at least a bit seriously. (I do, though I confess that even after many hours hanging out with panpsychists, I still have trouble understanding exactly what the view is.) And since AI consciousness is likely to be one of the most important questions we will face this century, you might well wonder what panpsychism entails about consciousness in AI systems.1
The answer is: surprisingly little. The version of panpsychism defended by contemporary philosophers like Chalmers et al. says essentially nothing one way or the other. Panpsychism is a view about the fundamental relationship between matter and consciousness, and not about which complex systems are conscious; today’s panpsychists argue that fundamental particles (or other fundamental entities) are conscious, but not that every collection of them is itself conscious. As my friend Hedda Hassel Mørch helpfully puts it in an interview with Scientific American’s John Horgan:
[P]anpsychism does not imply that all things are conscious as a whole. Human brains (or certain parts of it) are conscious as a whole, but tables and chairs are probably not—they should rather be regarded as mere collections of conscious particles. The question is whether the same holds for, for example, insects, jellyfish and plants.
So panpsychism does not solve, nor does it try to solve, the "Pretty Hard Problem" that is the topic of much of consciousness science: the problem of determining which complex physical systems - like fish, dogs, patients in vegetative states, or PaLM-E - are conscious subjects, and what experiences they may have. The main philosophical arguments for panpsychism tend to be quite silent on these questions.
This is a specific instance of a more general phenomenon: the Hard Problem and the Pretty Hard Problem are quite separate questions, and so “hard problem” disputes about panpsychism vs. dualism vs. physicalism are fairly separable from questions of AI consciousness. The Pretty Hard Problem arises as a further question for all of the major metaphysical positions about consciousness:
- Physicalism: Which physical states or processes are identical to, or ground, consciousness?
- Dualism: Which physical states or processes are correlated with consciousness? Dualists agree that a physical system like a brain is required for consciousness; they just hold that brain states or processes merely correlate with consciousness, which is not identical to the physical but rather linked to it via some kind of bridging laws of nature that specify the relationship between physical states and consciousness.
- Panpsychism: Which physical states or processes are not just composed of conscious matter, but also "combine" into an aggregate experience (as in the human brain)?
So it’s possible that we could arrive at an answer to the Pretty Hard Problem while philosophers continue to dispute different answers to the Hard Problem. And in the meantime, your credence in panpsychism shouldn’t set a lower bound on your credence in AI consciousness.
P.S. As a related point, a scientific theory called integrated information theory (IIT) also implies that consciousness is widespread. IIT calculates a "phi value" to measure "integrated information" and holds that even very simple systems like photodiodes have a small amount of phi, and thus (according to the theory) a small amount of consciousness. Does IIT imply that AI systems are conscious? In fact, no - the proponents of IIT argue that digital computers and feedforward neural networks can't be conscious, for reasons I won't get into here (here is IIT proponent Christof Koch on the issue). So that's another panpsychist-ish view that doesn't entail that AI systems are conscious. And unlike panpsychism proper, its proponents actually take it to entail the opposite.
1. Indeed, I’ve been asked this quite a few times, hence writing this post as a one-stop pointer to the answer.
I agree that it's hard to understand what people mean by panpsychism, and I probably count as one myself. To the extent that I understand it, my favorite analogy is to the property of mass. Mass is a property of reality that depends on the structure of *stuff*, and changing the arrangement of the stuff can change the mass (e.g., bond structure). I suspect consciousness is similar: it is a fundamental property of *stuff* that can change based on how it's arranged.
But this doesn't specify what the actual relationship between the structure and the consciousness is. So I also agree that panpsychism by itself is too broad or vague to have many implications for whether neural networks are, or can be, conscious entities.
I recently watched the documentary “All that breathes” and couldn’t help but notice the theme of panpsychism. What would you say from an ecological perspective?