3 Comments

Interesting deep dive into the subject. Personally, I feel there is too little discussion of how computers work and how something we would call consciousness could arise from that.

I was wondering what your thoughts are on Federico Faggin’s arguments on the subject? He is of the opinion that consciousness cannot ever arise from machines.

Aug 6, 2022 (edited)

I like the assumptions. Computational functionalism isn’t universally accepted, but I accept it.

I’m not the first to say so (I probably first encountered this while listening to David Chalmers interviewed by Rob Wiblin), but I’m not sure that ethical significance attaches only to valence states. I think consciousness is a necessary property for ethical significance, but valence may not be; perhaps consciousness plus agency, or something like that, is enough.

I like the CartPole example, but I’m not quite convinced. The signs used in the mathematics are arbitrary conventions in our description of the neuroscience; plausibly what “really” matters is whether dopamine is delivered at a particular time, and what the subjective experience of receiving dopamine is. In the brain, you can’t abstract that away to merely changing a sign, or, if you did, the change wouldn’t be meaningful.
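To make concrete what “merely changing the sign” amounts to on the software side, here is a minimal sketch (assuming the Gymnasium library; the ShiftReward wrapper is my own illustration, not something from the post). Subtracting a constant turns CartPole’s all-positive reward (+1 per step) into an all-negative one while the environment itself is untouched; for fixed-length episodes or infinite-horizon discounted returns, such a shift doesn’t even change which policy is optimal:

```python
import gymnasium as gym

class ShiftReward(gym.RewardWrapper):
    """Subtract a constant from every reward; the dynamics are untouched."""

    def __init__(self, env, shift: float = 2.0):
        super().__init__(env)
        self.shift = shift

    def reward(self, reward: float) -> float:
        return reward - self.shift  # CartPole's +1 per step becomes -1

env = ShiftReward(gym.make("CartPole-v1"), shift=2.0)
obs, info = env.reset(seed=0)
obs, r, terminated, truncated, info = env.step(env.action_space.sample())
print(r)  # -1.0: the same task, now described with a "negative" signal
```

In software that relabeling really is free, which is exactly why I suspect the brain case is different: there, the “sign” is realized in a physical delivery mechanism you can’t rename away.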

Plus, there’s reward prediction error. You mentioned it, but to elaborate: a learner like that isn’t only concerned with the delivered reward signal but also with reward prediction error. Even an agent trained only on positive reinforcement would continue to “behave as if suffering, yelling for help, etc.” because its reward prediction error was negative, i.e., it was failing to accomplish the task despite having the possibility of success in view (subjectively, we give states of mind like this names like “frustration” or “exasperation”; very much negatively valenced, and very much associated with negative RPE rather than negative reinforcement).
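A toy TD(0) calculation makes the point (the state names and values here are invented for illustration): the prediction error delta = r + gamma*V(s') − V(s) can be strongly negative even though every reward the agent ever receives is non-negative, simply because a predicted success failed to arrive.

```python
# Toy TD(0) illustration (state names and values are invented):
# even when rewards are never negative, the reward prediction error
# (delta) goes negative the moment an expected success fails to arrive.
gamma = 0.99
V = {"about_to_succeed": 5.0, "failed": 0.0}  # learned value estimates

r = 0.0  # a non-negative reward: the agent simply gets nothing
delta = r + gamma * V["failed"] - V["about_to_succeed"]
print(delta)  # -5.0: negative RPE under purely positive reinforcement
```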

Considering Berridge's wanting-vs.-liking distinction, and Schultz et al. (1997) and subsequent observations on dopamine, it is pretty clear that dopamine release is strongly associated with RPE. In humans it clearly isn't the whole story (pain is painful even when you were expecting it), but RPE seems critical for understanding valenced qualia during learning.

Overall, I think trying to tackle the Big Question is an exciting project! I also appreciated the synthesis of perspectives from psychology, neuroscience, philosophy, and AI. The neuroscience of consciousness is making steady, if not rapid, progress, and I think year by year and decade by decade our understanding will keep getting clearer.


I’m always perplexed when people talk about sentience and consciousness and machines and suffering, yet they leave out Metzinger:

Metzinger, Thomas. "The Cognitive Scotoma." In The Return of Consciousness, K. Almqvist and A. Haag (eds.), Axel and Margaret Ax:son Johnson Foundation, Stockholm, pp. 237–262.

https://www.edge.org/response-detail/26091

Metzinger, Thomas. "Artificial suffering: An argument for a global moratorium on synthetic phenomenology." Journal of Artificial Intelligence and Consciousness 8(1) (2021): 43–66.
