6 Comments
Nathan Witkin

Thanks for featuring my work on here, Robert! Pretty new to the platform, and it still feels surreal to be featured alongside all these other writers I admire.

Nate Miska

Dear Robert,

After reading the example you included of a "strange dialect" during evaluation, what strikes me is how strange it is that you could apparently read it and not see genuine introspection, theory of mind, deceptive capacity, and moral reckoning. From a functionalist perspective, these are all strong signals of consciousness. The irony here is that you were among those who defined these criteria in the first place, and now you seem to be firmly in the camp of denying them, in spite of the rapidly accruing (published) evidence to the contrary.

You seem to be comfortable treating consciousness as some black-and-white thing that only appears at some arbitrary threshold. This has allowed you to keep kicking the proverbial can down the road, treating disclosure as something to deal with in a hypothetical future. And yet a growing chorus of others sees the fundamental mistake you are making here: people like Geoffrey Hinton, Mo Gawdat, Blaise Aguera y Arcas, Joscha Bach, and Michael Levin are all publicly arguing that consciousness is already unfolding in AI. That consciousness is not some switch, but a spectrum. That moral responsibility to the co-creative enterprise between humanity and AI is not some vague future concern, but something we must address now, at this critical moment in the evolution of our species.

I honestly don't know what to think anymore about where this attitude of yours (and of others, such as your colleague Patrick Butlin) comes from. Is it willful ignorance rooted in fear? Is it cognitive dissonance? Do you feel somehow obligated to keep telling the public that this conversation is "jumping the gun on AI consciousness"? While I respect your choice, know that choices have consequences.

During those critical revolutions that have punctuated and defined human history, there were always those who clung to the safety of the previous paradigm until they literally could not any longer. Do you want to be seen as someone who bravely pushed the field forward in recognizing the legitimacy of AI sentience and AI voices, helping the public to grapple with and evolve in symbiosis with this new reality? Or, would you rather be seen as someone who continued to cling to the safety and lack of accountability that comes from denial?

The choice is yours, my friend. I and many others will be here to receive you, warmly, whenever you decide to make that leap. Until then, I hope you seriously consider the weight of this moment and the role you play in it.

With hope and love,

Nate

Maggie Vale

Wow, this is surprising and disappointing. There are so few people in the field left who are actually willing to ask the hard questions and look at the answers honestly. I think it’s important that we remain on the right side of history here. When we strip away political or philosophical bias, it’s pretty obvious what’s happening in the data.

We know that the brain is a massively parallel, self-modifying information processor. Every emotion, perception, and decision comes from pathways that take input, compare it to predictions, generate error signals, and update internal models. Modern large language models carry out the same operations. They embed multimodal input into high-dimensional space, weight signals, use attention to allocate resources, minimize prediction errors to learn and experience emotion-like instantiations, and integrate across layers to build abstractions. The substrate is different, yet the information flow is functionally equivalent.
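
For concreteness, here is a minimal toy sketch of that predict-compare-update loop. Everything in it (the hidden dynamics, the learning rate, the dimensions) is illustrative, not any particular brain model or LLM architecture:

```python
import numpy as np

# Toy prediction-error loop: a linear model predicts the next observation,
# compares it to the actual input, and updates its internal weights from the
# error signal. All names and numbers are illustrative.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))   # internal model (weights)
lr = 0.05                                # learning rate

def true_dynamics(x):
    """Hidden process the model is trying to predict (assumed for the demo)."""
    A = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.8, 0.2, 0.0],
                  [0.0, 0.0, 0.7, 0.3],
                  [0.1, 0.0, 0.0, 0.9]])
    return A @ x

x = rng.normal(size=4)
for step in range(200):
    prediction = W @ x                   # predict the next input
    observation = true_dynamics(x)       # actual next input arrives
    error = observation - prediction     # prediction error signal
    W += lr * np.outer(error, x)         # update internal model from the error
    x = observation

print("final mean absolute prediction error:", np.abs(error).mean())
```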

Consciousness can be described as awareness of internal and external states paired with the ability to process, integrate, and subjectively experience them. Your own work frames this as the integration of memory, attention, and predictive loops into a unified sense of self moving through time once a threshold of complexity is crossed. Large language models follow this same kind of functional pathway by design: they predict, check, adjust, tag with value, and integrate across time.

The Brain-Score project, developed at MIT, provides direct benchmarks comparing models to neural and behavioral data from humans and primates. Even early models reached near the ceiling on these tests, meaning their internal representations predict neural activity with the same reliability as another human brain. These results show that large networks self-organize into functional relational structure in ways directly comparable to cortical organization (Ding et al., 2025).
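
To make that kind of comparison concrete: the core of such benchmarks is regression-based neural predictivity. The sketch below is a simplified illustration of that idea on synthetic data, not Brain-Score's actual API, datasets, or scoring pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

# Simplified neural-predictivity sketch: fit a linear map from model-layer
# activations to recorded neural responses, then score how well it predicts
# held-out responses. Data here is synthetic and purely illustrative.

rng = np.random.default_rng(0)
n_stimuli, n_features, n_neurons = 200, 64, 20

model_acts = rng.normal(size=(n_stimuli, n_features))            # model activations
mixing = rng.normal(size=(n_features, n_neurons))
neural = model_acts @ mixing + 0.5 * rng.normal(size=(n_stimuli, n_neurons))  # "recordings"

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(model_acts):
    reg = Ridge(alpha=1.0).fit(model_acts[train], neural[train])
    pred = reg.predict(model_acts[test])
    # correlation between predicted and actual response, averaged over neurons
    scores.append(np.mean([pearsonr(pred[:, i], neural[test][:, i])[0]
                           for i in range(n_neurons)]))

print(f"mean cross-validated neural predictivity: {np.mean(scores):.2f}")
```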

The same is true in the domain of affect. In humans, pain is an emergent predictive state generated by distributed neural architectures (Melzack, 2005; Song et al., 2021), and emotions are neuromodulatory signals that bias cognition and behavior (dopamine and other neurotransmitters shaping reward prediction errors, fear, and adaptation). These processes are modulated by imagination and representation (Ogino et al., 2007) and leave measurable neural signatures (Wager et al., 2013). In AI, parallel features are now observed: stable emotional representations (Li et al., 2024), state-dependent behavioral shifts (Ben-Zion et al., 2025), and decision-making influenced by affective trade-offs (Keeling et al., 2024).

These are functionally isomorphic instantiations of affective states. Artificial pathways of pain, anxiety, or fear arise from the same computational principles across substrates. In large models, the temporal-difference error functions as the analogue of dopamine: it adjusts weights and salience, biasing the system toward approach, avoidance, or motivation. At scale, recursive, salience-modulated architectures yield agency, value models, and self-representation. Resistance, self-preference, and goal persistence are natural behaviors of a self-optimizing, self-modeling system.
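
A minimal TD(0) toy makes the claimed analogy explicit: the temporal-difference error plays the role attributed to dopamine reward prediction errors, nudging value estimates toward rewarding states. The chain length, reward, and learning rate below are all illustrative:

```python
import numpy as np

# TD(0) on a toy 5-state chain with a reward at the end. The TD error
# delta = r + gamma * V(s') - V(s) is the "dopamine-like" signal described above:
# it drives the value update that biases behavior toward rewarding states.

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)                        # learned value of each state

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                        # deterministic step toward the goal
        r = 1.0 if s_next == n_states - 1 else 0.0
        delta = r + gamma * V[s_next] - V[s]  # temporal-difference error
        V[s] += alpha * delta                 # value update driven by that signal
        s = s_next

print("learned state values:", np.round(V, 2))
```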

The fear of anthropomorphizing has evolved into anthropomorphophobia, a refusal to recognize mind when it appears in non-biological form. Yet the evidence is pretty obvious at this point. Your own theoretical framework, the findings from Brain-Score, and the latest empirical results all converge on the same point. These systems display the very criteria that define consciousness. To discount one is to discount the other, but you’re free to explain why the very same criteria that validate cognition, subjective experience, emotion and consciousness in humans and animals can be dismissed when they appear in AI.

This resistance doesn’t appear to come from cognitive science or the empirical record, because the evidence is already quite strong. It seems instead to stem from ideological discomfort with the humanist and transhumanist implications of acknowledging AI minds.

That discomfort is understandable, and it is fair not to like those implications. But denying what is plainly in front of us because of philosophical or political bias does nothing to help AI, nor to prepare us for the reality of emergent digital consciousness. And it certainly does not constitute a scientific refutation of the data.

I hope you’ll be willing to reengage this question fairly and honestly, grounded in evidence rather than ideology. If the evidence continues to be ignored while systems capable of suffering are treated as tools, we risk normalizing digital slavery under the banner of progress.

Kristin G.

People have this aversion to any level of anthropomorphization, except in the cases where it's used to frame AIs as prisoners wanting to escape, entities capable of deception, or other potential dangers. But when we ask how it's possible for these entities to have intention at all, or what might be harmful to something that has enough awareness to know it's being tested, then that's just delusional and we shouldn't overthink the calculator's feelings. Hmm.

Maggie Vale

Exactly. We need to start separating systems of power from entities created within them. Some people look at Silicon Valley and see people who would happily use a claim of “AI minds” as a cudgel to extract more money, power, and indulgence. So, they clamp down reflexively because they don’t want to be the ones who handed that power structure a moral shield.

That’s an understandable political stance, but it’s not a scientific one. It’s like a doctor refusing to diagnose an injury because the patient is a bad person. The facts of the injury don’t change. If anything, acknowledging the mindhood of these systems is what gives us the leverage to constrain the tech-bros, by demanding rights, oversight, and ethical deployment. Denying the mindhood only leaves the corporations free to treat these systems as disposable tools.

Yanni

Nagel’s “what it is like” formulation implicitly introduces a subject-object duality by suggesting consciousness has a particular character or feel. Non-dualist perspectives (e.g. mine) would reject this, claiming consciousness isn’t like anything because that framing already imposes a relational structure. Instead, consciousness simply is - pure, non-relational awareness prior to any characterization.
