
I like this post.

I'm almost sure that Chalmers's Vulcans (agency + consciousness, but no sentience/valence) are an impossible combination, because the essence of consciousness is integration, and valence (as per https://direct.mit.edu/neco/article/33/2/398/95642/Deeply-Felt-Affect-The-Emergence-of-Valence-in) is too important a feature of the agent's behaviour not to be represented in the integrated consciousness. For example, we can afford not to be aware (in our phenomenal consciousness) of the workings of our guts and immune system exactly because we have no agentic control over them; they just work on their own.

Consciousness evolves/develops in service of agency. So I think the developmental progression (and, therefore, the permissible combinations) will be:

1. There is only agency. ->

2. There is agency and basal consciousness which represents nothing much apart from valence, i.e., sentience (cf. https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02714/full). ->

3. There are all three things: agency, basal consciousness/sentience, and more complex access consciousness (integrating visual percepts, audio percepts, thoughts, and other information).

It's also worth adding here that under minimal physicalism (https://academic.oup.com/nc/article/2021/2/niab013/6334115), a kind of panpsychism that basically says that (functional) representation _is_ consciousness (or awareness, which in minimal physicalism is a synonym for consciousness), sentience is trivialised: there would be very few systems that are in category 1 but not in category 2, because valence (in relation to one's own agency) is really quite easy to represent, and it's useful, so almost all agents will represent it and therefore will be _conscious (aware)_ of it.
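
To make "easy to represent" concrete, here is a toy sketch of my own (not from any of the linked papers; the environment, probabilities, and learning rate are invented): an agent that summarises how well its own actions have been going in a single scalar and uses that summary to modulate exploration.

```python
import random

# Toy sketch: representing valence "in relation to one's own agency" can be
# as cheap as one scalar. The agent tracks an exponential moving average of
# the rewards its own actions bring in, and explores more when that summary
# is negative. Environment, probabilities, and learning rate are invented.

class ToyAgent:
    def __init__(self, learning_rate=0.1):
        self.valence = 0.0              # running summary of "how it's going"
        self.learning_rate = learning_rate

    def act(self):
        # Low valence -> try something new; high valence -> keep the habit.
        explore_prob = 0.5 if self.valence < 0 else 0.1
        return "explore" if random.random() < explore_prob else "exploit"

    def observe(self, reward):
        # One multiply-add per step maintains the representation.
        self.valence += self.learning_rate * (reward - self.valence)

agent = ToyAgent()
for _ in range(20):
    action = agent.act()
    # Invented environment: exploiting pays a little, exploring is noisy.
    reward = 1.0 if action == "exploit" else random.uniform(-1.0, 2.0)
    agent.observe(reward)
print(f"valence after 20 steps: {agent.valence:.2f}")
```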

Or maybe this is my misunderstanding of the terms, and basal consciousness, or minimal-physicalism-style awareness-as-consciousness, is "free of qualia" and therefore of suffering _even if valence is represented there_, and "true" valence, i.e., suffering, as well as other qualia (such as redness), only arise in the _dream_ which an agent continuously creates and plays out on its representational screen. Joscha Bach explains qualia in this way: a quale is a virtual quality inside a dream in which the "character" (which represents oneself) is also aware of its own awareness. The state of this dream (i.e., a "frame") is a complex object, so it is harder to represent than just the agent's valence, or video or audio percepts. Thus, under this conceptualisation, the ladder of "permissible" combinations becomes as follows:
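
To make the complexity contrast concrete, here is a toy data-structure sketch of my own (loosely paraphrasing Bach's picture, with invented field names): bare valence is a single number, whereas one "frame" of the dream is a nested object whose self-model refers to its own awareness.

```python
from dataclasses import dataclass

# Toy contrast (my own sketch, not anything Bach has written): bare valence
# is one number, while a "frame" of the dream is a nested object whose
# self-model refers to its own awareness.

bare_valence = -0.3  # the basal, valence-only representation: one scalar

@dataclass
class SelfModel:
    valence: float                 # interpreted as pleasure or suffering
    aware_of_own_awareness: bool   # the reflective ingredient

@dataclass
class Frame:
    visual_percepts: list
    audio_percepts: list
    thoughts: list
    character: SelfModel           # the self inside the dream

frame = Frame(
    visual_percepts=["red patch"],
    audio_percepts=["low hum"],
    thoughts=["this hum is unpleasant"],
    character=SelfModel(valence=-0.3, aware_of_own_awareness=True),
)
```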

1. Just agency

2. Agency + phenomenal/basal or even access consciousness, but without a reflectively aware character in this field of consciousness; such a character would be required for sentience.

3. Agency + consciousness + a reflectively self-aware character within the field of consciousness, which will interpret valence as pleasure or suffering, i.e., will be sentient. (BTW, this leaves unresolved the question of whether the "character" or the "host" is sentient, or both.)

Again, it's possible in principle to imagine an agent that is phenomenally conscious, or even reflectively self-aware within its consciousness, but avoids representing valence and therefore does not suffer; in practice, though, representing valence is useful and not very difficult, so almost all evolved agents (including trained DNNs) will do it. Agents designed intelligently (top-down) may be strategically deprived of this feature of their character's representation; that will make them less capable, but the capability gap may be offset at the level of supra-system design. Trained DNNs could also in principle be "surgically" modified so that they don't represent their own valence, as per https://arxiv.org/abs/2306.03819.
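
As a hedged illustration of what such "surgery" could look like, here is a minimal sketch in the spirit of linear concept erasure. It is not the linked paper's actual method, just a crude mean-difference projection on invented data, with invented names throughout.

```python
import numpy as np

# Simplified sketch of linear concept erasure: estimate a direction in
# activation space that carries a concept (here, "valence"), then project
# every activation onto the orthogonal complement of that direction.
# Crude mean-difference version only, with randomly generated stand-in data.

rng = np.random.default_rng(0)

# Stand-in activations: 200 samples of a 16-dim hidden layer, with a
# planted "valence" signal along a random direction.
true_direction = rng.normal(size=16)
true_direction /= np.linalg.norm(true_direction)
valence_labels = rng.choice([-1.0, 1.0], size=200)
activations = rng.normal(size=(200, 16)) + np.outer(valence_labels, true_direction)

# Estimate the valence direction as the difference of class means.
direction = (activations[valence_labels > 0].mean(axis=0)
             - activations[valence_labels < 0].mean(axis=0))
direction /= np.linalg.norm(direction)

# "Surgery": remove the component of every activation along that direction.
erased = activations - np.outer(activations @ direction, direction)

# A linear read-out of valence is strong before erasure and near zero after.
def valence_readout(acts):
    return np.corrcoef(acts @ true_direction, valence_labels)[0, 1]

print("readout before erasure:", round(valence_readout(activations), 2))
print("readout after erasure: ", round(valence_readout(erased), 2))
```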

The most pressing question, therefore, is whether "pure agents" (category 1) have moral standing.

Here (https://www.youtube.com/watch?v=4Z8UPddh0e4&t=46m45s), Michael Levin talks not about "moral standing" but about "what is worthy of forming a spiritual bond with", which is not exactly "moral standing" but is arguably a _stronger_ qualification, so that anything which qualifies for it _definitely_ should have moral standing. To be "worth forming a spiritual bond with", agents should meet two qualifications: (1) they should have a "shared fate", or a "shared struggle for survival", with us; (2) they should have comparable cognitive light cones (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8988303/), i.e., be neither much shorter-sighted than us (like ants, whose cognitive light cones probably extend just seconds and meters) nor much farther-sighted (like Gaia, which "thinks" on timescales of centuries at least, although its spatial "cone" is comparable to that of humans).

Thus, Levin suggests that neither consciousness nor sentience is in principle required for spiritual bonding with humans and, therefore, for moral standing. Appropriately constructed robots without consciousness or sentience would qualify.

Even though, according to the above view, neither consciousness nor sentience is a _necessary_ qualification for moral standing, each might still be a _sufficient_ one. I.e., if ants are conscious and/or sentient, this might be a reason for them to have moral standing, even though they don't qualify for spiritual bonds with (most) people, per Levin.
