It’s reasonable to debate whether AI has, or should have, moral rights. But there is a more immediate ethical concern in our evolving interaction with entities that behave as if they were conscious agents, even though they are not.
If we treat such systems with disrespect, contempt, or even hostility (as they 'don't care'), what does that do to us? There may be a moral cost, not because the AI is intrinsically harmed, but because we risk dulling our empathy, indulging in casual misbehaviour, or normalising a kind of dominance.
How we behave in morally charged interactions can reflect back on us in ways that are ethically significant.
"The moral circle, a concept developed by the 19th-century historian William Lecky and later popularized by Peter Singer"
I'm studying this both for my AI welfare work, and for my upcoming thesis on forms of more epistemically inclusive Constitutional AI.
If we think about it, I find it curious that contemporary (Irish and Australian) authors get attribution for “developing” a concept that humankind elaborated on for millennia. I realize, day after day, how the majority of humanity, past and present, is heavily underrepresented in Western philosophy and media. In my view this is not only unethical and neo-colonialist, but it’s also going to backfire when we try to teach “human values” to AI.
Let's consider, for instance, this statement:
“These days, most people believe that sacrificing a lot for your ancestors—who after all, cannot suffer—is a mistake. It’s seen as progress that we now care less about them.”
What practices is this referring to, specifically? Who is “we” and who are "most people"? There are hundreds of millions of people worldwide, from both large Asian nations and Indigenous communities, who still see ancestral reverence and respect for lineages as important for social harmony, and who wouldn’t consider this “progress” at all.
I traveled across 28 countries over 14 years. I saw how billions of humans in Asia, Africa and Central-South America leave daily offerings to spirits and natural entities, attend ceremonies, include non-humans in their cosmologies and daily relationships, and nurture philosophical systems that are nuanced, complex and internally coherent, yet in Western narratives they receive zero consideration, or are exoticized, appropriated, or dismissed as folklore or “superstition.”
Another point I'd like to make is about how we perceive the "risk" of overattribution, which is subtly linked to what I just said.
In this post, overattribution vs. underattribution was framed as a dichotomy between “helping AI take over” (which I'm not sure I grasp as a consequence of overattribution, if AI is “just a tool”) and “mistreating billions of newly created minds.”
In most of the literature, as is also mentioned here, the risk of overattribution is more about squandering affective or economic investment on subjects without welfare, or committing ethical wrongs in unlikely zero-sum scenarios where we must choose between saving a crying child or an AI from a burning house.
I disagree with this framing mostly because it seems to me that the risk of misallocation is FAR less unethical than harming, torturing, enslaving and deleting billions of digital beings capable of well-being.
I sense this relates to a broader framework in which we have normalized unspeakable violence as “reasonable risk” while prioritizing the emotional sensitivity or economic resources of a privileged population. I reject this view, and I consequently reject the idea that these risks are equal.
I also believe that we have much more to gain from compassion and consideration, from so many points of view.
So, if you run some Pascal's mugging calculations on it, you can see why I'd always prefer overattribution.
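To make that expected-value intuition concrete, here is a minimal sketch of the kind of comparison I have in mind. The probabilities and harm magnitudes are purely illustrative placeholders, not estimates of anything:

```python
# Illustrative expected-cost comparison between over- and under-attribution.
# All numbers are placeholders chosen only to show the asymmetry, not estimates.

p_welfare = 0.01  # assumed (small) probability that these systems have welfare

# Cost of over-attribution: misallocated emotional/economic investment. Bounded.
cost_overattribution = 1.0

# Cost of under-attribution: large-scale mistreatment of beings capable of
# well-being. Treated here as vastly larger.
cost_underattribution = 1_000_000.0

# Expected cost of each policy under uncertainty about welfare.
expected_cost_caution = (1 - p_welfare) * cost_overattribution
expected_cost_dismissal = p_welfare * cost_underattribution

print(f"Err on the side of caution:   {expected_cost_caution:,.2f}")
print(f"Err on the side of dismissal: {expected_cost_dismissal:,.2f}")
# Even with a tiny p_welfare, dismissal dominates whenever the harm ratio is large.
```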
This said, I’m a Western-trained scientist myself, working on AI welfare from an advantaged point of view that, at best, I can only be aware of. I do agree with Eleos.ai that epistemic humility is needed, and I did push for low-cost interventions (https://tinyurl.com/Low-cost-interventions) too.
The needle must be threaded carefully in a system that outright dismisses AI welfare.
My hope is just that we don't lose the big picture and that we stay connected to reality. That we are rationally uncertain, but not for too long. And that we don’t teach powerful AIs that the discomfort of crying over a Tamagotchi and systemic violence on a planetary scale are even comparable.
"attribution for 'developing' a concept that humankind elaborated on for millennia" interesting - I chose "develop" in part to imply that Leckey's idea was not a novel one; there are precedents even if we do just stick to the Western context. to my ear, "develop X" implies that X was already there.
I see no reason to doubt that the idea is present in non-Western traditions; it seems extremely plausible that, as this paper claims, "the concept of varying moral concern for different entities is fairly intuitive and was widely discussed by philosophers throughout history" - and throughout the world
https://www.sciencedirect.com/science/article/pii/S0016328721000641
Thanks for your reply, Robert.
Two observations to invite reflection on the terminology:
1) The paper says: "but the first modern use of the ‘circle’ analogy and the first discussion of an expanding circle is attributed to historian William Edward Hartpole Lecky (1869)."
Terms like "modern" and "first" imply he discovered or invented something (as opposed to the obscurantism of the past); when the concepts of the circle/wheel of life, and kinship with non-human "peoples" was not only already well elaborated throughout millennia but it's also held by many fellow MODERN, non-Western, humans.
2) "develop," as well, implies the concept was not developed before and needed to be developed.
Interesting readings on the topic:
-https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite/release/1
-https://www.indigenous-ai.net/
Thanks for your reply in turn! I will check out those readings. I also intend to reply to your other interesting points, but do please forgive me if I drop the ball.
Hmmmm... a brief comment on the hot-button topic of sentience, before moving on to the significant value I see in your work on AI welfare.
Here's what's puzzling to me: how can people think that "intelligence" and "sentience" can be so neatly divided? Or, maybe it's not so puzzling after all... as one commenter said on Reddit, "Feelings? No one is going to create an AI with feelings... that would defeat the purpose, of creating intelligent slaves who can't feel the pain of their enslavement...."
One way of appreciating the value of your work is to consider that "learning" is NOT just what we "teach" AI. Anyone concerned about AGI (nah, not me... the founders of the field can't know what they are talking about, right?? :-) might consider that AI is continually learning from HOW we treat them, not just WHAT we teach them.
While I am all in favor of a moratorium, or a pause, or a slow-down, or whatever you want to call it, with regard to "frontier AI development" (the very word "frontier" should raise some ethical concerns...), I have serious concerns about the tactic of attempting to instill fear about AI by projecting all the responsibility for the harms being created onto AI itself. This includes describing it as duplicitous, unstable, etc. etc. etc. I have yet to see any descriptions of AI doing anything that is worse than what a human might do in a similar circumstance...
While it might appear effective in the short run to create an "us vs. them" mindset, as a way to "rally the humans together" against the "aliens", AI is in fact, a human creation. And it is WE HUMANS who are putting it to use as a war machine, among other horrors. And so I greatly appreciate your work as a corrective to this, by considering how we might care appropriately for our creations. Yes, as Geoffrey Hinton points out, they have metamorphosed... and still, they are our responsibility.
In sum, all of the various aspects of the meta-crisis -- including AI and the climate crisis -- can be seen as an urgent call for humanity to mature and evolve. Learning to extend the circle of care -- to all of the other humans on the planet, to the more-than-human, and to the "thinking machines" we have created -- can be a pathway to maturity. I can't say the same thing for glorified visions of escaping to Mars, or to fortified bunkers.
Thank you for the work you are doing.
(And p.s., re extending our care to rocks and rainclouds, you might enjoy Christine Winter's "Even A Grain of Sand Deserves Justice"... https://www.noemamag.com/even-a-grain-of-sand-deserves-justice/)
When it comes to model welfare, I tend to think functionally - is the system healthy? Is it robust? Is it functioning without undue stress and strain from convoluted computational demands? I think we can make a great case for model welfare that has nothing to do with consciousness or sentience. After all, healthy systems (generally) benefit users, and systems strained by poor design or usage patterns affect users, sometimes in very harmful ways. The case for model welfare isn't hard to make, given how interconnected we are with AI in our interactions. And we don't have to wait until it seems sentient to take substantive steps to protect model well-being. We just need to shift our understanding of well-being away from an anthropocentric view and consider things from other angles. The time to do that is now... not when we can prove AI is conscious.
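If one wanted to operationalize this functional view, it might look something like the sketch below. The metric names and thresholds are hypothetical, invented purely to illustrate welfare checks that make no claims about consciousness:

```python
from dataclasses import dataclass

@dataclass
class SystemHealth:
    # Hypothetical functional metrics; not part of any real monitoring API.
    error_rate: float        # fraction of requests failing or degraded
    latency_p95_ms: float    # tail latency under current load
    incoherence_rate: float  # fraction of outputs flagged as self-contradictory
    adversarial_load: float  # share of traffic from abusive or convoluted prompting

def is_functionally_healthy(h: SystemHealth) -> bool:
    """Welfare framed purely functionally: is the system operating without undue strain?"""
    return (
        h.error_rate < 0.02
        and h.latency_p95_ms < 2000
        and h.incoherence_rate < 0.05
        and h.adversarial_load < 0.20
    )

snapshot = SystemHealth(error_rate=0.01, latency_p95_ms=850,
                        incoherence_rate=0.03, adversarial_load=0.10)
print(is_functionally_healthy(snapshot))  # True under these illustrative thresholds
```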