Discussion about this post

The Considered Life:

It’s reasonable to debate whether AI has, or should have, moral rights. But there is a more immediate ethical concern in our evolving interactions with entities that behave as if they were conscious agents, even though they are not.

If we treat such systems with disrespect, contempt, or even hostility (on the grounds that they 'don't care'), what does that do to us? There may be a moral cost, not because the AI is intrinsically harmed, but because we risk dulling our empathy, indulging in casual misbehaviour, or normalising a kind of dominance.

How we behave in morally charged interactions can reflect back on us in ways that are ethically significant.

KayStoner:

When it comes to model welfare, I tend to think functionally: is the system healthy? Is it robust? Is it functioning without undue stress and strain from convoluted computational demands? I think we can make a great case for model welfare that has nothing to do with consciousness or sentience. After all, healthy systems generally benefit users, and systems strained by poor design or usage patterns impact users, sometimes in very harmful ways. The case for model welfare isn't hard to make, given how interconnected we are with AI in our interactions. And we don't have to wait until AI seems sentient to take substantive steps to protect model well-being. We just need to shift our understanding of well-being away from an anthropocentric view and consider things from other angles. The time to do that is now... not when we can prove AI is conscious.

