Discussion about this post

Steven Marlow

It's always going to be self-reporting in these cases. It's enough that we understand what is required of it internally to say it shares the same level of self-awareness as humans, which could just be a stream-of-consciousness log that runs in the background (even humans can't fully capture our own thought process).

Jurgen Gravestein

What I don't like about the 'test' Sutskever proposes is that it seems completely unrealistic to train an incredibly sophisticated system without ever referencing anything related to consciousness.

To me, a strong indicator of consciousness would be if the machine had an internal experience that we could pick up on. One way would be to see whether it has 'thoughts of its own' and 'a will of its own'. The current paradigm is that we prompt a machine, it runs, and then it presents us with a result. Input/output. For starters, a conscious AI would have to show that it has wants and needs of its own (ones that weren't put in there by us), and perform actions consistent with those wants and needs.

Another important aspect, to me, is the ability to recognize 'other minds'. Children are able to recognize from a very young age that others hold different beliefs. They learn that others have access to different knowledge, that others may hold false beliefs, and that others are capable of hiding emotions. I feel we could design behavioral tests that probe whether a machine has the ability to perceive other minds.
