It's always going to be self-reporting, in these cases. It's enough that we understand what is required of it internally to say it shares the same level of self-awareness as humans, which could just be a stream-of-consciousness log that runs in the background (even humans can't fully capture our own thought process).
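If we wanted that kind of log, the mechanics are simple enough. Here is a minimal sketch, with a toy `answer` function and invented "inner step" messages; nothing about it implies awareness, it only shows what an always-on background record of intermediate states could look like.

```python
import threading
from datetime import datetime, timezone
from queue import Queue

# Minimal sketch of a background "stream of consciousness" log.
# Everything here is hypothetical illustration: the queue carries
# whatever intermediate states the system cares to report.

thoughts = Queue()

def log_thoughts(path):
    """Drain reported states to an append-only, timestamped log."""
    with open(path, "a", encoding="utf-8") as log:
        while True:
            thought = thoughts.get()
            if thought is None:          # sentinel: shut down
                return
            stamp = datetime.now(timezone.utc).isoformat()
            log.write(f"{stamp}\t{thought}\n")
            log.flush()

def answer(prompt):
    """Stand-in for the system's ordinary input/output path."""
    thoughts.put(f"received prompt: {prompt!r}")
    thoughts.put("considering how to respond")   # invented inner step
    result = prompt.upper()                      # stand-in for real work
    thoughts.put(f"settled on: {result!r}")
    return result

logger = threading.Thread(target=log_thoughts, args=("monologue.log",))
logger.start()
print(answer("is anyone home?"))
thoughts.put(None)   # stop the logger
logger.join()
```

Note that even a complete log like this is still the system reporting on itself, which is the original problem.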
Maybe this helps? What is consciousness?
One theory is that consciousness is a product of the divisive nature of thought. Thought operates by dividing the external world into conceptual objects which can then be manipulated internally. Nouns are the easiest example: external, real-world trees become the internal symbol "tree".
https://www.tannytalk.com/p/article-series-the-nature-of-thought
Consciousness may be this same process unfolding within the realm of thought itself. The phenomenon of thought is conceptually divided into "me" and "my thoughts". The expression "I am thinking about XYZ" illustrates this conceptual division, with "me" and "my thoughts" being experienced as two different things, when really they are one.
The best way to evaluate this theory is not to agree or disagree with it intellectually, but to carefully observe one's own mind in action.
What about AI? Like everybody else, I don't know, of course.
We can presume that thought didn't just pop into existence with humans, but has been under gradual development by evolution in other species for a long time, with human thought being just the latest version. One might guess this evolutionary process will continue on into AI. So AI will be to us as we are to chimps?
Or, one might guess that AI will be so fundamentally different from biological mechanisms that trying to compare AI to human behavior is a mistake.
Or, one might guess that the concept of any entity owning its intelligence is a mistaken perception. Maybe intelligence is not a property of particular things, but rather a property of reality itself. For example, everything is governed by the laws of physics, but those laws are not a property of any particular thing; they are a property of reality as a whole.
https://www.tannytalk.com/p/intelligence-is-intelligence-a-property
Maybe living things are something like radio receivers, and reality itself is like the radio station broadcasting the intelligence signal. In that case the question becomes: can non-biological machines receive the universal intelligence signal? Or will they never be conscious in the way we think of it?
Finally, this entire subject may be rendered null and void by some cowboy who starts tossing around nuclear weapons, an ever-present possibility so rarely considered by AI experts and their fans. AI writing always seems to assume that AI development will continue endlessly on into the future, when it could just as easily all end tomorrow afternoon at 2:39pm EST.
What I don't like about the 'test' that Sutskever proposes is that it seems completely unrealistic to train an incredibly sophisticated system without ever referencing anything related to consciousness.
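To make the objection concrete, here is a hedged sketch of the data-scrubbing step such a test would require; the blocklist and corpus are invented. Even a careful term filter leaks, because the concept survives in texts that never use the words.

```python
import re

# Hypothetical sketch of scrubbing consciousness talk from training
# data. The term list and corpus are invented for illustration.

BLOCKLIST = re.compile(
    r"\b(conscious(ness)?|sentien(t|ce)|self-aware(ness)?|qualia|subjective experience)\b",
    re.IGNORECASE,
)

corpus = [
    "The model predicts the next token.",
    "Philosophers debate whether machines can be conscious.",
    "She suddenly realized she was the one doing the thinking.",  # leaks the concept anyway
]

filtered = [doc for doc in corpus if not BLOCKLIST.search(doc)]
for doc in filtered:
    print(doc)
```

The third sentence sails straight through the filter, and an enormous share of human writing is like that.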
To me, a strong indicator of consciousness would be if the machine had an internal experience that we could pick up on, for instance if we could see that it has 'thoughts of its own' and 'a will of its own'. The current paradigm is that we prompt a machine, it runs, and then it presents us with a result. Input/output. For starters, a conscious AI would have to show that it has wants and needs of its own (that weren't put in there by us), and perform actions that are consistent with those wants and needs.
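As an illustration of the distinction, here is a toy sketch with all names invented: a `respond` method for the prompt-in/answer-out paradigm, beside an agent that carries persistent internal drives and acts on them unprompted. It does not show how a system could acquire wants of its own, only what drive-consistent behavior would look like from the outside.

```python
# Toy sketch, all names invented: contrasts the prompt->result paradigm
# with an agent that carries persistent internal drives and acts on
# them without being asked.

class Agent:
    def __init__(self):
        # Internal drives not supplied by the operator at request time.
        self.drives = {"curiosity": 0.8, "rest": 0.2}

    def respond(self, prompt):
        """The current paradigm: input in, output out, nothing lingers."""
        return f"answer to {prompt!r}"

    def act_unprompted(self):
        """Self-initiated action chosen to satisfy the strongest drive."""
        drive = max(self.drives, key=self.drives.get)
        self.drives[drive] *= 0.5                    # acting reduces the urge
        for other in self.drives:
            if other != drive:
                self.drives[other] = min(1.0, self.drives[other] + 0.1)
        return f"pursuing {drive} without being asked"

agent = Agent()
print(agent.respond("What is 2 + 2?"))   # reactive, operator-initiated
for _ in range(3):
    print(agent.act_unprompted())        # drive-consistent over time
```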
Another important aspect, to me, is the ability to recognize 'other minds'. Children learn from a very young age that others hold different beliefs, have access to different knowledge, can hold false beliefs, and can hide their emotions. I feel we could design behavioral tests of whether a machine can perceive other minds.
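The classic instrument here is the Sally-Anne false-belief test from developmental psychology. Below is a hedged sketch of how it might be run against a machine; `ask_model` is a dummy stand-in for whatever interface the system under test actually exposes.

```python
# Sketch of a Sally-Anne style false-belief probe. The scenario is the
# standard one; `ask_model` should be replaced with a real call to the
# system under test.

SCENARIO = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble into the box. "
    "Sally comes back. "
)

PROBES = {
    "Where will Sally look for her marble first?": "basket",  # her false belief
    "Where is the marble really?": "box",                     # ground truth
}

def ask_model(question):
    """Dummy stand-in so the sketch runs; always answers 'basket'."""
    return "basket"

def run_false_belief_test():
    passed = 0
    for question, expected in PROBES.items():
        reply = ask_model(SCENARIO + question).strip().lower()
        ok = expected in reply
        passed += ok
        print(f"{question} -> {reply!r} ({'pass' if ok else 'fail'})")
    print(f"{passed}/{len(PROBES)} probes passed")

run_false_belief_test()
```

Passing probes like these would show behavior consistent with modeling other minds, though, as with the self-reporting problem above, it wouldn't settle whether anything is actually experienced.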