I like calling the subject “digital people.” Personhood opens up a lot for discussion.
It feels like there is some paradox hidden in the argument for simulation. Isn’t any actual simulation, in practice if not by definition, only an approximation of that which is simulated? As in: a sufficiently detailed map is NOT a map but necessarily IS the territory. This point might be illuminated by philosophical theories about the nature of identity, uniqueness, and sameness. I’ve tried, but can’t read that stuff.
I think that simulation has another big problem beyond those of substrate independence or level of approximation. Simulated conscious entities could not have the experiences and behavior of a person unless they also had a simulated body in a dynamic environment. We could simulate a body, and simulate experiences and behavior, but that would have to occur in some framework that depends on the outside, real world effectively simulating itself for the sake of the simulated conscious entity.
It seems like this idea of a perfect simulation of a human mind requires a perhaps infeasible computation. That is, you need a functional-equivalence simulation of a brain and a simulated body in a simulated world that itself simulates, as “sensory input” to the sim-body, the entire outside world. Karnofsky (in his Digital People FAQ) doesn’t see this as a problem: you just do all these computations to the level of detail needed for the digital people to have a functional life. That’s like saying that I would not go insane if I were suddenly plunged into a world of low sensory fidelity, limited information about the outside world, and drastically limited agency in a toy body.
If I had the time, I would plow through Robin Hanson’s “The Age of Em” to see how he handles this can of worms. Reviews have failed to tell me how the world of his Ems (emulations) works.
However, I did read Neal Stephenson’s “Fall; or, Dodge in Hell.” He imagines a way that a population of emulated mind/bodies might incrementally create their own world, provided we support them with enough compute power in our world. This allows him to make plausible suggestions about social relations among emulations, and about what any connection to our world would be like.