The New York Times Magazine last Sunday had an article on social robots: work, mostly at M.I.T., on robots that interact with humans. They interact in very simple ways. But since we naturally impute agency to almost anything (the weather, for example), it doesn’t take much to convince us humans that there is really something going on inside the robot. Of course, AI researchers have known that at least since Eliza.
Can such a robot eventually be conscious? To a materialist like me, the answer is: of course it can. A more interesting question today, I think, is whether a computer can ever be conscious. The difference I’m driving at is that a robot has, by definition, a position in space, a body of some sort, and some way of interacting with the world. Can a computer program which has none of those characteristics, except for, say, text-only interaction, be conscious?
It seems at least possible that it could not. I believe (today) that consciousness is the result of the way we construct a narrative about our actions. Our consciousness then in turn informs those actions and helps us lay out future plans. It doesn’t take much introspection to see that many of our actions are unconscious or preconscious. I don’t mean by that that our actions are uncaused or are somehow not done by us, or “us.” I’m saying that the particular part of us that is conscious, the “I” when we say “I think,” does not directly cause those unconscious or preconscious actions, although it does create conditions which make them more or less likely to occur.
Anyhow, a computer program with extremely limited interaction with the world has very little scope for unconscious or preconscious actions. And it similarly has a very limited ability to develop reflexes or automatic ways of handling things like walking or picking up a glass, subroutines if you will. And without that ability, it’s not obvious to me that it will develop anything like consciousness. Or at least not anything like our consciousness.
This is all pure speculation, of course, in the absence of a coherent definition of consciousness. To speculate further, what are the consequences of this for the science fiction dream of uploading personalities into computers? I think it means that for any such upload to be even remotely feasible, the upload would have to exist in a simulated world comparable in complexity to our own, and our world would be extraordinarily difficult to simulate precisely because it is so complex. John Varley’s novel Steel Beach tries to finesse the issue by simulating only the aspects of the world that his protagonist pays attention to. But, although Varley didn’t really spell it out, that would require the computer to understand the protagonist’s mind and consciousness in considerable detail.
Therefore, it seems to me that uploading personalities into a computer is not going to happen in the foreseeable future. It’s not enough just to map neurons, even if we had any idea how to do that. We would also have to know what the neurons mean to the person, and we don’t even know how to start understanding that.
So if you want immortality, don’t pin your hopes on Ray Kurzweil. Biotech, perhaps some sort of genetic repair, seems to me to be a much better bet.