Tuesday, December 15, 2009
'The hard question, of course, is how we could tell that a robot really was conscious, and not just designed to mimic consciousness. Understanding how the robot had been programmed would provide a clue - did the designers write the code to provide only the appearance of consciousness? If so, we would have no reason to believe that the robot was conscious.'
Hmmm. How do I know you are really conscious? Tentatively, by introspection: I am conscious and you are like me, therefore you are conscious. More persuasively, by extrapolating that phrase 'like me': we are empathetically joined at a fundamental level; I know me by knowing you, not the other, introspective, way round. Neither of these would work with a robot. Assuming we knew it was a robot, we would be aware it was not like us at all, so neither introspection nor empathy would apply. Nor would examining the software help. What would software designed to mimic consciousness look like? We have no idea, and the programmer's earnest assurance that his work was designed to produce consciousness would be meaningless. Anyway, what is it about consciousness that confers moral status? This is the gist of the Singer/Sagan article. Very little of what I do is conscious; does that mean that everything else I do has no moral status?
None of which, for the moment, matters. The development of artificial intelligence remains hopelessly stalled.
'For example, the failure of artificial intelligence to produce successful simulations of routine common sense cognitive competences is notorious, not to say scandalous.'
Posted by Bryan Appleyard at 7:13 am