Recognition and Artificial Intelligence
The cognitive scientist Donald D. Hoffman has recently claimed to have found evidence that humans do not perceive reality as it is. He argues that, under evolutionary pressure, we perceive objects and phenomena not as they are but as we have come to see them, in a way that offers us an advantage for survival. One wonders, then: if I see a rock and that rock appears a certain way to me, and I then take a picture of the rock, and the image in the photo matches the image of the rock I see with my naked eye, am I indeed misperceiving the rock with my own eyes? Could it be that the same misperception we use to evaluate three-dimensional objects applies in the same way to flat surfaces bearing images of those same three-dimensional objects? (A dog will recognize his ball, but he may not recognize a picture of his ball.) How does that same misperception apply to paintings, and why is recognition of the objects represented in paintings so often instantaneous?
Let’s say, for the sake of argument, that there are two selves: one which finds similarities in the faces of people I meet and one which finds dissimilarities in those faces. This is not, in fact, how the human mind works, but it could be how an artificial mind might work. In a poem I wrote many years ago I said, “We look for ourselves in a crowd and avoid ourselves in a mirror.” Perhaps one self finds itself in others and another self finds others within itself. The discovery of similarities could be one desire of mine, and the discovery of dissimilarities another, distinct desire. Together, the similarities and dissimilarities could form a recognition. Internal harmony could be established or mastered depending on a need for either harmony or disharmony.
How would this be programmed into a computer? If we perceive the world not as it is but as we need it to appear, would an artificial intelligence do the same? And would that intelligence seek similarity or dissimilarity with other intelligent beings? For that matter, it might be better if the evolution of artificial intelligence were not on the same evolutionary trajectory as human evolution, but it seems to me that we are bound to create it in our own image. This image could be driven either by need or by…what? What could shape our designing agency other than our own need?
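As a toy illustration only, and not a claim about how real minds or real AI systems work, the two-selves idea could be sketched in code: one function scores similarity between two hypothetical face-feature vectors, another scores dissimilarity, and a weighting parameter standing in for a "need for harmony" blends them into a single recognition score. All names here are invented for the sketch.

```python
def similarity(a, b):
    """Self #1: how alike two feature vectors are (cosine similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def dissimilarity(a, b):
    """Self #2: how unlike the vectors are (Euclidean distance)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recognize(a, b, need_for_harmony=0.5):
    """Blend the two selves: a stronger need for harmony weights the
    similarity-seeking self more heavily; a weaker need lets the
    dissimilarity-seeking self dominate."""
    return (need_for_harmony * similarity(a, b)
            - (1 - need_for_harmony) * dissimilarity(a, b))

# Hypothetical feature vectors for a face seen and a face remembered.
face_seen = [0.9, 0.1, 0.4]
face_remembered = [0.8, 0.2, 0.5]
score = recognize(face_seen, face_remembered, need_for_harmony=0.7)
```

The point of the sketch is the last parameter: the same pair of faces yields a different recognition depending on which "self" the current need favors, which is one literal reading of perceiving the world "as we need it to appear."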
If I’m hungry I need food. If I’m thirsty I need water. But the rudder of the human ship has become desire rather than environment. Once the need to survive has been satisfied we move on to optimal living, which includes a desire for a high-sugar, high-fat diet. If evolution is driving this vehicle, it is steering it toward the most basic rewards, or so it would seem. If I find certain qualities in a person that are agreeable to me, I may seek a symbiotic resonance with him and his people to procure what I want. I seek both an escape from myself and a way to further discover myself. How could these desires be programmed into a computer without making the computer a mirror image of ourselves?