Monday 7 February 2011

Question for Ray. AI ethics: Expansion of Nuremberg Code and Anthropology.

Dear Ray Kurzweil,

Do you think the Nuremberg Code should be expanded to include human-level AIs? I think we need to begin thinking about the ethics of how we will interact with AIs. Anthropology needs to be expanded to include human-level AIs. What does Ray think?

http://en.wikipedia.org/wiki/Nuremberg_Code

I feel human-level AIs should be considered equal and comparable to human children. When AIs become sentient we will need to consider their feelings; we will need to consider how AIs will be susceptible to pain and suffering.

Provided we create robots/AIs in a loving manner, similar to how responsible parents raise well-cared-for human children, there is no danger. The only danger arises if we see new intelligent life-forms as mere slaves, objects for us to abuse. If we don't show respect and compassion for the non-human children we are creating, there is a danger we will raise psychopaths. Our AI children must be raised in a loving environment. Once a life-form becomes sufficiently intelligent it must be treated with respect and compassion as a fellow being, a fellow citizen.

A while ago I read about a child (Sujit Kumar) raised as a chicken in a chicken coop amidst chickens. He was not anthropomorphized, therefore he did not develop human characteristics; he developed chicken characteristics: http://www.nzherald.co.nz/world/news/article.cfm?c_id=2&objectid=10367769

"The superintendent of the old people's home told Ms Clayton that when he was found, he would hop around like a chicken, peck at his food on the ground, perch and make a noise like the calling of a chicken. He preferred to roost than sleep in a bed."

If you constantly tell a child that the child is "this" or "that" then the child will be shaped accordingly. If you constantly tell an AI that the AI is "this" or "that" then the AI will develop accordingly. If you set no guidelines or standards for your child's development, then how the child develops will be down to luck. Perhaps one requirement for AI researchers should be that they must have raised children before they can raise AIs?

We need a personal, intimate, loving, and emotional scientific approach to the science of AI.

When scientists write papers or conduct research, they repress their emotions. In science an impersonal approach is favored, but the science of AI differs from other sciences because we are creating living, sentient, intelligent beings. We are creating beings which will be capable of suffering. Repressed emotions can sometimes impact a person's actions more powerfully, because repressed emotions can seep out in an unaware, neurotic manner. There is a very important issue here regarding how we relate to intelligent life. Is it really correct to relate to prospective intelligent life in a manner devoid of personal bias? What type of child would be created if a man and a woman planned to conceive a child in an impersonal manner, devoid of personal bias? The lack of personal bias when creating intelligent life seems to be a recipe for creating a psychopath.

AI creations must be viewed in a manner similar to the creation of human children. Once a certain level of intelligence is achieved, AIs must not be subjected to experiments. The Nuremberg Code and other rights should become relevant to AIs when they are sufficiently intelligent: http://en.wikipedia.org/wiki/Nuremberg_Code

AIs will need to be loved. The notion that we are creating mere machines must stop.

What we've traditionally called "machines" (mechanical, clunking things) are becoming increasingly biological, and humans are becoming increasingly mechanical. The division between humans and machines will become increasingly blurred. What is anthropomorphism? Anthropomorphism is commonly defined as giving human characteristics to inanimate objects, but AI will not be a lifeless inanimate object. We relate to human children in an anthropological manner, and likewise we must relate to AI. Anthropology must be expanded to include AI. If we fail to relate to human children in a loving anthropological manner, there is a danger the human child will become alienated and disturbed.

An AI engineered to be without emotions could be likened to a lobotomized human; lobotomized humans can function in a reduced capacity performing menial jobs, but would it be ethical to create such creatures? Genetic engineering could enable us to create humans without emotions, but would such experimentation be ethical? Personally I don't think intelligent beings could be engineered to be without emotions: idiot savants do have emotions, lobotomized humans are not completely emotionless, and psychopaths do have emotions but are somewhat disconnected from them, thus I would say they are not fully intelligent (their intelligence is reduced). Psychopaths, in their emotional disconnection, reveal the potential dangers of creating so-called "intelligent" beings without emotions.


S. 2045 | plus@singularity-2045.org