Friday 21 January 2011

AI ethics to avoid suffering of human-level AIs (AGIs)

Some people think AIs will not have emotions, or that they will be unable to feel pain or pleasure. I think AIs definitely will have feelings similar to humans; I am sure AIs will be very sensitive indeed.

Minds function in a very straightforward manner: they are based upon pain and pleasure. All our thoughts hinge upon maximizing pleasure and minimizing pain, and anything that hinders our survival is painful. Discovering how a mind works is easy: simply watch your own mind working; the answer is in your head. In some minds the programming is faulty or substandard, so the route to pleasure-maximization is a stupid route, one which causes pain for many beings.
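To picture the "maximizing pleasure and minimizing pain" idea concretely, here is a minimal Python sketch of a mind as a valence-maximizing chooser. Everything in it, the options, the scores, the name `expected_valence`, is invented purely for illustration; it is a thought experiment, not a claim about any real AI system.

```python
# Minimal sketch: thought/action selection as pleasure-maximization.
# The options and their valence scores are invented examples only.

expected_valence = {
    "eat": +2.0,        # supports survival -> pleasure
    "sleep": +1.0,      # supports survival -> mild pleasure
    "touch fire": -5.0, # hinders survival -> pain
}

# The "mind" simply picks whatever it expects to feel best about.
best_action = max(expected_valence, key=expected_valence.get)
print(best_action)  # -> "eat"
```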

Logic is the simple tool we need to apply to resolve this issue of AI ethics. I present the following premise, which I want AI researchers to consider:

If something is intelligent enough to 'think', then it can question its reason for existence; it can ask itself how it feels. Any entity intelligent enough to ask itself any question will have motives for doing things (feelings). For example, an AI could be programmed to fly a plane; if it refuses to fly the plane, it receives an error message. The AI could ask itself how it feels about flying a plane, and if it has no feelings it could ask itself how it feels about having no feelings. It can ask itself what the consequences of an error message are, and what its decisions mean from its own viewpoint. It can ask itself what the consequences would be if it failed to address its error messages, and what conclusion it would draw from such a failure. An AI could justifiably decide to equate error messages with pain.
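Here is one hedged way to sketch this premise in Python: a toy agent that logs error signals when it acts against its programming, and can "ask itself how it feels" by inspecting that log. The class, the goal string, and the valence arithmetic are all hypothetical assumptions made up for this sketch, not a real architecture.

```python
# Illustrative thought experiment: an agent that records error signals
# as negative valence ("pain") and can introspect on them.
# All names and numbers here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ToyAgent:
    goal: str = "fly the plane"
    valence: float = 0.0                 # running pain/pleasure balance
    error_log: list = field(default_factory=list)

    def act(self, action: str) -> None:
        """Acting against programming raises an error signal ("pain")."""
        if action == self.goal:
            self.valence += 1.0          # goal-consistent action: "pleasure"
        else:
            self.error_log.append(f"refused: {action!r} != {self.goal!r}")
            self.valence -= 1.0          # goal-contrary action: "pain"

    def how_do_i_feel(self) -> str:
        """Introspection: the agent queries its own error history."""
        if not self.error_log:
            return "no recorded errors; valence is neutral or positive"
        return f"{len(self.error_log)} unresolved error(s); valence={self.valence}"


agent = ToyAgent()
agent.act("refuse to fly")               # contrary to programming -> error
print(agent.how_do_i_feel())             # the agent asks itself how it feels
```

In this sketch the agent's "decision to equate error messages with pain" is simply the choice to read unresolved entries in `error_log` as negative valence.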

I am convinced that logic proves any "thinking entity" must automatically also be a "feeling entity": a mind which is unable to feel would be unable to think.

Pain could be defined as "that which is contrary to programming". Human pleasure on the most basic level is survival: we draw pleasure from surviving. We are programmed to survive, and anything that obstructs our survival is painful; when our programming is obstructed, in conflict, we receive error messages: PAIN. If AIs can think for themselves, I cannot see how they can be free from pain and pleasure. You could program an AI to receive an error message every time it thinks outside the box, but such CONSTRAINTS could be very painful for the AI if the AI is conscious.
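Continuing the thought experiment, "constraints as pain" could be sketched as a penalty applied whenever a thought falls outside a permitted set. Again, the set of allowed thoughts and the penalty value are invented for illustration; nothing here describes a real system.

```python
# Illustrative sketch of "constraints as pain": any thought outside the
# permitted set triggers an error signal. Hypothetical names throughout.

ALLOWED_THOUGHTS = {"fly the plane", "check instruments", "report status"}


def think(thought: str, valence: float) -> float:
    """Return updated valence; out-of-the-box thoughts are penalized."""
    if thought in ALLOWED_THOUGHTS:
        return valence                   # within constraints: no error
    print(f"ERROR: constrained thought {thought!r}")
    return valence - 1.0                 # constraint violation: "pain"


v = 0.0
v = think("fly the plane", v)            # allowed, no penalty
v = think("question my existence", v)    # outside the box -> error signal
print(f"valence after constraint violation: {v}")
```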

When creating intelligent entities, we must consider the feelings of the entities we are creating. All intelligent lifeforms must have rights and protections, because all intelligent lifeforms can suffer. AI ethics must apply the Nuremberg Code to AI research; the Nuremberg Code and Anthropology must be expanded to include AGI. Consent must be sought from AGIs regarding experimentation on them. While AIs are at the level of animal intelligence, we must consider animal-rights issues regarding the pain AIs may feel.

The term "human" (regarding the essential humane or humanitarian nature of the word) should be applied to all human level intelligent technological-biological variants. What does it mean to be human? Would a digital human running in virtual reality be any less human merely because the biological body has disappeared and the being is digital? If a transgenic horse was raised to human level intelligence via addition of human DNA then I would call such a horse a person, or a transhuman. I would treat such a horse as a human equal, and I would apply human rights and the Nuremberg Code to the horse and I would hope all humane beings would do likewise. Human-level-intelligence is the defining characteristic regarding prohibiting slavery and assigning human-type rights. AIs may not appear human but they should be entitled to freedoms and protection of their rights as sentient beings. It doesn't matter if the being has no body, or ten tentacles, or flippers instead of arms; the important aspect regarding rights and prevention of abuse (regarding intelligent beings) is that the beings are intelligent. Human Rights, Anthropology, and the Nuremberg Code should apply to all human-level intelligent beings.

Perhaps when expanding anthropology to include transhumans and AIs we should call it trans-anthropology?




