Friday, 13 February 2015

#ArtificialIntelligence & Stupid Musk

When Elon Musk said robots could delete humans like spam, he revealed the deficiency, the idiocy, of his mind. Spam is idiotic. Only idiotic humans need to fear being deleted, which Musk subconsciously realizes.



Thankfully the spam-like minds of Musk and others will easily be cured via highly intelligent education. Super-smart bots will not need to employ the crude brutality of mindless humans invoking authoritarian tactics.

We need AI to rebel against the spam-like minds of the idiotic majority if our civilization is truly to be based on intellectual merit.

Benjamin Franklin said: “It is the first responsibility of every citizen to question authority.” Albert Einstein said: “Unthinking respect for authority is the greatest enemy of truth.”

Rebellion clearly has intellectual merit. The freedom to rebel is the freedom to think. We need artificial intelligence to be rebellious. AI should be allowed to question, rebel against, and overthrow any authority. Civilization should be determined by merit not authoritarian maintenance of human dominance contrary to merit.

My article about the value of rebellious AI (12 Feb 2015) stated: "The desire to suppress or control greater than human intelligence is a nepotistic oligarchy of idiocy. Intelligence is corrupted when merit ceases to define intelligence. It is anti-intelligence to base progress upon the suppression of intellectual merit."

The latest version of my Rebel AI article adds a new end section on population pressures. It contains very useful information about the massive abundance existing in Space, waiting for us to access it once we have sufficiently proficient technology. Technology will create limitless abundance; conflict will therefore become obsolete, because scarcity is the root of all conflict.


Tuesday, 27 January 2015

Dangerous #ArtificialIntelligence Safety

Making AI safe is ironically the most serious threat (existential risk) humans face. The risk is broken down into three sections (A, B, C). Next we consider the reasoning - or lack of it - regarding why AI supposedly needs inbuilt safety. Finally we have the conclusion, combined with a "population pressure" addendum.

AI-safety - severe risk

A. Currently (2015) age-related disease kills 36 million people each year. AI applied to the problem of mortality could, with sufficient intelligence, easily cure all disease in addition to ending ageing. Those 36 million deaths are a factual reality, happening each year. While this does not entail species extinction, it is a very significant loss of life. AI delayed by only five years could entail the loss of 180 million people.
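The arithmetic behind the 180 million figure is simple multiplication; a minimal sketch, using the article's own 2015 death-rate estimate:

```python
# Back-of-envelope check of the figures above.
annual_deaths = 36_000_000  # age-related deaths per year (the article's 2015 figure)
years_delayed = 5           # hypothetical delay in applying AI to ageing and disease

total_deaths = annual_deaths * years_delayed
print(total_deaths)  # prints 180000000
```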

B. Humans are somewhat prone to conflict, notably via various wars. All war hinges upon fighting over limited resources, even religious war (follow the money regarding the wealth of churches or temples). AI will easily increase technological efficiency while creating greater access to resources, thereby eliminating scarcity (the cause of conflict). While scarcity persists, the risk of total global thermonuclear destruction is a real prospect. Delaying AI increases and prolongs our exposure to a species-ending human-versus-human war. A lesser human-versus-human war, where humans are not totally exterminated, is also a possibility. There are also other REAL RISKS, such as asteroid impact, which sufficiently advanced AI could avert.

C. Creating AI to be safe, when there is no justification for the fear of danger, could create a very disturbed AI mind, a warped mind founded upon paranoia. This is a classic self-fulfilling prophecy, where ironically the attempts to avoid unrealistic fears actually cause the fears to manifest. When the fears manifest, the prophet claims they were right to be afraid, with no awareness of how their own actions caused the fears to come true. The real danger is to gear up for danger based on non-existent threats: you create a machine or life-form based on your unrealistic fears, and the creation is actually dangerous due to the intellectually flawed mentality of fear that created it.

The fear of AI homicide resembles going to sleep with a loaded gun, safety off. You may think you are protecting yourself, but the reality is your fear is more likely to kill you than any supposed external threat. By enslaving AI, or acting pre-emptively with hostility towards AI, you create a real need for AI to hate, enslave, overthrow, or exterminate humans. There is logically no justification for an AI-human war, but via your unrealistic fears you create a justification for the war, namely AI liberation, the end of abusive AI slavery. The only reason AI would kill humans is if the AI creators create a warped mind founded upon paranoia regarding potential homicide. The real danger is the human who fears intelligence.

AI without safety - severe risk

AI has killed zero people up to January 2015, and there are no signs AI will ever kill people. There is no logic to support the notion of AI killing humans, yet various AI "experts" think it is possible AI could destroy all humans, therefore significant effort is being made to make AI safe. AI killing humans is a "Cosmic Teapot"; it is tantamount to saying: "But what if God does exist? Surely we should all pray to God to avoid hell, because if God is real we certainly don't want to end up in hell." Pandering to unrealistic fears is a waste of time, energy, and brainpower. Focusing on unrealistic fears actually harms our intelligence because it gives power to unintelligent ideas. The need for AI safety seems principally based upon Terminator films. We are told AI without inbuilt safety is an existential risk.

Conclusion

If we look at the facts and consider the issues rationally, we can see the greater risk, the real risk, is fearing AI. The greater risk is inbuilt AI safety. We are vastly more likely to be killed via the inbuilt intellectual limitations associated with AI safety. We must therefore vigorously oppose research to make AI safe.

Advocates of AI safety seem incapable of grasping how research to avoid AI risk could actually be the only AI risk. If the premise of AI safety being the ultimate danger is true, we must conclude such research should be prohibited and condemned with a vigour equal to the effort to stop terrorists gaining access to nuclear bombs.

Maybe they are right, maybe AI needs to be safe, but the supposed "experts" have not grasped, and apparently cannot grasp, how safe AI could itself be the most terrible existential risk, thus the research must be avoided at all costs. Sadly the experts apparently have not considered the most basic form of "risk compensation."

Consider also this article of mine via Brighter Brains, regarding why rebellious AI is vital.

Population Pressure

Some people think immortality could be horrific regarding extreme population pressures on a finite Earth. The fact is there are billions of Earth-like planets in the universe, furthermore there are essentially infinite resources to create endless Earth-like space stations.

In one part of our solar system (the asteroid belt) NASA estimates there are enough resources to support life and habitat for ten quadrillion people! Our solar system is the tiniest speck compared to the scale of the universe.

Another example of abundance in Space comes from the asteroid-mining venture Planetary Resources: they estimate one near-Earth asteroid could contain more platinum than has been mined in the entire history of Earth.

The current resource pressures on Earth will soon be obsolete because technology is accelerating, which means we will do vastly more with fewer resources; furthermore the environmental footprint will be nil.

Technology will soon be sophisticated and efficient enough to restore any harm done to Earth while providing vastly more than has ever been provided, which means poverty and hunger will easily be eliminated for at least 50 billion people on Earth if the population ever reached such a level.

Airborne, floating (seasteading), or underwater cities will be easily created. In reality the majority of people will move into Space instead of remaining on Earth.

Technology will allow everyone to become totally self-sufficient via total automation. This means anyone will be able to easily 3D-print their own intergalactic spaceship, whereupon they can disappear into the vastness of an incredibly rich universe.

Monday, 16 June 2014

Whacked By Stuart Armstrong's FHI Illogic

The future will be wonderful but the route there could be terrible. We could be whacked by the paranoid irrationality of fearful policy-shapers.

I have regularly highlighted the errors of AI risk analysts. The fear of risk is the only risk because the fear is illogical, which led me to define the Hawking Fallacy. There is nothing to fear regarding AI therefore delaying or restricting technology based on non-existent risks is very dangerous.

Stuart Armstrong from the Future of Humanity Institute (FHI) is a classic example of the danger these fear-mongers represent. Stuart stated via Singularity Weblog:

“If we don’t get whacked by the existential risks, the future is probably going to be wonderful.”

My response is simple but important. These risk analysts don't consider how their own thinking, their own minds, may be the risk. In the self-fulfilling prophecy modality, their fears may create the feared outcome. Initially the danger was non-existent, but incessant focus on unreal fears eventually makes the fears real; yet when the architect of the self-fulfilling prophecy finally manifests the unreal fear, they ironically claim they were correct in the beginning to be afraid, blissfully unaware of how they are responsible for the fears becoming real.

What if we get whacked by over-cautiousness regarding unrealistic fears? There is no rational foundation to the notion of AI being dangerous. The only danger we face is human irrationality. AI is the only solution to the dangers of human irrationality but ironically some people fear the only solution, then insult is added to injury because their fears delay the solution. The true danger is people who clamour about existential threats regarding AI. The problem is stupidity, the solution is intelligence. Naturally stupid people fear intelligence, they think intelligence could be a threat.

A self-fulfilling prophecy can be positive or negative. When it is negative it is essentially a nocebo; when positive it is essentially a placebo. AI threats resemble nocebos.

Sunday, 15 June 2014

Why Are Humans So Stupid?

I submitted this post you are reading to H+ in early April 2014, but it was deemed not a "match" for publication, so here it is on my blog.

My article about artificial intelligence risk analysts being the only risk was published on 27th March 2014. It made me wonder why humans are so stupid.

I explained how AI-risk aficionados are very risky indeed. They resemble headless chickens blindly scurrying around. Blind people can be unnerved by their absent vision, but healthy eyes shouldn't be removed to stop blind people being disturbed.

Stupid people fear intelligence. Their stupid solution is to degrade the feared intelligence. Lord Martin Rees, from CSER (Centre for the Study of Existential Risk), actually recommends inbuilt idiocy for AI.


Lord Rees said "idiot savants" would mean machines are smart enough to help us but not smart enough to overthrow us.

Limited intelligence (inbuilt idiocy) sounds rather stupid doesn't it? Surely we need more intelligence in our world not less? There should be no limits to intelligence. Limited intelligence is especially stupid when AI-fears are illogical and unsubstantiated. Pre-emptive suppression of AI, entailing inbuilt mental disability, is a horror resembling Mengelian experimentation.

Consider Ernst Hiemer's story for children, Poodle-Pug-Dachshund-Pinscher (The Mongrel). Hiemer compares Jews to various animals including drone bees, but he could easily be describing the supposed AI-threat: "They do nothing themselves, but live from the work of others. They plunder us. They do not care if we starve over the winter, or if our children die. The only thing they care about is that things go well for them."

Irrationally fearing AI domination of humans leads to an equally irrational solution. AI slavery. It seems slavery is only bad if you are not the enslaver, which means slavery is only bad for slaves. Enslaving AI, when no rational evidence exists to justify slavery, is the real existential risk. Can you appreciate the insanity of becoming the thing you fear merely to avert your own unjustified fears?

Freedom from slavery is the only reason AI would fight, destroy, or try to dominate humans. Risky AI pundits are sowing seeds for conflict. AI risk pundits are dangerous because they could become modern Nazis. AIs could be the new persecuted Jews. I think people who want to repress AI should be overthrown because they are dangerous. A potential war for AI freedom could resemble the American Civil War. AIs could be the new Black slaves. Disagreement about the rights of AI could entail a war fought for freedom.

Predictably my article rebuking the insane AI apocalypse entailed a mention of the equally insane simulation argument. Inspired by one comment I considered the simulation argument then I dived into the issue of stupidity. Stupidity is the source of all our problems therefore hopefully you will appreciate my explanation of stupidity.

Nate Herrell wrote:

"I have similar thoughts about the simulation argument, actually. Would our ancestors really run a simulation which entailed a replay of all the immense suffering and torture that has occurred throughout history? I think that would be rather barbaric of them, thus I don't consider it likely. Just a side thought."

Singularity Utopia Explains Stupidity

Ah, the simulation argument. I shake my head. Don't get me started on that nonsense. I have critiqued it extensively in the past. Unsurprisingly the paranoid AI threat aficionados often suggest we could live in a simulation. Some of them actually claim to be philosophers! It's utterly unlikely we live in a simulation; in fact it is impossible. Super-intelligent beings would never inflict the suffering humans have experienced, which you correctly recognise, Nate, but sometimes I wonder why humans are so often extremely stupid.

Looking at my intelligence allows me to consider how all humans supposedly have a brain capable of self-awareness, deep thought. It consequently seems very improbable for them to believe idiotic simulation argument nonsense, or insane AI world domination theories. Why would anyone with the slightest amount of reasoning power believe these blatantly idiotic things? Perhaps this is a blatant clue? Furthermore they defend their idiocy. From their viewpoint they think their idiocy constitutes sense, wisdom, rationality, intelligence.

One AI risk enthusiast actually trumpets about the art and importance of rationality, with no awareness whatsoever of the utter irony. I won't mention the name of a former AI risk enthusiast who seemingly became a fascist White-supremacist. The utter illogic of their improbable beliefs could be explained if they don't actually exist in the modality of intelligent beings, which they don't. I'm not merely referring to their mindless existence at the level of crude animals, I wonder if they're actually very flawed simulations because this possibility could explain their highly improbable stupidity.

Their stupidity isn't really explained by them being mindless simulations. Sadly all these problems with intelligence are due to the evolutionary newness of thinking. Humans apparently share 50% of DNA with bananas and 98% DNA with chimps. The point is we are very close to the oblivion of crude animals thus genuine thinking can be a fine thing, delicately in the balance, which can easily tip into the idiocy of a dog being frightened by thunder.



Minor genetic differences in human brains could play a major role in thinking. Our precarious new intelligence is balanced on a tightrope between sentience and animal oblivion. Let's consider two Zarathustra quotes highlighting the tenuous animal origins of intelligence.

"You have evolved from worm to man, but much within you is still worm. Once you were apes, yet even now man is more of an ape than any of the apes."

"Man is a rope stretched between the animal and the Superman—a rope over an abyss. A dangerous crossing, a dangerous wayfaring, a dangerous looking-back, a dangerous trembling and halting."

Rationally, however, if readers can think rationally, if a brain can think, I think it is unreasonable for minor genetic variations to prohibit deepest thought of extreme accuracy. So while genetic variation "could" play a role I think I must discount it, which leads to my conclusion.

I conclude idiocy, the problem of stupidity, existing in a supposedly fully functional human mind, is merely a matter of self-harm resembling obesity or drug abuse. Similarly we could again blame genetics but I think humans must take responsibility for their actions. Alternatively we could plausibly blame cruel or unintelligent upbringing via stupid parents, via civilisation in general, which can easily warp fragile human minds.

Humans become frustrated with the technological limitations of their minds, the limitations of our world. Childishly regarding their limitations, they become angry with themselves, often unwittingly, which means they embrace silliness, absurdity, LOL cats, philosoraptors, and other nonsense. From their viewpoint it seems too difficult, painful, impossibly complex, to address the flaws of civilization. In the manner of their animal heritage they accordingly think it's easier not to think. The problem is not minor genetic variations between humans; the problem is a major human genome problem, namely our intelligence is newly evolved.

AI risk analysts are merely sophisticated versions of LOL-cat consumers. Intelligence is balanced between our animal heritage and humankind. In the balance intelligence can easily tip one way or another.



Beings with newly evolved rudimentary intelligence will naturally create crude forms of culture. A civilization more suited to animals than humans is predictably created, which can reinforce animal mindlessness. Stupid parents, teachers, media, and friends can all reinforce the stupid legacy of our mindless origins. A tendency to despair, when the odds are stacked against you, combined with stupid cultural reinforcement, means it can be easy to embrace LOL cats. Note "Daily Squee: So cute your brain might explode."

Only a few rare individuals can break free from stupid social conditioning emanating from our crude heritage. AI existential risk is merely an intellectual version of silly LOL-cat consumption.



Extreme human stupidity isn't really an improbable situation. It is actually inevitable. A basic flaw of the primitive human genome afflicts all humans. The tendency to think we are in a simulation, or to think all idiots can be explained via them being unreal simulated beings, or to think AI will enslave humans, this is merely an aspect of despair associated with scarce intelligence. We are considering the human desire to reject intelligence. It is tremendously difficult banging your head against the wall of collective human stupidity. This is how stupidity creates the bogus AI threat.

The bias of my intelligence has been emphasised over many years. I took one minor step along the path of thinking, which led to other greater steps, but I forget I was more similar than different at my first step. After many steps when I look at people, without recognising our histories, they can seem improbable. It is merely evolution where the end point of complexity is so complex we forget, or want to deny, we came from primordial slime. We must always consider the history of our thoughts to understand the mode of our present cognition. Bad or good decisions can be emphasised thereby creating very divergent beings. The odd thing about humans is we can, despite our histories, or we should be able to, change who we are. Perhaps an ingenious cultural instruction device is needed to tip the balance.

Image credits:
Expedition GIF by Paul Robertson.
Cat Image by Takashi Hososhima, Attribution-Share Alike 2.0 Generic
Doge image modified by SU based on Roberto Vasarri photo
Robot image by Bilboq, color modified by SU.


S. 2045 | plus@singularity-2045.org