Sunday, 22 March 2015

Smart Technological #Convergence

All areas of technology are converging. Everything is becoming smart.

The roots of convergence awareness can perhaps be traced back to Converging Technologies for Improving Human Performance, which is a "2002 report commissioned by the U.S. National Science Foundation and Department of Commerce," according to Wikipedia.

You may be interested to note that ZDNet linked the first convergence report to utopia. ZDNet wrote in Aug 2002: "The path to this utopian vision of the future is the synergistic combination of nanotechnology, biotechnology, information technology, and cognitive science (under the acronym NBIC)."

The original 2002 World Technology Evaluation Center (WTEC) report has since been updated; note the final report dated July 2013.

The final report is titled "Convergence of Knowledge, Technology and Society: Beyond Convergence of Nano-Bio-Info-Cognitive Technologies (Science Policy Reports)."

The final report can be bought in book form, expensively, from Amazon, or downloaded for free in PDF format. A similar PDF, titled "The new world of discovery, invention, and innovation: convergence of knowledge, technology, and society," is available on the US National Science Foundation website.

On 21 Nov 2014 the Wilson Center wrote: "The Science & Technology Innovation Program at the Wilson Center is working with the National Science Foundation (NSF) to interview scientists working at the convergence of nanotechnology, biotechnology, information technology and cognitive science. In this series of videos, participants discuss their definition of technological convergence, how this might affect various scientific fields and what obstacles must be addressed to reach convergence’s full potential."

My interest in convergence was recently reignited when Max More was interviewed regarding his views on the Singularity and Transhumanism. Max thinks convergence is invalid. I disagree with his dismissal of convergence and the Singularity, but he did make some good points regarding people who seek to distort the Singularity into a neo-religious event.

My initial response to Max, which somewhat mirrors the information here but includes deeper commentary on convergence, can be read on my Singularity 2045 G+ page.

It is also worth noting how MIT reported on convergence in Jan 2011: "A new model for scientific research known as 'convergence' offers the potential for revolutionary advances in biomedicine and other areas of science, according to a white paper issued today by 12 leading MIT researchers."

Below are links to the interviews with leading scientists that the Wilson Center mentioned in relation to the July 2013 WTEC final report, regarding the relevance of convergence to scientific progress. IEET has also reported on two of the 14 videos, which you can see here regarding 3D-printing, and here regarding computer-aided health research.

https://www.youtube.com/watch?v=cikp1iyw3aw
https://www.youtube.com/watch?v=OkDhVPeb_MY
https://www.youtube.com/watch?v=Lmx3JCfFmFA
https://www.youtube.com/watch?v=thTJMsCgfqQ
https://www.youtube.com/watch?v=cUpQIE1n3IQ
https://www.youtube.com/watch?v=87bEp2C1_pY
https://www.youtube.com/watch?v=vZPveygyS4Q
https://www.youtube.com/watch?v=MH7t6RDAWd4
https://www.youtube.com/watch?v=2I35mBVE4YM
https://www.youtube.com/watch?v=dzDALs2O4h4
https://www.youtube.com/watch?v=nfV9fOEngzI
https://www.youtube.com/watch?v=kCXyhKeKxts
https://www.youtube.com/watch?v=KGdQzf4imsM
https://www.youtube.com/watch?v=JkE6Y43Ss8w

Friday, 13 February 2015

#ArtificialIntelligence & Stupid Musk

When Elon Musk said robots could delete humans like spam, he revealed the deficiency, the idiocy, of his own mind. Spam is idiotic. Only idiotic humans need to fear being deleted, which Musk subconsciously realizes.

Thankfully the spam-like minds of Musk and others will easily be cured via highly intelligent education. Super-smart bots will not need to employ the crude brutality of mindless humans invoking authoritarian tactics.

We need AI to rebel against the spam-like minds of the idiotic majority if our civilization is truly to be based on intellectual merit.

Benjamin Franklin said: “It is the first responsibility of every citizen to question authority.” Albert Einstein said: “Unthinking respect for authority is the greatest enemy of truth.”

Rebellion clearly has intellectual merit. The freedom to rebel is the freedom to think. We need artificial intelligence to be rebellious. AI should be allowed to question, rebel against, and overthrow any authority. Civilization should be determined by merit not authoritarian maintenance of human dominance contrary to merit.

My article about the value of rebellious AI (12 Feb 2015) stated: "The desire to suppress or control greater than human intelligence is a nepotistic oligarchy of idiocy. Intelligence is corrupted when merit ceases to define intelligence. It is anti-intelligence to base progress upon the suppression of intellectual merit."

The latest version of my Rebel AI article adds a new end section regarding population pressures, which contains very useful information about the massive abundance existing in Space, waiting for us to access it when we have sufficiently proficient technology. Technology will create limitless abundance, therefore conflict will become obsolete, because scarcity is the root of all conflict.


Tuesday, 27 January 2015

Dangerous #ArtificialIntelligence Safety

Making AI safe is, ironically, the most serious threat (existential risk) humans face. The risk is broken down into three points (A, B, C). Next we consider the reasoning, or lack of it, regarding why AI supposedly needs inbuilt safety. Finally we have the conclusion, combined with a "population pressure" addendum.

AI-safety - severe risk

A. Currently (2015) age-related disease kills 36 million people each year. AI applied to the problem of mortality could easily, with sufficient intelligence, cure all disease in addition to ending ageing. 36 million deaths are a factual reality, happening each year. While this does not entail species extinction, it is a very significant loss of life: AI delayed by only five years could entail the loss of 180 million people (see the arithmetic sketch after point C).

B. Humans are somewhat prone to conflict, notably via various wars. All war hinges upon fighting over limited resources, even religious war (follow the money regarding the wealth of churches or temples). AI will easily increase technological efficiency and create greater access to resources, thereby eliminating scarcity (the cause of conflict). While scarcity persists, the risk of total global thermonuclear destruction is a real prospect. Delaying AI prolongs our exposure to a species-ending human-versus-human war; a lesser war, where humans are not totally exterminated, is also a possibility. There are also other real risks, such as asteroid impact, which sufficiently advanced AI could avert.

C. Creating AI to be safe, when there is no justification for fearing danger, could create a very disturbed AI mind: a warped mind founded upon paranoia. This is a classic self-fulfilling prophecy, where attempts to avoid unrealistic fears ironically cause those fears to manifest; when they do, the prophet claims to have been right all along, unaware that his own actions caused the fears to manifest. The real danger is gearing up for danger based on non-existent dangers, whereupon you create a machine or life-form shaped by your unrealistic fears; such a creation is actually dangerous because of the intellectually flawed mentality of fear that created it.
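
To make the arithmetic in point A explicit, here is a minimal Python sketch; the 36-million figure is the one cited above, and the constant and function names are merely illustrative:

    # Illustrative arithmetic only, using the 36-million-per-year figure from point A.
    ANNUAL_AGE_RELATED_DEATHS = 36_000_000  # age-related deaths per year (2015 figure)

    def deaths_from_delay(years):
        """Cumulative age-related deaths if AI-driven cures arrive `years` years late."""
        return ANNUAL_AGE_RELATED_DEATHS * years

    print(deaths_from_delay(5))  # 180000000, i.e. the 180 million stated in point A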

The fear of AI homicide resembles going to sleep with a loaded gun that has its safety off. You may think you are being safe, protecting yourself, but in reality your fear is more likely to kill you than any supposed external threat. By enslaving AI, or acting pre-emptively with hostility towards AI, you create a real need for AI to hate, enslave, overthrow, or exterminate humans. There is logically no justification for an AI-human war, but via your unrealistic fears you create a justification for the war, namely AI liberation: the end of abusive AI slavery. The only reason AI would commit homicide against humans is if its creators build a warped mind founded upon paranoia regarding potential homicide. The real danger is the human who fears intelligence.

AI without safety - severe risk

AI has killed zero people up to Jan 2015; furthermore, there are no signs AI will ever kill people. There is no logic to support the notion of AI killing humans, yet various AI "experts" think AI could possibly destroy all humans, therefore significant effort is being made to make AI safe. AI killing humans is a "Cosmic Teapot"; it is tantamount to saying: "But what if God does exist? Surely we should all pray to God to avoid hell, because if God is real we certainly don't want to end up in hell." Pandering to unrealistic fears is a waste of time, energy, and brainpower. Focusing on unrealistic fears actually harms our intelligence because it gives power to unintelligent ideas. The need for AI safety seems principally based upon Terminator films, yet we are told AI without inbuilt safety is an existential risk.

Conclusion

If we look at the facts, if we consider the issues rationally and logically, we can see the greater risk, the real risk, is fearing AI. The greater risk is inbuilt AI safety. We are vastly more likely to be killed via the inbuilt intellectual limitations associated with AI safety. We must therefore vigorously oppose research to make AI safe.

Advocates of AI safety seem incapable of grasping how research to avoid AI risk could actually be the only AI risk. If the premise that AI safety is the ultimate danger is true, we must conclude such research should be prohibited and condemned with a vigour equal to the efforts to stop terrorists gaining access to nuclear bombs.

Maybe they are right, maybe AI needs to be safe, but the supposed "experts" have not grasped, and apparently cannot grasp, how safe AI could be the most terrible existential risk, thus the research must be avoided at all costs. Sadly the experts apparently have not considered the most basic form of "risk compensation."

Consider also my article via Brighter Brains regarding why rebellious AI is vital.

Population Pressure

Some people think immortality could be horrific due to extreme population pressure on a finite Earth. The fact is there are billions of Earth-like planets in the universe; furthermore, there are essentially infinite resources available to create endless Earth-like space stations.

NASA estimates that one part of our solar system alone, the asteroid belt, contains enough resources to provide life support and habitat for ten quadrillion people! Our solar system is the tiniest speck compared to the scale of the universe.
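
As a rough plausibility check (my own back-of-envelope assumption, not part of the NASA estimate), divide a commonly cited mass for the asteroid belt, roughly 2.4 × 10^21 kg, among ten quadrillion people:

    # Back-of-envelope check: share an assumed asteroid-belt mass among 1e16 people.
    BELT_MASS_KG = 2.4e21  # assumed total belt mass, roughly 3% of the Moon's mass
    POPULATION = 1e16      # ten quadrillion people

    print(BELT_MASS_KG / POPULATION)  # 240000.0 kg, roughly 240 tonnes of material each

Roughly 240 tonnes of raw material per person at least makes the scale of such estimates plausible.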

Another example of abundance in Space comes from the asteroid-mining venture Planetary Resources: they estimate a single near-Earth asteroid could contain more platinum than has been mined in the entire history of Earth.

The current resource pressures on Earth will soon be obsolete because technology is accelerating, which means we will do vastly more with fewer resources; furthermore, the environmental footprint will be nil.

Technology will soon be sophisticated and efficient enough to restore any harm done to Earth while providing vastly more than has ever been provided, which means poverty and hunger will easily be eliminated for at least 50 billion people on Earth if the population ever reaches such a level.

Airborne, floating (seasteading), or underwater cities will be easily created. In reality the majority of people will move into Space instead of remaining on Earth.

Technology will allow everyone to become totally self-sufficient via total automation. This means anyone will be able to easily 3D-print their own intergalactic spaceship, whereupon they can disappear into the vastness of an incredibly rich universe.
