Singularity

Digital drawing

“We Only Observe the Ego.” Digital drawing (above).

The technological singularity (also, simply, the singularity) is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.

According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

The first use of the concept of a “singularity” in the technological context is attributed to John von Neumann. Stanislaw Ulam reports a discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. Subsequent authors have echoed this viewpoint.

Time capsule. Digital drawing. 2019

I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity.

The concept and the term “singularity” were popularized by Vernor Vinge in his 1993 essay The Coming Technological Singularity, in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.

Action-Reaction. Digital drawing. 2019

Public figures such as Stephen Hawking and Elon Musk have expressed concern that full artificial intelligence (AI) could result in human extinction. The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

Four polls of AI researchers, conducted in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, suggested a median estimate of a 50% probability that artificial general intelligence (AGI) would be developed by 2040–2050.