The singularity concept refers to a point in technological development beyond which humans can no longer predict the capabilities of super-intelligent machines. For now, it is a purely theoretical concept. However, some believe that, at the rate technology is growing, there may come a time when its development becomes uncontrollable and unpredictable.
A Brief Explanation of the Singularity Concept
It is believed that technology may advance to a level where unpredictably intelligent systems end up reshaping reality itself.
From a theoretical point of view, the singularity will involve computer programs advancing until artificial intelligence (AI) exceeds human intelligence, removing the barrier between computers and humans. Nanotechnology is often cited as one of the primary technologies that might bring about a singularity in the future.
The singularity in AI would have a major impact on humanity, as artificial intelligence turned into artificial superintelligence, capable of making its own decisions and improving itself.
Is There Any Chance of the AI Singularity Coming to Life?
John von Neumann was one of the first to discuss the technological singularity concept, in the mid-20th century. After his discussions, several authors adopted the viewpoint in their writings, though what they wrote was usually apocalyptic in nature, describing a future in which the rate of development exceeds comprehensible levels.
The idea of machines taking over their own development has been present in human minds for a very long time; singularity theory simply states it in plainer terms. Public figures and entrepreneurs such as Elon Musk have also voiced concern that rapid advances in artificial intelligence could lead to the extinction of human beings during the first half of the 21st century.
There are always disagreements over such topics; however, many scientists, AI experts, and philosophers believe there will come a turning point at which humanity witnesses intelligence turn into superintelligence. These experts also broadly agree on other aspects of the singularity, such as speed and timing, i.e., the expectation that smart systems would improve themselves at a very rapid rate.
Beyond building superintelligence into machines and computers, there are also proposals for applying the technology to humans, such as biological alteration of the brain, brain-computer interfaces, and genetic engineering. If the singularity theory becomes real, the post-singularity world would be nothing like the world we live in today: the outcome could allow humans to live indefinitely, or drive them extinct.
AI Singularity and the Viewpoints of Several Futurists
Several futurists and inventors have told us that the singularity is coming! It is usually defined as that long-awaited time in the future when robots will be smarter than humans and could easily replace many traditional machines. Such advances in artificial intelligence are no longer a long shot, and the world might witness far more capable forms of AI in the coming years.
Many futurists and inventors have worked on the concept of the singularity and on predicting its arrival. If Ray Kurzweil is to be believed, we will witness the singularity in 2045. Louis Rosenberg, on the other hand, expects the day to arrive a little sooner, sometime around 2030. Patrick Winston of MIT sits closer to Kurzweil's prediction; according to him, the singularity era will officially begin around 2040.
But wait, what difference does it actually make? We are just quibbling over a span of 10 to 15 years. The real question is: is the singularity really on its way, or is all of it just an idealized world in a scientist's head?
In an interview, Jürgen Schmidhuber, often described as a father of modern AI, Chief Scientist at the AI company NNAISENSE, and Director of the Swiss AI lab IDSIA, said that he is confident about the singularity:
“It is just 30 years away, if the trend doesn’t break, and there will be rather cheap computational devices that have as many connections as your brain but are much faster.”
He further explained that all of this is just the beginning. In the near future, there will be devices that are not just smarter than humans but can also process as much data as all human brains combined. Additionally, these devices won’t break the bank and will likely be cheaper than most AI products today. He concluded by saying,
“And there will be many, many of those. There is no doubt in my mind that AIs are going to become super smart”
The most popular version of the singularity theory is the “intelligence explosion.” The term was coined to describe the development of artificial general intelligence and the possibility that this development could lead to a singularity in AI.
According to the intelligence explosion theory, work on artificial intelligence will lead humanity to a point where an “artificial superintelligence” surpasses human intelligence and cognition. The theory holds that AI’s self-improving, decision-making capabilities will wrest control from its human handlers.
The first person to study and research the intelligence explosion was I.J. Good, an English mathematician and early computer scientist. During the Second World War, Good worked on code-breaking alongside Alan Turing for the Allied powers. While explaining the intelligence explosion, Good highlighted the idea of self-improving technology. He said,
“An ultra-intelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind.”
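Good’s self-improvement loop can be sketched as a toy numerical model. Every number here (the 0.1 improvement coefficient, the 1,000x threshold, the starting baseline) is an illustrative assumption, not a claim from the literature; the point is only that once each generation’s improvement scales with its current capability, growth runs away very quickly:

```python
# Toy model of an "intelligence explosion": each AI generation designs
# a successor, and the size of the improvement grows with the current
# capability level. All numbers are illustrative assumptions.

capability = 1.0    # human-level baseline (arbitrary units)
generations = 0

while capability < 1_000 and generations < 100:
    # Smarter systems make proportionally bigger improvements.
    improvement_factor = 1 + 0.1 * capability
    capability *= improvement_factor
    generations += 1

print(f"~{capability:,.0f}x baseline after {generations} generations")
```

With these parameters the loop blows past the 1,000x threshold within a couple of dozen generations. By contrast, a fixed, capability-independent improvement factor would give only steady exponential growth; the feedback from capability back into the rate of improvement is exactly the feature Good’s argument turns on.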
Other Ideas and Interpretations of the Singularity Concept
The singularity concept in artificial intelligence is quite broad, and there are several interpretations of it, for example speed superintelligence, non-AI singularity, and the emergence of superintelligence.
1. Superintelligence and Other Hypothetical Agents
Superhuman intelligence refers to hypothetical agents whose intelligence would surpass even the brightest human minds. Most researchers and technology experts believe there is no way for humans to know when or how superintelligence may emerge.
Some researchers believe that AI advancements will result in cognitive systems lacking human cognitive limitations, i.e., the reasoning systems will be better than human minds. On the other hand, others argue that humans will modify their own biology and achieve higher intelligence.
2. Speed Superintelligence
While some focus on the intelligence level of super-intelligent technologies, others believe that artificial intelligence will be able to do everything a typical human can, only much faster. On this view, information processing speeds would increase a million-fold, so that one subjective year of thinking could pass in roughly 30 physical seconds.
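The arithmetic behind that claim is easy to check. Taking the million-fold figure at face value (it is a hypothetical number, not a measurement), one subjective year compresses into a little over 30 physical seconds:

```python
# Check the speed-superintelligence arithmetic: at a million-fold
# speedup, how much physical time does one subjective year take?

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # ~31.6 million seconds
SPEEDUP = 1_000_000                         # hypothetical million-fold factor

physical_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year ≈ {physical_seconds:.1f} physical seconds")
# ≈ 31.6 seconds, close to the "30 seconds" quoted above
```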
3. Non-AI Singularity
Some researchers also believe that the singularity concept may be broader than artificial intelligence. On this view, the singularity could be the societal change brought about by new technologies, such as nanotechnology. However, other experts argue that changes occurring without superintelligence cannot qualify as a singularity.
Singularity in AI: Good or Deadly?
The singularity depicted in sci-fi and in the writings of futurists is the AI, or technological, singularity. But would an AI singularity be good for humans or not? There cannot be a simple yes-or-no answer to such a question, as several factors could prove either answer completely wrong.
Most writers depict the technological singularity as bad, or even deadly, for humankind. A famous thought experiment illustrating the worst-case scenario of an AI singularity, due to the philosopher Nick Bostrom, is that of a machine whose sole purpose is to make paper clips. What if it alters its programming, rewrites its own code, and, thanks to its ruthlessly efficient design, turns the whole world into a paperclip factory? That would be the worst outcome, and we don’t need that many paper clips.
The example may seem a little unrealistic, but a few years ago, the idea of AI taking over the world seemed unrealistic too. Even if an AI singularity doesn’t destroy the world in ten seconds, many fear it would erode human control gradually. The scariest part of the AI singularity is that computers and machines would take the place of humans, and we would no longer be the apex species.
But then again, let’s be a little more optimistic: if humans can rein in the AI singularity by programming the technology appropriately, it may turn out to be helpful for human beings. In that scenario, humans would have to outsmart artificial intelligence, which would not be easy, since the singularity is by definition technology outsmarting human intelligence.
It is hard to tell whether the singularity will ever happen and, if it does, how life will turn out for humans. Even with all the uncertainty, the idea of machines making their own decisions, rewriting their own code (better than humans can), and improving themselves is still quite sinister and scary. We never know when a robot may take over the world and ask all humans to evacuate it within 24 hours. Then again, we have seen many “smart devices” come and go, such as the Apple Newton, which didn’t work out so well.