Humanity's dominion over fire is often described as a singular event for our species. Put simply, the pre-fire world our ancestors lived in, a world where early humans could not create or use fire, looked nothing like the world that followed.
Many of the world's leading thinkers believe that a technological singularity is near. Futurists currently lack a shared vocabulary for describing how different our world will be post-singularity, because the explosion of technology that would flow from such an event is inherently unknowable. The technological singularity will create a world so advanced in computing and artificial intelligence that it will look dramatically different from the one we live in today. We humans may not even be welcome in that new world.
Many scientists, philosophers, and engineers believe the singularity could mean the end of humanity. Stephen Hawking fears that artificial intelligence, or AI, "may replace humans altogether," and believes AI will be "a new form of life that outperforms humans."
Stephen Hawking is without a doubt one of the greatest physicists and thinkers of our time. Speaking at a technology conference in Lisbon, Portugal, in November 2017, he argued that while computers and artificial intelligence hold enormous potential value for humankind, they also pose many dangers, including the potential to destroy humanity. Computers and AI already threaten millions of jobs, and these technologies could also give rise to new weapons capable of worldwide destruction.
"AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy," Hawking stated. He also calls on AI's proponents and regulators to pause their efforts momentarily and thoroughly consider the potential disasters that could come with AI: "Perhaps we should all stop for a moment and focus not only on making our AI better and more successful, but also on the benefit of humanity."
While Hawking warns about the impending technological singularity, others embrace it, joking that the end of the world as we know it is a good thing. The technological singularity is the idea that in the near future, computers and artificial intelligence will reach a point at which their intelligence surpasses our own. That singular event will change our world so profoundly that today's society will be unrecognizable; the day before the singularity and the day after will be drastically different worlds. It is similar to the day humankind learned to create and control fire: the next day was filled with possibilities that had never existed before.
Futurists keep moving up their predictions of when this event will occur. Google engineer Ray Kurzweil has long estimated that the world will experience the singularity by 2045, and as of March 2017, he predicted that computers will reach human-level intelligence even sooner, by 2029.
Like Hawking, SpaceX CEO Elon Musk has concerns about AI, and has compared developing it to "summoning the demon." In the summer of 2017, Musk met with US governors to argue that AI regulation needs to begin now. If what futurists say about the singularity is true, then Musk has a point: once we reach the tipping point and AI surpasses human intelligence, regulators will not be able to keep up with computers and AI that work around the clock and out-think them. Musk stated, "AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late."
Many critics of Hawking's and Musk's position believe the worry is overblown; they expect AI to be benign and ultimately beneficial to society. MIT physicist Max Tegmark draws a different distinction: it is not malevolent AI we need to worry about, but competent AI. Tegmark uses the example of a highly intelligent self-driving car. Tell that highly competent autonomous car to get you to a destination as fast as possible, and it will not worry about the damage it causes along the way; it will focus solely on getting you there as quickly as possible. The repercussions of such single-minded AI could, in theory, be disastrous.