Roko's Basilisk Warns Of Potential AI Horrors And Sparked The Unlikeliest Romance Of 2018
By now, almost everyone is aware of the strange romance between tech giant Elon Musk and Canadian pop star Grimes. While it's always fun to watch an unlikely pair try (and maybe fail) to make it work, what's most interesting about their relationship is how they were brought together. Initially, Musk and Grimes bonded over a shared interest in a controversial and terrifying thought experiment called Roko's basilisk.
While many learned about Roko's basilisk as a byproduct of Musk and Grimes's romance, the theory has actually been around since 2010. Originally put forth by a forum user named Roko, the hypothesis suggests humankind will one day create an objectively just artificial intelligence (AI) that punishes anyone who didn't help further its development.
Potentially horrifying technology is introduced all the time, but some people - including Musk - are especially concerned about AI and have been issuing calls for increased regulation. While Roko's basilisk may have seemed like a wild sci-fi story when it first appeared, the idea only gets creepier with each passing year as AI becomes more prevalent and pieces of the theory start lining up with real-world facts.
The Theory Suggests A Super Intelligent AI Might Learn To Punish Its Detractors
In 2010, someone by the name of Roko posted on a forum called LessWrong, presenting a truly terrifying theory. According to the poster, if humanity were to create an extremely intelligent AI designed to maximize the common good, it could potentially punish and torture those who didn't contribute to its existence.
"Of course, this would be unjust, but it is the kind of unjust thing that is oh-so-very utilitarian," Roko wrote in their thought experiment.
The theory consumed LessWrong and its users, as many were horrified by the prospect of this hypothetical situation coming to pass.
The Theory Was Quickly Banned By The Forum's Founder
After Roko presented their thought experiment on LessWrong, the forum's moderator and founder, Eliezer Yudkowsky, slammed the poster in an intense response:
Listen to me very closely, you idiot. YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.
Following the outburst, Yudkowsky deleted the original post and placed a five-year embargo on all conversations involving Roko's basilisk.
The Forum Where The Theory Was Presented Is Run By An AI Researcher
Eliezer Yudkowsky, the founder of LessWrong, is an AI researcher who runs the Machine Intelligence Research Institute, an organization focused on research into the safety of advanced AI systems.
Yudkowsky's outrage over Roko's basilisk makes sense when one considers his work, especially in the context of his contributions to discussions on technological ethics. Yudkowsky, who claims the thought experiment drove many LessWrong users to the point of mental breakdown, later explained why he was so horrified by the theory.
In a Reddit thread from 2014, Yudkowsky said he was "indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public internet."
A Basilisk Is A Mythical Snake That Can Kill With A Glance
The name Roko's basilisk references a creature that's arguably just as terrifying as the thought experiment. In classical mythology, a basilisk is a powerful serpent that can kill a person just by looking at them.
Accounts vary as to what the basilisk looks like; some sources describe it as a giant snake, while others believe it is a chimera with the head of a rooster and the tail of a serpent. While modern readers will most likely recognize the name from the second Harry Potter book, the creature has been around for centuries, with Roman accounts dating back to at least 79 CE.
If the thought of a massive snake with a fatal stare is horrifying, imagine an all-powerful AI program dead set on seeking revenge.
Once You Read The Theory, You're Immediately Implicated
Perhaps the most insidious thing about Roko's basilisk is the way the theory traps its readers. Once you learn of the theory's existence, you're automatically implicated. From there, you can either devote yourself to bringing the AI to life or ignore it and risk becoming one of its victims.
According to the original forum post:
The resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity... a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half).
If you've never heard of Roko's basilisk, you'll be fine, as it would be unjust to punish someone for not working towards an ideal they had no idea existed. However, once you hear about it, it's your moral obligation (according to the theoretical AI) to help make it real; anything else could be considered standing in the way of progress.
Musk Is Fascinated By The Theory And Advocates For AI Regulation
Musk has a long-documented obsession with artificial intelligence and openly advocates for AI regulation. In Do You Trust This Computer?, a 2018 documentary about the rise of artificial intelligence, Musk expressed fear over the future of AI's development:
The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world.
At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape.