By now, almost everyone is aware of the strange romance between tech mogul Elon Musk and Canadian pop star Grimes. While it's always fun to watch an unlikely pair try (and maybe fail) to make it work, what's most interesting about their relationship is the way they were brought together. Musk and Grimes initially bonded over a shared interest in a controversial and terrifying thought experiment called Roko's basilisk.
While many learned about Roko's basilisk as a byproduct of Musk and Grimes's romance, the theory has actually been around since 2010. Originally put forth by a forum user named Roko, the hypothesis suggests humankind will one day create an objectively just artificial intelligence (AI) that punishes anyone who didn't help further its development.
Potentially horrifying technology is introduced all the time, but some people, including Musk, are especially concerned about AI and have been issuing calls for increased regulation. While Roko's basilisk may have seemed like a wild sci-fi story in the early 2010s, the idea only gets creepier with each passing year as AI becomes more prevalent and pieces of the theory start lining up with real-world facts.
In 2010, someone by the name of Roko posted on a forum called LessWrong, presenting a truly terrifying theory. According to the poster, if humanity were to create an extremely intelligent AI designed to maximize the common good, it could potentially punish and torture those who didn't contribute to its existence.
"Of course, this would be unjust, but it is the kind of unjust thing that is oh-so-very utilitarian," Roko wrote in their thought experiment.
The theory consumed LessWrong and its users, as many were horrified by the prospect of this hypothetical situation coming to pass.
After Roko presented their thought experiment on LessWrong, the forum's moderator and founder, Eliezer Yudkowsky, slammed the poster in an intense response:
Listen to me very closely, you idiot. YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.
Following the outburst, Yudkowsky deleted the original post and placed a five-year embargo on all conversations involving Roko's basilisk.
Eliezer Yudkowsky, the founder of LessWrong, is an AI researcher who runs the Machine Intelligence Research Institute, an organization devoted to studying how to make advanced AI safe.
Yudkowsky's outrage over Roko's basilisk makes sense in light of his work, especially his contributions to discussions of technological ethics. Yudkowsky, who claims the thought experiment drove some LessWrong users to the point of mental breakdown, later explained why the theory horrified him so much.
In a Reddit thread from 2014, Yudkowsky said he was "indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public internet."
The name Roko's basilisk references a creature that's arguably just as terrifying as the thought experiment. In classical mythology, a basilisk is a powerful serpent that can kill a person just by looking at them.
Accounts vary as to what the basilisk looks like; some sources describe it as a giant snake, while others depict a chimera with the head of a rooster and the tail of a serpent. While modern readers will most likely recognize the name from the second Harry Potter book, the creature has been around for centuries, with Roman accounts dating back to at least 79 CE.
If the thought of a massive snake with a fatal stare is horrifying, imagine an all-powerful AI program dead set on seeking revenge.