These days, the internet abounds with theories about the potential threats posed by artificial intelligence (AI). Movies set in frightening dystopian futures where humans wage war against their dictatorial robot overlords fill theaters. People debate the potential dates and theoretical causes of the Singularity in the comments sections and on major conference stages. But what bad things do those in the know (scientists, technological wizards, and Silicon Valley investors) think AI is really capable of?
This list brings together some of the most widespread theories about the possible threats posed by AI. Sure, the Singularity is something to be concerned about, but have you ever considered that the next major military race might be driven by developing AI for the battlefield? Or that your job might become redundant because of AI? Or, for that matter, that a well-intentioned AI might destroy anything that stands between it and the task it was created for? If these questions interest you, read on to learn what scientists think about the negative effects of artificial intelligence!
On July 28, 2015, a group of prominent scientists and technologists, including Elon Musk, Stephen Hawking, and Noam Chomsky, released an open letter about what they see as the biggest threat posed by AI: an autonomous weapons arms race. Musk and Hawking, along with an impressively long list of fellow researchers, describe the potential for “a third revolution in warfare, after gunpowder and nuclear arms.” On the one hand, autonomous weapons could reduce the number of human soldiers who die in battle. On the other, competing military powers could start a global arms race, producing autonomous weapons capable of everything from assassinations to genocide. According to the signatories, avoiding these outcomes is a matter of shared human ethics, not technological capability.
Related to the idea of an AI arms race, technologists also fear the possibility of AI with the capability to kill falling into the wrong hands. The Future of Life Institute (the organization behind the open letter on an AI arms race) warns that, should these kinds of weapons come into existence, they would be designed to be “extremely difficult to simply ‘turn off.’” These risks are already present with existing technology, the institute notes, but they become more extreme as “levels of AI intelligence and autonomy increase.”
A little less scintillating than an AI arms race, but still pretty darn scary, is the prediction that AI will continue to make human labor redundant and create unemployment. We’ve already seen this happen in industries involving manual labor, where human workers have been replaced by mechanized assembly lines. In the future, however, experts envision many of the “knowledge” industries (the ones where human expertise and creative problem-solving are really important) also being affected as AI replaces humans. Medicine, law, and architecture are all beginning to experience changes as AI makes its way into the professions.
The philosopher Nick Bostrom likes to use a scenario about paper clips to illustrate the problem of AI as a runaway taskmaster. In the scenario, Bostrom envisions a technology that’s really good at its job: turning things into paper clips. But, because it lacks human values and judgment, the AI starts turning everything in sight into paper clips, including people! And the paper clips keep piling and piling and piling, and the super technology is on a runaway paper clip train to nowhere.
The point of this seemingly silly example is to demonstrate the fear some experts hold about AI’s unintended consequences. It’s theoretically possible for an AI to become incredibly good at the thing it’s designed to do, so good, in fact, that its single-minded pursuit of that purpose causes it to overrun anything in its way. Like turning the planet into a pile of paper clips, for example.