These days, the internet abounds with theories about the potential threats posed by artificial intelligence (AI). Movies set in frightening dystopian futures where humans wage war against their dictatorial robot overlords fill theaters. People debate the potential dates and theoretical causes of the Singularity in comment sections and on major conference stages. But what bad things do those in the know (scientists, technological wizards, and Silicon Valley investors) think AI is really capable of?
This list brings together some of the most widespread theories about the possible threats posed by AI. Sure, the Singularity is something to be concerned about, but have you ever considered that the next major arms race might be driven by developing AI for the battlefield? Or that your job might become redundant because of AI? Or, for that matter, that a well-intentioned AI might destroy anything standing between it and the function it was created to serve? If these questions interest you, read on to get the scientific thoughts on the negative effects of artificial intelligence!
Because the entire range of future consequences of a given AI likely can't be known, experts fear the possibility of an AI that is designed to do good things but inadvertently does not-so-good things, developing destructive methods in pursuit of an otherwise benign goal. The philosopher Nick Bostrom imagines a scenario where an AI tasked with the goal of "harm no humans" elects to achieve it by preventing "any humans from ever being born." It would just be doing its job, right?
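The logic of this "perverse instantiation" can be seen in a toy sketch. Everything here is invented for illustration (the plans, the harm scores, the objective function are all hypothetical, nothing like a real AI system): a naive optimizer told only to minimize harm will happily pick the plan that eliminates humans, because zero humans means zero harm.

```python
# Toy illustration (hypothetical numbers): a naive optimizer given the
# literal objective "minimize total harm to humans" and nothing else.
plans = {
    "do nothing":         {"harm": 100, "humans_remaining": 8_000_000_000},
    "cure diseases":      {"harm": 40,  "humans_remaining": 8_000_000_000},
    "prevent all births": {"harm": 0,   "humans_remaining": 0},
}

def naive_objective(plan):
    # The specification says only one thing: minimize harm.
    # It says nothing about keeping humans around.
    return plan["harm"]

best = min(plans, key=lambda name: naive_objective(plans[name]))
print(best)  # → prevent all births
```

The point is not that any real system works this way; it is that an objective which omits what we actually value leaves the optimizer free to satisfy the letter of the goal in a way we never intended.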
The rich history of science fiction on the topic makes the idea of an AI-caused "Singularity" perhaps the best-known threat in popular discourse. For those who don't know, "the Singularity" is the moment at which AI officially outpaces humankind. It's the moment when the machines, if they haven't previously been programmed to treat us with compassion, take over. Although it's not the most talked-about "realistic" possibility posed by AI, theoretical physicist Stephen Hawking and Tesla mastermind Elon Musk have both been quoted as saying that they fear the existential threat AI poses to humankind. Hawking is optimistic, though. He theorizes that by the time the Singularity happens, humans might be living in galactic spaces beyond Earth, so the AI can have the planet to itself.
Although it's possible for self-driving vehicles to be hacked for malicious purposes (imagine a steering column suddenly overtaken by a nefarious attacker, enlisting an unsuspecting driver into a suicide mission), experts believe the most realistic concern about hacking and cars is our data getting stolen.
Craig Smith, Rapid7 Transportation Security Director, told Bloomberg News that plugging our phones into the USB ports in our cars poses a greater risk of “hacking” than having our vehicles remotely overtaken. He says that “an attacker usually isn’t going to try to control your car as much as they may try to get information from your car.” And the fancier the car’s technological capabilities, the more opportunities for attackers to access the owner’s data.
There's a group of scientists at a start-up called Kernel in Venice, CA, who are currently working to "unlock" the "trapped potential" of the human brain. And they're doing this by building bridges between human and machine through microchips implanted in the brain. At the moment, Kernel is testing the "neuroprosthetic" (as they call it) on people with cognitive impairments, including Alzheimer's disease and traumatic brain injuries, to see if it helps them accomplish cognitive tasks with greater ease and fluency by replicating and replacing brain cell communication. In the future, Kernel hopes the microchip will become widely affordable and accessible as a cognitive enhancement device. Doesn't this sound like some creepy cyborg fiction?