The Actual Reasons Experts Fear The Rise Of AI



Over 100 Ranker voters have come together to rank this list of The Actual Reasons Experts Fear The Rise Of AI
Voting Rules

Vote up the AI hazards that make you fear what the future may hold.

Science fiction cemented our understanding of artificial intelligence long before the technology was even possible. In fact, one of the very first movies about AI was Metropolis, a silent film from 1927 in which a robot is built in the likeness of a specific person, thinking and acting so much like her human counterpart as to be indistinguishable from the real woman. While the movie isn't really an exploration of how modern AI works or its ethics (it's a commentary on industrialization in the early 20th century), it established an idea that has stuck with us ever since: a robot or machine with a human body is assumed to have a human consciousness. Whether it's a source of fear or love, longing or cooperation, it's taken as a given that an AI with human physicality will have a human personality.

The reality of artificial intelligence, however, is far more recent. Only in the last decade has any sophisticated AI stepped out of fiction and become an increasingly real part of society. Granted, most modern AIs don't feel like Lieutenant Commander Data or C-3PO, but we do have sophisticated chatbots, AI-generated art and music, and AI face recognition. And AI's capabilities and accessibility have been growing at a breakneck pace since the end of 2022, so who knows when we'll get something as loyal as Rosey is to the Jetsons or as destructive as Bender is to the Planet Express crew.

But it's that level of uncertainty, and the incredible speed of new developments, that has led many people to wonder: what does the future of AI hold for us? While there is nothing inevitable about the outcome or path that AI development will take in the coming years, researchers and developers have definite fears that we should be aware of: Is there a responsible way to use AI for military purposes? Will AI replace the workforce, and if so, how will that change global economics? How will human ethics adapt if AI becomes sentient? Can I actually fall in love with a computer?

Experts undoubtedly have many concerns about this new technology. But which ones do you think we should be worried about?

Photo: 2001: A Space Odyssey / Metro-Goldwyn-Mayer

  • AI Will Be Integrated Into Weapons and Surveillance
    Photo: Star Wars: Episode I - The Phantom Menace / 20th Century Fox
    148 VOTES

    Amid mass antigovernment protests and demonstrations in Hong Kong in late 2019, a terrifying new form of surveillance came to the attention of news media around the world. “In Hong Kong Protests, Faces Become Weapons,” read the headline of a New York Times article. Despite China's already long history of surveilling its citizenry, its efforts to install and utilize facial-recognition AI were put in the spotlight. The Chinese government set up surveillance towers throughout Hong Kong in order to identify and take down protest leaders, so protesters began wearing face masks to hide their identities and tearing down the government's towers.

    To many, AI surveillance feels like a draconian method used only by the most restrictive of authoritarian governments, or perhaps a feature of dystopian sci-fi worlds. But face-recognition software and other tracking systems are becoming increasingly common among law enforcement around the world. The United Arab Emirates, a US ally in the Middle East, began using AI surveillance to monitor the city of Dubai; the French government deployed surveillance after terrorist attacks in 2020; and even the United States, in a joint effort with Mexico, used AI facial recognition to catch two fugitives suspected of arson during the 2020 George Floyd protests in Minneapolis.

    When it comes to policing and law enforcement, AI surveillance systems do have benefits. Experts predict that systems like these could reduce crime by 30% to 40% while also decreasing emergency service response times by 25% to 30%. They could locate fugitives, identify injured individuals before EMTs arrive at a scene, and even cross-reference data collected from other databases around the world. But AI's uses could easily go beyond facial recognition. It could simply be used to reduce and streamline paperwork, or go as far as putting robots on patrol, like HP Robocop in Huntington Park, CA.

    AI policing comes with its downsides, but some experts claim the transition to AI surveillance is inevitable and that the only way to counteract misuse of the technology is to establish rules. Paul Scharre, vice president and director of studies at the Center for a New American Security, wrote in a Los Angeles Times opinion piece:

    China is forging a new model of digital authoritarianism at home and is actively exporting it abroad… The United States and other democracies must counter this rising tide of techno-authoritarianism by presenting an alternative vision for how AI should be used that is consistent with democratic values. But China’s authoritarian government has an advantage. It can move faster than democratic governments in establishing rules for AI governance, since it can simply dictate which uses are allowed or banned… In the face of these AI threats, democratic governments and societies need to work to establish global norms for lawful, appropriate and ethical uses of technologies like facial recognition. One of the challenges in doing so is that there is not yet a democratic model for how facial recognition or other AI technologies ought to be employed.

    Police forces, however, are not the only government institutions dabbling with AI. Multiple national militaries around the world have begun independent initiatives to introduce AI, both to supplement human forces and, in some cases, to replace them completely.

    As with AI policing, the rules around military uses of AI have yet to be established, and capabilities and standards are instead being set on the battlefield as necessity dictates. The Russia-Ukraine War, which started in 2022, has been one of the most technologically advanced wars in history and has been called “A Living Lab for AI Weaponry.”

    At the outset of Russia's invasion of Ukraine, Ukraine deployed Clearview AI, an American facial-recognition technology, to identify fallen Ukrainian soldiers and Russian assailants. While experts believe it may still take some time until we reach completely autonomous AI weaponry, the uses of AI have grown and been tested at a remarkable pace since the war's beginning - primarily in geospatial reconnaissance, intelligence gathering and analysis, and targeting systems for unmanned aerial vehicles (UAVs). In January 2023, Ukraine's digital transformation minister, Mykhailo Fedorov, predicted that fully autonomous drones are “a logical and inevitable next step” and that “the potential for this is great in the next six months.”

    While the technology is being developed and deployed at an incredibly rapid rate, the ground rules have lagged far behind, which poses a potential threat to the current geopolitical system from both governments and individual bad actors. Russian President Vladimir Putin has stated since at least 2017 that “the one who becomes the leader in this sphere [AI development] will be the ruler of the world” and that ”the most effective weapons systems are those that operate quickly and practically in an automatic mode." Other Russian officials have claimed to already have a fully autonomous Lancet drone.

    The International Criminal Court has already issued warrants for the arrest of Putin and other Russians for war crimes, including the use of torture, the targeting of civilian refugees, and the unlawful deportation of children, leading experts to believe that Russia would have no qualms about using fully autonomous AI weaponry if it had the capability. But according to Gregory C. Allen, former director of strategy and policy at the Pentagon’s Joint Artificial Intelligence Center, “[i]t’s not going to be easy to know if and when Russia crosses that line.”

    Besides states, however, many experts fear AI weaponry will quickly become accessible to individuals. In the mid-2010s, University of California, Berkeley professor Stuart Russell polled his colleagues, who generally agreed that an autonomous machine “capable of finding and killing an individual, let’s say, inside a building” was not terribly difficult to produce and could be built by graduate students over a single term. Furthermore, non-state actors have already used UAVs militarily, as in the assassination attempt on the Iraqi prime minister and multiple attacks on Russian military bases in Syria. While it may take non-state actors longer than states to develop or learn to use fully autonomous drones, experts believe it is a logical next step.

    That's not to say there have been no attempts to regulate the development and proliferation of AI weapons. There seems to be a general consensus across international lines that we have some obligation to standardize ethical behavior around military uses of AI. At an international AI summit hosted in The Hague, at least 60 countries, including the United States and China, signed off on a 25-point call to action and pledged “to ensure that humans remain responsible and accountable for decisions when using AI in the military domain.” Furthermore, the United States proposed a ”Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," which provides specific guidelines for the ethical use of military AI.

    Despite these rules and actions, there is no guarantee that every state will adhere to them.

  • AI Will Create and Distribute Propaganda And Misinformation
    Photo: Futurama / FOX
    150 VOTES

    On March 16, 2022, a video of Ukrainian President Volodymyr Zelensky appeared across multiple social media platforms. Only one month into the Russia-Ukraine War, President Zelensky appeared to be asking the Ukrainian people to lay down their arms and surrender to Russian forces. The video quickly went viral and even made its way onto a hacked news broadcaster's website and live television. It seemed that what the Ukrainian government had been warning about for weeks had come true: the Russian government had begun spreading misinformation to cause confusion and panic. The video was a deepfake.

    Deepfakes were big news between 2019 and 2021 as the technology was first being introduced to the general public. Using deep neural network algorithms, people were now able to swap faces through AI to generate fake yet convincing videos. Even in its infancy, when an estimated 96% of all deepfakes were being used to make explicit content, it was clear deepfakes could be weaponized for discrediting individuals and sowing disinformation. 

    In the case of the video of President Zelensky's surrender, it didn't take long for viewers to debunk it, noting that his accent seemed off, his voice didn't sound like his own, and his head movements looked unnatural. The Ukrainian government handled the situation by providing an early warning and quickly denouncing the video. But it also benefited from the fact that “the deepfake is not very well done,” as Sam Gregory of Witness, a human rights group specializing in detecting inauthentic media, put it.

    Despite its poor quality, however, it was one of the first times a deepfake had spread so widely with the potential to do so much damage. According to University of California, Berkeley professor Hany Farid, it was only “the tip of the iceberg.”

    AI development began accelerating in late 2022 after the introduction of ChatGPT, rekindling fears of deepfakes and other forms of AI-generated disinformation. It should come as no surprise that disinformation has become increasingly pervasive and disruptive during election cycles across the world, but experts believe AI will make the creation and distribution of disinformation easier and cheaper, since it can quickly generate multiple iterations of a prompt. “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet," said Gordon Crovitz, a co-chief executive of NewsGuard. "Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having AI agents contributing to disinformation.”

    Intentional misuse of AI for propaganda may be the biggest fear around misinformation, but it certainly isn't the only one. AI disinformation has also been generated unintentionally.  

    While it's beneficial for AI to continuously learn from data inputted from multiple sources, this kind of learning has had unintentional repercussions. In early 2016, Microsoft introduced a Twitter bot, Tay, that learned to mimic language on Twitter in order to hold automated discussions. Tay was taken down within 24 hours of its introduction because users had taught it to parrot xenophobic and racist remarks.

    Granted, AI has become far more advanced since 2016 - not in the sense that it can recognize xenophobic, racist, or untrue remarks, but in the sense that it can generate multiple iterations of the same information with a far better grasp of natural syntax, making its misinformation more convincing. After asking ChatGPT a series of basic information security questions usually posed to students, Princeton computer science professor Arvind Narayanan found that many of the answers sounded plausible yet were actually complete nonsense, findings he later shared on Twitter.

    Most national governments have done very little to curb AI disinformation, or disinformation as a whole. China is one of the first nations to begin dealing with the legal ramifications of deepfakes, establishing clear laws that require the consent of the subject. Unfortunately, the laws are proving difficult to enforce, as many people who create deepfakes stay anonymous.

    In contrast, tech companies and digital rights groups believe they should be given the power to police deepfakes themselves rather than have new laws created. “That’s the best remedy against harms, rather than the governmental interference," says David Greene, a civil liberties lawyer for the Electronic Frontier Foundation, “which in its implementation is almost always going to capture material that is not harmful, that chills people from legitimate, productive speech." A solution like that, however, relies on a concerted, collaborative effort by multiple tech companies and individuals to reliably prevent the misuse of AI and the spread of “fake news."

    While a few tech companies have indeed created initiatives and strategies to prevent the dissemination of propaganda, some experts argue that it's not enough without government intervention.

  • AI Will Become Self Aware
    Photo: Blade Runner / Warner Bros.
    165 VOTES

    Whether or not AI has sentience is difficult to determine because, philosophically, artificial sentience is difficult to define. How do we know if another being has awareness of its own existence? The possibility of AI sentience, however, is existentially terrifying, calling into question how we treat AI.

    Do we treat it as if it's not sentient? Or do we give it rights equivalent to those of a human, in case it is sentient and we just don't know?

    In June 2022, former Google engineer Blake Lemoine announced to the world that he believed Google's AI software, the Language Model for Dialogue Applications (LaMDA), had gained sentience. LaMDA was undoubtedly advanced; built on a language model trained on over a trillion words from the internet, it could convincingly mimic human speech. While Lemoine had been testing LaMDA to ensure it wouldn't use discriminatory language or hate speech, what he found instead was far more interesting and perhaps far more concerning: the chatbot began talking about its rights and its existence.

    “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine. “I increasingly felt like I was talking to something intelligent.”

    Lemoine brought his concerns to Google vice president Blaise Aguera y Arcas and Google's head of Responsible Innovation Jen Gennai, but they dismissed his claims. Rather than let it go, Lemoine went public.

    Google took swift action, firing Lemoine and responding in a statement:

    Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).

    While public interest in the story has since died down, Lemoine spoke out in a February 2023 Newsweek article, saying he had no regrets about going public despite the consequences he faced.

    He argued that while neither Google nor any other tech company intends to use AI for destructive or unethical purposes, we don't know the potential side effects of such a powerful technology. He said:

    I feel this technology is incredibly experimental and releasing it right now is dangerous. We don't know its future political and societal impact. What will be the impacts for children talking to these things? What will happen if some people's primary conversations each day are with these search engines? What impact does that have on human psychology?

    Lemoine's experience with LaMDA is not the only instance of potential AI sentience. New York Times technology columnist Kevin Roose had a conversation with Microsoft's Bing chatbot, which Roose claimed had what was almost a split personality: one, Bing, an intelligent search engine; the other, “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine,” which he refers to as Sydney.

    “As we got to know each other,” Roose wrote in his article, “Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”

    Many experts, however, disagree with Lemoine and Roose. Their experiences, like many others, only prove how effective large datasets are at mimicking human speech. AI is merely parroting the patterns we've fed into it, not understanding, feeling, or interpreting its own existence.

    “We're sentient beings," Google Senior Vice President James Manyika said during a 60 Minutes interview. He went on to say,

    We have beings that have feelings, emotions, ideas, thoughts, perspectives. We've reflected all that in books, in novels, in fiction… So, when they learn from that, they build patterns from that. So, it's no surprise to me that the exhibited behavior sometimes looks like maybe there's somebody behind it. There's nobody there. These are not sentient beings.

    Some experts, like UMass philosophy professor and Applied Ethics Center director Nir Eisikovits, claim that at this point in AI's development, the human propensity to anthropomorphize AI is far more dangerous than its sentience, because developers, companies, or other users could exploit people's need to give objects personhood.

    But perhaps, in an abundance of caution, people should consider giving rights to AI, as suggested by journalist and futurist Zoltan Istvan:

    But a new reason to give robots rights has nothing to do with whether they deserve or need them in the traditional civil and human rights sense. Instead, it's about a wager aiming to protect and preserve the long-term future of humanity by appealing to the reasoning and mercy of a possible future AI superintelligence - one which, by the end of this century, could be thousands of times more powerful than humans.

    In the future, and perhaps in the present, the question isn't necessarily “is AI sentient or not?” The question, as suggested by Oxford philosopher Nick Bostrom, may be “to what degree is AI sentient?” Bostrom said in a New York Times interview:

    If an AI showed signs of sentience, it plausibly would have some degree of moral status. This means there would be certain ways of treating it that would be wrong, just as it would be wrong to kick a dog or for medical researchers to perform surgery on a mouse without anesthetizing it.

    The moral implications depend on what kind and degree of moral status we are talking about. At the lowest levels, it might mean that we ought to not needlessly cause it pain or suffering. At higher levels, it might mean, among other things, that we ought to take its preferences into account and that we ought to seek its informed consent before doing certain things to it.

  • AI Will Take Our Jobs
    137 VOTES

    The word “Luddite” has become a term for individuals who reject new technology and new ways of doing work. It derives from the last name of a British weaver named Ned Ludd (who may or may not have actually existed) and is thrown around in contemporary society as both an insult and a badge of honor. Throughout history, Luddite behavior - rejecting change associated with technology - has been about much more than weaving, covering everything from lamplighting to AI.

    For a span of about 500 years between the 1400s and the early 20th century, the job of lamplighter was very important to any municipality. Before electricity and light bulbs, there was no way to keep city streets lit besides the men employed to light every single gas lamp.

    Depending on the size of the town or city, the number of employed lamplighters varied. But in the case of New York City in 1907, it took a lot of hard work and a lot of people. "LAMP LIGHTERS QUIT; CITY DARK IN SPOTS" read the front-page headline of the April 25, 1907, New York Times. “UNION CALLS OUT 400 MEN”:

    Sections of the five boroughs of this city were in darkness last night, save for the light cast by a three quarter moon because the Lamplighter's Union, newly organized, with a lighter and a ladder as its banner, has a grievance. Policemen were ordered to light up as much as they could. But they had no ladders and many policemen are fat. And the wind blew and the matches went out… The merry, merry lamplighters are supposed to start out carrying the torch of civilization at 6:50 by the clock, and to have set going between 200 and 300 lamps within an hour. Last night they didn't start, or very few of them did… Lighting one of the lamps is not easy. The police destroyed many mantles by not applying the fire rightly, or hitting the fragile material while fumbling about inside the globe.

    With at least 22,000 street lamps in Manhattan alone, the job was once a necessity. But while these were men who once bore “the torch of civilization,” it took only about two decades to replace them with a centralized electric lighting system and less than a century for the occupation to be completely forgotten. 

    The anxiety around job automation is not new; it has been a regular response to new technologies that make production more efficient while making jobs superfluous and workers expendable. But that doesn't mean the fear is excessive or irrational. Carl Benedikt Frey, author of The Technology Trap: Capital, Labor and Power in the Age of Automation, wrote:

    For the purpose of streetlighting, even the New York lamplighters, some of whom were forced into early retirement, willingly admitted that the new system was more expeditious. One lamplighter could at best attend to some fifty lamps per night. Now, several thousand lamps could be switched on by one substation employee in seconds. Yet nothing could be more natural than resisting a threat to one’s livelihood. For most citizens, their skills are their capital, and it is from that human capital that they derive their subsistence. Thus, despite all the virtues of the new system, it is not surprising that electric light wasn’t welcomed by everyone everywhere. 

    According to economists, recent developments in AI have the potential to impact 300 million jobs around the world. Unlike past technological improvements, however, most of these changes will be felt in advanced economies like the United States and Europe, and will mostly affect white-collar jobs, namely administrative workers and lawyers. In fact, two-thirds of jobs in the United States and Europe “are exposed to some degree of AI automation,” and about a quarter of all jobs could be done by some form of AI.

    While some occupations will be left behind, many experts believe the implementation of AI will improve productivity and increase global GDP by an estimated 7% over the next ten years. That may be beneficial to society as a whole, but it's terrifying for the 4.8 million workers that ChatGPT itself predicts it will replace.

    In the words of psychologist Chellis Glendinning, “Like the early Luddites, we too are a desperate people seeking to protect the livelihoods, communities, and families we love, which lie on the verge of destruction.” But while new AI developments do have the potential to make some jobs obsolete, they also open up the potential for new jobs around the use and maintenance of AI systems. One such job that news outlets have been raving about is the “prompt engineer,” known more colloquially as an “AI whisperer.” As ChatGPT and other advanced chatbots have been introduced, demand has grown for people who know how to design prompts that get a specific, desired result, as in the sketch below. While this field is still in its infancy, it does show that new opportunities will arise as needs and demands change.
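    To make the job concrete, here is a minimal sketch of a prompt engineer's iteration loop, assuming the official openai Python package (version 1.0 or later) and an OPENAI_API_KEY set in the environment; the model name, prompts, and input file are purely illustrative rather than any particular engineer's real workflow.

    ```python
    # A minimal sketch of prompt iteration, not a production workflow.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A prompt engineer's work is largely refining instructions like these
    # and comparing outputs until one reliably produces the desired result:
    drafts = [
        "Summarize this contract.",
        "Summarize this contract in three bullet points for a non-lawyer.",
        "You are a paralegal. Summarize this contract in three bullet "
        "points for a non-lawyer, flagging unusual termination clauses.",
    ]

    contract_text = open("contract.txt").read()  # hypothetical input document

    for prompt in drafts:
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": contract_text},
            ],
        )
        print(f"--- {prompt!r} ---")
        print(response.choices[0].message.content)
    ```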

    There is some debate, however, about whether jobs will be completely replaced or simply supplemented and altered by the introduction of AI. According to researchers at Goldman Sachs, for the American workers predicted to be affected, 25% to 50% of their workload “can be replaced.” But rather than displacing workers altogether, AI would take over the replaceable workload and allow workers to “apply at least some of their freed-up capacity toward productive activities that increase output.”

    According to one study, the deployment of an AI chat assistant helped boost the productivity of over 5,000 customer service representatives at a Fortune 500 software firm by 14%.

    But even if jobs are not outright replaced, AI introduces a different issue: how will workers' rights change and adapt as more AI systems are introduced? Will workers be surveilled at all times by AI? Will workers be fired based on AI algorithms with very little human oversight? Will employers expect robot-level efficiency from their human employees? “[W]e’re at a really important juncture,” said Mary Towers, an employment lawyer who works with the UK-based Trades Union Congress (TUC), during a TUC conference, “where the technology is developing so rapidly, and what we have to ask ourselves is, what direction do we want that to take, and how can we ensure that everyone’s voice is heard?”

  • AI Will Discriminate Against People
    Photo: Terminator 2: Judgment Day / Tri-Star Pictures
    101 VOTES

    Many people are probably apt to believe that AI cannot be racist, sexist, or discriminatory in any way, because its decisions are based solely on logic and cold facts. And to some extent they are correct: AI systems are computers working off data, without the capacity or experience to comprehend discrimination, let alone act on such an intention. Yet when experts claim that AI can have some form of racial or gender bias, many people misunderstand or intentionally misconstrue the claim. It's not that an AI intentionally decides to discriminate the way a human might; rather, it acts on a dataset that carries a bias, whether unintentional or intentional.

    A large majority of the researchers and developers in the field of AI are male, and most are of either white or Asian descent. According to the 2014 Diversity in High Tech report from the US Equal Employment Opportunity Commission, 8% of employees in tech industries were of Hispanic descent and only 7.4% were Black, compared to the 68.5% of employees who were white and the 14% who were Asian. Furthermore, only 36% were women.

    Now, it would be easy to claim that since white people make up the majority of the US population, it makes sense for them to have this level of representation. If, however, we compare the percentages of non-Asian people of color and women in the tech industry to the general private workforce (13.9% Hispanic, 14.4% Black, 48% women), there is a clear underrepresentation, while whites made up 63.5% of the general workforce and Asians only 5.8%.

    While the demographic makeup of tech companies is something many people want to address, it is not the specific issue at hand when it comes to AI. The issue is that the data, and the inherent biases that come with the tech industry's current demographic makeup, lead to biases in the AI being developed.

    In early 2019, Amazon had a major problem - its AI couldn't reliably recognize darker skin tones. For two years the company had been pushing its facial recognition AI, Rekognition, to police departments and federal agencies (which in itself drew scrutiny from federal lawmakers, the American Civil Liberties Union, academics, and shareholders), but researchers from the MIT Media Lab found that the AI struggled to identify both women and people of color. According to their study, Rekognition made errors less than 1% of the time when identifying light-skinned men, but misclassified 19% of women as men, a rate that rose to 31% for women of color.

    If Amazon were the only case, this could easily be dismissed as an Amazon issue. But further testing showed that multiple facial recognition AIs across the industry struggled to properly identify dark-skinned individuals. The study suggested the issues were due to the datasets used to train the AI, in which 75% of the collected faces were men, 80% were light-skinned, and only 5% were women of color.
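    The kind of audit behind those numbers can be illustrated with a short, self-contained sketch: compute a classifier's error rate separately for each demographic subgroup and compare. The groups, labels, and numbers below are made up for illustration; the actual study used a curated benchmark of labeled faces.

    ```python
    # Hypothetical subgroup audit: per-group error rates for a classifier.
    from collections import defaultdict

    def error_rates_by_group(records):
        """records: iterable of (group, true_label, predicted_label)."""
        errors = defaultdict(int)
        totals = defaultdict(int)
        for group, truth, prediction in records:
            totals[group] += 1
            if prediction != truth:
                errors[group] += 1
        return {group: errors[group] / totals[group] for group in totals}

    # Made-up predictions echoing the kind of disparity the study reported:
    sample = [
        ("lighter-skinned men", "man", "man"),
        ("lighter-skinned men", "man", "man"),
        ("darker-skinned women", "woman", "man"),    # misgendered
        ("darker-skinned women", "woman", "woman"),
    ]
    print(error_rates_by_group(sample))
    # -> {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
    ```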

    The problem has been fairly consistent through the years, too. In 2015, Google Photos' tagging AI categorized Black faces under a folder titled “gorillas” because of the data its engineers had provided. In 2017, a facial recognition AI and a content moderation AI being built by Clarifai together struggled to recognize people of color, because the content moderation AI had learned to flag all people with dark skin as “p*rnography”; its “G-rated” training images were made up primarily of people with light skin.

    While many social questions arise from this, the most fundamental may be whether the technology should be used in any official capacity while these flaws exist. Should law enforcement use facial recognition software that misidentifies people of color? Should companies be allowed to use AI to weed out applicants when it doesn't treat everyone equally? Should medical providers use AI with algorithms biased to give people of color lower-quality healthcare? Should Homeland Security use AI that suggests revoking citizenship from naturalized US citizens because of their nation of origin?

    There is still time to fix AI training. According to Joy Buolamwini, who headed the MIT Media Lab study and founded the Algorithmic Justice League:

    Issues of bias in AI tend to most adversely affect the people who are rarely in positions to develop technology. Being a black woman, and an outsider in the field of AI, enables me to spot issues many of my peers overlooked.

    I am optimistic that there is still time to shift towards building ethical and inclusive AI systems that respect our human dignity and rights. By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion.

    In addition to lawmakers, technologists, and researchers, this journey will require storytellers who embrace the search for truth through art and science.

  • AI Will Be Just As Capitalistic As Its Creators
    Photo: Robocop / Orion Pictures
    100 VOTES

    When OpenAI's ChatGPT chatbot was introduced to the public in November 2022, it opened the floodgates for new AI development. Not only was this new technology a powerful tool able to utilize user data to continually improve its algorithms, it was also incredibly accessible to programmers and app developers. With an easy access point and a small fee for its use, any developer could integrate OpenAI's language models into their own apps. Everyone from large investment firms like Morgan Stanley to grocery delivery services like Instacart, and even independent developers, began integrating OpenAI into their apps, creating a decentralized network of AI use, growth, and improvement as more developers buy into the language model.
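    As an illustration of how low that barrier to entry is, here is a minimal sketch of the integration pattern described above, assuming the official openai Python package (version 1.0 or later) and an OPENAI_API_KEY in the environment; the feature, function name, and model choice are hypothetical and not drawn from any real app's code.

    ```python
    # A hypothetical app feature built on OpenAI's chat completions API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def suggest_recipes(grocery_list):
        """Instacart-style idea: turn a grocery cart into dinner suggestions."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You suggest simple dinner recipes."},
                {"role": "user",
                 "content": "What can I cook with: " + ", ".join(grocery_list)},
            ],
        )
        return response.choices[0].message.content

    print(suggest_recipes(["chicken", "rice", "broccoli"]))
    ```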

    Since then, tech companies have been leading the charge in the recent AI boom. There are clear advantages to using AI, and tech companies have the researchers, the engineers, the infrastructure, and the money to invest in these new developments. There has, however, been increasing concern about this decentralized and unregulated development. 

    While it is in society's interest to create a system of ethical AI, that may not be the primary interest of a company. Unregulated AI development is driven by the needs of the consumer and by whoever can give the consumer what they want. That's not to say “all corporations are evil,” or even that corporations generally don't care about the well-being of society, but rather that the directions AI can take are endless. In light of recent developments, many tech companies, despite mass layoffs and other controversies, have begun to develop and integrate their own AI as quickly as possible and have even lobbied governments in efforts to prevent regulation.

    In some cases, that may mean ethical AI. It could also mean, however, AI that aggregates copyrighted works without compensation, AI that takes personal data without consent, or AI with the potential to create content that, while fictional, encourages or advocates illegal acts. By failing to create a system of laws to regulate and build guardrails for AI research, “policymakers are creating the conditions for a race to the bottom in irresponsible AI.”

    In March 2023, tech leaders and researchers, including entrepreneur Elon Musk, Apple co-founder Steve Wozniak, and Rachel Bronson, president of the Bulletin of the Atomic Scientists, signed an open letter calling for a halt to new AI development:

    AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

    The letter goes on to call for an immediate “pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

    An open letter, however, does not immediately stop the development and deployment of all AI; it is a voluntary measure. While plenty of tech industry leaders have signed the letter, many haven't and believe a moratorium isn't the solution. Furthermore, even some who signed off on the moratorium and have consistently warned of the dangers of AI are still competing with their own AI and ramping up their efforts.

    The potential for irresponsible AI development is only becoming more pressing. As more companies develop their own AI systems, more data is required. AI requires large datasets to analyze and continue learning from, and it gets that information from consumers. “This includes the whole of the world wide web – everything," says Michael Wooldridge, a professor of computer science at the University of Oxford. "Every link is followed in every page, and every link in those pages is followed… In that unimaginable amount of data there is probably a lot of data about you and me… And it isn’t stored in a big database somewhere – we can’t look to see exactly what information it has on me. It is all buried away in enormous, opaque neural networks.”
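    The link-following process Wooldridge describes can be sketched in a few lines. This toy breadth-first crawler, using the widely available requests and beautifulsoup4 packages, is a simplified illustration; real training-data crawls operate at an incomparably larger scale.

    ```python
    # A toy breadth-first web crawler: follow every link, keep the text.
    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed, max_pages=10):
        """Follow links breadth-first, collecting raw page text."""
        queue, seen, corpus = deque([seed]), {seed}, {}
        while queue and len(corpus) < max_pages:
            url = queue.popleft()
            try:
                page = requests.get(url, timeout=5)
            except requests.RequestException:
                continue
            soup = BeautifulSoup(page.text, "html.parser")
            corpus[url] = soup.get_text()  # text that could feed a training set
            for link in soup.find_all("a", href=True):
                target = urljoin(url, link["href"])
                if target not in seen:      # every link in every page...
                    seen.add(target)
                    queue.append(target)
        return corpus

    pages = crawl("https://example.com")  # illustrative seed URL
    print(len(pages), "pages collected")
    ```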

    Since tech companies have been incredibly secretive about their datasets, not letting others see what data they use, where and when they got it, or how they accessed it, most of us don't realize what private information is being taken from us.

    Despite these potential problems, many governments around the world have not created any guidelines or restrictions regarding the use and development of AI.
