AI’s 6 Worst-Case Scenarios

Hollywood’s worst-case scenario involving artificial intelligence (AI) is familiar from many a blockbuster sci-fi film: Machines acquire humanlike intelligence, achieve sentience, and inevitably become evil overlords who try to destroy the human race. This narrative exploits our innate fear of technology, a reflection of the profound change that often accompanies new technological developments.

But as Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it: “AI doesn’t have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem.”

“We are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.”
-Andrew Lohn, Georgetown University

In interviews with AI experts, IEEE Spectrum has uncovered six real-world AI worst-case scenarios that are far more mundane than those depicted in the movies. But they are no less dystopian. And most don’t require a malevolent dictator to bring them to full fruition. Rather, they could simply happen by default, unfolding organically – that is, if nothing is done to stop them. To prevent these worst-case scenarios, we must abandon our pop-culture notions of AI and get serious about its unintended consequences.

1. When Fiction Defines Our Reality…

Unnecessary tragedy may strike if we allow fiction to define our reality. But what choice is there when we cannot tell the difference between what is real and what is false in the digital world?

In a frightening scenario, the rise of deepfakes – fake images, video, audio, and text generated with advanced machine-learning tools – could one day lead national-security decision-makers to take real-world action based on false information, triggering a major crisis or, worse, a war.

Andrew Lohn, senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), says that “AI-enabled systems are now capable of generating disinformation at [large scales].” By producing greater volumes and a wider variety of fake messages, these systems can obscure their true nature and optimize for success, improving their desired impact over time.
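To make that feedback loop concrete, here is a minimal sketch in Python of how such optimization could work. Everything in it is invented for illustration – the variant texts, the hidden “appeal” scores standing in for audience response, and the epsilon-greedy selection rule – and it describes no real system:

    import random

    # Invented stand-ins: candidate message texts and a hidden "appeal" score
    # that simulates how an audience responds. A real campaign would measure
    # clicks, shares, or replies instead.
    variants = [f"message variant {i}" for i in range(10)]
    appeal = [random.random() for _ in variants]

    shown = [0] * len(variants)   # how often each variant was posted
    clicks = [0] * len(variants)  # observed engagement per variant
    EPSILON = 0.1                 # fraction of posts spent exploring

    def rate(j):
        return clicks[j] / (shown[j] or 1)

    for _ in range(10_000):
        if random.random() < EPSILON:
            i = random.randrange(len(variants))      # explore a random variant
        else:
            i = max(range(len(variants)), key=rate)  # exploit the best so far
        shown[i] += 1
        clicks[i] += random.random() < appeal[i]     # simulated engagement

    print("most engaging variant:", variants[max(range(len(variants)), key=rate)])

Run long enough, the loop converges on whichever false message spreads best – precisely the optimize-for-success dynamic Lohn describes.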

The mere existence of deepfakes in the midst of a crisis can also cause leaders to hesitate to act if the validity of information cannot be confirmed in a timely manner.

Marina Favaro, a research fellow at the Institute for Peace Research and Security Policy in Hamburg, Germany, notes that “deepfakes compromise our trust in information streams by default.” Both action and inaction caused by deepfakes have the potential to produce disastrous consequences for the world.

2. A Dangerous Race to the Bottom

When it comes to AI and national security, speed is both the point and the problem. Because AI-enabled systems confer greater speed advantages on their users, the first countries to develop military applications will gain a strategic edge. But what design principles might be sacrificed in the process?

Things could unravel from the tiniest flaw in the system and be exploited by hackers. Helen Toner, director of strategy at CSET, suggests that a crisis could “start off as an innocuous single point of failure that makes all communications go dark, causing people to panic and economic activity to come to a standstill. A persistent lack of information, followed by other miscalculations, might lead a situation to spiral out of control.”

Vincent Boulanin, senior researcher at the Stockholm International Peace Research Institute (SIPRI) in Sweden, warns that major catastrophes can occur “when major powers cut corners in order to win the advantage of getting there first. If one country prioritizes speed over safety, testing, or human oversight, it will be a dangerous race to the bottom.”

For example, national-security leaders may be tempted to delegate decisions of command and control, removing human oversight of machine-learning models that we don’t fully understand, in order to gain a speed advantage. In such a scenario, even an automated launch of missile-defense systems initiated without human authorization could produce unintended escalation and lead to nuclear war.

3. The End of Privacy and Free Will

With every digital action, we produce new data – emails, text messages, downloads, purchases, posts, selfies, and GPS locations. By allowing companies and governments unrestricted access to this data, we are handing over the tools of surveillance and control.

Add in facial recognition, biometrics, genomic data, and AI-enabled predictive analysis, and Lohn of CSET worries that “we are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.”

Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, warns “about the logic of AI and what it means for domestic repression. In the past, the ability of autocrats to repress their populations depended upon a large group of soldiers, some of whom might side with society and carry out a coup d’etat. AI could reduce these kinds of constraints.”

The power of data, once collected and analyzed, extends far beyond monitoring and surveillance to enable predictive control. Today, AI-enabled systems predict which products we will purchase, which entertainment we will watch, and which links we will click. When these platforms know us far better than we know ourselves, we may not notice the slow creep that robs us of our free will and subjects us to the control of external forces.
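The mechanics behind such predictions are ordinary supervised learning. Below is a minimal, hypothetical sketch using scikit-learn; the three behavioral features and the synthetic “clicked” labels are invented for this illustration, whereas a production system would draw on thousands of signals per user:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented features per browsing session: hour of day (scaled), past
    # clicks on this topic, and scroll depth. Real systems track far more.
    X = rng.random((1_000, 3))
    # Synthetic labels: users who often clicked this topic click it again.
    y = (X[:, 1] + 0.3 * rng.random(1_000) > 0.8).astype(int)

    model = LogisticRegression().fit(X, y)

    new_session = np.array([[0.5, 0.9, 0.7]])  # one hypothetical session
    print("predicted click probability:", model.predict_proba(new_session)[0, 1])

Scaled up across billions of sessions, models like this one decide what each of us sees next.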

[Illustration: a mock flowchart centered on a close-up of an eye, with a tangle of boxes and arrows ending in two squares reading “SYSTEM” and “END.” Credit: Mike McQuade]

4. A Human Skinner Box

Children’s ability to delay immediate gratification – to wait for the second marshmallow – was once considered a major predictor of success in life. Soon even the second-marshmallow kids will succumb to the tantalizing conditioning of engagement-based algorithms.

Social-media users have become rats in lab experiments, living in human Skinner boxes, glued to the screens of their smartphones, compelled to sacrifice ever more precious time and attention to platforms that profit from it at their expense.

Helen Toner of CSET says that “algorithms are optimized to keep users on the platform as long as possible.” By offering rewards in the form of likes, comments, and followers, Malcolm Murdock explains, “the algorithms short-circuit the way our brain works, making our next bit of engagement irresistible.”
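The conditioning mechanism at work here is the variable-ratio reward schedule from Skinner’s original experiments, and it is easy to simulate. In this invented sketch, the reward probability and the target number of likes are arbitrary numbers chosen only to show how many “lever presses” intermittent reinforcement can extract:

    import random

    REWARD_PROBABILITY = 0.15  # a like arrives unpredictably, not on every check

    def check_phone() -> bool:
        """One 'lever press': open the app and see whether a reward landed."""
        return random.random() < REWARD_PROBABILITY

    checks = 0
    rewards = 0
    while rewards < 20:        # keep checking until the craving is satisfied
        checks += 1
        rewards += check_phone()

    print(f"{checks} checks of the app to collect {rewards} likes")

On average the schedule extracts roughly 130 checks for those 20 likes, and because the rewards arrive unpredictably, the checking habit is notoriously hard to extinguish.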

To maximize advertising revenue, companies steal our attention away from our jobs, families and friends, responsibilities, and even our hobbies. To make matters worse, the content often leaves us feeling more miserable and worse off than before. Toner warns that “the more time we spend on these platforms, the less time we spend in the pursuit of positive, productive, and fulfilling lives.”

5. The Tyranny of AI Design

Every day, we turn over more of our daily lives to AI-enabled machines. This is problematic because, as Horowitz notes, “we have yet to fully wrap our heads around the problem of bias in AI. Even with the best intentions, the design of AI-enabled systems, including both the training data and the mathematical models, reflects the narrow experiences and interests of the biased people who program them. And we all have our biases.”

As a result, Lydia Kostopoulos, senior vice president of emerging tech insights at the Clearwater, Fla.-based IT security firm KnowBe4, argues that “many AI-enabled systems fail to take into account the different experiences and characteristics of different people.” Since AI solves problems based on biased perspectives and data rather than the unique needs of every individual, such systems produce a level of conformity that doesn’t exist in human society.
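A toy experiment makes the mechanism visible. In the hypothetical sketch below, a model is trained on pooled data in which one group vastly outnumbers another and the two groups have different true decision boundaries; the single learned rule fits the majority and fails the minority. All of the numbers are invented for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def make_group(n, cutoff):
        """Synthetic group whose correct decision boundary sits at `cutoff`."""
        x = rng.normal(loc=cutoff, scale=1.0, size=(n, 1))
        y = (x[:, 0] > cutoff).astype(int)
        return x, y

    x_a, y_a = make_group(950, cutoff=0.0)  # well-represented group
    x_b, y_b = make_group(50, cutoff=2.0)   # underrepresented group

    # One model, one threshold, trained on the pooled (imbalanced) data.
    model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.hstack([y_a, y_b]))

    for name, x, y in [("group A", x_a, y_a), ("group B", x_b, y_b)]:
        print(name, "accuracy:", round(model.score(x, y), 2))

Group A scores well while group B’s accuracy collapses toward chance – the conformity Kostopoulos describes, produced purely by who dominated the training set.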

Even before the emergence of AI, the design of common objects in our daily lives often catered to a particular type of person. For example, studies have shown that cars, hand-held tools including cellphones, and even the temperature settings in office environments were established to suit the average-size man, putting people of varying sizes and body types, including women, at a major disadvantage and sometimes at greater risk to their lives.

When individuals who fall outside of the biased norm are neglected, marginalized, and excluded, AI becomes a Kafkaesque gatekeeper, denying access to customer service, jobs, health care, and much more. AI design decisions can constrain people rather than liberate them from day-to-day concerns. And these choices can also transform some of the worst human prejudices into racist and sexist hiring and mortgage practices, as well as deeply flawed and biased sentencing outcomes.

6. Fear of AI Deprives Humanity of Its Benefits

Since today’s AI runs on data sets, advanced statistical models, and predictive algorithms, the process of building machine intelligence ultimately centers around mathematics. In that spirit, said Murdock, “linear algebra can do insanely powerful things if we’re not careful.” But what if people become so afraid of AI that governments regulate it in ways that rob humanity of AI’s many benefits? For example, DeepMind’s AlphaFold program achieved a major breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins. This milestone will provide a fruitful foundation for the rapid advancement of the life sciences.

Consider the benefits of improved communication and cross-cultural understanding made possible by seamlessly translating across any combination of human languages, or the use of AI-enabled systems to identify new treatments and cures for disease. Knee-jerk regulatory actions by governments to protect against AI’s worst-case scenarios could also backfire and have their own unintended negative consequences, in which we become so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world.

This article appears in the January 2022 print issue as “AI’s Real Worst-Case Scenarios.”
