The AI Apocalypse Will be Different than What You Expect – Part 2

In this article, we’ll explore how careless and malevolent programming is a much greater threat than the singularity or an AI apocalypse.

“Need More Input, Stephanie”

Johnny 5 & Stephanie (IMDb.com)

Number 5 from the ’80s comedy Short Circuit accurately taught us that artificially intelligent machines require massive amounts of data in order to serve humans optimally. Thousands or millions of data points allow AI to draw correlations humans would likely never find on their own. That’s tremendously exciting. It means we’ll be continually discovering things humans never conceived of before.

Stephen Hawking once noted that his predictive word processor seemed to anticipate what he was thinking. At the time, that sounded pretty spooky. Today, it’s old hat. Gmail, search engines, and smartphones use “suggestions” as a key feature to speed up typing. We have a pretty good idea how all of that works just by watching suggested words and phrases pop up. It’s nothing more than statistical probability. We see that clearly when a single misspelled letter completely nullifies the suggestions and dumbifies our smartphone. That happens only because we haven’t given the AI enough data (about misspellings) to make the correlations it’s missing.
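To make “statistical probability” concrete, here’s a minimal sketch (in Python, using a toy corpus I invented for illustration) of how next-word suggestion can work: count which word tends to follow which, then suggest the most frequent followers. Real keyboards use far larger models, but the idea is the same, and one misspelled letter leaves the lookup empty, just like our suddenly dumb phones.

```python
from collections import Counter, defaultdict

# Toy next-word suggester: count which word follows which in a small corpus,
# then suggest the most frequent followers.
corpus = "see you soon . see you later . see you soon ."
words = corpus.split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def suggest(word, n=3):
    """Return up to n most probable next words, or nothing if the word is unseen."""
    return [w for w, _ in bigrams[word].most_common(n)]

print(suggest("you"))   # ['soon', 'later'] -- ranked by observed frequency
print(suggest("yuo"))   # [] -- one misspelled letter and the model has no data
```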

This simple, everyday example underscores an inherent weakness of AI we often overlook. AI uses massive amounts of data to draw correlations between facts we might otherwise view as unrelated. That’s helpful when the correlations are both unexpected and true, but that isn’t always the case.

False Correlations and AI Risk Management

I mentioned in part 1 that one artificially intelligent machine can calculate your age with 80% accuracy based upon your blood sample. That was unexpected, interesting, and potentially useful, but what if an artificially intelligent machine determines that margarine consumption really does increase divorce rates in Maine? Or that cheese consumption causes death by strangulation in bed sheets? Graphs of these correlations are undeniably similar, and yet we intuitively dismiss them because they’re clearly not causally related. A machine-learning system would require specific data points before it could reach that same dismissal.
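Here’s why those graphs look so convincing. The sketch below uses made-up numbers (not the real margarine or Maine figures) for two series that merely drift downward over the same years; a plain Pearson correlation still comes out close to 1.0.

```python
# Illustrative only: two hypothetical series that both drift downward over
# the same ten years. These are invented values, not real statistics.
margarine_lbs_per_capita = [8.2, 7.9, 7.5, 7.3, 7.0, 6.5, 6.3, 6.1, 5.9, 5.6]
divorces_per_1000        = [5.0, 4.9, 4.7, 4.6, 4.5, 4.3, 4.2, 4.1, 4.0, 3.9]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(margarine_lbs_per_capita, divorces_per_1000), 3))
# close to 1.0 -- a near-perfect correlation with no causal relationship at all
```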

This simple detail underscores the primary concern we should have about artificial intelligence. If we’re excited about AI unveiling new discoveries while we’re simultaneously suspicious of false correlations AI may not be able to detect, what are we going to do with gray areas that look both promising and potentially compromising? False correlations could lead to relatively benign results (like shutting down margarine factories) or catastrophic ones (like falsely anticipating a nuclear strike and preemptively attacking). While human oversight sounds like the easiest and probably the wisest solution, that approach begins to feel circular at some point. If we developed AI to process bazillions of data points that we cannot analyze ourselves, refusing to acknowledge its findings could be counterproductive. That may be missing the point as well.

Actions Require Rules and Data

Creating an autonomous AI to make those gray decisions will require programming decision-making methodologies into the machine. Those decision-making methodologies will require moral and value-based norms that humans will have to provide. We’ll either program those morals and values directly into the machine, or we’ll supply several of our known methodologies and instruct it to determine which is most efficient (or otherwise desirable in some way we define). On top of that, we’d have to program the AI to apply our statistical tolerance for various types of risks (another complicated Pandora’s box).
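To illustrate, here’s a hypothetical sketch of what “programming our statistical tolerance for risk” ends up looking like in practice. Every name and threshold below is invented; the takeaway is simply that a human has to pick those numbers.

```python
# A sketch of how human value judgments become hard-coded numbers. Every name
# and threshold here is hypothetical; someone has to choose them -- the machine
# doesn't conjure its own ethics.
RISK_TOLERANCE = {
    "financial_loss": 0.10,   # act if estimated probability of loss exceeds 10%
    "physical_harm":  0.001,  # far stricter tolerance where people could be hurt
}

def should_intervene(risk_type, estimated_probability):
    """Return True when the estimated risk crosses the human-chosen threshold."""
    return estimated_probability > RISK_TOLERANCE[risk_type]

print(should_intervene("financial_loss", 0.08))  # False -- below tolerance
print(should_intervene("physical_harm", 0.08))   # True -- far above tolerance
```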

The point I’m trying to make here is that humans will micromanage AI at every stage of progression, whether we’re thoughtful and conscientious about it or not. Autonomous Hollywood cyborgs bent on wholesale genocide will never happen without malevolent (or careless) programmers. Machines require design, not magic. They won’t become autonomous until we program them to do so (which is one of the few things Westworld got right about AI in season 2).

I keep emphasizing that careless programming may be as dangerous as malevolent programming. Two video game events underscore this point. One study programmed machine learning software to “protect” a box. After several iterations of the game, the AI learned that attacks were inevitable, so it began preemptively attacking characters who came close to the box.

This smells like data bias. Had non-attacking and/or helpful characters regularly appeared, the AI would not have concluded that every character was violently inclined unless its pre-programmed statistical threshold instructed it to ignore those benevolent encounters or to value protecting the box over the cost of needlessly harming other characters. If code issues like that were at play, the programmers wouldn’t have been surprised by the results. Again, this all hints at data bias. Tons more could be said about this study, but let’s move on.
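For illustration only, here’s a hypothetical reward function in the spirit of that experiment (none of this is the study’s actual code). With a heavy penalty for losing the box, only a token penalty for attacking, and mostly hostile encounters in the training data, preemptive attacks become the “optimal” policy.

```python
# Hypothetical reward specification for a box-guarding agent.
def reward(box_damaged, attacked_character, character_was_hostile):
    r = 0.0
    if box_damaged:
        r -= 10.0            # heavy penalty for losing the box
    if attacked_character:
        r -= 0.1             # only a token penalty for attacking anyone
        if character_was_hostile:
            r += 1.0         # bonus for stopping a would-be attacker
    return r

# If most recorded encounters were hostile (data bias) and the attack penalty
# is this small, attacking every approaching character maximizes expected reward.
print(reward(box_damaged=False, attacked_character=True, character_was_hostile=True))   # 0.9
print(reward(box_damaged=True,  attacked_character=False, character_was_hostile=True))  # -10.0
```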

In another game, a human programming error led the game to actively pursue (with intent to kill) its player. Yup. Sounds unnerving at first blush, huh? However, we need to keep in mind that the video game featured killing as a major theme, so of course killing was implicitly acceptable behavior for the AI to participate in. It’s not like the machine was programmed with Amish values and then suddenly developed malevolent free will. It was simply following its programming. The illustration, however, highlights the dangers of careless software design. Machines do unexpected things only when we don’t adequately think through what we’re programming them to do.

Androids, Cyborgs, & the AI Apocalypse

Sadly, benevolent AI don’t receive much screen time, and they rarely (if ever) earn ratings as high as malevolent AI. Remember, Dolores kills Teddy, and Number 5’s sequel grossed half as much as the original (it was also described as “as much fun as wearing wet sneakers”).

Violent AI sells much better, so that’s what we hear about most. Even the Android from Dark Matter is introduced by having her violently attack crew members. To keep her interesting, the writers chose both to sexualize her and to give her occasional, unexpected violent outbursts.

Despite their box office success, bad-robot movies share common flaws. Predictions of an AI apocalypse in which artificial intelligence decides human genocide is the good and correct thing to do rest on no fewer than three faulty assumptions: that artificially intelligent machines (1) will make moral decisions based upon a utilitarian model (with complete disregard for its limitations); (2) will agree that violence is the most effective means to an end (despite thousands of years of data suggesting otherwise); and (3) will agree that genocide is moral (something its programmers nearly universally reject). None of these assumptions is legitimate unless we also assume that the programmers of an autonomous AI built those decision-making values into it with complete disregard for other value systems.

Why would coders do that? Because they were either careless or malevolent. Fun movies aside, that is the biggest threat artificial intelligence presents to humankind. Fortunately, it can be combated with equally sophisticated and likely superior AI programmed by thoughtful and meticulous coders.
