The AI Apocalypse Will Be Different from What You Expect

How will the coming advances in AI integrate with your everyday life? Can the benefits outweigh the possible evils of an automated, intelligent society?

Positive AI

Defining AI

The term “artificial intelligence” was coined in 1956, and yet, despite growing scientific, media, and entertainment coverage of all things AI, public understanding of artificial intelligence remains shockingly underdeveloped. Don’t believe me? Ask your average friend for a definition of artificial intelligence. Most people can’t offer a meaningful one, and when they do, a few follow-up questions will lead the vast majority of them to fumble notably. Scientific and academic disagreement over the proper definition of artificial intelligence remains healthy, so the rest of us have little chance of conquering this linguistic conundrum.

Are thermostats a form of AI? Does voice recognition use artificial intelligence? Is a smartphone smarter than you, or is that a misnomer? Even AI programmers disagree about whether all of these things are artificially intelligent. To facilitate discussion, astute programmers identified four definitional subgroups for artificial intelligence (competing models use three to ten). It may be worth observing that the lowest-level AI programs within that four-category matrix were used to defeat our world-champion chess and Go players. Other types of low-level machine learning led you to find this article, scroll through your social media feed earlier today, keep spam out of your inbox, and determine the best route around traffic jams. It’s also used for ad targeting. Generally speaking, we’re happy about these forms of artificial intelligence, though smart ads occasionally make some of us crotchety.

The highest level of that subgroup matrix is what we commonly identify with Westworld, Ex Machina, The Terminator, and Minority Report. These problematic stereotypes feed our fears of an inevitable AI apocalypse, or at least a violent AI takeover that leaves Stalin-esque despots in control of the world. While that’s theoretically possible, so is a worldwide takeover by genetically modified primates. And while brainiacs warn us about the dangers of artificially intelligent machines wreaking havoc upon our planet, they typically don’t envision an AI apocalypse the same way Hollywood does. Let’s look at why a healthy respect for the dangers of AI is probably a socially responsible thing to have, while also investigating why that fear is misplaced, misunderstood, and … too little, too late.

AI Applications are Limitless – and Desirable

Most of the useful forms of artificial intelligence that we’re excited about stem from machine learning, where we feed bazillions of data points into a computer and then ask it to analyze whatever we compiled. Humans could never do this on their own – there are simply too many things to remember and correlate. Machine learning is already improving every area of our lives. AI regulates insulin for diabetics, takes over work once reserved for lawyers, and in Japan, it’s helping police capture criminals. KFC is planning to launch artificially intelligent robots to help take your orders. When you stand in line, one may say, “I’m guessing from your choices that you’d probably enjoy a side of coleslaw. We’re having a special on that today. Twenty-five percent off our large size. Shall I add that to your order?” Cooler than anticipating your culinary options, one machine learning algorithm figured out how to calculate your age with surprising accuracy by analyzing your blood (along with some 60,000 other blood samples). They’re cool like that. Artificially intelligent machines can save bazillions of man-hours poring over data, offering us centuries’ worth of research in a small fraction of the time.
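To make that blood-age idea concrete, here’s a minimal sketch of age prediction from blood markers. Everything below – the markers, the data, and the ridge-regression model – is my own synthetic stand-in for illustration, not the actual study’s method:

```python
# Minimal sketch: predicting age from blood-marker measurements.
# The features and data here are entirely synthetic; the real research
# used tens of thousands of actual blood samples and far richer models.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fake cohort: 5,000 people, 10 hypothetical blood markers each.
n_samples, n_markers = 5000, 10
true_weights = rng.normal(size=n_markers)
markers = rng.normal(size=(n_samples, n_markers))

# Pretend age is a noisy linear function of the markers.
age = 45 + (markers @ true_weights) * 5 + rng.normal(scale=4, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    markers, age, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
errors = np.abs(model.predict(X_test) - y_test)
print(f"Mean absolute error: {errors.mean():.1f} years")
```

The biology in the real study is far more sophisticated, but the pattern is the same: lots of samples in, a statistical model out, and a prediction no human could eyeball from the raw numbers.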

Although bound to statistical probability, AI is being used to serve us in innumerable ways. Machine learning can help predict a book’s ideal demographic and marketability, the best nutrition for your body composition, the life hacks most likely to lengthen your lifespan, methods of discouraging criminal activity, cost-efficient marketing strategies to grow your business, and stocks on the verge of exploding. Artificial intelligence is also getting better at identifying when we lie. Who wouldn’t like that collar on our politicians and lawyers? It may be only a few short years before that’s a standard feature on cell phones – but it’s coming. It’ll probably cost extra and come with a four-page liability disclaimer, but we’ll be able to install that power with the touch of a button.

Negative AI Applications are Already Available

As much as I’m hyped about the meaningful (and sometimes trivial) contributions of artificial intelligence, I’m keenly aware that Stephen Hawking’s and Elon Musk’s warnings about the dangers of AI were too late years before they were issued. We won’t be able to avoid irresponsible and despotic uses of artificial intelligence, but that may not matter. Every technology we humans have developed has been used for both good and evil. Don’t get me wrong: I acknowledge that artificially intelligent machines present a formidable threat to humanity if they’re developed with evil in mind. My point is that it’s already too late to hope we can stop that. The pressing questions are (1) will that threat be meaningfully different from the dangers of the past? and (2) will we be able to counter malevolent AI with benevolent AI? I’ll answer the first question below and the second in part 2.

Let’s consider the nuclear arms race as a small case study. That threat has been hanging over our heads since 1945. Nukes could destroy the world at the push of a few buttons any minute now, and yet few of us fret over that eventuality. After all, the Avengers, X-Men, and Stanislav Petrov have already taken care of that for us, right? In all seriousness, we’ve also had game-theoretic models available to chart the “proper” way to use nuclear warheads for world domination for more than a day or two. Shouldn’t we all be dead by now? Apparently not.

Whatever advantages game theory may have offered the Pentagon and other world powers, the risks remain too great to implement any aggressive nuclear strategy. Why? We humans value things too highly to risk their loss. If we knew with near-perfect statistical certainty that America could gain world domination at the risk of losing California and New York, would any politician advocate nuclear war for global dominance? Probably not. Nuclear strikes offer no possibility of success without massive sacrifice, so no country has been willing to stick its neck out and give it a try. It’s not that nuclear domination isn’t possible using AI-enhanced game theory. It probably is possible. It’s just not a statistically sound bet once all of the factors are weighed, because the potential losses are catastrophic and the environmental impact may be horrific even for the winner of the race. AI apocalypse theories won’t be meaningfully different.
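To illustrate the arithmetic behind that intuition, here’s a toy expected-utility calculation. Every number below is invented for the sake of the example; real strategic models weigh vastly more factors:

```python
# Toy expected-utility calculation for an aggressive first-strike strategy.
# All payoffs and probabilities are invented for illustration only.

p_success = 0.90          # assumed probability the strategy "works"
payoff_success = 100      # arbitrary units of gained dominance
payoff_failure = -10_000  # catastrophic loss: cities, environment, legitimacy

expected_utility = (p_success * payoff_success
                    + (1 - p_success) * payoff_failure)
print(f"Expected utility of striking:     {expected_utility:+.0f}")
print(f"Expected utility of not striking: {0:+.0f}")
# Even granting a 90% chance of success, the result is -910: deeply negative.
# The catastrophic downside swamps the upside, AI-optimized or not.
```

As long as the loss term stays catastrophic, no amount of AI-enhanced optimization flips the sign on that calculation – which is the whole point.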

More on that later.

Ed. Note: Look for Drew’s second part of this article coming soon!


8 thoughts on “The AI Apocalypse Will Be Different from What You Expect”

  1. Two things:
    1) I’m one of the people you described at the beginning of this article. Therefore, I found it very informative! I look forward to the next one.
    2) Just how many balls CAN you juggle?

  2. Great! I’m glad it was helpful. I think the second article will be more informative.

    If I warm up well, I can juggle eight balls on a good day. Otherwise, I can juggle seven balls. I was once Utah State Champion and competed internationally when I had quite a bad case of juggling addiction back in 2006.

  3. Also two things:
    1) You make the argument that it would be utterly insane for any country/government to try taking over the world with nuclear weapons. Yep, it would be. But I don’t think we can forget that some among us are not sane, at least not what most would consider sane. And in my opinion some sort of insanity is gaining a much larger foothold, in this country and in many others around the world. In fact, some of the “leaders”, whether elected or installed by dint of strong-arm tactics, seem to fall smack into that category.

    2) I can’t honestly comment about the pros and cons of new and improved technology – don’t know enough about it, and it mostly scares me. But I know it’s fascinating and captivating (I was dazzled and delighted when I saw the first “Star Wars” on the big screen with Sensurround or whatever they call it). I just think we need to be very careful.

    Anyhow I look forward to your follow-up article, and thank you!

    1. You’ll see more clearly in part 2 that it sounds as if you completely agree with me. My overall point is not that AI doesn’t represent any threat, nor that it won’t be used for ill by malevolent leaders. My point is that our most common ideas about how it will become dangerous are not accurate.

      Outside of my article, I also agree that some sort of insanity is gaining force in our country. I was an adjunct professor for a few brief years. That stint made me so disenchanted with our education system that I began homeschooling our children. Government-run education is inefficient and too strongly biased in its political training of our children and, in my opinion, has led to much of this insanity you speak of. Had our core structure of logic and philosophy remained in our schooling, people would be able to think more critically and would be less persuaded by the ubiquitous rhetoric we see all around us. Best!

  4. Given all we’ve done to our environment, do you think AI might serve to save us from ourselves? If the need to relocate earthlings to other planets becomes inevitable, can the use of AI to explore options be critical?

    1. That’s a really interesting question, and my initial knee-jerk reaction is “yes,” but probably not for the reasons you may expect. Haha – let me explain.

      I think we, as humans, have course-corrected throughout history, though we’ve been slow to do so. I think AI will help us identify more effective ways to course-correct and may flag environmental changes that need our attention before we would notice them on our own. I think AI could save us from ourselves for reasons I’ll address in the next article. Namely, I’d say that the dangers of AI will most likely be counteracted and/or prevented by AI – it all depends on who’s programming the AI and what we’re telling it to do. That may make more sense in the next article.

      I personally don’t believe we’ll ever have to relocate to another planet. However, if we do, I have no doubt that AI will play a crucial role in that endeavor. Its applications are endless, its growth can be exponential, and its presence in our future is all but guaranteed.

      Best!
