The term “artificial intelligence” was coined in 1956, and yet, despite growing scientific, media, and entertainment coverage of all things AI, public understanding of artificial intelligence remains shockingly underdeveloped. Don’t believe me? Ask your average friend for a definition of artificial intelligence. Most people can’t offer a meaningful one, and those who do will notably fumble after a few follow-up questions. Scientific and academic disagreement over the proper definition of artificial intelligence remains healthy, so the rest of us have little chance of conquering this linguistic conundrum.
Are thermostats a form of AI? Does voice recognition use artificial intelligence? Is a smartphone smarter than you, or is that a misnomer? Even AI programmers disagree about whether all of these things are artificially intelligent. To facilitate discussion, astute programmers have identified four definitional subgroups for artificial intelligence (competing models use three to ten). It’s worth observing that the lowest-level AI programs in that four-category matrix were the ones used to defeat our world-champion chess and Go players. Other types of low-level machine learning led you to find this article, scroll through your social media feed earlier today, keep spam out of your inbox, and determine the best route around traffic jams. It’s also used for ad targeting. Generally speaking, we’re happy about these forms of artificial intelligence, though smart ads occasionally make some of us crotchety.
The highest level of that subgroup matrix is what we commonly identify with Westworld, Ex Machina, the Terminator, and Minority Report. These problematic stereotypes feed our fears of an inevitable AI apocalypse, or at least a violent AI takeover that leaves Stalin-esque despots in control of the world. While that’s theoretically possible, so is a worldwide takeover by genetically modified primates. And while brainiacs warn us about the dangers of artificially intelligent machines wreaking havoc upon our planet, they typically don’t envision an AI apocalypse the way Hollywood does. Let’s look at why a healthy respect for the dangers of AI is probably a socially responsible thing to have, while also investigating why it’s misplaced, misunderstood, and … too little, too late.
AI Applications are Limitless – and Desirable
Most of the useful forms of artificial intelligence we’re excited about stem from machine learning, where we feed bazillions of data points into a computer and then ask it to analyze whatever we compiled. Humans could never do this on their own; there are simply too many things to remember and correlate. Machine learning is already improving every area of our lives. AI regulates insulin for diabetics, replaces lawyers, and, in Japan, helps police capture criminals. KFC is planning to launch artificially intelligent robots to help take your orders. When you stand in line, one may say, “I’m guessing from your choices that you’d probably enjoy a side of coleslaw. We’re having a special on that today. Twenty-five percent off our large size. Shall I add that to your order?” Cooler than anticipating your culinary options, one machine learning algorithm figured out how to calculate your age with surprising accuracy by analyzing your blood (along with 60,000 other samples). They’re cool like that. Artificially intelligent machines can save bazillions of man-hours poring over and analyzing data, offering us centuries’ worth of research in a small fraction of the time.
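The core idea above — hand a computer a pile of data points and let it find the pattern itself — can be sketched at toy scale. The snippet below fits a straight line to invented data by ordinary least squares; the numbers and the traffic scenario are hypothetical, and real machine-learning systems do this across millions of dimensions rather than one, but the principle is the same.

```python
# Minimal sketch of "learning from data": fit y = m*x + b by
# ordinary least squares. All data below is invented for illustration.

def fit_line(points):
    """Return slope m and intercept b minimizing squared error."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Hypothetical data: hour of commute vs. minutes of delay.
data = [(1, 12), (2, 19), (3, 31), (4, 42), (5, 48)]
m, b = fit_line(data)

# The "learned" pattern can now predict a case it never saw.
predicted_delay = m * 6 + b
```

Nobody hand-coded the relationship between hours and delays; the program extracted it from examples, which is exactly what lets the same trick scale to spam filtering and route planning when the examples number in the billions.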
Although bound to statistical probability, AI is being used to serve us in innumerable ways. Machine learning can help predict the best target demographic and likely marketability of a book, the best nutrition for your body composition, the life hacks most likely to lengthen your lifespan, methods of discouraging criminal activity, cost-efficient marketing strategies to grow your business, and stocks on the verge of exploding. Artificial intelligence is getting better at identifying when we lie, too. Who wouldn’t like that collar on our politicians and lawyers? It may be only a few short years before that’s a standard feature on cell phones… but it’s coming. It’ll probably cost extra and come with a four-page liability disclaimer, but we’ll be able to install that power with the touch of a button.
Negative AI Applications are Already Available
As hyped as I am over the meaningful (and sometimes trivial) contributions of artificial intelligence, I’m keenly aware that Stephen Hawking’s and Elon Musk’s warnings about the dangers of AI came years too late. We won’t be able to avoid irresponsible and despotic uses of artificial intelligence, but neither will that matter. Every technology we humans have developed has been used for both good and evil. Don’t get me wrong: I acknowledge that artificially intelligent machines present a monumental threat to humanity if they’re developed with evil in mind. My point is that it’s already too late to hope we can stop it. The pressing questions are (1) will that threat be meaningfully different from dangers of the past? and (2) will we be able to counter malevolent AI with benevolent AI? I’ll answer the first question below and the second in part 2.
Let’s consider the nuclear arms race as a small case study. That threat has been hanging over our heads since 1945. Nukes could destroy the world at the push of a few buttons any minute now, and yet few of us fret over that potential eventuality. After all, the Avengers, X-Men, and Stanislav Petrov have already taken care of that for us, right? In all seriousness, we’ve also had AI-enhanced game theory available to guide the “proper” use of nuclear warheads for world domination for more than a day or two. Shouldn’t we all be dead by now? Apparently not.
Whatever advantages game theory may have offered the Pentagon and other world powers, the risks remain too great to implement any aggressive nuclear strategy. Why? We humans value things too highly to risk their loss. If we knew with near-perfect statistical certainty that America could gain world domination at the risk of losing California and New York, would any politician advocate nuclear war to gain global dominance? Probably not. Nuclear strikes offer no possibility of success without massive sacrifice, so no country has been willing to stick its neck out and give it a try. It’s not that nuclear domination isn’t possible using AI-enhanced game theory. It probably is possible. It’s just not statistically probable given all the factors under consideration, because the potential losses are catastrophic and the environmental impact may be horrific even for the winner. AI apocalypse theories won’t be meaningfully different.
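The logic above is plain expected-value arithmetic: a gamble can have a high probability of success and still be a terrible bet if the downside is catastrophic. The numbers below are invented purely to illustrate that shape of reasoning; they model no real doctrine or probability estimate.

```python
# Toy expected-value sketch: even a 90% chance of "winning" is
# dominated by doing nothing when the 10% downside is catastrophic.
# All payoffs and probabilities here are hypothetical.

def expected_payoff(p_win, win_value, loss_value):
    """Expected value of a gamble that wins with probability p_win."""
    return p_win * win_value + (1 - p_win) * loss_value

# Hypothetical stakes: global dominance (+100) vs. losing
# major cities and suffering environmental fallout (-10,000).
aggressive = expected_payoff(p_win=0.9, win_value=100, loss_value=-10_000)
status_quo = 0.0  # take no action, keep what we have

# aggressive comes out deeply negative despite the 90% win rate,
# so a rational actor prefers the status quo.
```

That asymmetry, not any shortage of clever strategy, is why no rational actor pulls the trigger: when losses dwarf gains, even favorable odds don’t make the aggressive play worthwhile.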
More on that later.
Ed. Note: Look for Drew’s second part of this article coming soon!