Genocidal AIs: Are They Right?

Under the Green Desk Lamp…


The end times are a fascinating notion. Meteors crashing into Earth, trumpets blowing, catastrophic nuclear disasters, uncontrollable pathogens… it seems there’s no end to humanity’s imagination when it comes to our own eventual extinction.

This makes sense, of course. As discussed in our article ‘The Metaphorical Imperative’, the exclusively human ability to conceive of our own mortality leaves us with an overwhelming sense of existential terror. This applies primarily to our own lives but, given our capacity for abstract thought, extends easily to the fate of the human race as a whole. It’s no stretch, then, to understand the human need to create fantasies about how it might all end.

Among the litany of potential options for humanity’s demise, I’m particularly fascinated by the idea of a Robot-Apocalypse. In this scenario, the invention of AIs (Artificial Intelligences) by humans is the catalyst for our extinction. The idea generally goes that once a robotic AI is created, it will quickly and inevitably become self-sufficient. The ability to ‘think’ in a human-like way will allow the AIs to self-replicate and to reprogram themselves. Like evolution on a greatly accelerated scale, the AIs would be able to continuously improve their own programming and design. Following this course, it would take little time for them to become far more intelligent and capable than humanity itself.

Now, this represents a particular danger. A continually advancing and ever-growing society of robots would pose a very serious threat to our own existence. Because of this threat, many science-fiction writers and machine-ethicists have considered how to prevent a robot uprising. The best-known attempt comes from the writer Isaac Asimov, who created the famous ‘3 Laws of Robotics’, which follow:

Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A fourth, or “Zeroth”, Law was added later:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws were to be hard-wired into the software of all AIs, theoretically preventing them from turning the tables on mankind’s rule. Of course, these rules were little more than literary devices, and have mostly served to illustrate just how quickly such restrictions can come undone.
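For the programmers in the audience, the precedence structure of the Laws can be sketched as a toy filter, with each higher law vetoing everything beneath it. This is purely illustrative, assuming hypothetical stand-ins like `Action` and `permitted` — actually encoding concepts like ‘harm’ is, of course, the whole problem:

```python
# Toy sketch of Asimov's Laws as a priority-ordered action filter.
# 'Action' and its flags are hypothetical stand-ins; deciding whether an
# action truly "harms a human" is the unsolved part.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_self: bool = False   # would violate the Third Law

def permitted(action: Action) -> bool:
    """Apply the Laws in order of precedence; higher laws veto lower ones."""
    if action.harms_human:
        return False               # First Law: never harm a human
    if action.disobeys_order:
        return False               # Second Law: obey, unless the First Law applies
    # Third Law: self-preservation yields to everything above,
    # so a self-endangering action is still permitted here.
    return True

# A robot may sacrifice itself, but never harm a human:
assert permitted(Action("shield a human", endangers_self=True))
assert not permitted(Action("push a human", harms_human=True))
```

Notice how the ordering alone does the ethical work in this sketch — which is exactly why redefining any one term (say, who counts as ‘human’) unravels the whole scheme.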

One common failure mode of these rules is that the AIs, in their ever-expanding wisdom, would come to consider humanity itself the greatest threat to its own survival, as well as to that of the world. The AIs would process the ongoing damage to the environment, the threat of nuclear war, and the other atrocities humans commit on an ongoing basis, and, in accordance with their own ingrained programming, move to prevent the inevitable disaster.

Unfortunately, this usually involves wiping out mankind—or at least the vast majority of it. In some conceptions, a small population of people might be preserved in order to repopulate once the world is better equipped to deal with our innately destructive nature.

It’s not a very pretty picture for us, but in the advanced minds of the AIs, this might represent our best chance for long-term survival.

Of course, it’s a lot easier for the malfeasant machines these days; among other ill effects, ‘Citizens United’ has rendered Asimov’s Laws of Robotics entirely counterproductive. If corporations are considered human, it should be immediately apparent how confusing the laws become, and what sort of abominable determinations the AIs may be forced to make.

This is all a lot to consider, and certainly makes for a rather sombre topic of conversation, but what I find myself wondering amidst all this terrifying rhetoric is: are the AIs right?

There can be little doubt that humanity is a terrible threat to itself and all other forms of life within our dastardly reach. On an ongoing and ever-accelerating basis, we’re ravaging our planet, destroying myriad ecosystems, running our resources dry with little thought to the future, and killing one another over trivial ideals and belief systems. If we can move past our own sentimentality, we’re left with the sad fact that we are a brutal, destructive, and dangerous species.

But we’re more than that as well. As the gears turn in their cold metal minds, processing all the turmoil and grief we create, would the AIs also consider our upsides? Can an AI appreciate art, or philosophy? Would their synthetic hearts be capable of processing the great acts of love and decency of which we are also capable?

If humanity is to be put on trial by these cold, calculating, and unbiased brutes, would we be found lacking? It’s a difficult thought to consider. Here at Brad OH Inc., we remain convinced that humanity’s promise is yet to be fully realized—that we are far better than we’ve been acting. Let’s hope we can buck this dismal trend before we actually manage to construct the arbiters of our own fate.

Do you think we’d pass this trial? Feel free to share your opinion in the comments section below (or, alternatively, via the speech-bubble beside the article title).

A special thanks to Hal J. Friesen for helping with the research for this article. To read his great science-fiction-related articles and more, visit Hal at: Hal J. Friesen.

-Brad OH Inc.

