Rhapsody

Under the Green Desk Lamp…

Civil discourse these days has become pretty uncommon. You’ll rarely hear a debate that doesn’t soon slip into name-calling and paranoid wailing.

It’s both sides.

Everyone is simply too afraid. Afraid of everything, yet somehow afraid of all the wrong things.

That fear is the problem, and it stunts any level of intelligent discourse by wheeling us into knee-jerk reactions and assumptions—making our conclusions for us. When angry and afraid, you go with what you know: Red or Blue.

That’s the thing about political thought, however: it never quite fits into a single definition. Try as they may, there is no binary option that can capture the nuance of human belief—of our values.

Values, now there’s a word that’s thrown around a lot in politics, yet never really utilized the way it should be. Values, after all, are what it really comes down to. The truth of it is, I strongly suspect that a measure of fundamental values would show a far less divided picture of humanity than a typical measure of political preferences.

Behind the rhetoric and uproar, there do remain basic rights and wrongs, and obvious decencies which I still believe the vast majority of people can agree upon. These are values which go beyond culture and language.

They are innate to us, and are denied only by the most wretched of deviants, or those desperate souls who by poverty or avarice have found themselves deprived entirely of their moral compass.

What would happen then, if people were to put aside their labels and colours—the brand names of political philosophy—and turn away from their hot-button issues to discuss instead the basic values they hold dear?

No loose terms like freedom here. Tell me what that really means.

What do you love?

What do you fear?

What do you hate?

Do you realize the last answer is most likely the twisted spawn of some unknowable combination of the former two?

Or that the second closely follows the first?

Really though. If the world at large could manage such civil debate for a while—I mean really keep it going, get deep, and avoid falling back into the ‘yeah but’ type thinking which somehow convinces us that the forces of reality must in the end overwhelm the deepest of truths—what might be the result?

And what would you have to say?

-Brad OH Inc.


Genocidal AIs: Are They Right?

Under the Green Desk Lamp…


The end times are a fascinating notion. Meteors crashing into earth, trumpets blowing, catastrophic nuclear disasters, uncontrollable pathogens…it seems there’s no end to humanity’s imagination when it comes to our own eventual extinction.

This makes sense of course. As discussed in our article ‘The Metaphorical Imperative’, the exclusive human ability to conceive of our own mortality leaves us with an overwhelming sense of existential terror. This applies primarily to our own lives, but with even a cursory understanding of the cerebral complexity of humans, extends easily to the human race as a whole. It’s no stretch then to understand the human need to create fantasies about how it might all end.

Among the litany of potential options for humanity’s demise, I’m particularly fascinated by the idea of a Robot-Apocalypse. In this scenario, the invention of AIs (Artificial Intelligences) by humans is the catalyst for our extinction. The idea generally goes that once a robotic AI is created, it will inevitably become self-sufficient rather quickly. The ability to ‘think’ in a human-like way will allow the AIs to self-replicate and self-program. Like evolution on a greatly accelerated scale, the AIs would be able to continuously improve their programming and design. Following this course, it would take little time for them to become far more intelligent and capable than humanity itself.

Now, this represents a particular danger. A continually advancing and ever-growing society of robots would represent a very serious threat to our own existence. Because of this threat, many science-fiction writers and machine-ethicists have considered how to prevent a robot uprising. The best-known attempt comes from the writer Isaac Asimov, who created the famous ‘Three Laws of Robotics’, which follow:

Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A fourth, or “Zeroth” Law was added later:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws were to be hard-wired into the software of all AIs, theoretically preventing them from turning the tables on mankind’s rule. Of course, these rules were little more than literary devices, and have inevitably been used to illustrate just how quickly such restrictions can come undone.
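To see how such a hard-wired hierarchy might look in practice, here’s a minimal, purely illustrative Python sketch of the laws as a priority-ordered filter on candidate actions. The `Action` model and every name in it are my own assumptions for the sake of example—nothing here comes from Asimov, and the whole point of his stories is that real-world judgments never reduce to clean booleans like these:

```python
# Toy sketch: Asimov's Three Laws as a strict priority-ordered filter.
# The Action model and its fields are illustrative assumptions only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    harms_human: bool      # would this injure a human, or allow harm by inaction?
    disobeys_order: bool   # would this violate an order given by a human?
    endangers_self: bool   # would this damage or destroy the robot?

def allowed(action: Action) -> bool:
    """Apply the laws strictly in order of precedence."""
    if action.harms_human:     # First Law: an absolute veto.
        return False
    if action.disobeys_order:  # Second Law: yields only to the First.
        return False
    if action.endangers_self:  # Third Law: yields to both laws above.
        return False
    return True

def choose(candidates: list[Action]) -> Optional[Action]:
    """Return the first candidate that passes every law, else do nothing."""
    for action in candidates:
        if allowed(action):
            return action
    return None
```

The precedence ordering is the crux: a lower law can never be invoked to override a higher one, which is exactly the structure the stories then proceed to break—most dramatically once a ‘humanity’-level Zeroth Law is allowed to outrank the protection of any individual human.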

One common failure of these rules is that the AIs, in their ever-expanding wisdom, would begin to consider humanity itself as the greatest threat to its own survival—as well as that of the world. The AIs would process the ongoing damage to the environment, the threat of nuclear war, and other atrocities committed by humans on an ongoing basis, and in accordance with their own ingrained programming, move to prevent inevitable disaster.

Unfortunately, this usually involves wiping out mankind—or at least the vast majority of it. In some conceptions, a small population of people might be preserved in order to repopulate once the world is better equipped to deal with our innately destructive nature.

It’s not a very pretty picture for us, but in the advanced minds of the AIs, this might represent our best chance for long-term survival.

Of course, it’s a lot easier for the malfeasant machines these days; among other ill-effects, ‘Citizens United’ has rendered Asimov’s Laws of Robotics entirely counterproductive. If corporations are considered human, it should be immediately apparent how confusing the laws become, and what sort of abominable determinations the AIs may be forced to make.

This is all a lot to consider, and certainly makes for a rather sombre topic of conversation, but what I find myself wondering amidst all this terrifying rhetoric is: are the AIs right?

There can be little doubt that humanity is a terrible threat to itself and all other forms of life within our dastardly reach. On an ongoing and ever-accelerating basis, we’re ravaging our planet, destroying myriad ecosystems, running our resources dry with little thought to the future, and killing one another over trivial ideals and belief systems. If we can move past our own sentimentality, we’re left with the sad fact that we are a brutal, destructive, and dangerous species.

But we’re more than that as well. As the gears turn in their cold metal minds, processing all the turmoil and grief we create, would the AIs also consider our upsides? Can an AI appreciate art, or philosophy? Would their synthetic hearts be capable of processing the great acts of love and decency of which we are also capable?

If humanity is to be put on trial by these cold, calculating, and unbiased brutes, would we be found lacking? It’s a difficult thought to consider. Here at Brad OH Inc., we remain convinced that humanity’s promise is yet to be fully realized—that we are far better than we’ve been acting. Let’s hope we can buck this dismal trend before we actually manage to construct the arbiters of our own fate.

Do you think we’d pass this trial? Feel free to share your opinion in the comments section below (or alternatively accessed via the speech-bubble beside the article title).

A special thanks to Hal J. Friesen for helping in the research of this article. To read his great science-fiction related articles and more, visit Hal at: Hal J. Friesen.

-Brad OH Inc.