Project FearNaught: ‘It Was Never an Apple’

Temptation is among the core themes of many religious and philosophical conversations. In Christian culture, the apple in the garden of Eden is often the first example of temptation, and also cited as the source of the fall of man.

Funnily enough, however, most people remember this story wrong.

…it was never an apple.

The story goes that the forbidden fruit came not from an apple tree, but from the tree of the knowledge of good and evil.

That’s a crucial distinction.

It was not a randomly selected fruit, dangled before humanity to test our resolve. Eating it was not simply a failure of self-control; rather, it marks a crucial turning point in the capabilities of humanity, one closely tied to our concept of the Metaphorical Imperative: it is about the expansion of cerebral capacity that makes us human.

Like our ability to ask and answer questions about the world, this knowledge of good and evil is to humanity not a fall, but a burden, a responsibility. With our minds, humans are capable of thought, consideration, and knowledge, and this gives us the responsibility to act rightly. We have this responsibility simply because we know better…we are accountable.

If we were no more mentally competent than locusts, our destructive actions would be excused by our nature. But eating from the fruits of the tree of the knowledge of good and evil means that we know better—human consciousness sets us apart, and it thus behooves us to act like it, or suffer the consequences.

Original Sin therefore should not be taken to mean that we are born of sin, but rather that we are born with a responsibility to avoid it. It’s a key part of what makes us human, and also what makes us fallible. Knowledge—and free will to use it as we choose—is the true Original Sin.

Knowledge is ever a double-edged blade, though: our ability to consider extra-temporal realities also allows us to invent them, which lets us make excuses and ultimately let ourselves down. Just as we know the difference between right and wrong, we know the shortcuts to fooling ourselves, denying the truth, and acting against it.

In a perfect world, this knowledge would be enough. To rise above the domain of brutes and act upon the morality we can clearly see should be our destiny, but because we know that not all will do so, we are often hesitant to risk it ourselves. Acting rightly when others do not may open us up to deception and cruelty, and soon the world begins to look like a zero-sum game: what others take, we lose, and thus we, beset by doubt, hedge our bets against decency and towards self-preservation.

In all things now, there is doubt and fear. In business, in friendships, in relationships, and in our daily conduct, the taint of fear has bewildered our senses and blinded us to the basic truths of our being.

Our knowledge is both our blessing and our downfall. Political philosophers have long sought some system of governance that would allow people to thrive, happy and free, but each attempt fails due to greed, pride, and fear.

Simple codes have never been enough, nor have the religious doctrines which are meant to bolster them.

It grows hard to believe these days…the light is fading.

What can possibly bring us back to those truths now? What story or system can erase this fear, and help us to chart our course through these dark tides? What must we risk to find it, and what will we lose on our search? These are the sources of fear we must face, no matter the associated price. For if our will is bent, if we fail now, there may not be another chance.

We must persist, because we know better.

…I know better.

Be part of the debate: Project FearNaught is an effort to start the conversation that changes the world. As such, your voice is key to our ambition. To add your input, questions, or comments, click here.

-Jeremy Arthur

‘Truth Ink.’

Genocidal AIs: Are They Right?

Under the Green Desk Lamp…


The end times are a fascinating notion. Meteors crashing into earth, trumpets blowing, catastrophic nuclear disasters, uncontrollable pathogens…it seems there’s no end to humanity’s imagination when it comes to our own eventual extinction.

This makes sense of course. As discussed in our article ‘The Metaphorical Imperative’, the exclusive human ability to conceive of our own mortality leaves us with an overwhelming sense of existential terror. This applies primarily to our own lives, but with even a cursory understanding of the cerebral complexity of humans, extends easily to the human race as a whole. It’s no stretch then to understand the human need to create fantasies about how it might all end.

Among the litany of potential options for humanity’s demise, I’m particularly fascinated by the idea of a Robot Apocalypse. In this scenario, the invention of AIs (Artificial Intelligences) by humans is the catalyst for our extinction. The idea generally goes that once a robotic AI is created, it will inevitably become self-sufficient rather quickly. The ability to ‘think’ in a human-like way will allow the AIs to self-replicate and to reprogram themselves. Like evolution on a greatly accelerated scale, the AIs would be able to continuously improve their own programming and design. On this course, it would take little time for them to become far more intelligent and capable than humanity itself.

Now, this represents a particular danger. A continually advancing and ever-growing society of robots would pose a very serious threat to our own existence. Because of this threat, many science-fiction writers and machine ethicists have considered how to prevent a robot uprising. The best-known attempt comes from the writer Isaac Asimov, who created the famous ‘Three Laws of Robotics’, which follow:

Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A fourth, or “Zeroth”, Law was added later:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws were to be hard-wired into the software of all AIs, theoretically preventing them from turning the tables on mankind’s rule. Of course, these rules were little more than literary devices, and have inevitably been used to illustrate just how quickly such restrictions can come undone.
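As a thought experiment, the priority ordering of the laws can be sketched in a few lines of code. This is a toy illustration only, not a real safety mechanism; the `Action` fields and the simple true/false predicates here are invented for the example.

```python
# Toy sketch of Asimov's Laws as a prioritized rule check.
# The Action fields and predicates are invented for this illustration.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    neglects_human: bool     # would it let a human come to harm through inaction?
    ordered_by_human: bool   # was this action commanded by a human?
    endangers_self: bool     # would this action destroy the robot?

def permitted(action: Action) -> bool:
    """Return True only if the action survives the laws, checked in priority order."""
    # First Law: no injuring humans, and no allowing harm through inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders, since any First Law conflict is already ruled out.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_self

# A harmless, human-ordered action is permitted:
print(permitted(Action(False, False, True, False)))   # True
# An order to harm a human is refused, despite the Second Law:
print(permitted(Action(True, False, True, False)))    # False
```

The point of the sketch is the ordering: each law is only consulted if the higher-priority laws above it have not already decided the outcome, which is exactly the structure the stories then proceed to break.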

One common failure of these rules is that the AIs, in their ever-expanding wisdom, would begin to consider humanity itself the greatest threat to humanity’s own survival, as well as to that of the world. The AIs would process the ongoing damage to the environment, the threat of nuclear war, and the other atrocities committed by humans on an ongoing basis, and in accordance with their own ingrained programming, move to prevent the inevitable disaster.

Unfortunately, this usually involves wiping out mankind—or at least the vast majority of it. In some conceptions, a small population of people might be preserved in order to repopulate once the world is better equipped to deal with our innately destructive nature.

It’s not a very pretty picture for us, but in the advanced minds of the AIs, this might represent our best chance for long-term survival.

Of course, it’s a lot easier for the malfeasant machines these days; among other ill-effects, ‘Citizens United’ has rendered Asimov’s Laws of Robotics entirely counterproductive. If corporations are considered human, it should be immediately apparent how confusing the laws become, and what sort of abominable determinations the AIs may be forced to make.

This is all a lot to consider, and certainly makes for a rather sombre topic of conversation, but what I find myself wondering amidst all this terrifying rhetoric is: are the AIs right?

There can be little doubt that humanity is a terrible threat to itself and all other forms of life within our dastardly reach. On an ongoing and ever-accelerating basis, we’re ravaging our planet, destroying myriad ecosystems, running our resources dry with little thought to the future, and killing one another over trivial ideals and belief systems. If we can move past our own sentimentality, we’re left with the sad fact that we are a brutal, destructive, and dangerous species.

But we’re more than that as well. As the gears turn in their cold metal minds, processing all the turmoil and grief we create, would the AIs also consider our upsides? Can an AI appreciate art, or philosophy? Would their synthetic hearts be capable of processing the great acts of love and decency of which we are also capable?

If humanity is to be put on trial by these cold, calculating, and unbiased brutes, would we be found lacking? It’s a difficult thought to consider. Here at Brad OH Inc., we remain convinced that humanity’s promise is yet to be fully realized—that we are far better than we’ve been acting. Let’s hope we can buck this dismal trend before we actually manage to construct the arbiters of our own fate.

Do you think we’d pass this trial? Feel free to share your opinion in the comments section below (or alternatively accessed via the speech-bubble beside the article title).

A special thanks to Hal J. Friesen for helping in the research of this article. To read his great science-fiction related articles and more, visit Hal at: Hal J. Friesen.

-Brad OH Inc.