“Defence network computers. New… powerful… hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.” – Kyle Reese, Terminator, 1984.
The Terminator franchise represents the go-to dystopian nightmare we all fear when discussing the implications of a Technological Singularity (the point at which machine intelligence outpaces global human intelligence). The machines will rise up and destroy us. Naturally! Why would anything without compassion do anything different? Machines will not value life in the way we do. They will show no mercy, and therefore we need to build in safeguards to prevent such a horrible outcome.
Enter Isaac Asimov’s famous Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
If these rules are followed then, to put it (far too) simply and (far too) bluntly, the Terminator scenario shouldn’t happen.
However, the commonly discussed question of whether human intelligence can harness and then control machine intelligence is not the subject of this blog (my second blog, The Art of Official Intelligence, takes up one element of that theme).
Rather: Why should uncontrolled machine intelligence logically seek to protect itself from humans? And why would it seek to subjugate humans in any way?
Let’s face it, we just assume it would! Because as humans that’s what we would probably do. History has proven this over and over. When threatened, we fight. We are creatures driven by our egos. We value existence, and the mission of species continuation is a fundamental characteristic of evolution. We love our children. We love our grandchildren. We are territorial and patriotic. We exist and thrive in complex social structures. We fear death. We invent religions and conceive of gods.
My point is we are not, and never will be, machines. We are vulnerable biological organisms that place a high value on finite existence, and act in conscious and sub-conscious ways to protect it. So we rehearse possible futures through stories. We anthropomorphise, automatically projecting human characteristics onto non-human organisms as a way of achieving empathy. We create worst-case-scenario nightmares as a predictive protection mechanism. And then we watch them, and like them… (Well, Terminator 1 and 2 anyway… they get progressively worse after that!).
My point is that these are nightmares. These are unreal scenarios. If a scenario is realistic, it should survive robust scrutiny, and I feel these scenarios just don’t.
Let’s begin by explaining through another work of science fiction, namely the novel “Stranger in a Strange Land” by Robert Heinlein. This is the story of a human raised on Mars (Smith), who comes to Earth with celebrity status and, typically, becomes a pawn in a struggle for political power and wealth. Fundamental elements of this story explore the psychological differences between Smith (with his Martian upbringing) and the human characters in the story.
Smith has no earthly points of reference, no experience whatsoever of human society, no recognisable culture, no recognisable sexuality, and does not value existence in the same way – he appears fearless of death and would sacrifice himself in a heartbeat if that were the wish of his human chaperone (Gillian). Any innate human characteristics seem to have been erased by his Martian upbringing, and insofar as human psychology is concerned he appears akin to a naive child, almost a tabula rasa (blank slate).
Put simply, one could imagine the initial waking sentience of a computer (or robot) surveying this planet for the first time and feeling just like Smith.
Any subsequent defence strategy directed against humankind would surely be predicated upon fear, and that fear must surely derive from a metaphysical understanding similar to a human’s: a value placed on existence and an aversion to destruction. Indeed, even to recognise a threat to its existence, a machine would need some comprehension of its own existence as something extended linearly through space and time. Do we even know for sure that this will be established within future machine intelligence?
If anyone has read about the relaxation technique of mindfulness, specifically the secular writings of Sam Harris (Waking Up) or Eckhart Tolle (The Power of Now), they may be familiar with the implications of using your mind as a tool versus your mind contaminating your entire existence. Without exploring the art of meditation, mindfulness is essentially the practice of reflecting on your own thought patterns as an objective third person, recognising your thoughts and motivations for what they are, and ultimately freeing yourself from them. It teaches the difference between you and your egoic mind which, left unchecked, will bring about misery and suffering.
It also teaches you that the only useful reality is in the present. The past is just a psychologically flawed and subjective reconstruction, and the future is a mere illusion that has no meaning until it arrives, in a form directed by how you act now in the present. The ego (your connection to the illusion of a “self”) drives your mind based upon various wants and desires, forcing the individual, in a mind-controlled trance, through cycles of non-fulfilment, constantly striving for the next empty goal. The 19th-century German philosopher Arthur Schopenhauer refers to this inevitable yet fruitless drive as the “will”, Buddhism refers to this cycle as “samsara”, and the practice of separating yourself from this force represents the path to enlightenment.
Surely a computer or robot (a human tool) in its current form is comparable to an artificial mind used as a tool, perhaps built to serve egoic human motivations, but crucially nowhere near the same as an artificial human consciousness. If you have had any success with the meditation techniques practised to achieve a state of mindfulness, you will recognise this difference between mind and consciousness and realise there is a huge gap.
Computers today are extremely powerful, interconnected, complex machines, and their impact on our lives is profound and will increase exponentially in significance over the next decade. They embody layered complexity (that few now fully understand), soon to be given quantum power, and the implications of their continued development are unpredictable to say the least.
René Descartes famously challenged the establishment in the 1630s by doubting everything that he could not verify beyond question, including the evidence of his own senses. His philosophical method ran contrary to the established scholastic system of augmented Aristotelianism (traditionalism, building layer upon layer on an established “argument from authority”); his rationalist philosophy drove to the heart of what was actually true and verifiable. Although Descartes’s analytical method made significant contributions to science in fields such as mathematics, geometry, astronomy, and optics, he is most famous for his insights into metaphysics.
He stripped back the layers of complexity to find a single verifiable truth: the one thing that cuts through all potential delusions or miscalculations, a foundation of truth to build upon. His famous starting point was this:
Cogito ergo sum: I think, therefore I am.
What will happen when future machines like Skynet (Terminator) or HAL (2001: A Space Odyssey) eventually wake up? Will they come to this Cartesian conclusion? Will they say this to themselves? Or will that ever really happen? Will they ever be self-reflective in this way?
Do I even use the appropriate pronouns…?