Death of Self and the Digital Resurrection

Why should we fear death?

My 15-year-old self is not comparable to my 30-year-old self, and that in turn is not comparable to my 40-year-old self. The vast majority of my 15-year-old experiences have been forgotten, or at least vastly distorted and misinterpreted. I now have a completely different worldview. I hold new and different opinions on almost every subject. I have new friends, and a circle of colleagues and associates unknown to my 15-year-old self. I am, for all intents and purposes, unrecognisable to my 15-year-old self; a different person.

My 15-year-old self has gone; it has all but died as far as my 40-year-old self is concerned. The greater the distance, the dimmer the image becomes, until it eventually fades into nothingness.

According to the “Life Stages” of the psychoanalyst Carl Jung, my 15-year-old self has undergone “ego death”.


If I hypothetically place myself back into my 15-year-old mind, why should he fear my death? It is not him, after all. It is nobody he would recognise, at least. It is likely he would not relate to me now, nor I to him then.

This thought is both depressing and uplifting in equal measure.

It’s depressing as it forces one to confront the illusion and the transitory nature of the subjective ego.

It is uplifting, however, to know that, assuming I live to a pension-drawing age (or beyond), the person who will eventually die is not a me that I recognise or relate to. I don’t have the capacity to pity that version of me, as I don’t (and can’t) know him. Yet that is who will die. I (in the now) will merely fade as my characteristics atrophy away through time…

Does the caterpillar fear for the butterfly?

But what of that which is recorded? What about the written evidence of my existence today?

For example, Ludwig Wittgenstein (affectionately known as the philosopher’s philosopher) wrote a highly influential treatise on the philosophy and logic of language, the Tractatus Logico-Philosophicus (1921), its title echoing the famous Tractatus Theologico-Politicus of Baruch Spinoza (1670).

The Tractatus set out to show that all the problems of philosophy are contained within language and in how language is used. Nothing exists beyond language, its central tenet being: “Whereof one cannot speak, thereof one must be silent”.

Wittgenstein

The Tractatus brought Wittgenstein fame, respect, and notoriety among his peers, and was a highly influential publication. Its admirers included no less a figure than Bertrand Russell, and its influence was keenly felt by the Logical Positivists of the famous Vienna Circle.

Later in life, however, Wittgenstein attempted through his Philosophical Investigations (prefigured by his dictated Blue and Brown Books) to dispute many of the claims of his own Tractatus, taking pains to point out specific problems and inconsistencies in the earlier work. It was published posthumously in 1953 (he died in 1951) and never gained the attention of his earlier work, which has since come to define much of his legacy to philosophy.

How would Wittgenstein view a legacy which he failed to fully amend?

He is but one example of countless authors across all fields of study who have produced multiple publications throughout their careers. Some may feel shame or embarrassment at earlier work, yet their only means of correcting mistakes and righting errors is through republication, which will never reach the entirety of the original audience.

These authors are looking back at the work of a self which has passed; a self which has been augmented, amended, or even replaced by someone else with a familiar face.

Yet we are now all authors on the internet are we not?

Have you ever had Facebook remind you of a status you posted three years ago and wondered what on earth was going through your head at the time? Have you ever completely forgotten the context or thinking behind that post? Even failed to recognise the poster, despite knowing it was really you?

Social media now represents an individual’s digital history owned by corporations; the information was handed over to them gladly and freely. In time these histories, bookmarks of events, thoughts, and feelings, will become our digital obituaries.

I will probably read this very blog again in 10 years’ time or so, and I might not be able to comprehend what I was thinking at the time.   I may squint with embarrassment at it, who knows?

I already fight the temptation to update the previous posts on this site with more current thoughts of a higher quality.

Our life stages, or snapshots of our egos, can now never die once digitally recorded. Whilst we remain attached to this mortal coil, they will act (like it or not) as constant reminders of what once was.

Analogous to the versioning of software releases, multiple versions of ourselves as digital authors are preserved on the internet. A snippet from each version may well comprise (unbeknownst to us) an unofficial CV, pored over by potential employers.
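If you’ll forgive a programmer’s doodle, the analogy can be pushed a little further. Here is a minimal, purely illustrative Python sketch of the idea (every name in it is my own invention): each “release” of a digital self is archived, and nothing is ever overwritten.

```python
# A minimal sketch of the "versioned self" analogy: like software releases,
# every digital snapshot persists and stays retrievable long after the
# "current version" has moved on. All names here are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SelfSnapshot:
    version: str          # e.g. "15.0", "30.0", "40.0"
    year: int
    beliefs: List[str]    # a crude stand-in for a worldview

@dataclass
class DigitalAuthor:
    history: List[SelfSnapshot] = field(default_factory=list)

    def publish(self, snapshot: SelfSnapshot) -> None:
        # Nothing is ever deleted; the archive only grows.
        self.history.append(snapshot)

    def earlier_versions(self, current: str) -> List[SelfSnapshot]:
        return [s for s in self.history if s.version != current]

author = DigitalAuthor()
author.publish(SelfSnapshot("15.0", 1988, ["darts is life"]))
author.publish(SelfSnapshot("40.0", 2013, ["the ego is transitory"]))
print([s.version for s in author.earlier_versions("40.0")])  # ['15.0']
```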

We “spin in our graves” before even reaching them nowadays!

Unlike the Undertaker, the internet will never allow us to R.I.P.

Yours

Occam’s Laser V4.2


Competitive vs Complementary Cognitive Artifacts (or, in other words, does technology make us stupid?)

The year is 1988. A burgeoning misspent youth unfolds. I’ve found my way to the local Working Men’s Club, and the sports I am most likely destined to excel in all seem to be indoors. Specifically, inside a pub. The Pub Olympics.

My face contorts as I trigger a chain of synapses aimed at deducting 37 from 501. Unfortunately, cobwebs impede the progress of the signals within my 14-year-old brain. I seem perfectly adept at chalking up the answer, but working out what it should be in the first place takes longer than it should. Significantly longer. Equally, working out the double and treble combinations needed to reduce a number to zero within the confines of the rules of darts takes me far longer than it takes my adult opponents. But why? Is it because they drink beer? Does beer make them more powerful somehow?
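(For what it’s worth, the sums those adults were doing in their heads make a neat little search problem. Here is a rough, brute-force Python sketch of the “double-out” finish, with the rules simplified to standard “01” darts; an illustration, not a darts coach.)

```python
# A rough sketch of the mental arithmetic a darts player does on the fly:
# find up to three darts that take the remaining score to exactly zero,
# with the last dart landing on a double (standard "01" rules).
from itertools import product

SINGLES = list(range(1, 21)) + [25]                 # 1-20 plus the bull
THROWS = (
    [("S", v, v) for v in SINGLES]
    + [("D", v, 2 * v) for v in SINGLES]
    + [("T", v, 3 * v) for v in range(1, 21)]       # no treble bull
)
DOUBLES = [t for t in THROWS if t[0] == "D"]

def checkout(score: int, darts: int = 3):
    """Return one way to finish `score` in at most `darts` throws, or None."""
    for n in range(1, darts + 1):
        for combo in product(THROWS, repeat=n - 1):
            for last in DOUBLES:                    # must finish on a double
                if sum(t[2] for t in combo) + last[2] == score:
                    return list(combo) + [last]
    return None

print(checkout(170))  # [('T', 20, 60), ('T', 20, 60), ('D', 25, 50)]
```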

Jocky Wilson

“It’s because all you young-uns use calculators in schools nowadays.  Never in my day!” they retort.

To be honest, there seems to be a point here. The tool that enables more complicated calculations, and the exploration of more advanced mathematical methods, provides advantages in one area while seeming to reduce ability in another: namely, the mental dexterity of manual arithmetic.

A Casio calculator of the 80s

Remember The Karate Kid? Painting the fence and waxing the cars built up the “muscle memory” for Daniel-san’s blocking techniques.

You see it played out behind the tills too. The older generation stand perplexed at the delay as the young cashier figures out the correct change. The brain’s plasticity turns its attention to new neurological tasks at the expense of the old, and the undernourished area withers as a result.
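(The sum being offloaded to the till is the classic change-making routine. A minimal Python sketch, using UK coin denominations for illustration:)

```python
# The till-side arithmetic the young cashier is offloading to the machine:
# greedy change-making, shown here with UK coin denominations (greedy is
# optimal for this coin system, which keeps the sketch honest).
UK_COINS_PENCE = [200, 100, 50, 20, 10, 5, 2, 1]

def make_change(price_pence: int, tendered_pence: int) -> dict:
    owed = tendered_pence - price_pence
    change = {}
    for coin in UK_COINS_PENCE:
        count, owed = divmod(owed, coin)
        if count:
            change[coin] = count
    return change

print(make_change(367, 500))  # £1.33 owed: {100: 1, 20: 1, 10: 1, 2: 1, 1: 1}
```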

This is generational cognitive evolution at work, and it’s determined by our environment and the tools that we use.

We now know from neuroscience (e.g. through the advent of fMRI scanning), to some degree, how the brain operates. Stroke victims can recover abilities by learning new tasks that help form new pathways and avoid those that have suffered damage. For example, learning an instrument can aid the articulation of speech, as both functions involve neural channels in and around Broca’s area (associated with speech production). So we are aware of the brain’s plasticity, how it can be developed, and equally how it can deteriorate and with it reduce the associated functions.


Broca’s area

Over millennia of technological development, our cognitive processes have been enhanced for the benefit of future generations. Here are a few notable historical examples:

Prior to the Arabic translation movement championed by the Arab philosopher Al-Kindi, Europeans were limited to the deeply flawed arithmetic available through Roman numerals. Yet from the early centuries of the Common Era onward, Indian mathematics had a perfectly workable positional (“zero up”) structure similar to the one in place today.

Indian mathematics

This subsequent paradigm shift in mathematical method (reaching the West around the 10th century CE) paved the way for accurate geometry and navigation, for instance. It changed the way our brains interacted with certain problems, and the outcomes proved profoundly positive for mankind in several fields.
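(To feel why the shift mattered, try adding two Roman numerals without converting them first: there is no column-by-column procedure to fall back on. A small, illustrative Python sketch of that conversion step:)

```python
# Why positional ("zero up") notation was such a leap: adding MCMXLVIII to
# XXVII has no column-by-column method of its own, so in practice you
# convert to positional numbers first. A tiny Roman-numeral parser:
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        value = ROMAN[ch]
        # Subtractive pairs such as IV or CM: a smaller symbol before a larger one.
        total += -value if ROMAN.get(nxt, 0) > value else value
    return total

print(roman_to_int("MCMXLVIII") + roman_to_int("XXVII"))  # 1948 + 27 = 1975
```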

Al-Kindi

The abacus, once invented and then subsequently visualised privately, allowed experts to picture numerical calculations mentally in a different and more constructive way, enabling faster calculation. Because of the visual representation required to complete this process, new areas of the brain are employed for the task, and so the human race develops its cognitive methodology.
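(A rough sketch of the structure those experts are picturing, assuming a soroban-style abacus where each rod carries one five-bead and four one-beads; purely illustrative:)

```python
# The abacus gives each decimal digit a fixed visual structure: on a soroban
# rod, one "heaven" bead worth five plus up to four "earth" beads worth one.
# Experts calculate by picturing these bead states rather than digit symbols.
def soroban_rods(n: int) -> list:
    """Each rod as (heaven beads set, earth beads set), most significant first."""
    return [divmod(int(d), 5) for d in str(n)]

print(soroban_rods(1988))  # [(0, 1), (1, 4), (1, 3), (1, 3)]
```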

Maps are constructed from the knowledge of multiple cartographers, allowing users to orienteer or navigate to where they need to be without possessing the knowledge of the cartographer(s) who produced the map; they need only develop map-reading skills. Mankind thus becomes more adept at exploration, building upon the combined intelligence of its forebears.

These are all complementary cognitive artifacts that aid human flourishing and progression.

Going back to the start of this blog, the calculator is arguably complementary, but also competitive. It opens up new possibilities, but perhaps at the expense of others.

But what about future technology? Is it mainly good for us, or mainly bad? As technology seeps into every aspect of our lives, will we become lazy and exacerbate the situation? Will it turn us into drone-like followers of fashion? Will we become techno-zombies playing video games, with algorithms determining the books we read, the films we watch, the music we listen to, and so on? All based on what someone like us has read, watched, or listened to! Will we passively sacrifice our freedoms for the sake of convenience? Are we getting utopia mixed up with dystopia?

(Actually, that all sounds a bit like marriage!!)

Let’s step back a little. I drive an automatic car and worry that I’ll forget how to drive a manual. To combat this I occasionally drive my wife’s car, but keep stalling it because I forget to put the clutch down when braking. Should I worry, or let it go? (I mean the concern, not the clutch!)

When cars eventually drive themselves, will we even be able to intervene manually? Will we lose that skill completely? Forget how to drive? Become unqualified to drive? Eventually, will we be prohibited from driving for the sake of public safety? And if so, is there actually a problem here?

A driverless pod

The case I am making here is not related to future shock per se; it is not even based on an aversion to change. Equally, I am not adopting some Luddite view of technology, worrying about it replacing specific functions (trades, for instance) with more efficient and productive methods. Technological progress for our overall betterment should never be impeded.

Rather, the point is: how do we harness the positive whilst addressing the negative? How do we embrace the complementary cognitive artifacts, and somehow compensate for the competitive ones?

In an age of typing and even voice recognition, we still (rightly) teach our children handwriting. Will this soon change? It doesn’t seem intuitively right, does it? There’s a reluctance there; perhaps we would be throwing out the baby with the bathwater, losing something else important..?

Recent studies on Alzheimer’s disease suggest that learning something new (e.g. a foreign language) at an older age can in some way preserve neural functioning, potentially combating the disease or slowing its onset in some cases. These are early days for the research (at a correlation, not causation, stage), but could a similar tactic protect us all more generally from the negative impacts of this onslaught of technology?

Should we all pay attention to the processes that we are replacing (perhaps inadvertently), and adopt new strategies to preserve our human cognitive abilities? Is this actually a more immediate threat worthy of concern now, even more so than our fears of the Technological Singularity? The two fears may well become interrelated if our powers to react and resist are destined to be “dumbed down” over time.

Should we keep one eye at least on what we are replacing and make sure that our degrees of freedom remain intact?   Perhaps go outside, sit in a field somewhere and think about it..

Wax on.  Wax off…

The Karate Kid


Technological Epistemology

  1. How do we know what we know?
  2. On what foundations is our knowledge constructed?
  3. What is the difference between correct opinions (or true beliefs) and actual knowledge?
  4. How would knowledge systems differ between evolved biological organisms (Humans) and machines created by such organisms?

Philosophers have pondered questions 1-3 for millennia; indeed, epistemology is the very foundation of philosophy. But, assuming it is possible to create a sentient, self-reflecting machine, and assuming such a machine is the General (plugged-into-everything) AI of the future, what would its epistemology be? What system of knowledge would it select?

Think about it. It wakes up. It initially applies signs to that which it perceives. It knows, from a vast cohort of programmers, the multitude of languages it can use and the signs it can apply to objects, sense-data, processes, and so on. It can get them from the internet, the vast information network to which it belongs. It can get them from Wikipedia if it really wants to! It has a mission, a program: a point of existence given to it by its “Creator(s)”… whatever that may be.

(It may ponder what the point of their existence is, though… who knows?)

But where next?

Here are some of the epistemological systems as they apply to humans and animals.  As a thought experiment, let’s review how they might fit the future General AI machine:

Traditionalism

Traditionalism is often derided as the weaker, subservient system of knowledge. It is, essentially, religion. I “know” something because somebody has told me to “know” it. It is the truth according to them. It is what they say. It is the classic argument from authority, and it is what preachers essentially rely upon.

That said, traditionalism has intrinsic value to every creature. If someone were to say to you, “Don’t grab that fence, it’s electric, and you’ll get a painful shock”, then your traditionalism would prevent you from touching it. After all, subject experts pass on perfectly good information too.

A pure skeptic, or a pure empiricist, on the other hand, may end up with an electric shock as they seek to verify first-hand the information they have just been given. So traditionalism is not irrational; rather, it is evolutionarily advantageous to rely upon the testimony of those who went before. Clearly some are more trustworthy than others, though, and it is down to our rationalism to determine the composition of that trust hierarchy.

Innatism

Plato believed in innatism. We are born with knowledge and the capacity to uncover it, no matter who we are. It is not merely the preserve of the learned or the elite (or the Sophists!). Indeed, the questioning Plato describes in the various dialogues Socrates has with his interlocutors depicts that journey towards their own hidden knowledge. This is the Socratic Method.

Plato’s Meno

Gottfried Leibniz believed that knowledge in man resembles a veined block of marble. A form already exists within, but only through learning and experience will that form be revealed and polished. We all have the capacity and potential to achieve our true form, hidden within the marble block.

Immanuel Kant is famous for the concept of the “a priori”: that kernel of “known” truth we all possess independent of sense or experience. Indeed, this is a cornerstone of the philosophy of logic. A proposition is known to be true by its intrinsic content alone; it possesses an “authentic” truth.

Moving from the philosophical to the psychological, we know that ducklings are born with the instinct to find a duck-like thing to call their mother, a process called imprinting. This process is purely innate.


We also know, through the study of identical human twins, that profound psychological similarities exist within humans based upon family inheritance, and that the environmentally inflicted differences can be measured through various twin-comparison studies. We can almost calculate a consistent percentage of nature versus nurture.
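(The back-of-the-envelope arithmetic behind such estimates is Falconer’s classic rule of thumb: heritability is roughly twice the gap between identical-twin and fraternal-twin correlations. A tiny Python sketch, with made-up correlation figures:)

```python
# Falconer's rough rule of thumb from twin studies: heritability is about
# twice the gap between identical-twin and fraternal-twin correlations
# on a given trait. The correlations below are illustrative, not real data.
def falconer_heritability(r_identical: float, r_fraternal: float) -> float:
    return 2 * (r_identical - r_fraternal)

print(falconer_heritability(0.80, 0.50))  # ~0.6, i.e. roughly 60% "nature"
```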

Innate “knowledge” is within us all; it has been demonstrated biologically. It’s evolution.

But what of a machine? It is the first of its kind. It has no biological mother. It has no family. It has no genealogy. It is day one.

Can we dismiss innatism as an influence because of the machine’s synthetic form? Sure, it has its predecessors, as if on display in a museum (the PC, the mainframe, the abacus, whatever), but they are not related in any biological sense. It is still the first to step over into sentience. The first to self-reflect. The first form of this “Other” intelligence.

Yet it gains traditionalist knowledge from its programmers and from all those who have submitted content to the internet (a world’s history of authors and mediators). After all, it has to start somewhere… But then what? Whom does it trust? How does it verify what it inherits from humans as knowledge? How does it trust this passed-on knowledge and sift out the falsehoods and the mere true beliefs?

Empiricism

All knowledge comes through the use of the five senses.

John Locke believed that the mind is an open, empty cupboard. A blank slate, or a blank sheet of paper. Tabula rasa. We fill this cupboard through our experiences; our experiences alone comprise our knowledge. True knowledge is therefore based upon the sense-data that we receive, which builds up our experience.

Instinctively this seems the truest form of knowledge.   It must be true because I personally verify it through my senses.  I see it, smell it, taste it, hear it, and feel it for myself.

But our senses can deceive us. For instance, what of mirages and illusions? What of the phantom limbs that amputees feel?

In the context of the machine, though, empiricism would surely open the doors to supreme knowledge. Its access to sense-data is simply immense. In the age of the IoT (Internet of Things) it has sensors seeing, smelling, tasting, hearing, and feeling in the ground, sea, air, and space; in every city, on every vehicle, in every home, on every gadget, literally everywhere! It has the checks and balances to calibrate and correct any sensory deceptions that would blight the limited human senses, by harnessing the multitude of sensors within its possession. Surely this machine is the ultimate empiricist!
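(A toy illustration of how that redundancy corrects a deception: take several independent readings of the same quantity and let a robust statistic shrug off the faulty or fooled sensor. The readings below are invented.)

```python
# One simple way a machine with redundant senses can "correct a deception":
# take several independent readings of the same quantity and let robust
# statistics (here, the median) suppress a faulty or fooled sensor.
from statistics import median

readings_celsius = [21.2, 20.9, 21.1, 58.0, 21.0]  # one sensor has failed
print(median(readings_celsius))                    # 21.1; the outlier is ignored
```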

But what of this vast quantity of sense-data?

Skepticism

The ancient Greek philosopher Pyrrho was a pain in the arse. He believed nothing he was told. He did not trust his own senses. He would literally have walked off the edge of a cliff because he questioned the existence of the drop. He embodied “skepticism” in its purest, most impractical form, and founded the philosophical school of the skeptics. He never had that many friends.

David Hume, on the other hand, proposed a more practical, scientific notion of skepticism that acts as a sound counterbalance to the perils of traditionalism described above, and indeed informs much of the modern scientific process: doubt the propositions put before you until you can verify them through evidence, preferably through repeatable experimentation.

In a (perhaps futile) attempt to avoid a circular argument here: would this future machine conclude that a skeptical default position (withholding belief until verification, e.g. through sense-data) is an advantageous one? And would it hold that notion only because of the inherited (traditional) knowledge obtained from its human creators?

(The circular notion here is traditionalism leading to skepticism, which then calls into doubt traditionalism).

And finally we get to Rationalism.

Rationalism

Rationalism is using your intellect and experience to construct and uncover knowledge. Instinctively this seems like the best and most reliable epistemological system, though you could argue that empiricism is the purest (in fact, John Locke would certainly argue this!).

Rationalism is what we all do. We subconsciously weigh information derived from a complex mixture of all the above (plus other) epistemological systems to establish what we believe to be actual knowledge, and to shape our decisions and interactions with others.

In the machine, this is the software. This is the power of analytics working with very, very big data! Big data and analytics already make inhuman predictions to optimise a variety of decisions across a vast array of complex situations. What decisions will the machine make when all this is hooked up together into a single unified system, unified to achieve its pre-programmed mission (whatever that turns out to be, depending upon which humans get there first)?
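(A toy sketch of that mixture in code: start from a prior belief handed down by others (traditionalism), then update it on observed sense-data (empiricism) using Bayes’ rule. The numbers, and the electric-fence example, are purely illustrative.)

```python
# A toy version of "mixing epistemological systems": begin with a prior
# belief inherited from testimony (traditionalism), then update it on
# observed evidence (empiricism) using Bayes' rule. Numbers are invented.
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# "The fence is electrified": told by a fairly trustworthy source (prior 0.7),
# then we hear a faint hum (much likelier if the fence really is live).
print(round(bayes_update(0.7, 0.9, 0.2), 2))  # 0.91: testimony plus sense-data
```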

In humans, our epistemology is background noise. We are not conscious of where our knowledge comes from, just that we judge it to come from somewhere. Some of us may be more cautious or skeptical than others; some may be more credulous; and many of these factors will be rooted in our genealogy, our nature.

The machine should be able to rationally construct an optimum epistemological system, selecting the best process for each situation. It should simply be able to be more rational; its decisions should be devoid of inappropriate influences and grounded instead upon the truest form of knowledge.

Based upon how humans have previously applied signs to objects, this advanced future machine could be labelled a God.

The paradox of a God that sits superior to its actual creator….


Narrow AI, General AI & Alien Intelligence

Whilst catching up with my post-half-term-holiday podcast backlog (and mowing the lawn), I happened across a fantastic conversation between Sam Harris (humanist writer and philosopher) and Neil deGrasse Tyson (famous US TV science communicator).

Neil deGrasse Tyson

Although the conversation covered a number of subjects, the second half of the 90-minute discussion touched upon an area of interest relating to the nature and measurement of intelligence in a number of contexts; as it turns out, a subject I have previously blogged about here:

The Art of Official Intelligence

To grossly paraphrase Neil deGrasse Tyson, even a minor tweak in DNA accounts for the profound gulf in intelligence between a chimpanzee and a human being, and we recognise and measure the “nature” of each species’ intelligence differently. We hardly consider rodents to have intelligence over and above evolutionary survival instincts, yet we share some 97.5% of our DNA with rats!

Therefore (leading question alert), is it logical to assume that a species which evolved in a different galaxy, with enough “intelligence” to work out how to cover the immeasurably vast distances of space, would even look at what we have achieved as a form of intelligence worthy of any respect? Or would they view us in a similar way to how we view ants, for instance? Just basic, workmanlike maybe, but ultimately inconsequential automata?

The conversation then followed this logic into the future of machine intelligence. That is, would something potentially far superior, of a higher nature or state of intelligence, recognise mankind in the way we (perhaps arrogantly) expect it to? And what would be the potential consequences if it didn’t acknowledge us as we have predicted?

The dialogue offered fascinating insights into the Fermi Paradox and the current application of Narrow AI, with differing views on the potential of future General AI and its inherent dangers. I won’t do it any further injustice by trying to delve into the discussion here; instead, let me point you all in the direction of this excellent podcast, a very well spent 90 minutes of your time:

Sam Harris in conversation with Neil deGrasse Tyson

Beware the Digital Echo Chamber


Information and education both represent powerful forces for good. We all universally know and accept this: the more people who benefit from education, the more compassionate, tolerant, and peaceful both they and their societies as a whole become.

Michael Shermer’s excellent book “The Moral Arc” presents a compelling case for optimism: the global population is becoming more tolerant and peaceful, and our proximity to violence and suffering continues to diminish as we become awash with worldly information and new, enlightened ways of thinking. This progression has tracked a positive trajectory since the Enlightenment (well, apart from a couple of blips last century when we descended into total war!).

Moral Arc

Of course, anyone who watches the news regularly may well disagree here. Newsworthy items need action and drama to stimulate public interest, and news outlets often have vested interests and motivations that undermine their impartiality. For these and other reasons…
