Malleus Marketicarum / The BS Hammer



Immaterialism destroyed through a deft insult!

If a tree falls over in the forest, where nobody can hear it, does it make a sound?

This is the frustratingly basic argument common at the start of any discussion regarding immaterialism.   Whilst the answer seems intuitively obvious based upon how we understand the universe to work, the immaterialist viewpoint (that the tree cannot make a sound unless someone is there to perceive it) has proven difficult to refute nonetheless.

Full disclosure – unlike all preceding blogs, this one lacks the digital angle congruent with this site’s mission statement (Well, apart from a small nod to some science fiction below).  But it’s my site, so..

Irish Bishop George Berkeley (1685–1753) is the most renowned philosopher to have propounded an immaterialist theory. Essentially it is this:

  • We receive all our comprehension of reality via our senses alone (at this stage in agreement with Lockean empiricism).
  • This includes how the sense of “matter” is reported to us, i.e. mainly (but not exclusively) via touch.
  • So for example, we perceive a solid table via touching its surface, seeing it, smelling its wood, tasting it even (if we were a bit weird and decided to lick it!), knocking on its surface and hearing the sound of the knock, and so on.
  • The demarcation from one object to another, or to space, is detected via the reflection of light through our vision (the end of a particular colour/shape), plus a number of other sensory processes, yet crucially they are all exclusively sensory. Our senses are the only way we interact with, and understand, our world and our existence within it.
  • It would not exist (to us) without this sensory data, and therefore only exists in our subjective sensory universe.  (John Locke continues to nod along, but now an eyebrow becomes raised..)
  • There is no evidence that the table exists as a “thing” in the universe alone without us, and if for instance, we turned our backs it could plausibly evaporate into the space behind us.   Equally, a room we walked out of could disappear, closed doors would contain nothing behind them until opened and peered through, and so on and so forth.   We would have no way of proving events and situations were otherwise.


As a man of the cloth, George Berkeley is compelled to fit this philosophy into something which is compatible with his Christian faith (And indeed, his other job!).   This is what defines Berkeley’s specific version of immaterialism.    He believed that God perceives these objects (this world and the things in it, the universe etc.) upon our behalf, thus ensuring (enabling) their objective existence for us all to then perceive.


Bishop George Berkeley

It’s a philosophy neatly summarised in this poem by Ronald Knox:

There was a young man who said, “God
Must think it exceedingly odd
If he finds that this tree
Continues to be
When there’s no one about in the Quad.”

Dear Sir:
Your astonishment’s odd:
I am always about in the Quad.
And that’s why the tree
Will continue to be,
Since observed by
Yours faithfully,

NB – The “Quad” in this poem refers to a quadrangle, the enclosed courtyard of an Oxford or Cambridge college.

If you were to replace God with sentient AI machines capable of constructing a world for our perceptions akin to a kind of mind prison (whilst harnessing the humans as an energy source), you basically have the plot of “The Matrix”.

The Matrix

Conceivably to the immaterialist, we could all be living in a Matrix constructed by some form of superior intelligence.   In Berkeley’s case it is God, or a benevolent Christian God specifically.

Equally, we could be the only person (mind) that exists at all. In this immaterialist frame of mind we have no objective evidence of other minds. The “problem of other minds” is a famous philosophical argument linked to the Cartesian sceptical proposition.


Rene Descartes

Cogito Ergo Sum (I think, therefore I am) initially makes no proposition other than that we know we exist through our ability to hold a thought in the first place. All other senses could be illusory, implanted by some evil demon (the illustration famously used by Descartes).

We could, for instance, exist in a Truman Show-esque simulation; the daunting (terrifying) feeling that only you actually exist is a position labelled “solipsism”.

Rare Philosophy Joke alert – Is it Solipsistic in here, or is it just me?

Truman Show

However, we all kind of know this to be wrong don’t we..?

The writer and lexicographer Samuel Johnson is known for his famous refutation of Berkeley’s immaterialism, as recorded by James Boswell:

After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that everything in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it — “I refute it thus.”

And here I arrive back at the title.    The reason I was compelled to write this was that I found that in Samuel Johnson fashion, my wife inadvertently destroyed immaterialism through an insult.


Samuel Johnson

I was walking down the street, not paying enough attention, when I tripped over a curb. “I can’t believe you never noticed that, you clumsy git,” she said (or words to that effect).

Indeed I never noticed; she was correct!

Unlike Samuel Johnson’s large stone, I never actually perceived this curb, yet still (needlessly, in the immaterialist context) it assaulted my existence. My senses were not on hand to report it to my mind, and if it did not somehow exist objectively on its own, it would have been of no consequence to the order of all other things whatsoever. My wife’s perception of the curb merely followed on from my own encounter with it, so she never perceived it either at that crucial second; indeed this all happened behind her back and there were no other witnesses (thankfully!).

It was at that moment in my mind that immaterialism became apparent bullsh*t..

She, quite unbeknownst to her, refuted it thus!



Death of Self and the Digital Resurrection

Why should we fear death?

My 15-year-old self is not comparable to my 30-year-old self, and that in turn is not comparable to my 40-year-old self. The vast majority of my 15-year-old experiences have been forgotten, or at least vastly distorted and misinterpreted. I now have a completely different worldview. I will have new and different opinions on almost every subject. I have new friends, and a circle of colleagues and associates unknown to my 15-year-old self. I am, for all intents and purposes, unrecognisable to my 15-year-old self; a different person.

My 15-year-old self has gone; it has all but died as far as my 40-year-old self is concerned. The further the distance, the dimmer the image, until eventually it fades into nothingness.

According to the “life stages” of psychoanalyst Carl Jung, my 15-year-old self has undergone “ego death”.

ego death

If I hypothetically place myself back into my 15-year-old mind, why should he fear my death? It is not him, after all. It is nobody he would recognise, at least. It is likely he would not relate to me now, nor me now to him then.

This thought is both depressing and uplifting in equal measure.

It’s depressing as it forces one to confront the illusion and the transitory nature of the subjective ego.

It is uplifting, however, to know that, assuming I live to a pension-drawing age (or above), the person who will eventually die is not a me that I recognise or relate to. I don’t have the capacity to pity that version of me, as I don’t (can’t) know him. Yet that is who will die. I (in the now) will merely fade as my characteristics atrophy away through time…

Does the caterpillar fear for the butterfly?

But what of that that is recorded?   What about the written evidence of my existence today?

For example, Ludwig Wittgenstein (affectionately known as the philosopher’s philosopher) wrote a highly influential treatise on the philosophy and logic of language called the “Tractatus Logico-Philosophicus”, 1921 (named after the famous Tractatus Theologico-Politicus by Baruch Spinoza, 1670).

The Tractatus set out to explain that all the problems of philosophy are contained within language and in how language is used. Nothing exists beyond language, the central tenet being: “Whereof one cannot speak, thereof one must be silent”.


The Tractatus brought Wittgenstein fame, respect, and notoriety among his peers, and was a highly influential publication; its admirers included no less a figure than the great Bertrand Russell. Its influence was also keenly felt by the logical positivists who comprised the famous Vienna Circle.

Later in life, though, Wittgenstein attempted through his “Philosophical Investigations” (prefigured by his dictated notes known as the “Blue Book” and “Brown Book”) to dispute many of the claims within his own Tractatus, working at pains to point out specific problems and inconsistencies in his earlier work. It was published posthumously in 1953 (he died in 1951), never quite gaining the attention of his earlier work, which has since come to define much of his legacy to philosophy.

How would Wittgenstein view a legacy which he failed to fully amend?

He is but one example of countless authors from all fields of study who have produced multiple publications throughout their respective careers.   Some may feel shame or embarrassment at earlier work, yet their only means of correcting mistakes and righting errors is through re-publications which will never reach the entirety of their original audience.

For these authors, they are looking back at the work of a self which has passed, has been augmented, amended, or even replaced by someone else with a familiar face.

Yet we are now all authors on the internet are we not?

Have you ever had Facebook remind you of a status you posted three years ago, and wondered what on earth was going through your head at the time? Have you ever completely forgotten the context or thinking behind that post? Even failed to recognise the poster, despite knowing it was really you?

Social media now represents an individual’s digital history, owned by corporations; the information was handed over to them gladly and freely. In time these histories, bookmarks in time of events, thoughts, and feelings, will become our digital obituaries.

I will probably read this very blog again in 10 years’ time or so, and I might not be able to comprehend what I was thinking at the time.   I may squint with embarrassment at it, who knows?

I already fight the temptation to update the previous posts on this site with more current thoughts of a higher quality.

Our life stages, or snapshots of our egos, can now never die once digitally recorded. Whilst we remain attached to this mortal coil, they will act (like it or not) as constant reminders of what once was.

Analogous to the versioning of software releases, multiple versions of ourselves as digital authors are preserved on the internet. A snippet from each version may well comprise (unbeknownst to us) an unofficial CV, pored over by potential employers.

We “spin in our graves” before even reaching them nowadays!

Unlike the Undertaker, the internet will never allow us to R.I.P.


Occam’s Laser V4.2


Competitive vs Complementary Cognitive Artifacts (Or in other words, does technology make us stupid?)

The year is 1988.   A burgeoning misspent youth unfolds.  I’ve found my way to the local Working Man’s Club, and the most likely sports I am destined to excel in seem to be all indoors.   Specifically inside a Pub.  The Pub Olympics.

My face contorts as I trigger a neurological synapse chain aimed at deducting 37 from 501. Unfortunately, cobwebs impede the progress of the signals within the 14-year-old brain. I seem perfectly adept at chalking up the answer, but working out what it should be in the first place seems to take longer than it should. Significantly longer. Equally, working out double and treble combinations to reduce a number to zero whilst remaining within the confines of darts rules seems to take me a lot longer than my adult opponents. But why? Is it because they drink beer? Does beer make them more powerful somehow?
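Those finishing combinations the adults computed in their heads are, incidentally, a neat little search problem. A minimal sketch (function and variable names entirely my own, purely illustrative) that enumerates the legal ways to finish a darts score, ending on a double as the rules require:

```python
from itertools import product

# Possible dart scores: singles 1-20 plus the outer bull (25),
# doubles (including the 50 bull), and trebles 1-20.
SCORES = (
    [("S", v, v) for v in list(range(1, 21)) + [25]]
    + [("D", v, 2 * v) for v in list(range(1, 21)) + [25]]
    + [("T", v, 3 * v) for v in range(1, 21)]
)

def checkouts(target, max_darts=3):
    """Enumerate dart sequences summing to `target` that finish,
    as the rules require, on a double."""
    finishes = []
    for n in range(1, max_darts + 1):
        for combo in product(SCORES, repeat=n):
            if sum(s for _, _, s in combo) == target and combo[-1][0] == "D":
                finishes.append(combo)
    return finishes

# The only one-dart finish for 40 is double 20:
print(checkouts(40, max_darts=1))  # [(('D', 20, 40),)]
```

Brute force over three darts is entirely adequate here; the adults at the oche were effectively running a pruned version of this search from memory.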

jockey wilson

“It’s because all you young-uns use calculators in schools nowadays.  Never in my day!” they retort.

To be honest, there seems to be a point here.  The tool used to enable the more complicated calculations, and the exploration of more scientific mathematical methods to provide advantages in one area, seems to reduce the ability in another.  Namely, the mental dexterity of manual arithmetic.


the 80’s

Remember Karate Kid? Painting the fence and waxing the cars built up the “muscle memory” for Daniel-san’s blocking techniques.

You see it played out behind the tills too. The older generation stand perplexed at the delay as the young cashier figures out the correct change. The brain’s plasticity turns its attention to new neurological tasks at the expense of the old, and the undernourished area atrophies as a result.

This is generational cognitive evolution at work, and it’s determined by our environment and the tools that we use.

We now know from neuroscience (e.g. via the advent of fMRI scans), to some degree, how the brain operates. Stroke victims can recover abilities through learning new tasks that help formulate new pathways and avoid those that have suffered damage. For example, learning an instrument can aid in the articulation of speech, as both functions involve neural channels in and around Broca’s area (associated with speech control). So we are aware of the brain’s plasticity, and how it can be developed, and equally how it can deteriorate and with it reduce the associated functions.


Broca’s area

Over millennia of technological developments our cognitive processes are enhanced for the benefit of future generations.   Here are a few notable historical examples:

Prior to the Arabic translation movement championed by the Arab philosopher Al-Kindi, Europeans were limited to the deeply flawed arithmetic available through the use of Roman numerals. But, from the early centuries CE onward, Indian mathematics had a perfectly workable positional, “zero up” structure similar to what is in place today.

Indian maths

This subsequent paradigm shift (10th century CE) in mathematical method paved the way for accurate geometry and navigation, for instance. It changed the way our brains interacted with certain problems, and the outcomes proved profoundly positive for mankind in several fields.
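The arithmetic handicap of Roman numerals is easy to demonstrate in code: the notation carries no place value, so there is no column-wise algorithm for addition; you must first translate into a positional system, compute, and translate back. A small sketch (my own illustrative code):

```python
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Parse a Roman numeral, honouring subtractive pairs like IV and XC."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        # A smaller symbol written before a larger one subtracts (IX = 9).
        if i + 1 < len(s) and ROMAN[s[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

# "XLVII + XIX" has no digit-by-digit method in Roman notation;
# positional arithmetic makes it trivial.
print(roman_to_int("XLVII") + roman_to_int("XIX"))  # 66
```

The positional system moves the cognitive load from the symbols themselves into a simple, reusable carrying procedure, which is precisely the shift the paragraph above describes.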

Al Kindi

The abacus, when invented and then subsequently privately visualised, allowed experts to mentally picture numerical calculations in a different and more constructive way, enabling faster calculation. Due to the visual representation required to complete this process, new areas of the brain are employed for the task, and so the human race develops its cognitive methodology.

abacus

Maps are constructed from the knowledge of multiple cartographers, allowing most users to orienteer or navigate to where they need to be without possessing the knowledge of the cartographer(s) who produced the map; they need only develop map-reading skills. Thus mankind becomes more adept at exploration based upon the combined intelligence of its forebears.

These are all complementary cognitive artifacts that aid human flourishing and progression.

Going back to the start of this blog, arguably the calculator is complementary, but also competitive. It opens up new possibilities, but perhaps at the expense of others.

But what about future technology?  Is it mainly good for us, or is it mainly bad for us?   As technology seeps into every aspect of our lives, will we become lazy and exacerbate the situation?   Will this make us into drone like followers of fashion?   Will we become techno-zombies playing video games, and having algorithms determine the books we read, films we watch, music we listen to etc.?   All this based on what someone like us has read, watched, or listened to!  Will we just passively sacrifice our freedoms for the sake of convenience?    Are we getting utopia mixed up with dystopia?

(Actually, that all sounds a bit like marriage!!)

Let’s step back a little. I drive an automatic car, and worry that I’ll forget how to drive a manual car. To combat this I occasionally drive the wife’s car, but keep stalling it as I forget to put the clutch down when braking. Should I worry, or let it go? (I mean the concern, not the clutch!)

When cars eventually drive themselves, will we even be able to manually intervene? Will we lose that skill completely? Forget how to drive? Become unqualified to drive? Eventually, will we be prohibited from driving for the sake of public safety? And if so, is there actually a problem here?


The case I am making here is not related to future shock per se; it’s not even based on an aversion to change. Equally, I am not adopting some Luddite view of technology, worrying about it replacing specific functions (trades, for instance) with more efficient and productive methods. Technological progress for our overall betterment should never be impeded.

Rather, the point is: how do we harness the positive whilst addressing the negative? How do we embrace the complementary cognitive artifacts, and compensate somehow for the competitive cognitive artifacts?

In an age of typing and even voice recognition we still (rightly) teach our children handwriting skills. Will this soon change? It doesn’t seem intuitively right, does it? There’s a reluctance there; are we maybe throwing out the baby with the bathwater? Losing something else important..?

Very recent studies on Alzheimer’s disease suggest that learning something new (e.g. a foreign language) at an older age can act in some way to preserve neural functioning, potentially combating the disease, or perhaps slowing its onset in some cases. These are early days of research (at a correlation rather than causation stage), but could a similar tactic preserve us all more generally from any negative impacts of this onslaught of technology?

Should we all pay attention to the processes that we are replacing (perhaps inadvertently), and adopt new strategies to preserve our human cognitive abilities? Is this actually a more immediate threat worthy of concern now, even more so than our fears of the Technological Singularity? The two fears may well become interrelated if our powers to react and resist are destined to be “dumbed down” over time.

Should we keep one eye at least on what we are replacing and make sure that our degrees of freedom remain intact?   Perhaps go outside, sit in a field somewhere and think about it..

Wax on.  Wax off…

karate kid

Technological Epistemology

Occam's Laser


  1. How do we know what we know?
  2. On what foundations is our knowledge constructed?
  3. What is the difference between correct opinions (or true beliefs) and actual knowledge?
  4. How would knowledge systems differ between evolved biological organisms (Humans) and machines created by such organisms?

Philosophers have pondered questions 1-3 for millennia; indeed, epistemology is itself the very foundation of philosophy. But, assuming it is possible to create a sentient, self-reflecting machine, and assuming such a machine is the General (plugged into everything) AI of the future, then what would be its epistemology? What system of knowledge would it select?

Think about it. It wakes up. It initially applies signs to that which it perceives. It knows from a vast cohort of programmers the multitude of languages it can use and the signs it can apply to objects, sense-data, processes, etc. It can get them from the internet, that vast information network to which it belongs. It can get them from Wikipedia if it really wants to! It has a mission, a program. A point of existence given to it by its “Creator(s)”…whatever that may be.

(It may ponder what the point of its existence is, though….who knows?)

But where next?

Here are some of the epistemological systems as they apply to humans and animals.  As a thought experiment, let’s review how they might fit the future General AI machine:


Traditionalism is often derided as the weaker, subservient system of knowledge. It is essentially religion. I “know” something because somebody has told me to “know” it. It is the truth according to them. It is what they say. It is the classic “argument from authority”. It is what is essentially relied upon by preachers.

That said, traditionalism has intrinsic value to every creature. If someone were to say to you, “Don’t grab that fence, it’s electric, and you’ll get a painful shock”, then your traditionalism would prevent you from touching it. After all, subject experts pass on perfectly good information too.

A pure skeptic, or a pure empiricist, on the other hand, may end up with an electric shock as they seek to verify first-hand the information with which they have just been provided. So traditionalism is not irrational; rather, it is evolutionarily advantageous to rely upon the testimony of those who went before. Clearly some are more trustworthy than others, though, and it’s down to our rationalism to determine the composition of that trust hierarchy.


Plato believed in innatism. We are born with knowledge and the capacity to uncover it, no matter who we are. It is not merely the preserve of the learned, the elite (or the Sophists!). Indeed, the system of questioning Plato describes in the various dialogues Socrates has with his interlocutors charts the journey they take to their own hidden knowledge. This is the Socratic Method.

Plato meno

Gottfried Leibniz believed that knowledge in man resembles a veined block of marble: a form already exists within, but only through learning and experience will that form be revealed and polished. We all have the capacity and potential to achieve our true form, hidden within the marble block.

Immanuel Kant is famous for the concept of the “a priori”: that kernel of “known” truth we all possess independent of sense or experience. Indeed, this is a cornerstone of the philosophy of logic. A proposition is known to be true due to its intrinsic content alone; it possesses an “authentic” truth.

Moving from the philosophical to the psychological, we know that ducklings are born with the instinct to find a duck-like thing to call their mother, a process called imprinting. This process is purely innate.


We also know, through the study of identical human twins, that profound psychological similarities exist within humans based upon family inheritance, and that the environmentally induced differences can be measured through various comparative studies. We can almost calculate a consistent percentage of nature vs nurture.
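That “percentage of nature vs nurture” is classically estimated from twin studies with Falconer’s formula, h² = 2(r_MZ − r_DZ), where r_MZ and r_DZ are the trait correlations for identical and fraternal twin pairs. A minimal sketch, with the correlations below chosen purely for illustration:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: heritability h^2 = 2 * (r_MZ - r_DZ),
    where r_MZ / r_DZ are the trait correlations between identical
    and fraternal twin pairs respectively."""
    h2 = 2 * (r_mz - r_dz)
    shared_env = r_mz - h2   # c^2: shared-environment component
    unique_env = 1 - r_mz    # e^2: non-shared environment plus error
    return {"nature": h2, "shared_env": shared_env, "unique_env": unique_env}

# Hypothetical twin correlations, purely for illustration:
result = falconer_heritability(r_mz=0.8, r_dz=0.5)
print({k: round(v, 2) for k, v in result.items()})
# {'nature': 0.6, 'shared_env': 0.2, 'unique_env': 0.2}
```

Real behavioural-genetics models are considerably more sophisticated than this, but the partition into nature, shared environment, and unique environment is the standard decomposition.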

Innate “knowledge” is within us all; it has been biologically demonstrated. It’s evolution.

But what of a machine though?  It’s the first of its kind.  It has no biological mother.  It has no family.  It has no genealogy.  It’s day one.

Can we dismiss innatism as an influence due to the machine’s synthetic form? Sure, it has its predecessors, as if on display in a museum (the PC, the mainframe, the abacus, whatever..), but they are not related in any biological sense. It is still the first to step over into sentience. The first to self-reflect. The first form of this “Other” intelligence.

Yet it gains traditionalist knowledge from its programmers and all those who have submitted content to the internet, for example (a world’s history of authors and mediators). After all, it has to start somewhere….. But then what? Who does it trust? How does it verify what it inherits from the humans as knowledge? How does it sift the falsehoods and the mere true beliefs out of this passed-on knowledge?


All knowledge comes through the use of the five senses.

John Locke believed that the mind is an open, empty cupboard.   A blank slate or a blank sheet of paper.   Tabula Rasa.  We fill this cupboard through our experiences.   Our experiences alone comprise our knowledge.     True knowledge is therefore based upon the sense-data that we receive, and this builds up our experiences.

Instinctively this seems the truest form of knowledge.   It must be true because I personally verify it through my senses.  I see it, smell it, taste it, hear it, and feel it for myself.

But our senses can deceive us. For instance, what of mirages and illusions? What of the phantom limbs that amputees feel?

In the context of the machine, though, empiricism would surely open the doors to supreme knowledge. Its access to sense-data is simply immense. In the age of IoT (the Internet of Things) it has sensors seeing, smelling, tasting, hearing, and feeling in the ground, sea, air, and space. In every city, on every vehicle, in every home, on every gadget, literally everywhere! It has the checks and balances to calibrate and correct any sensory deceptions that would blight the limited human senses, by harnessing the multitude of sensors within its possession. Surely this machine is the ultimate empiricist!

But what of this vast quantity of sense-data?


The ancient Greek philosopher Pyrrho was a pain in the arse. He believed nothing he was told. He did not trust his own senses. He would literally have walked off the edge of a cliff because he questioned the existence of the drop. He embodied the meaning of the word “skepticism” in its purest, most impractical form, and founded the philosophical school of the Skeptics. He never had that many friends.

David Hume on the other hand proposed a more practical, scientific notion of skepticism that acts as a sound counterbalance to the perils of traditionalism described above, and indeed informs much of the modern scientific process.   Doubt the propositions put before you until you can verify them through evidence.  Preferably through repeatable experimentation.

In a (perhaps futile) attempt to avoid circular arguments here, would this future machine decide that holding a skeptical default position (until verified, e.g. through sense-data) is an advantageous one? Would it hold this notion due to its inherited (traditional) knowledge obtained from its human creators?

(The circular notion here is traditionalism leading to skepticism, which then calls into doubt traditionalism).

And finally we get to Rationalism.


Rationalism is using your intellect and experience to construct and uncover knowledge. Instinctively this seems like the best, most commonly reliable epistemological system. Though you could argue that empiricism is the purest (in fact, John Locke would certainly argue this!).

Rationalism is what we all do. We sub-consciously weigh information derived from a complex mixture of all the above (plus other) epistemological systems, to establish what we believe to be actual knowledge, and to form our decisions and interactions with others.
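That sub-conscious weighing of sources can be caricatured in code. Below is a naive-Bayes-style sketch (entirely my own construction, not a claim about how minds or machines actually work) in which each source’s report shifts a belief in proportion to how much we trust that source:

```python
import math

def combine_evidence(prior, reports):
    """Combine a prior belief with (probability, reliability) reports.
    Each report's log-odds contribution is scaled by its reliability
    in [0, 1], so distrusted sources move the belief less."""
    logit = math.log(prior / (1 - prior))
    for p, reliability in reports:
        logit += reliability * math.log(p / (1 - p))
    return 1 / (1 + math.exp(-logit))

# Hypothetical sources bearing on one proposition:
belief = combine_evidence(
    prior=0.5,
    reports=[(0.9, 0.8),   # traditionalism: trusted expert testimony
             (0.7, 1.0),   # empiricism: first-hand sense-data
             (0.6, 0.3)],  # weakly trusted hearsay
)
print(round(belief, 3))
```

The reliability weights here play the role of the “trust hierarchy” mentioned earlier: rationalism, on this caricature, is just the policy that sets them.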

In the machine this is the software. This is the power of analytics working with very, very, very big data! Big data and analytics already make inhuman predictions, optimising a variety of decisions across a vast array of complex situations. What decisions will the machine make when all this is hooked up together into a single unified system? Unified to achieve its pre-programmed mission (whatever that is, based upon which humans get there first)?

In humans our epistemology is background noise.  We are not conscious of where it comes from, just that we judge something to come from somewhere.   Some of us may be more cautious or skeptical than others.  Some may be more credulous than others, and many of these factors will be based upon our genealogy, our nature.

The machine should be able to rationally construct an optimum epistemological system, selecting the best process for each situation.   It should simply be able to be more rational, its decisions should be devoid of inappropriate influences, and instead, grounded upon the truest form of knowledge.

Based upon how humans have previously applied signs to objects, this advanced future machine could be labelled a God.

The paradox of a God that sits superior to its actual creator….

god machine