Literally augmented virtual reality

We come equipped with just five senses. We perceive reality by processing the data collected from those five senses alone. Our brains act as meat computers, predisposed to recognise and discern patterns through top-down processing of that sense data. Bayesian inference works on this sense data, augmenting it with experience (both evolved and learned) to produce a near-time (there is a lag!) construction of reality. The brain simultaneously remembers and anticipates, combining the sense data to construct a picture that we can use.

Bayesian inference working with sense data to construct reality
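To make the idea concrete, here is a minimal toy sketch (my own illustration, assuming a simple Gaussian model rather than anything the brain literally computes): a prior built from experience is combined with a noisy sense reading, and the resulting percept is a precision-weighted blend of the two.

```python
# Toy sketch (my own example): Bayesian cue combination for a single
# perceived quantity, e.g. the distance to an object. "Experience" is a
# Gaussian prior; the noisy sense reading is a Gaussian likelihood; the
# percept is the posterior, a precision-weighted blend of the two.

prior_mean, prior_var = 5.0, 4.0   # expectation built from past experience
sense_mean, sense_var = 6.5, 1.0   # what the sense reports, with noise

# Conjugate Gaussian update: precisions (1/variance) add, and the
# posterior mean is a precision-weighted average of prior and data.
post_var = 1.0 / (1.0 / prior_var + 1.0 / sense_var)
post_mean = post_var * (prior_mean / prior_var + sense_mean / sense_var)

print(f"perceived value: {post_mean:.2f} (uncertainty: {post_var:.2f})")
# perceived value: 6.20 -- the percept sits between expectation and
# evidence, pulled toward the more reliable source.
```

Note how the percept lands between expectation and evidence, pulled toward whichever source is more reliable; that pull is the "augmenting with experience" described above.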

This constructed reality is something we can safely navigate, surviving long enough to procreate (in most cases) as evolution demands. What we perceive as a result of these processes constitutes our "Umwelt", a German word roughly meaning "surrounding world": the slice of reality available to a particular organism.

We are different from other animals. Dogs, for instance, have one less type of visual cone than humans, and if they were to look at a rainbow they'd just see the blue, some green and yellow; they lack the red-sensitive cone that we possess. A butterfly has more cone types than us, so it would see a concerto of colour compared to our small band. A mantis shrimp, on the other hand, is off the scale, with 16 photoreceptor types to our 3! We simply cannot imagine what the world is like for these creatures. We cannot perceive the colours (and the intensity of those colours) that they see. It would be impossible.

I’ve stolen this description from an excellent Radiolab episode which uses a choir to illustrate the ranges of colour different animals would perceive, as they process photons through their respective light sensors, if they were all to stare at the same rainbow. I attach a link below; it is well worth 15 minutes of your time and a perfect way of illustrating this concept.

Radiolab – Rippin’ the Rainbow a New One!

Of course, this just relates to vision. What about the senses that animals have that we don’t have at all? Some birds, for instance, sense the Earth’s magnetic field in order to navigate. Bats are known to sense their world, their “Umwelt”, using echolocation. The philosopher Thomas Nagel wrote a highly influential essay, “What Is It Like to Be a Bat?”, arguing that we don’t have the apparatus to even understand what “it is like” to be a bat, because we have no conception of a bat’s sensory apparatus. We cannot access this bat qualia; we just cannot know what a bat perceives, and so cannot understand its subjective experience!

Thomas Nagel

But we now create sensors that collect data we simply cannot perceive through our own biology. Not so long ago we borrowed sensors from the animal kingdom, like the canary we took into the coal mine, or the hounds we took out hunting. Today, though, we’d stick an IP address on a virtual canary to sense for poison gas, and no animals would be harmed. The Internet of Things (IoT) is what we now call our world of networked sensors. Imagine what Skynet or Colossus would sense through its Umwelt if hooked up to all the world’s IoT!

Our mobile phones also ship with a host of sensors that we humans don’t have. As an example, did you know your smartphone probably has a barometer? Going beyond general weather forecasting, the hyper-local app “Dark Sky” will tell you if it will rain in your street within the next half hour, using data that includes your phone’s barometer!

Dark Sky Local Weather Predictions!
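To make that concrete, here is a minimal, hypothetical sketch of the underlying signal (Dark Sky’s actual model is proprietary and far more sophisticated): a sustained fall in barometric pressure often precedes rain, so watching the trend of recent readings gives a crude short-term hint. The readings fed in below are invented sample values.

```python
# Hedged sketch: a crude rain hint from barometric pressure trend.
# The readings here are invented; a real app would pull them from the
# phone's barometer API and combine them with radar and other data.
from collections import deque

window = deque(maxlen=6)   # last six readings, e.g. one every 5 minutes

def update(pressure_hpa: float) -> str:
    """Record a reading and return a crude short-term rain hint."""
    window.append(pressure_hpa)
    if len(window) < window.maxlen:
        return "collecting data..."
    drop = window[0] - window[-1]   # fall over roughly the last half hour
    if drop > 1.5:                  # rough threshold in hectopascals
        return "pressure falling fast -- rain likely soon"
    return "pressure steady -- probably dry"

for p in [1013.2, 1012.9, 1012.4, 1011.9, 1011.5, 1011.1]:
    print(update(p))
```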

But what if we could use IoT and the development of new sensors to hack our Umwelt?

What if we could manufacture new sensors that could be integrated with human senses? (Though I’m not sure I want to see dead people!)

What if we could use Virtual Reality, with new peripherals, to present topologies and scenarios that enhance the effect of these new senses?!

I was impressed by this TED Talk by David Eagleman (to which this blog is ostensibly a long-winded introduction) because it neatly summarises the opportunities we have to augment our senses, or create entirely new ones. It’s also a great non-technical summary of the complicated world of perception.

David’s work began with sensory substitution, i.e. creating new senses, or standing in for existing ones, for those deprived of sight or hearing, for example. This led to the development of bracelets and vests that are worn to turn stimuli into touch, helping people with sensory impairments interact with the world and diminishing the effect of their disabilities. A sketch of the general idea follows.
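As a toy illustration of how such a vest might work (my own simplification of the general technique, not Eagleman’s actual device code), the sketch below splits a short frame of audio into frequency bands and maps each band’s energy to the drive level of one vibration motor:

```python
import numpy as np

# Toy sketch of sensory substitution (a simplification, not Eagleman's
# actual firmware): split one audio frame into frequency bands and map
# each band's energy to the intensity of one vibration motor on a vest.

N_MOTORS = 8
SAMPLE_RATE = 16_000

def frame_to_motor_levels(frame: np.ndarray) -> np.ndarray:
    """Convert one audio frame to per-motor intensities in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame))        # magnitude spectrum
    bands = np.array_split(spectrum, N_MOTORS)   # one band per motor
    energy = np.array([b.mean() for b in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy # normalise for the motors

# Demo: a 440 Hz tone should drive mostly the low-frequency motors.
t = np.arange(0, 0.05, 1 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * 440 * t)
print(np.round(frame_to_motor_levels(tone), 2))
```

The remarkable part is what happens next: wearers learn, over time, to decode these vibration patterns back into meaning, effectively acquiring a new sense.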

In the safety market, sensors can be worn to detect radiation or other potentially hazardous signals. You can also imagine how such sensors could be used in a future military context. In the VR market, peripherals can be worn to provide an immersive experience that may even go beyond our current corporeal realities. The mind boggles! (No prizes for guessing which market segment might add impetus to developing new VR peripherals!)

VR peripherals

I cannot do it justice in a blog, so I will now stop trying and simply direct you to David Eagleman’s excellent TED Talk!

Can we create new senses for humans? David Eagleman

The future may be bright, it may be dark, but it will certainly be interesting...

 
