To expand on the discussion below, a newer blog post and podcast introduction relating to these subjects can be found here:
Narrow AI, General AI & Alien Intelligence
The concept of “other” as opposed to “artificial” intelligence is discussed, along with the potential cognitive relativity of aliens and future machine sentience. More to the point, that short post is an introduction to a fascinating podcast on these and other subjects.
This post is designed to complement and expand upon my previous blog regarding the nature of sentience in respect of awakening machine intelligence.
Rather than focus on the nature of consciousness (as that one did), this blog will focus on the nature of intelligence, or more simply: what is intelligence in the machine context?
A Socratic start to a blog if ever there was one, and if I were one of Socrates’ interlocutors I might respond with “well, intelligence is surely how clever something is”. Socrates would respond by questioning what I meant by “clever”, and before long I’d be left scratching my head, reflecting upon how annoying Socrates can be, and on the appeal of hemlock.
Through my studies in psychology I have encountered the history of IQ testing, which over its development has come to focus on measuring intelligence as distance (or proximity to advantage) rather than strength. This is largely due to how intelligence is now understood, and to the need for a system that can be applied universally across populations.
A useful working definition of intelligence would be “an organism’s ability to establish an advantage over its environment in order that it can flourish”. However, how would this be applied in a machine or cyber context?
In order to explain what I mean by this, it is worth a quick skip through the history of the IQ test. Here is a blog I wrote on this subject some 18 months ago:
Intelligence measured as distance rather than strength reflects an understanding that intelligence is deeply interwoven with the cultural and societal context to which each individual belongs. The distance, therefore, is each individual’s proximity to the set of abilities and attributes that their society values most in its own notion of intelligence. These values are expressed through population averages in the demographic area from which the respective IQ test originates.
For example, a person from a developed Western society requires different skills and attributes to succeed in business from those needed by a third-world farmer, whose knowledge and experience of weather indicators and local geology are of far more use than any numerical agility.
After all, intelligence broadly defined is the ability of an organism to successfully adapt to its environment.
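To make the distinction concrete, the idea of measuring proximity to cultural advantage (rather than raw strength) can be caricatured as a toy calculation. Every attribute name, score, and weight below is invented purely for illustration:

```python
# Toy sketch: intelligence as "distance" (proximity to the abilities a
# society values) rather than raw "strength". All figures are invented.

def proximity_score(abilities, society_weights):
    """Weighted overlap between an individual's ability profile (0..1 per
    attribute) and the attributes a society prizes. Higher = closer to
    cultural advantage in that society."""
    total_weight = sum(society_weights.values())
    return sum(
        society_weights[attr] * abilities.get(attr, 0.0)
        for attr in society_weights
    ) / total_weight

farmer = {"weather_lore": 0.9, "local_geology": 0.8, "numerical_agility": 0.2}
executive = {"weather_lore": 0.1, "local_geology": 0.1, "numerical_agility": 0.9}

rural_values = {"weather_lore": 5, "local_geology": 4, "numerical_agility": 1}
business_values = {"weather_lore": 1, "local_geology": 1, "numerical_agility": 8}

# The same person scores a different "distance" depending on which
# culture supplies the weights.
print(proximity_score(farmer, rural_values))     # high: well adapted here
print(proximity_score(farmer, business_values))  # low: same person, other context
```

The point of the sketch is that there is no single number per person: the score is a property of the pairing between individual and environment.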
In the development of intelligence testing, attempts to remove cultural and social bias from tests in order to identify the innate intellectual abilities of individuals have proven problematic and controversial.
When John C. Raven conceived the Progressive Matrices test in the 1930s, he set out with the objective of creating a universally applicable test, replacing Western notions of testing for language, arithmetic, and object-recognition abilities with a fairer system of culturally neutral matrices of problem-solving tasks.
Raven’s Progressive Matrices was an attempt to progress beyond the politically and racially biased work of Robert Yerkes (the US Army mental tests of 1917) and, regardless of subsequent criticism, was a valiant effort of significant merit.
Unfortunately for Raven, the test suffered from a number of flaws, notably:
- It was a written paper, which naturally favoured those with experience of formal education.
- Problems were based upon left-to-right analysis as opposed to right-to-left or top-to-bottom, and therefore favoured the reading conventions wired into Western brains rather than, for example, Arabic or East Asian ones (indeed, an Arabic version of the Raven’s test was subsequently developed. QED).
- The test elements were based upon priorities within Western society (e.g. processing speed), priorities not shared globally; in other cultures, attributes not measured in the test held sway (e.g. the ability to relate to others and communicate confidently, intelligence traits prized in Taiwan, for example).
Therefore, although this test method offered greater utility than the Stanford-Binet test prevalent at the time (indeed, elements of the Raven’s Matrices test form a subset of the modern version of the Wechsler test, the WAIS-IV), it failed in its ultimate objective of being culturally neutral.
It failed because it didn’t recognise the inescapable relationship between intelligence and environment. These aspects cannot be separated, and, more importantly, why should they be? What is the value in attempting to isolate a notion of raw intelligence (or strength) removed from any context of culture? Given the broad definition by which we understand intelligence, I would argue that there is none, and therefore intelligence measurement by distance (or proximity to cultural advantage) has far greater utility.
OK, so my argument here is that intelligence is only useful when given environmental context. But what of Artificial Intelligence or Machine Intelligence? And how should we measure this?
If we stick with this working definition, then AI must surely be judged on its ability to complete the task it was conceived and designed to achieve. No matter how complicated and sophisticated the technology, and whether or not it learns by itself to complete a task using complex and innovative processes or algorithms, its intelligence must surely be measured through its ultimate utility.
Yet when you widen the scope of that utility within an isolated system, the strength of the intelligence that can be focused and directed on any one task must surely diminish. For example, you could design a home-help robot that can perform numerous yet mundane tasks, or a sophisticated self-learning program that can defeat a chess grandmaster, and both of these objectives have been famously achieved.
We can now design and create either a jack of all trades, or the master of one.
“Narrow AI” is a relatively new term in the industry that is starting to grow legs. Narrow AI is sophisticated technology focused on a specific task or a finite set of tasks. A good example of this would be an AI designed to automatically schedule all your appointments by picking up on cues in your email, as described here:
Its utility is therefore that of a super-efficient and cost-effective PA. How intelligent it is will be measured by how smoothly your business life runs as a result of its help.
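As a thought experiment, the cue-reading at the heart of such an assistant can be caricatured in a few lines. The cue pattern, inbox contents, and function names below are entirely invented for illustration; a real product would rely on trained language models rather than a single regular expression:

```python
# Toy sketch of a "narrow AI" scheduling assistant: scan email text for
# scheduling cues and propose calendar entries. Purely illustrative.
import re

# Matches phrases like "meet on Tuesday at 3pm" or "catch up on Friday at 10:30am".
CUE = re.compile(
    r"(meet|call|catch up)\s+on\s+(\w+).*?at\s*(\d{1,2}(?::\d{2})?\s*[ap]m)",
    re.IGNORECASE,
)

def extract_appointments(email_body):
    """Return (day, time) proposals found in a single email body."""
    return [(day.capitalize(), time) for _, day, time in CUE.findall(email_body)]

inbox = [
    "Hi, can we meet on Tuesday at 3pm to review the roadmap?",
    "Thanks for the report. No action needed.",
    "Let's catch up on Friday at 10:30am.",
]

schedule = [appt for mail in inbox for appt in extract_appointments(mail)]
print(schedule)  # [('Tuesday', '3pm'), ('Friday', '10:30am')]
```

Crucially, this sketch is “intelligent” only in the narrow sense the post describes: its worth is measured entirely by whether the right meetings land in your diary, not by any general capability.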
Narrow AI is hot on the heels of IoT (Internet of Things) as a new wave of excitement and opportunity, and I expect more and more of these applications to become market realities in the very near future.
But is Narrow AI the first wave of AI made tangibly manifest in the ICT market, or does it merely concede defeat on the concept of true universal machine intelligence as expressed in the Technological Singularity trope? That is, are we now focused on what we know we can achieve through machines, forsaking (or even running away from) a grander (yet much less predictable) plan?
Personally, I believe the latter hypothesis is overly pessimistic, given the capabilities we will soon be able to harness through the ubiquitous inter-connectivity promised by IoT and the growing power of analytical software. We are at a stage where we don’t even know what can be achieved in the near future, and it seems that the most significant impediment to progress is not the technology but our imaginations.
The nature of these future applications will surely surprise us all. To measure the success of these innovations, however, it is useful to remind ourselves of what we mean when we say a machine is intelligent, and how, if at all, that differs from our notions of human intelligence.