Does the “I” in “Artificial Intelligence (AI)” need a reboot?
Three events took place a few days ago which, at first glance, may look inconsequential. But are they? Read on…
The morning ritual begins with commanding Alexa, who at first misunderstands, to play the morning melodies on the flute. The voice assistant obeys after a couple of attempts, and the soothing strains waft through the expanse of the living room.
The rhythmically cooing pigeons swoop onto the terrace at the familiar whistling. They turn their bobbing heads toward the rustle of seeds strewn on the terrace floor. Some coo and invite their mates; others strut and fan their tails to protect their territories. The sumptuous and timely breakfast gets underway. The pigeons decide when to eat and how much to eat.
Shortly after, a news item, “Going bananas over Artificial Intelligence”, catches my attention. The story is about a robot trained to peel the humble banana, from a venerable University of Tokyo lab.
Alexa is intelligent, the pigeons are clever, and the robot is dextrous (and human-like)! Or are they?
Have we used these words, all portraying “intelligence”, rather loosely here?
From the deep recesses of my mind springs the doomsday prophecy warning that soon there will be no distinct difference between what a biological brain can achieve and what a computer (aka AI) can. AI is on its way to emulating human intelligence, and soon after will exceed it, rule it, and at its peak replace humankind.
The primacy of humankind is under threat!
Luckily, I had just completed reading the brilliant book “The Book of Why” by Turing awardee Judea Pearl, along with the seminal article “Human-Level Intelligence or Animal-Like Abilities?” by Adnan Darwiche (UCLA, 2018). They came to my rescue, dousing my fear of humankind being usurped by AI!
Alexa relies on speech recognition and natural language processing. Large amounts of audio are used as training data; the raw data is cleaned and labelled, and with the aid of these algorithms the voice assistant understands and fulfils user commands. Is intelligence being drilled into Alexa, or is Alexa simply mastering imitation through continual training and learning?
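To make the question concrete, here is a minimal sketch of the pattern such assistants follow: labelled utterances go in, and a statistical mapping from words to intents comes out. The utterances, intent labels, and word-overlap scoring below are my own toy assumptions, nothing like Alexa’s real pipeline; the point is only that the mapping is learned from labelled examples, not comprehended.

```python
# A toy sketch of labelled-command handling: "cleaned and labelled" examples in,
# a statistical mapping out. (Pure illustration; not how Alexa is actually built.)
from collections import Counter

# Hypothetical training data: utterance text -> intent label.
training_data = [
    ("play flute music", "play_music"),
    ("play some morning melodies", "play_music"),
    ("stop the music", "stop_music"),
    ("pause the song", "stop_music"),
    ("what is the weather today", "weather"),
]

def predict_intent(utterance: str) -> str:
    """Pick the intent whose training examples share the most words with the utterance."""
    words = set(utterance.lower().split())
    scores = Counter()
    for text, label in training_data:
        scores[label] += len(words & set(text.lower().split()))
    return scores.most_common(1)[0][0]

print(predict_intent("play the flute melodies"))   # -> "play_music"
print(predict_intent("play jazz loudly"))          # still "play_music": word overlap,
                                                   # not comprehension of what jazz is
```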
Pigeons are among the smarter birds. Their homing ability has long been put to effective use, making them reliable carrier birds. This cognitive skill could be a blend of innate trait and committed training. But can it be called intelligence?
Now let us dwell on the robot and the banana. The robot is trained through deep imitation learning to perform this deceptively effortless task. Media coverage makes it an exciting headline, and readers brim with positivity. However, the headlines about the robot’s prowess could be misleading. The banana-peeling robot’s success rate after thirteen hours of training maxes out at 57%. That is, in forty-three out of one hundred attempts it failed the task by squishing the banana. Can this be dubbed intelligence, or is it simply imitation trying to be perfected?
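A minimal sketch of the idea behind imitation learning (behavioural cloning) may help: the robot copies recorded state–action pairs from human demonstrations rather than reasoning about why a grip works. The toy states, the expert rule, and the least-squares fit below are my own illustrative assumptions, not the Tokyo lab’s actual setup.

```python
# Toy behavioural cloning: fit a policy to (state, action) pairs from an "expert".
# The clone imitates the demonstrations; it has no notion of cause and effect.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstrations: state = (banana firmness, grip angle), action = grip force.
states = rng.uniform(0.0, 1.0, size=(200, 2))
expert_force = 0.3 + 0.5 * states[:, 0] - 0.2 * states[:, 1]   # the expert's implicit rule
actions = expert_force + rng.normal(0.0, 0.02, size=200)        # noisy human demos

# "Training" = fit a function that imitates the demonstrations (here, least squares).
X = np.c_[np.ones(len(states)), states]
weights, *_ = np.linalg.lstsq(X, actions, rcond=None)

def policy(state):
    """Imitated policy: predicts the force the expert *would have* applied."""
    return np.array([1.0, *state]) @ weights

# The clone does well near the demonstrations...
print(policy([0.5, 0.5]))   # close to the expert's 0.45
# ...but an unfamiliar state far outside the training data still gets a confident,
# possibly banana-squishing, answer.
print(policy([3.0, 0.1]))
```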
John McCarthy (Stanford University) coined the term “Artificial Intelligence” in 1955. The pithy acronym AI has gained immense ground, with technology breakthroughs like parallel computation, big data, and better algorithms propelling its massive growth.
There is heightened speculation surrounding AI: that humans will be replaced by machines. This has, however, been tempered by the view that humans can leverage AI and that AI can augment human capabilities. Attempts have been made to redefine Artificial Intelligence as Augmented Intelligence. Machines have advantages that humans do not: speed, repeatability, consistency, scalability, and lower cost. Humans have advantages that machines do not: reasoning, originality, feelings, contextuality, and experience.
The triumph of neural networks in applications like speech recognition, vision, and autonomous navigation has led media coverage to be less thoughtful, at times going overboard by quickly equating the automation of tasks with human intelligence. This excitement is mixed with an ample dose of fear. So, is the word “intelligence” the misnomer here?
Intelligence refers to one’s cognitive abilities, which would include capacities to
1. Comprehend, reason, and imagine,
2. Generate original, at times abstract, thoughts,
3. Evaluate and judge,
4. Adapt to the context and environment,
5. Acquire knowledge, and store and use it as experience.
So, if Machine Learning, which powers today’s AI, meets only the last of these capacities, acquiring knowledge and storing it for later use, would this not be “incomplete intelligence”?
At the risk of sounding like a non-conformist, Pearl argues that Artificial Intelligence is handicapped by an incomplete understanding of what intelligence really is. AI applications, as of today, solve problems that are predictive and diagnostic in nature, without attempting to find the cause of the problem. While never denying the transformative, disruptive, complex, and non-trivial power of AI, Pearl offers a genuine critique of the achievements of Machine Learning and Deep Learning: their relentless focus on correlation leads to pattern matching and anomaly detection, and often culminates in little more than “curve”-fitting.
Pearl’s contribution of immense consequence is the “ladder of causation”: progressing from association, through intervention, to counterfactuals.
Pearl has been one of the driving forces insisting that correlation-based reasoning should not subsume causal reasoning and the development of causal algorithmic tools. If, for example, the programmers of a driverless car want it to react differently to new situations, they must add those reactions explicitly, which requires an understanding of cause and effect.
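To make the contrast concrete, here is a minimal sketch, using a made-up structural causal model rather than any example of Pearl’s, of the first two rungs of the ladder: association learned by curve-fitting on observed data versus intervention, do(X = x), on the data-generating process. A hidden confounder makes X an excellent predictor of Y, yet intervening on X changes nothing.

```python
# Association vs. intervention in a toy structural causal model.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Structural model: a hidden confounder Z drives both X and Y.
# X itself has NO causal effect on Y.
Z = rng.normal(size=n)
X = Z + rng.normal(scale=0.5, size=n)
Y = 2.0 * Z + rng.normal(scale=0.5, size=n)

# Rung 1 -- association: fit Y from X on observational data ("curve-fitting").
slope = np.cov(X, Y)[0, 1] / np.var(X)
print(f"observational slope  ~ {slope:.2f}")      # about 1.6: X "predicts" Y nicely

# Rung 2 -- intervention: set X by fiat (do(X = x)), breaking the arrow Z -> X,
# and regenerate Y from the unchanged mechanism.
X_do = rng.uniform(-3, 3, size=n)                 # X chosen independently of Z
Y_do = 2.0 * Z + rng.normal(scale=0.5, size=n)    # Y's mechanism does not involve X
slope_do = np.cov(X_do, Y_do)[0, 1] / np.var(X_do)
print(f"interventional slope ~ {slope_do:.2f}")   # about 0: changing X changes nothing
```

A model fitted purely on the observed data would happily recommend manipulating X to move Y; only a model of cause and effect reveals that this would fail.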
Furthermore, Darwiche echoes the concern that the current imbalance, exploiting, enjoying, and cheering correlation-based AI tools, should not come at the cost of representation- and reasoning-based causal tools that model cause and effect.
Only causal reasoning could provide machines with human-level intelligence. This would be the cornerstone of scientific thought and would make human–machine communication effective. Thereby, areas like explainable AI (xAI), and morality and bias in AI, could be gainfully addressed.
Till then, the spectre of AI usurping human intelligence is a non-starter. Should we agree that the field of Artificial Intelligence deserves a more apt title, Artificial Ability or Augmented Imitation? Will a reboot of the acronym help dissuade the apocalypticists from painting a grim picture of the impending demotion of humankind?
Somjit Amrit