Sharing my partial professional calendar in the early spring…
February 10–12, 2023: I was monitoring the developments at the 11th Inter-IIT Tech Meet, hosted by IIT Kanpur.
February 26, 2023: I taught a course on AI/ML and its implications in HR Tech at a reputed management school, a task I have relished over the last couple of years.
March 04, 2023: Visited Lake Parashar and Lake Rivalsar with our guests who graced the IIT Mandi Foundation Day celebration.
These events may seem disparate, but let me bring out the one surprising aspect that is common to them all.
Getting on the leaderboard.
The news of the prestigious Inter-IIT Tech Meet (the 11th edition was held at IIT Kanpur in mid-February 2023) piqued my interest. I am a keen observer of these tech fests because of the emergence of tech talent in hackathons. These fests are a reliable conduit for identifying crucial technical talent. Corporate sponsors are keen to keep themselves visible among the emerging technical talent, and there is a veritable competition to grab sponsorship titles.
The much-awaited results came in. I expected the first-generation IITs to bag the top positions. But to my pleasant surprise, the new-generation IITs had pipped the older and much-venerated IITs, with IIT Mandi finding itself in the top ten! A satisfying feat to be on the much-vaunted leaderboard.
Like almost everyone, I was afflicted by the “anchoring” bias, “anchored” to the reference point that the first-generation IITs, given their resources and reputation, would mow down the competition. But I was proven wrong and felt good about it!
Talent in this competitive forum is not the preserve of a select few institutes. These events, rightly so, celebrate the aptitude and gumption of the individual, irrespective of the institute they represent.
The afflictions in HRM.
The launch of ChatGPT in November 2022 has taken the world of AI/ML by storm. I enjoy teaching the AI/ML course to young students of management because it helps me stay abreast of these wildly galloping technologies. It is a biennial exercise. The course is about the relevance and application of AI/ML in the area of Human Resources Management. “Artificial” Intelligence may appear to be an inapposite concept to teach in the world of “Human” Resource Management (HRM), and that is exactly what makes it interesting.
The lecture sections dwell on the proliferation of HR tools and their rationale, startups in the HR Tech space, and the interest of venture capitalists in HR Tech. As one would guess, recruitment is the process most amenable to embracing technologies like AI/ML in the world of HRM. And where there is recruitment, the study of bias cannot be far behind, whether the bias is human or baked into algorithm-created AI tools. Biases like “confirmation,” “halo effect,” “similarity,” “groupthink,” and “anchoring” afflict the talent acquisition (recruiting) process in a significant manner.
Pristine may not be that colour, blue.
Any visitor to Mandi is encouraged to take a trip to the stunning freshwater lakes that ring the outer ridges of the district, namely Parashar, Rewalsar, Kamrunag, and Servalsar. What colour comes to mind when we talk about these pristine lakes? Blue, isn’t it? That is thanks to the ubiquitous portrayal of this colour in water bodies depicted in photographs, artworks, and book illustrations.
So what colour do the aforementioned lakes in Himachal Pradesh actually present?
We assume that blue is the classic colour of a pristine lake and not “green.” We attribute certain qualities and characteristics to a lake solely based on the colour it presents. The streak of bias inevitably slips in.
However, the perception of the colour of a lake depends on a range of factors such as the depth of the water, the angle of the sunlight, the presence of algae and other aquatic plants, and the chemical composition of the water. These factors can cause the lake to appear to be different shades of blue or green. The water in these “green” lakes of the Himachal is considered to be sacred and pristine and the locals vouch for its potability.
Biases slip in unnoticed passively.
These anecdotes present one’s unconscious bias(es) in a typical day in the life. Life is rife with biases. These examples may seem innocuous, but are they trivial enough to be ignored?
Unconscious bias is so subtle that it slips in and resides in us, and we may not be aware of it. Most of us may not even realize that our thinking is biased. And it does not get remediated by taking humans out of the equation and replacing the human brain with AI algorithms. Why? Because AI tools are devised from archived human data. Machine Learning involves learning from archived data, but it does not involve understanding.
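To see how archived data carries bias into a tool, here is a minimal sketch (all institute names and figures are invented for illustration) of a naive screening “model” that simply learns historical selection rates from past hiring decisions. It dutifully reproduces the old favouritism without any understanding of merit:

```python
# A toy illustration: a "model" that learns per-institute selection
# rates from archived hiring decisions. All data here is hypothetical.

def train(archive):
    """Learn the historical selection rate for each institute."""
    stats = {}
    for record in archive:
        inst = record["institute"]
        hired, total = stats.get(inst, (0, 0))
        stats[inst] = (hired + record["hired"], total + 1)
    return {inst: hired / total for inst, (hired, total) in stats.items()}

def predict(rates, candidate):
    """Recommend hiring if the candidate's institute was historically favoured."""
    return rates.get(candidate["institute"], 0.0) >= 0.5

# Archived decisions skewed towards "Old IIT", regardless of test score.
archive = [
    {"institute": "Old IIT", "score": 60, "hired": 1},
    {"institute": "Old IIT", "score": 55, "hired": 1},
    {"institute": "Old IIT", "score": 70, "hired": 0},
    {"institute": "New IIT", "score": 80, "hired": 0},
    {"institute": "New IIT", "score": 85, "hired": 1},
    {"institute": "New IIT", "score": 90, "hired": 0},
]

rates = train(archive)
print(predict(rates, {"institute": "New IIT", "score": 95}))  # False
print(predict(rates, {"institute": "Old IIT", "score": 50}))  # True
```

The strong newcomer with a score of 95 is rejected and the weak incumbent with 50 is accepted, because the only pattern the archive offered was the institute's name. Taking the human out of the loop changed nothing.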
Are we creating a wild horse — a mustang?
Even as I write this column, calls for a moratorium on ChatGPT’s successor have come in, pleading for a pause in developing more powerful AI systems! Possibly trying to make the horse canter and not gallop!
Large Language Models (LLMs) with generative algorithms are bolting ahead to accept text and image input simultaneously and draft a “learned” reply; remember, not a “thought-through” (or understood) one.
Will the outputs be factually correct? Do we know the inner workings of the thought process, when the technical report does not provide the pertinent details?
We all know that the training is on data that is “already” there to be mined from the internet. Are harmful biases and stereotypes getting incorporated at a humongous scale, as the “Large” in Large Language Models suggests? While explanations may be offered that the larger the data set, the more diverse and representative it is, who is in charge of defining what is “large”? Are we creating a technological “mustang” through the rules imposed on the machine?
Can we tame the mustang into a workhorse?
Are we settling for lazy methods that work on ever-larger data sets? Is that putting a blinkered view towards smarter methods that look for meaning and train on curated data sets? Can the mustang ever be tamed to be a workhorse?
Managing a technology as powerful as the Generative AI used in ChatGPT could be addressed if we do the following consciously, prior to use and not as an afterthought. The cardinal rules should be:
First, bring in diversity in teams and competencies. Diverse and interdisciplinary teams can identify biases that homogeneous teams would not be able to capture and act upon.
Next, emphasise data cleaning and curation. We need to ensure that the data used to train algorithms is clean and representative of the population, to reduce the impact of bias.
Finally, continuous testing and evaluation are an absolute must. This is how new biases that could creep in can be nipped in the bud.
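As one sketch of what such continuous testing could look like, a periodic fairness check might compare selection rates across groups using the well-known “four-fifths” rule of thumb from employment-selection practice: a group is flagged if its selection rate falls below 80% of the best-off group's. The group names and numbers below are invented:

```python
# A periodic bias check on screening outcomes, using the "four-fifths"
# rule of thumb. All group names and decisions here are hypothetical.

def selection_rates(decisions):
    """Compute the fraction selected within each group."""
    totals, selected = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + hired
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Return True per group if its rate is at least 80% of the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical screening outcomes: (group, 1 if selected else 0).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

print(four_fifths_check(decisions))
```

Here group_a is selected at 75% and group_b at 25%, so group_b is flagged well below the 80% threshold. Run such a check on every fresh batch of decisions, and a creeping bias announces itself before it hardens.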
Implicit bias begins as early as childhood. This is when the human brain is developing its nerve centres and training its visual, auditory, and olfactory faculties. We become favourably disposed to known and familiar patterns and recognize them as “good” (refer back to the first-generation IIT as the brand), while the unknown and alien are treated with suspicion (remember the “green” lake).
While the anecdotes presented are innocuous, the roots of social maladies can be traced to unconscious bias, exhibited by individuals in isolation or exacerbated by social media in collective groups. Hate speech, sickening violence, and the social stigma attached to a community are all, to a large extent, manifest behaviours of implicit bias.
Let us shed bias and engage with openness, and not let technology exacerbate it. Though this is easier said than done, we should persevere.