Is AI Biased? Its Repercussions and Possible Solutions

Imagine walking toward a hotel where you already have a booking, only for the front door, seemingly electronic, to refuse to open. Meanwhile, another person steps in front of it and, voila, the same door opens as if it was never meant to be closed.

As a reader, you must be wondering what a hotel's front door has to do with AI, or with this topic. Well, the technology that lets an electronic door open for certain individuals and stay closed for others is powered by AI. It matches your identity against the hotel's booking records; on a successful match, the door opens. Otherwise, you need to head to the booking window or book the room online. Drastic times call for drastic measures, and the times we currently live in (hint: COVID-19) require such measures to ensure the safety of staff and guests. This also shows the extent to which AI has become an integral part of our lives, no matter where we go.


AI has seen a massive boom in recent years, thanks to advances in computational power and algorithmic knowledge. It has impacted our lives so deeply that many are calling this the age of machines, or the digital age, where everything is interconnected like a web of information, providing ample fuel for AI algorithms to work their magic. Despite AI's unparalleled potential, it is not all moonlight and roses, and we have slowly begun to realize the shortcomings of systems that rely heavily on AI, especially for decision-making.

The problem of bias in AI systems started to surface when a number of people with dark complexions began complaining that state-of-the-art facial recognition systems failed to recognize, or even detect, their faces.


“When we talk about bias, it can be towards a certain color, race, gender, ethnicity, or a certain trend in the data.”


An AI-powered system unable to recognize a dark-complexioned person might not sound like a big issue or a matter of life and death; at worst, it leads to the problem described at the beginning of this article. But there are several other cases with consequences far greater than being stranded outside your hotel. We discuss some of them below.

Consider what happens when we hand the power of decision-making to machines: whether or not to open a door, which candidate to hire for a job, which individual is at greater risk of being radicalized, or which suspect to convict of a crime. Believe it or not, AI is involved in all of these decisions, and many more, in one way or another.

Amazon developed an AI-based solution to help recruiters shortlist the best candidates from hundreds of thousands of applicants. On the upside, the system would save recruiters a lot of time they would otherwise spend going through each application individually. What they did not expect was that, because of the male-dominated workforce at most companies, the system learned to ignore female applicants, and even applications with a hint of words related to women, reflecting the existing trend of higher retention of male employees. This led Amazon to scrap the multi-million-dollar project [1], but many similar, or even less sophisticated, systems remain in place, deciding who is the right candidate for a listed position.

Many courts in the US have employed AI-based tools that assist judges in deciding a convict's sentence based on past judgments and the nature of the crime. Independent analysts fear that because the US judicial system has historically been harsher toward African Americans and other people of color, owing to the nation's history of racial divide, these systems may be influenced by a person's ethnicity and locality instead of judging purely on merit. Several law enforcement agencies around the world use AI systems to catch suspects in a case, and they are constantly searching for individuals who might turn to violent behavior in the future based on their past data. Given the kinds of bias we have seen in AI systems, convicting a suspect on an AI system's suggestion is a very dangerous route, and this is where the drawbacks of AI as a decision-making tool, in its current form, become apparent.


Having discussed the biases in AI and how they affect our daily lives, let's quickly go through why this happens. I can assure you that the researchers behind this technology had no intention of it behaving in a way that harms or disadvantages certain individuals. So where exactly is the problem?


“The problem lies in the data, not the algorithms!”


AI algorithms are designed to find patterns in data, be it visual, textual, or numerical, much like the human brain works and exactly how a child learns to differentiate between a cat and a dog. The child finds patterns in the cat's image, the shape of the ears, the size of the eyes, the facial features, so that when a new cat appears, it is instantly recognized as a cat. With that in mind, when we look at some of the most popular datasets used to train these AI models, they are often skewed toward a certain class. In facial recognition, that class is light-skinned individuals; in US criminal cases, it is African Americans; among employees with high retention in IT jobs, it is males; in terrorist activity in recent years, it is Muslims; and so on. In almost every case the data is highly imbalanced, which is hard to notice without hindsight and because of the data's sheer size. Why we always end up with such biased data is a debate for another day, better left to sociologists. What we can discuss is how to overcome this problem.
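To make the imbalance concrete, even a quick count over a dataset's labels exposes the skew. The sketch below uses plain Python with a made-up label list standing in for a real dataset's annotations; the class names and 80/20 split are illustrative assumptions, not figures from any actual dataset:

```python
from collections import Counter

def class_balance(labels):
    """Return each class's share of the dataset, largest first."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: count / total for cls, count in counts.most_common()}

# Hypothetical face-dataset labels; real datasets show similar skews.
labels = ["light-skinned"] * 800 + ["dark-skinned"] * 200
print(class_balance(labels))
# → {'light-skinned': 0.8, 'dark-skinned': 0.2}
```

A model trained on these labels sees one group four times as often as the other, so it naturally gets better at the majority class, which is exactly the failure mode the facial recognition complaints describe.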

A short-term solution would be for developers and researchers to have better checks on their data and to ensure that no such discrimination occurs in the solutions they ship. Easier said than done! Several obstacles stand in the developers' way: they are always short on time, which makes it difficult and almost unreasonable to spend extra effort on ensuring system neutrality, and there is simply not enough data available for equal representation of every class. This leads us to a more robust, long-term solution, which should come from governments: compel the big corporations making big bucks from their AI systems to ensure no such discrepancy is present, since they can afford the time and effort to do so. A proper legislative framework addressing these issues is also the need of the hour, to nip the evil in the bud before it transforms into something much harder to control, because AI is here to stay and will be an even more integral part of our lives in the future.
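One of the simplest data-side checks a developer can apply is rebalancing the training set before the model ever sees it. The sketch below shows naive random oversampling of minority classes; the toy samples and the `oversample` helper are hypothetical, and oversampling is only one mitigation among many, not a cure for biased data:

```python
import random

def oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the majority class's count (a simple mitigation only)."""
    random.seed(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out = []
    for y, xs in by_class.items():
        extra = random.choices(xs, k=target - len(xs))
        out.extend((x, y) for x in xs + extra)
    random.shuffle(out)
    return out

balanced = oversample(["a1", "a2", "a3", "b1"], ["A", "A", "A", "B"])
# Both classes now appear three times each in `balanced`.
```

Duplicating samples does not add new information about the minority class, which is why collecting more representative data remains the better long-term fix, as argued above.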


References

  1. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
