Artificial Intelligence (AI) is quickly changing how we view jobs and productivity. Previously, we wrote about 6 practical applications of artificial intelligence that are impacting major industries. In that article, we discussed how businesses, researchers, and everyday workers gained productivity and uncovered insights that would otherwise stay hidden from the human eye. With AI taking over routine work, businesses can focus on more creative pursuits like strategy and product design.
However, artificial intelligence is ultimately just a tool. Automation may remove direct human control, but that does not mean it is free of human flaws. So before businesses or researchers fully embrace artificial intelligence, let's ask ourselves these two ethical questions.
Are Our Algorithms Biased?
Source: Comet Labs | Chinese authorities are quick to adopt Facial Recognition Technology (FRT) for the purpose of security and surveillance
If an AI is tasked with detecting red trucks on a highway, coders first have to define what counts as a red truck. The algorithm is only as good as the person who wrote it. And the application of artificial intelligence does not stop at inanimate objects. As anyone who has been auto-tagged on Facebook knows, artificial intelligence is also applied to humans. That means when there are errors, there is a social and human cost.
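To see how a coder's choices become the system's bias, here is a minimal, hypothetical sketch of a rule-based "red detector". The thresholds (`r_min`, `g_max`, `b_max`) are invented for illustration; whoever picks them decides, implicitly, which reds the system will ever see.

```python
# Hypothetical rule-based colour check: the coder hand-picks what
# counts as "red". Those hand-picked thresholds ARE the bias.
def is_red(rgb, r_min=150, g_max=80, b_max=80):
    r, g, b = rgb
    return r >= r_min and g <= g_max and b <= b_max

bright_red = (200, 30, 30)   # fire-engine red in daylight: detected
dark_red = (120, 20, 20)     # maroon truck at dusk: silently missed
print(is_red(bright_red), is_red(dark_red))  # True False
```

The dark-red truck is not "hard to detect" in any objective sense; it simply falls outside the range the coder thought to cover. The same dynamic applies when the objects are faces.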
Joy Buolamwini, a computer scientist and graduate researcher at the MIT Media Lab, coined the term "Coded Gaze" to describe the biases algorithms may carry when performing tasks. She defined it as "algorithmic bias that can lead to social exclusion and discriminatory practices."
Buolamwini encountered the issue while working on an art project called "The Aspire Mirror", which uses Facial Recognition Technology (FRT). The software she was using could not detect her face at all; it only registered her when she put on a white mask. Buolamwini was not alone in her experience as a Black person facing flawed FRT. In 2015, Brooklyn resident Jacky Alciné found that the Google Photos app on his phone had labelled his photo as "Gorillas".
Source: MIT.edu | Joy Buolamwini speaking about FRT in a TED Talk
Buolamwini, working with University of Toronto researcher, Deborah Raji found that FRT from Microsoft, IBM, and Amazon have trouble assessing the gender of darker skinned women. “In light of this research, it is irresponsible for the company to continue selling this technology to law enforcement or government agencies,” Buolamwini argued against Rekognition, Amazon’s FRT product. In a separate blog post, Buolamwini also pointed out that according to a Big Brother Watch UK report, there were false positive match rates of over 90% for FRT deployed by the UK Metropolitan Police.
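A 90%+ false positive match rate is less surprising than it sounds once base rates are considered. The numbers below are purely illustrative (not taken from the report): even a seemingly accurate classifier produces mostly false alerts when genuine matches are rare in the scanned population.

```python
# Illustrative base-rate arithmetic (hypothetical numbers, not the
# Big Brother Watch figures): watchlist hits are rare in a crowd.
crowd = 100_000        # faces scanned
on_watchlist = 10      # genuine matches present in the crowd
tpr = 0.99             # chance a real match is correctly flagged
fpr = 0.01             # chance an innocent face is wrongly flagged

true_alerts = on_watchlist * tpr             # ~10 correct alerts
false_alerts = (crowd - on_watchlist) * fpr  # ~1,000 false alerts

false_match_rate = false_alerts / (true_alerts + false_alerts)
print(f"{false_match_rate:.1%} of alerts are false matches")
```

With these assumed numbers, roughly 99% of all alerts are false matches, despite the classifier being "99% accurate" on both classes. Deployments against large crowds inherit this problem by design.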
Are Our Databases Biased?
Source: Kochie’s Business Builder
In 2017, according to a survey by talent software firm CareerBuilder, 55% of U.S. human resources managers said artificial intelligence would be a regular part of their work within the next five years.
However, that prediction may not materialise as people cast doubt on artificial intelligence in recruitment. Amazon recently abandoned its attempt to automate its hiring process after realising the system was rejecting female applicants. The system was also found to be downgrading graduates of women-only colleges.
Amazon's project team of machine learning specialists trained the system on a database of resumes submitted to the company over a 10-year period. Unfortunately, because the tech industry had historically favoured male employees, the data the system learned from reflected that bias. In ranking resumes, the system favoured patterns associated with male candidates and downgraded resumes with markers associated with women. The only way to get a fair assessment from machine learning is to train it on unbiased data. Left unchecked, this lets any biased user wash their hands of responsibility and claim the decisions were made or recommended by a non-human entity.
What Can We Do About This?
Source: Campbell Law Observer
Antony Cook, Microsoft's Associate General Counsel for Corporate, External and Legal Affairs for Asia, said that "having access to large and diverse data sets helps to train algorithms to maintain the principle of fairness." Amazon's experience above shows how narrow even 10 years of data can be when the underlying industry is skewed.
Fei-Fei Li, chief artificial intelligence scientist at Google Cloud, currently on leave from her position as director of the Stanford Artificial Intelligence Lab, pointed out that she was the only female scientist in her Stanford lab. She argued that if AI builders do not come from diverse backgrounds, artificial intelligence will make questionable decisions. For example, an AI would unfairly reject loan applications if minorities had been unfairly singled out in the past, and without non-white engineers, neural networks for FRT risk being trained only on white faces.
This does not mean we should discard data science or artificial intelligence as decision-making tools. It means we should remind ourselves that artificial intelligence is only as good as the people who write its algorithms, and that the data we collect is only as useful as the parameters we impose to make it usable.
For artificial intelligence to be useful in the future, we need to make sure it reflects the many, rather than the few.
Thinking of embracing ethical AI principles in your workforce? We have 3 HRDF SBL-Khas claimable courses you could explore. Work your way through our AI courses: Introduction to Artificial Intelligence for Business Executives, Machine Learning for Business Intelligence, and Introduction to Deep Learning with NVIDIA GPUs.
[Just in: Now with our Maybank ZERO% interest-free 18-month instalment plan, training costs just got a whole lot more affordable! Get the exact cost breakdown here.]