Bias in AI

Artificial intelligence, as big and technical as it might sound, has blended into our lives so seamlessly that we take it for granted, and that is impressive. We often don't even realize we are using these sophisticated algorithms every single day: binge-watching Netflix series, asking Siri or Cortana to play your favorite song, getting online recommendations for the dress you might like to buy and the pages you might want to visit, unlocking a phone with face recognition, using iris biometry to unlock a safe in a bank, even using software like Grammarly to write this proposal! AI is in every field, blended in like a part of our lives, which is great! Unless something goes so wrong that it might not just cost us billions of dollars but do something worse. Enter bias.

“AI doesn’t have to be evil to destroy humanity,” as Elon Musk rightly said.

The machine doesn't have to drift to the so-called “dark side” on its own; it only learns what we want it to learn, based on the data it's fed.

We as humans are biased; we have a lot of prejudices, some we know about and some we might not even be aware of. One promise of artificial intelligence is that if we turn decision-making over to computers instead of humans, we might be able to avoid some of that bias. For example, if a human had to read every single piece of feedback on a product before giving the verdict “this product is useful,” the process would be inefficient and open to bias: the person reading the feedback might dislike the product and manipulate the data, skewing the final verdict, whereas a machine could eliminate that. But what if the learning algorithms are just as biased as the humans who created them? Why does this issue matter? AI is becoming more and more involved in major decisions in many fields: in some places it helps decide who gets a loan, who should be hired or fired, even how defendants and criminals are treated and how long they go to jail! If these algorithms are flawed or biased, they could actually amplify injustice and inequality, and that is a major setback for having technology be part of our daily lives. The roots of the problem may be many, but to light upon one: AI is only as sharp as the data it learns from.

For instance, a study at the MIT Media Lab found that leading facial recognition systems correctly identified white men's faces 99% of the time, but made mistakes on dark-skinned women's faces up to 35% of the time! Very likely, the data used to train the software was overwhelmingly white and male: one widely used data set was estimated to be more than 75% male and more than 80% white.
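
To make the kind of disparity that study measured concrete, here is a minimal Python sketch of a per-group accuracy audit. All the labels, predictions, and group names below are invented toy data, not the actual benchmark:

```python
# A minimal sketch of a per-group accuracy audit: instead of one overall
# accuracy number, break the score down by demographic group so
# disparities become visible. Toy data, invented for illustration.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} so disparities show up at a glance."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Toy example: near-perfect on one group, poor on another.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["lighter-male", "lighter-male", "lighter-male", "darker-female",
          "darker-female", "lighter-male", "darker-female", "lighter-male"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'lighter-male': 1.0, 'darker-female': 0.333...}
```

A single headline accuracy of around 75% would hide exactly the gap this breakdown exposes.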

ProPublica recently investigated the algorithms used in courtrooms to predict the risk of criminals committing future crimes. These “risk assessments” can be used to help make critical decisions, like who can be released from prison and how high the bail amount should be. Shockingly, ProPublica found that the algorithm falsely predicted black defendants to be future criminals at almost twice the rate of white defendants. This isn't a new problem. Researchers are shining a light on these systems, the people who create them, and the data they train on, as part of a movement calling for “algorithmic accountability.” The bottom line: as algorithms get more and more complicated, and as machine learning and artificial intelligence become more powerful, they are also becoming less transparent, which means it is hard for us to tell why an algorithm behaves the way it does or why it made a decision the way it did. Transparency is becoming more difficult, but it is essential to make sure these algorithms are not behaving in a biased way and to find a feasible solution to this problem.
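
The specific disparity ProPublica measured was in false positive rates: defendants labeled high-risk who did not go on to reoffend. Here is a minimal sketch of that kind of audit, using invented records rather than the real COMPAS data:

```python
# A minimal sketch of a ProPublica-style audit: compare false positive
# rates (labeled high-risk but did not reoffend) across groups.
# The records below are invented, not the COMPAS data.
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), where y_true = 1 means the person reoffended."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(records):
    """records: (group, reoffended, predicted_high_risk) triples."""
    return {
        g: false_positive_rate(
            [t for grp, t, _ in records if grp == g],
            [p for grp, _, p in records if grp == g],
        )
        for g in {grp for grp, _, _ in records}
    }

records = [
    ("black", 0, 1), ("black", 0, 1), ("black", 0, 0), ("black", 1, 1),
    ("white", 0, 1), ("white", 0, 0), ("white", 0, 0), ("white", 1, 1),
]
print(fpr_by_group(records))  # {'black': 0.667, 'white': 0.333} -> a 2x gap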

Today the United States crumbles under the weight of two pandemics: coronavirus and police brutality. Racism has always existed in this world; conditions might be better now, but eradicating this pest will take years! It doesn't have to be the same in AI, yet there is plenty of evidence of bias baked into artificial intelligence. The machine can't learn discrimination until it is taught. Faulty algorithms and methodology can hinder justice. We've designed and deployed facial recognition systems that target criminal suspects on the basis of skin color. We've trained machines that score dark-skinned defendants as more likely to commit future crimes and that disproportionately identify Latinx people as illegal immigrants. Not to mention, the “Beautify” option in photo editing apps brightens skin color!

Consider a case from 2017, when a video went viral of a soap dispenser that released soap only onto white hands. Soap dispensers do not use very sophisticated sensors like other high-end technology: they emit infrared rays and dispense soap when enough of that light is reflected back. The problem is that dark skin does not reflect enough of the rays to trigger the sensor. Another, similar study showed that the sensors on driverless cars are less likely to stop for a dark-skinned pedestrian, and the reason is that the datasets used for testing consisted mostly of white men. Evidently, dark skin was simply not considered when making these products.
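
A toy model of how such a threshold-triggered infrared sensor fails makes the failure mode obvious; the threshold and reflectance numbers here are invented for illustration:

```python
# Toy model of an IR soap dispenser: it fires only when enough of the
# emitted infrared light bounces back. Threshold and reflectance values
# are invented for illustration.
TRIGGER_THRESHOLD = 0.5  # fraction of emitted IR that must be reflected

def dispenses_soap(reflectance: float) -> bool:
    """Trigger only if the hand reflects enough infrared light."""
    return reflectance >= TRIGGER_THRESHOLD

print(dispenses_soap(0.8))  # lighter skin reflects more IR -> True
print(dispenses_soap(0.3))  # darker skin reflects less IR  -> False
```

Calibrating that threshold against the full range of human skin reflectance, or simply testing on diverse hands, would have caught the failure before shipping.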

In another viral incident, a Nikon camera's facial-feature detection asked “Is your friend blinking?” about a young Asian woman, whose eyes naturally narrow when she smiles. Joy Buolamwini recently discovered that a robot recognized her face only when she wore a white mask, while her actual skin color is dark. Also worth mentioning is the AI slip-up that tagged an African American couple as “gorillas” in their photos. These incidents give us a clear picture that the algorithms we use aren't tested on diverse datasets, which comes down to the main aspect: lack of diversity in the team. Supporting this, the tech and computer industry in the countries that develop these high-end machines and distribute them all over the world is still overwhelmingly dominated by white men.

In 2016, ten large IT companies in Silicon Valley did not employ a single black woman. Three had no black employees at all, and barely any professionals of color.

We need to first make sure our data is diverse, which will help reduce inequality and injustice. An algorithm can in fact learn too much! One example is Tay, the 2016 chatbot that learned to tweet racist messages, but the risk applies to any algorithm. Worse, this can lead to inappropriate generalizations. AI works by generalizing, and biased thinking often has its foundation in generalization, that is, in making blanket assumptions about a given type of people. Generalization is a core part of how machines learn, but when it comes to racism in technology it makes no sense to apply it everywhere: practical circumstances cannot always be generalized; they are spontaneous. Skin color should never have been a feature for identifying a criminal in the first place; a better feature to improve the results would be the person's criminal history, with points assigned for certain kinds of crimes, as sketched below.
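
As a sketch of that last point, with hypothetical column names, and only as a first step, since other columns can still act as proxies for race:

```python
# Hypothetical feature filter: keep behavioral history, drop protected
# attributes. This alone does not guarantee fairness, since remaining
# columns can still proxy for race, but it keeps skin color out of the model.
PROTECTED = {"race", "skin_color", "gender"}

def select_features(record: dict) -> dict:
    """Return only the non-protected fields of a defendant's record."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

defendant = {"race": "black", "prior_offenses": 2,
             "offense_severity_points": 4, "age_at_first_offense": 19}
print(select_features(defendant))
# {'prior_offenses': 2, 'offense_severity_points': 4, 'age_at_first_offense': 19}
```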

Returning to the case of the soap dispensers: instead of cheap sensors like infrared ones, we can use higher-grade sensors, and the problem is not confined to sanitation; sports wearables like Fitbit heart-rate monitors also respond poorly to dark skin. In such cases, one way we can teach the algorithm is by punishing it: every time it doesn't respond the way it is supposed to, we cut down its score during the training phase, which it will remember the next time, just like how we teach naughty kids. But the main requirement for this is having a dataset that's diverse.
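
In standard machine-learning terms, that “punishment” can be expressed as a per-sample weight in the loss function, so mistakes on under-served groups cost more. A minimal sketch, with weights invented for illustration:

```python
# A minimal sketch of "punishing" a model more for errors on groups it
# currently serves badly, via per-sample weights in the loss.
# The weights below are invented for illustration.
def weighted_squared_loss(y_true, y_pred, weights):
    """Mean squared error where each sample can be penalized differently."""
    assert len(y_true) == len(y_pred) == len(weights)
    total = sum(w * (t - p) ** 2 for t, p, w in zip(y_true, y_pred, weights))
    return total / sum(weights)

# Samples from an under-served group get weight 3.0 instead of 1.0,
# so mistakes on them dominate the training signal.
y_true  = [1.0, 0.0, 1.0, 1.0]
y_pred  = [0.9, 0.1, 0.4, 0.5]
weights = [1.0, 1.0, 3.0, 3.0]
print(weighted_squared_loss(y_true, y_pred, weights))
```

During training, a model minimizing this loss is pushed hardest to fix exactly the errors it is being “punished” for.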

In the above cases, it's not hard to find out what might be causing this. First, when a product doesn't work for a certain class of people, there is a lack of diversity in the testing process; for there to be diversity in testing, the researchers working on it must be diverse and come from different backgrounds. When there is no diversity in the room, the machines being taught learn the same discrimination, biases, and internal prejudices as the majority class developing them. Hence, having diversity in the team helps reduce bias, because it lets the team spot in the testing phase where the algorithm might fail. Second, biased data. AI needs data to learn how to do its tasks, lots of data of course, and if the data used is biased, it goes without saying that the output will be biased. The facial-recognition problem described above can be due to insufficient diversity in the training data, where diversity includes people of different colors and facial features: if only 10 out of 100 images fed in show people of color, the AI will learn to identify white faces far better than any other race. Hence, if the training set is free from bias we can expect better results (see the rebalancing sketch after this paragraph). Third, faulty algorithms can be harmful. Recommendation-driven browsing uses algorithms that control what information we see. These create what are known as “filter bubbles,” which serve content you are likely to agree with rather than the other side of the story, and which, unfortunately, lead these algorithms and implementations to recommend radical content promoting biased views.
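
Here is a minimal sketch of checking that balance and oversampling the minority group; the 90/10 split mirrors the “10 out of 100” example above. Oversampling only duplicates existing images, so collecting more real, diverse data remains the better fix:

```python
# A minimal sketch of checking and rebalancing a skewed training set.
# Oversampling duplicates minority samples until groups are equal in size.
import random
from collections import Counter

def rebalance(samples, key, seed=0):
    """Oversample minority groups until each matches the largest group."""
    rng = random.Random(seed)
    counts = Counter(key(s) for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        pool = [s for s in samples if key(s) == group]
        balanced += rng.choices(pool, k=target - count)
    return balanced

faces = [{"skin": "light"}] * 90 + [{"skin": "dark"}] * 10
print(Counter(f["skin"] for f in faces))     # Counter({'light': 90, 'dark': 10})
balanced = rebalance(faces, key=lambda f: f["skin"])
print(Counter(f["skin"] for f in balanced))  # Counter({'light': 90, 'dark': 90})
```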

Coming to a conclusion on this problem statement: we can reduce bias by creating better and more diverse datasets with which to train the algorithms, by sharing best ethical practices among software vendors, and by implementing algorithms that explain their decision making, or are at least more transparent, so that any bias can be understood. We change AI from ACTING HUMANLY to ACTING RATIONALLY. Diversity is incredibly important among the groups of people actually designing and creating the algorithms, because without diversity, prejudice can be baked into these algorithms, and they will behave in a biased way just like a biased person would. The technology we design, create, and deploy, whether it's contact tracing, facial recognition, or social media recommendations, has caused injustice at a scale no single human could. We've invented credit-scoring algorithms that disproportionately flag black people as risks and prevent them from buying homes, getting loans, or getting hired. So the question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy.