Bias in Artificial Intelligence

We frequently interact with Artificial Intelligence in our daily lives, whether it is Siri, the virtual assistant that aids iOS users, or Amazon’s Alexa, a virtual assistant that responds to requests beginning with a simple “Hey Alexa.” The development of Artificial Intelligence is certainly impressive, but there are also instances of bias that we must be aware of.

For example, courts have used a machine learning program called “COMPAS” to estimate how likely a defendant is to commit another crime after being booked. Reporters who investigated the program discovered that the software rated Black defendants as higher risk than white defendants. Another algorithm, “PredPol,” is used to predict when and where crimes may take place. It frequently sent police officers to areas consisting mainly of minorities, regardless of the actual crime rate in those areas.

Additionally, there was an instance in 2015 where Google’s Image Search showed bias. In an image search for “CEOs,” only 11% of the people shown were women, even though 27% of chief executives in the United States were female at the time.
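To make the gap in that statistic concrete, here is a minimal sketch (not part of the original reporting) that compares a group’s share in search results to its real-world share, using the figures cited above:

```python
def representation_ratio(shown_share: float, actual_share: float) -> float:
    """Ratio of a group's share in results to its real-world share.

    A value of 1.0 means the results mirror reality; values below 1.0
    mean the group is under-represented in the results.
    """
    if actual_share == 0:
        raise ValueError("actual_share must be non-zero")
    return shown_share / actual_share

# Figures from the 2015 Google Image Search example: women were
# 11% of "CEO" image results but 27% of U.S. chief executives.
ratio = representation_ratio(0.11, 0.27)
print(f"Women appeared at {ratio:.0%} of their real-world rate")
```

By this rough measure, women appeared in the results at well under half the rate their actual numbers would suggest.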

Artificial Intelligence is not perfect, and while it is convenient for completing simple everyday requests, these examples show it is still not capable of acting completely objectively.

Author: Jeremiah Daniels

Jeremiah Daniels is a graduate of Rowan University with a bachelor's degree in Journalism. He is currently working for IEEE as a summer intern for Educational Activities.