The Impact of Racial Bias in Facial Recognition Technology
A Comprehensive Review of How the Shortcomings of Facial Recognition Technology Harm Minority Groups
There was a time when artificial intelligence (AI) was only subtly entering our lives. Now, technology and tools that utilize AI are everywhere, turning our personal data into statistics on a graph to, first and foremost, increase profits through data-driven insights. What’s more, these machines are only becoming more powerful and capable, as illustrated by Moore’s Law[i].[1] This has led to growing concerns among adults in the United States, a majority of whom, according to a 2021 survey conducted by the Washington Post,[2] do not trust various large technology companies with their data, most notably Facebook, Instagram, TikTok, and WhatsApp, as shown in Figure 1. It is worth mentioning that these technology giants, among the world’s most influential, feed their algorithms colossal amounts of data that are evidently not properly secured, as Facebook’s litigation history regarding data breaches shows.

In 1986, Melvin Kranzberg formulated his six “Kranzberg’s Laws” of technology — a series of truisms summarizing his studies of technology’s sociocultural influences — the first of which states: “Technology is neither good nor bad; nor is it neutral”[3]. As we continue progressing through an age that rewards advances in computer intelligence and strives to do today’s jobs better with the help of powerful artificial intelligence tools, it is imperative that we understand exactly how these systems affect us. What’s more, how human-computer interaction should be used and understood remains contradictory and elusive to many, necessitating clarification of its impact on society and, more importantly, its dangers. While artificial intelligence has made remarkable advances in many sectors of society — automation across virtually every impacted industry, increased profitability in finance[4], greater efficiency in pharmaceutical development[5], safer and more efficient transportation,[6] and quicker, more accurate diagnoses of disease than even healthcare professionals[7] — it does not come without shortcomings; artificial intelligence that utilizes facial recognition technology has unintentionally, but substantially, disenfranchised minority groups due to the implicit racial bias embedded in its code.
When biases that advantage one group over another are embedded in an AI system and remain inconspicuous for extended periods of time, they harm the lives of people of color who are not fairly represented by these systems. No one expects that, on a hot summer day, on his front lawn with his wife and children, an innocent Black man will be arrested on the grounds that a computer identified him as the perpetrator of a recent robbery at a store he hadn’t visited in nearly five years. And yet that is precisely what happened to Robert Julian-Borchak Williams in Farmington Hills, Michigan[8].
As Kashmir Hill recounted in The New York Times, on a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested. He thought at first that it was a prank.
An hour later, when he pulled into his driveway in a quiet subdivision in Farmington Hills, Mich., a police car pulled up behind, blocking him in. Two officers got out and handcuffed Mr. Williams on his front lawn, in front of his wife and two young daughters, who were distraught. The police wouldn’t say why he was being arrested, only showing him a piece of paper with his photo and the words “felony warrant” and “larceny.”
During questioning, an officer showed Williams a picture of the suspect. As he later told the ACLU[9], Williams rejected the claim outright. “This is not me,” he told the officer. “I hope y’all don’t think all black people look alike.” He says the officer replied: “The computer says it’s you.”
Mr. Williams is one of many people affected by biased facial recognition technology. Joy Buolamwini, a leading researcher at the MIT Media Lab who studies bias in facial recognition software, argues that algorithms are usually written “by white engineers who dominate the technology sector… These engineers build on pre-existing code libraries, typically written by other white engineers,” ultimately perpetuating racism throughout various sectors of society.[10] But these algorithms aren’t perpetuating these issues of their own volition.[ii]
In fact, when AI algorithms are trained, the training optimizes their pattern-recognition abilities and nothing else. This is typically done in one of several ways: supervised machine learning, unsupervised machine learning, reinforcement learning, or semi-supervised learning. The facial recognition technology that affected Mr. Williams used the supervised-learning technique. As explained by Julianna Delua, a member of the Analytics department at IBM, one of the world’s leading companies in AI research, “supervised learning is a machine learning approach that’s defined by its use of labeled datasets.”[11] In this approach, researchers supervise an algorithm as it attempts to predict certain outcomes; with each attempt, the algorithm receives feedback on its previous trials, iterates on its predictive capabilities, and gradually gets better at guessing[iii].
Facial recognition technology is best understood with the following analogy: suppose a machine-learning researcher gives an algorithm the ability to classify an image as either a cat or a dog — that is, two possible labels to choose between — and the model incorrectly guesses that the animal in a photo is a “dog.” The algorithm receives immediate feedback that it was incorrect, because the dataset was pre-labeled by the machine-learning researchers. Knowing that its guess was wrong, the algorithm then uses that result to influence its future predictions, minimizing its cost function[12]. After only a few trials, the algorithm would still be predicting “cat” or “dog” essentially at random. However, after several thousand, or even millions, of trials — which is possible thanks to the immense computing power of these machines — the algorithm might eventually reach, for instance, 70% accuracy in its classifications. This means that, on the dataset provided, the algorithm would correctly predict whether a dog or a cat was in a photo 70 times out of 100.
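To make the analogy concrete, here is a minimal sketch, in Python, of supervised learning on a labeled cat-versus-dog dataset. Everything in it is invented for illustration (the two numeric “features,” the synthetic labels, and the accuracy it reaches), and it uses a simple logistic-regression loop rather than the far larger neural networks behind real facial recognition systems.

```python
# A minimal, illustrative sketch of supervised learning on a labeled
# "cat vs. dog" dataset. All features and labels are synthetic; a real
# face recognition model is vastly larger but follows the same loop.
import math
import random

random.seed(0)

def make_example(label):
    """One pre-labeled example: label 0 = cat, 1 = dog. The two numbers
    loosely stand in for measurements such as ear shape and snout length."""
    center = 1.0 if label == 1 else -1.0
    return ([random.gauss(center, 2.0), random.gauss(center, 2.0)], label)

train = [make_example(random.randint(0, 1)) for _ in range(1000)]
test = [make_example(random.randint(0, 1)) for _ in range(200)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1

def prob_dog(x):
    z = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1.0 / (1.0 + math.exp(-z))

# Each pass compares the guess with the known label and nudges the weights
# to shrink the cost (cross-entropy) -- the feedback loop described above.
for epoch in range(20):
    for x, y in train:
        error = prob_dog(x) - y
        weights[0] -= lr * error * x[0]
        weights[1] -= lr * error * x[1]
        bias -= lr * error

correct = sum((prob_dog(x) >= 0.5) == (y == 1) for x, y in test)
print(f"test accuracy: {correct / len(test):.0%}")  # roughly 70-80% here
```

The structure mirrors what the passage describes: labeled examples, repeated guesses, feedback from a cost function, and an accuracy figure at the end.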
One of the most crucial things to realize is that the results of any algorithm are heavily dependent on the data it is given. According to Emilio Bazan, if you give an algorithm 500 labeled pictures of black cats and 500 labeled pictures of white dogs, chances are that when you present the algorithm with a white cat, it will incorrectly classify the cat as a dog. “The algorithms aren’t willfully making these decisions,” says Bazan, emphasizing the algorithm’s lack of consciousness. “They’re just doing what their programmers told them to look for, and optimizing for those parameters accordingly.” Sometimes this leads to disastrous consequences, as in the case of the wrongfully accused Mr. Williams.
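The black-cat/white-dog scenario can be reproduced in a few lines. The sketch below is purely hypothetical (the single “coat brightness” feature and all of the numbers are made up), but it shows the mechanism: a classifier trained only on dark cats and light dogs learns that brightness means “dog,” so a white cat is confidently misclassified.

```python
# Hypothetical illustration of a skewed dataset: 500 dark cats and 500 light
# dogs. The single "coat brightness" feature and all numbers are made up.
import math
import random

random.seed(1)

data = ([(random.uniform(0.0, 0.3), 0) for _ in range(500)]     # black cats
        + [(random.uniform(0.7, 1.0), 1) for _ in range(500)])  # white dogs

w, b, lr = 0.0, 0.0, 0.5
for _ in range(100):                       # same training loop as before
    for brightness, label in data:
        p = 1.0 / (1.0 + math.exp(-(w * brightness + b)))
        w -= lr * (p - label) * brightness
        b -= lr * (p - label)

def classify(brightness):
    p = 1.0 / (1.0 + math.exp(-(w * brightness + b)))
    return "dog" if p >= 0.5 else "cat"

print(classify(0.15))  # a black cat -> "cat"
print(classify(0.90))  # a WHITE cat -> "dog": the gap in the data becomes the error
```

In a real system the skew is subtler, but the lesson is the same: a model can only reflect the data it was given.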
Mr. Williams is not an outlier. Many others have suffered similar circumstances, including Willie Lynch, who was misidentified by another facial recognition system that classified him as a dangerous criminal recently involved in a drug deal. “It’s considered an imperfect biometric,” said Georgetown University researcher Clare Garvie, who took up the question of the validity of facial recognition technology following Mr. Lynch’s case. In a 2016 study she co-authored, called ‘The Perpetual Line-Up,’ Garvie wrote: “There’s no consensus in the scientific community that [facial recognition technology] [actually] provides a positive identification of somebody [of a minority group].”[13] Further studies conducted by M.I.T. and the National Institute of Standards and Technology (NIST) reach similar conclusions: facial recognition technology is known to work relatively well on white men, but its results are far less accurate for other demographics, especially people from minority backgrounds. This is believed to be partly due to the lack of diversity among the images that make up the training datasets.[14] In other words, our society’s facial recognition algorithms are trained predominantly on images of white people and simply do not represent minority groups as well as they should. The result is a technology that systematically favors white people over minority groups.
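The MIT and NIST findings come from exactly this kind of disaggregated evaluation: error rates measured per demographic group rather than as one overall number. Below is a minimal sketch of that bookkeeping; the group names, identities, and records are placeholders invented for illustration, not data from either study.

```python
# A sketch of a disaggregated audit: measure accuracy separately for each
# demographic group instead of reporting one overall number. The records
# below are invented placeholders, not real study data.
from collections import defaultdict

# Each record: (demographic group, ground-truth identity, predicted identity)
results = [
    ("lighter-skinned male",  "id_01", "id_01"),
    ("lighter-skinned male",  "id_02", "id_02"),
    ("darker-skinned female", "id_03", "id_07"),   # misidentification
    ("darker-skinned female", "id_04", "id_04"),
    # ... a real audit would use thousands of records per group
]

totals = defaultdict(lambda: [0, 0])            # group -> [correct, total]
for group, truth, predicted in results:
    totals[group][0] += int(truth == predicted)
    totals[group][1] += 1

for group, (correct, total) in totals.items():
    print(f"{group:>22}: {correct / total:.0%} accuracy ({total} samples)")
```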
Another system that has drastically influenced the lives of minority groups is a piece of software called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). COMPAS is a tool designed to predict a criminal defendant’s likelihood of re-offending, according to creative technology journalist Alex Fefegha, writing for the People of Color in Tech publication.[15] Based on the computer-generated prediction — which analyzed the answers to a 137-question survey, a preview of which is shown in Figure 2 — judges would increase or decrease the severity of the punishments they allotted to offenders, from bail amounts to sentences. The results, once more, were heavily skewed to favor one group of people, and it was not those from minority backgrounds.

ProPublica, a nonprofit news organization, discovered that, according to COMPAS, “black offenders were seen almost twice as likely as white offenders to be labeled a higher risk, but not actually re-offend. While the COMPAS software produced the opposite results with whites offenders: they were identified to be labeled as a lower risk more likely than black offenders despite their criminal history displaying higher probabilities to re-offend…” This is exemplified in Figure 3, which shows the risk assessments for various offenders, with Black people rated higher risk, on average, despite significantly less serious offenses. According to The Atlantic, COMPAS’s predictions are only around 65% accurate.[16] That figure is shocking considering that the software has been used to assess more than a million offenders in the United States, illustrating the unchecked power these algorithms hold over the minority groups they unfairly disadvantage.

According to the figure, Dylan Fugett had a previous history of two armed robberies, an attempted armed robbery, and grand theft, whereas Bernard Parker had just four juvenile misdemeanors. When COMPAS was asked to evaluate each man’s risk of reoffending, Fugett received a score of just 3 — regarded as “LOW RISK” — while Parker received an astonishing 10 — labeled “HIGH RISK.”
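ProPublica’s headline finding is, at bottom, a comparison of error rates between groups: among people who did not go on to reoffend, how often was each group labeled high risk? The sketch below shows the shape of that calculation on a handful of invented placeholder records; it is not ProPublica’s dataset or exact methodology.

```python
# A sketch of the kind of error-rate check ProPublica ran on COMPAS output:
# among people who did NOT reoffend, how often was each group labeled high
# risk? The records below are invented placeholders, not ProPublica's data.
from collections import defaultdict

# (group, labeled_high_risk, actually_reoffended_within_two_years)
records = [
    ("Black", True,  False), ("Black", True,  True),  ("Black", False, False),
    ("white", False, False), ("white", True,  True),  ("white", False, True),
    # ... ProPublica's analysis covered thousands of Broward County defendants
]

counts = defaultdict(lambda: [0, 0])   # group -> [false positives, non-reoffenders]
for group, high_risk, reoffended in records:
    if not reoffended:                 # consider only people who did not reoffend
        counts[group][0] += int(high_risk)
        counts[group][1] += 1

for group, (fp, n) in counts.items():
    print(f"{group}: labeled high risk despite not reoffending in {fp}/{n} cases")
```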
When Northpointe, the developer of COMPAS, was confronted about the details of its algorithm, it refused to disclose any information, making it hard to pinpoint exactly where such tools go wrong. The lack of transparency in systems with such powerful implications for a person’s life is difficult to reconcile with the Sixth Amendment[17] rights of the accused[iv]. Yet current legislation still does not require that these systems justify the decisions and accusations they make when labeling a person as high-risk, leaving clear room for improvement in the legal system. Once again, these computers have the power to extend the sentence of an accused person; even more worrisome is their bias toward labeling Black people as higher risk despite significantly lesser offenses.

The harmful implications of AI bias do not stop at the level of law enforcement; they affect people in their everyday lives and activities. While we have come a long way since the Jim Crow laws[18] were in effect, I argue that minority groups are currently living in an age of digital discrimination, as exemplified by the case studies that follow, in which people were excluded from opportunities solely on the basis of their skin. The first example: the video-conferencing platform Zoom was unable to detect a Black man’s face and erased it entirely when its virtual background feature was activated.[19] This undermined the man’s ability to present himself professionally, simply because he lacked the phenotypes the software was tuned to detect — phenotypes that unfairly advantage white people and disadvantage minority groups.
When situations like these disadvantage specific groups of people, it is disconcerting to think about how these problems will continue to amplify and propagate throughout society if left unaddressed. In another, similar situation reported by Reuters, an automated New Zealand passport system wrongly concluded that the subject — a man of Asian background — had his eyes closed, when this was not the case. As seen in Figure 5, the system, in bold red letters, mistakenly claimed that the subject’s eyes were closed, leaving the man unable to upload his photo and renew his passport. This example illustrates one of the key issues with facial recognition technologies today: their lack of phenotypical inclusivity — that is, the ability to make identifications based on the wide variety of phenotypes that people present, not just the features most easily identifiable among white people. As facial recognition systems are increasingly employed across society’s many sectors, it is imperative that their algorithms be trained on phenotypically inclusive datasets that account for the immense variability in phenotypes across all racial backgrounds.

Another major instance of racial discrimination permeating society today occurs in healthcare, one of the most fundamental needs of all human beings. According to a study that dissected racial bias in an algorithm used to manage the health of populations, the algorithm was “less likely to refer black people than white people who were equally sick to programmes that aim to improve care for patients with complex medical needs.”[21] By analyzing the algorithm’s assignment of risk scores to individual patients — scores used to determine which patients needed additional help and care — machine learning and health care management researcher Ziad Obermeyer and his colleagues determined that Black people were generally assigned lower risk scores than equally sick white people. That is to say, Black people needed to be sicker than white people in order to get the help they needed. According to Obermeyer, only 17.7% of the patients the algorithm flagged to receive extra care were Black; the researchers estimate that the figure would rise to 46.5% if the algorithm were not racially biased. An article published in Nature, reflecting on the results of the study, added that “hospitals and insurers use” similar algorithms in the United States every year to manage care for upwards of “200 million people.”[22] When such racial bias permeates even healthcare, it shows how severely minority groups are harmed by these algorithms in circumstances that may ultimately be a matter of life or death. In the sectors with the most influence over people’s physical circumstances, like law enforcement and healthcare, it is evident that further neglect of these AI systems, and of the data they are trained on, will only exacerbate inequities for minority groups. Only by raising awareness of the issue and passing legislation that requires these algorithms to operate more equitably — through phenotypically inclusive and widely diverse datasets — can we pave the way for change on a mass scale.
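The core of the Obermeyer audit can be sketched in a few lines: hold the algorithm’s risk score fixed and ask how sick each group’s patients actually are, and what share of the referred population each group makes up. Every record, threshold, and number below is a fabricated placeholder meant only to show the bookkeeping, not the study’s data or exact method.

```python
# A minimal sketch of a risk-score audit by race. All records here are
# fabricated placeholders purely to show the bookkeeping, not study data.
from statistics import mean

# (race, risk score from the algorithm, number of active chronic conditions)
patients = [
    ("Black", 0.92, 7), ("Black", 0.55, 6), ("Black", 0.40, 5),
    ("white", 0.95, 4), ("white", 0.91, 3), ("white", 0.60, 2),
    # ... a real audit would use tens of thousands of de-identified records
]

REFERRAL_CUTOFF = 0.90          # top scores get referred to the care program

referred = [p for p in patients if p[1] >= REFERRAL_CUTOFF]
share_black = sum(1 for p in referred if p[0] == "Black") / len(referred)
print(f"share of referred patients who are Black: {share_black:.0%}")

# At the same (high) risk scores, how sick is each group? In the real study,
# Black patients were sicker than white patients at every score level.
for race in ("Black", "white"):
    sick = mean(c for r, s, c in referred if r == race)
    print(f"avg chronic conditions among referred {race} patients: {sick:.1f}")
```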
One of the forefathers of computer science, John von Neumann, once asked: “Can we survive technology?” The question remains pressing, for certain groups of people in our society are far more disadvantaged by it than others: the wrongfully accused Robert Williams and Willie Lynch, impacted by two faulty facial recognition systems; Bernard Parker, given an unfairly high risk assessment by COMPAS compared with a white man with more serious offenses; a Black man who just wanted to use Zoom without disclosing his private surroundings; and an Asian man who was simply trying to renew his passport. In all of these cases, the systems that affected these people were not built to include people from their racial backgrounds. Moreover, these individuals represent only a select few of the many who are affected by racial bias in facial recognition technology and, more broadly, in AI in general.
In a TED Talk titled “How I’m Fighting Algorithmic Bias,” Joy Buolamwini argues simply that “who codes matters…” but also that “how and why we code matters.” As artificial intelligence tools and software become more ubiquitous throughout society, both our algorithms and our laws must be ready. Not only do more people from underrepresented groups need to be involved in writing the code, but we also need to put effort into creating datasets that are more inclusive and representative of the minority groups in our society — especially accounting for the variability in the phenotypes they exhibit, as in the case of the Asian man whose photo was rejected because the software incorrectly flagged his eyes as closed. Additionally, we need systems in place to constantly check software for algorithmic bias, so that minimal harm is inflicted on underrepresented minority groups who do not deserve such discrimination. By increasing representation and raising awareness of the potential dangers of bias in facial recognition technology and other AI algorithms, engineers can make algorithms more equitable for all, not just those who share the traits of the white, male engineers who write most of the code.
Works Cited
ACLU. “Wrongfully Arrested because of Flawed Face Recognition Technology.” YouTube, 24 June 2020, www.youtube.com/watch?v=Tfgi9A9PfLU&ab_channel=ACLU. Accessed 15 Feb. 2022.
Ahmed, Arooj. “A Survey Shows That Big Tech Companies Are Facing Trust Issues from Their Users, with Facebook at the Top of the List.” Digitalinformationworld.com, 31 Dec. 2021, www.digitalinformationworld.com/2021/12/a-survey-shows-that-big-tech-companies.html. Accessed 15 Feb. 2022.
AJL. “Spotlight — Coded Bias Documentary.” Ajl.org, 2016, www.ajl.org/spotlight-documentary-coded-bias. Accessed 15 Feb. 2022.
“Benefits of Artificial Intelligence | Top 6 Key Benefits of Artificial Intelligence.” EDUCBA, Oct. 2019, www.educba.com/benefits-of-artificial-intelligence/. Accessed 15 Feb. 2022.
“BLM Activist Ayo Tometi Challenges Racial Bias in AI.” TheTalko, 8 Dec. 2021, www.thetalko.com/blm-activist-ayo-tometi-challenges-racial-bias-in-ai/. Accessed 15 Feb. 2022.
Buolamwini, Joy Adowaa. “Gender Shades : Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers.” Mit.edu, 2017, dspace.mit.edu/handle/1721.1/114068, http://hdl.handle.net/1721.1/114068. Accessed 15 Feb. 2022.
Cavazos, Jacqueline G., et al. “Accuracy Comparison across Face Recognition Algorithms: Where Are We on Measuring Race Bias?” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 1, Jan. 2021, pp. 101–111, www.ncbi.nlm.nih.gov/pmc/articles/PMC7879975/, 10.1109/tbiom.2020.3027269. Accessed 15 Feb. 2022.
Conde, Maria, and Ian Twinn. “How Artificial Intelligence Is Making Transport Safer, Cleaner, More Reliable and Efficient in Emerging Markets.” Nov. 2019.
Corbett-Davies, Sam, et al. “A Computer Program Used for Bail and Sentencing Decisions Was Labeled Biased against Blacks. It’s Actually Not That Clear.” Washington Post, The Washington Post, 17 Oct. 2016, www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/. Accessed 15 Feb. 2022.
Data & Society Research Institute. “Databite №106: Virginia Eubanks.” YouTube, 22 Jan. 2018, www.youtube.com/watch?v=v01_--OiHGo&ab_channel=Data%26SocietyResearchInstitute. Accessed 15 Feb. 2022.
Delua, Julianna. “Supervised vs. Unsupervised Learning: What’s the Difference?” Ibm.com, March 12, 2021. https://www.ibm.com/cloud/blog/supervised-vs-unsupervised-learning.
Electronic Frontier Foundation. “EFF Podcast: From Your Face to Their Database : Electronic Frontier Foundation : Free Download, Borrow, and Streaming : Internet Archive.” Internet Archive, 2020, archive.org/details/eff-podcast-episode-5-facial-recognition. Accessed 15 Feb. 2022.
Fefegha, Alex. “Racial Bias and Gender Bias Examples in AI Systems.” POCIT. Telling the Stories and Thoughts of People of Color in Tech., 26 Nov. 2018, peopleofcolorintech.com/articles/racial-bias-and-gender-bias-examples-in-ai-systems/. Accessed 15 Feb. 2022.
Guardian staff reporter. “How White Engineers Built Racist Code — and Why It’s Dangerous for Black People.” The Guardian, The Guardian, 4 Dec. 2017, www.theguardian.com/technology/2017/dec/04/racist-facial-recognition-white-coders-black-people-police. Accessed 15 Feb. 2022.
Hill, Kashmir. “Wrongfully Accused by an Algorithm (Published 2020).” The New York Times, 2020. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html.
“Jim Crow Laws — Martin Luther King, Jr. National Historical Park (U.S. National Park Service).” Nps.gov, 2018. https://www.nps.gov/malu/learn/education/jim_crow_laws.htm.
Kranzberg, Melvin. “Technology and History: ‘Kranzberg’s Laws’” Jstor.org, 1986, www.jstor.org/stable/3105385. Accessed 15 Feb. 2022.
Ledford, Heidi. “Millions of Black People Affected by Racial Bias in Health-Care Algorithms.” Nature, vol. 574, no. 7780, 24 Oct. 2019, pp. 608–609, www.nature.com/articles/d41586-019-03228-6, 10.1038/d41586-019-03228-6. Accessed 15 Feb. 2022.
Liu, Xiaoxuan, et al. “A Comparison of Deep Learning Performance against Health-Care Professionals in Detecting Diseases from Medical Imaging: A Systematic Review and Meta-Analysis.” The Lancet Digital Health, vol. 1, no. 6, Oct. 2019, pp. e271–e297, www.thelancet.com/journals/landig/article/PIIS2589-7500(19)30123-2/fulltext, 10.1016/s2589-7500(19)30123-2. Accessed 15 Feb. 2022.
Madland, Colin (Colinmadland). “Turns out @zoom_us has a crappy face-detection algorithm that erases black faces…and determines that a nice pale globe in the background must be a better face than what should be obvious.” 18 Sep, 2020, 5:18 PM. Twitter. twitter.com/colinmadland/status/1307111818981146626/photo/1. Accessed 15 Feb. 2022.
Martin, Nicole. “Artificial Intelligence Is Being Used to Diagnose Disease and Design New Drugs.” Forbes, 1 Oct. 2019, www.forbes.com/sites/nicolemartin1/2019/09/30/artificial-intelligence-is-being-used-to-diagnose-disease-and-design-new-drugs/?sh=57c3b60344db. Accessed 15 Feb. 2022.
Maruti Techlabs. “5 Ways AI Is Transforming the Finance Industry — Maruti Techlabs.” Maruti Techlabs, 26 Sept. 2017, marutitech.com/ways-ai-transforming-finance/. Accessed 15 Feb. 2022.
MLK. “Dummies Guide to Cost Functions in Machine Learning [with Animation].” MLK — Machine Learning Knowledge, July 23, 2019. https://machinelearningknowledge.ai/cost-functions-in-machine-learning/.
Moore, Gordon E. “Cramming More Components onto Integrated Circuits,” 1998, https://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf.
Tiku, Nitasha. “Google Fired Its Star AI Researcher One Year Ago. Now She’s Launching Her Own Institute.” Washington Post, The Washington Post, 2 Dec. 2021, www.washingtonpost.com/technology/2021/12/02/timnit-gebru-dair/. Accessed 15 Feb. 2022.
Obermeyer, Ziad, et al. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science, vol. 366, no. 6464, 25 Oct. 2019, pp. 447–453, www.science.org/doi/abs/10.1126/science.aax2342, 10.1126/science.aax2342. Accessed 15 Feb. 2022.
Paul, Debleena, Gaurav Sanap, Snehal Shenoy, Dnyaneshwar Kalyane, Kiran Kalia, and Rakesh K. Tekade. “Artificial Intelligence in Drug Discovery and Development.” Drug Discovery Today 26, no. 1 (January 2021): 80–93. https://doi.org/10.1016/j.drudis.2020.10.010.
Tate, Ryan-Mosley. “The New Lawsuit That Shows Facial Recognition Is Officially a Civil Rights Issue.” MIT Technology Review, MIT Technology Review, 14 Apr. 2021, www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police/. Accessed 15 Feb. 2022.
USC Annenberg. “Algorithms of Oppression: Safiya Umoja Noble.” YouTube, 28 Feb. 2018, www.youtube.com/watch?v=6KLTpoTpkXo&ab_channel=USCAnnenberg. Accessed 15 Feb. 2022.
“U.S. Constitution — Sixth Amendment | Resources | Constitution Annotated | Congress.gov | Library of Congress.” Congress.gov, 2022. https://constitution.congress.gov/constitution/amendment-6/.
Yong, Ed. “A Popular Algorithm Is No Better at Predicting Crimes than Random People.” The Atlantic, 17 Jan. 2018, www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/. Accessed 15 Feb. 2022.
Footnotes
[1] Moore, Gordon E. “Cramming More Components onto Integrated Circuits,” 1998.
[2] Ahmed, Arooj. “A Survey Shows That Big Tech Companies Are Facing Trust Issues from Their Users, with Facebook at the Top of the List.” 2021.
[3] Kranzberg, Melvin. “Technology and History: ‘Kranzberg’s Laws.’” 1986.
[4] Maruti Techlabs. “5 Ways AI Is Transforming the Finance Industry — Maruti Techlabs.” 2017.
[5] Paul, Debleena, Gaurav Sanap, Snehal Shenoy, Dnyaneshwar Kalyane, Kiran Kalia, and Rakesh K. Tekade. “Artificial Intelligence in Drug Discovery and Development.” 2021.
[6] Conde, Maria, and Ian Twinn. “How Artificial Intelligence Is Making Transport Safer, Cleaner, More Reliable and Efficient in Emerging Markets.” 2019
[7] Liu, Xiaoxuan, et al. “A Comparison of Deep Learning Performance against Health-Care Professionals in Detecting Diseases from Medical Imaging: A Systematic Review and Meta-Analysis.” 2019.
[8] Tate, Ryan-Mosley. “The New Lawsuit That Shows Facial Recognition Is Officially a Civil Rights Issue.” 2021.
[9] ACLU. “Wrongfully Arrested because of Flawed Face Recognition Technology.” 2020.
[10] Buolamwini, Joy Adowaa. “Gender Shades : Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers.” 2017.
[11] Delua, Julianna. “Supervised vs. Unsupervised Learning: What’s the Difference?” 2021.
[12] In essence, the model is optimizing for the correct behavior and minimizing incorrect behavior, as if it knew it was being scored. As further explained by MLK, “cost functions in machine learning are functions that help to determine the offset of predictions made by a machine learning model with respect to actual results during the training phase” (MLK. “Dummies Guide to Cost Functions in Machine Learning [with Animation].”)
[13] Guardian staff reporter. “How White Engineers Built Racist Code — and Why It’s Dangerous for Black People.” 2017.
[14] Hill, Kashmir. “Wrongfully Accused by an Algorithm.” 2020.
[15] Fefegha, Alex. “Racial Bias and Gender Bias Examples in AI Systems.” 2018.
[16] Yong, Ed. “A Popular Algorithm Is No Better at Predicting Crimes Than Random People.” The Atlantic. 2018
[17] “U.S. Constitution — Sixth Amendment | Resources | Constitution Annotated | Congress.gov | Library of Congress.” Congress.gov, 2022. https://constitution.congress.gov/constitution/amendment-6/.
[18] “Jim Crow Laws — Martin Luther King, Jr. National Historical Park (U.S. National Park Service).” 2018.
[19] Madland, Colin (Colinmadland). “Turns out @zoom_us has a crappy face-detection algorithm that erases black faces…and determines that a nice pale globe in the background must be a better face than what should be obvious.” 18 Sep, 2020, 5:18 PM.
[20] Buolamwini, Joy Adowaa. “Gender Shades : Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers.” 2017.
[21] Obermeyer, Ziad, et al. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” 2019.
[22] Ledford, Heidi. “Millions of Black People Affected by Racial Bias in Health-Care Algorithms.” 2019.
Endnotes
[i] Moore’s Law observes that the number of transistors on an integrated circuit, and with it computing power, doubles approximately every two years (Moore).
[ii] The problem is the capability of AI systems to cause harm — as distinguished MIT professor and computer scientist Max Tegmark explains in his book Life 3.0 — not the question of whether AI could turn against us, as most science-fiction literature speculates (Tegmark).
[iii] This can be best understood as a linear function (y = mx + b) that constantly corrects itself (adjusting its slope) after each trial, becoming more like a function that accurately predicts the outcome of an event. The function underlying a facial recognition algorithm is, of course, far more complex, encompassing an extraordinary number of weights and biases that enable it to make more accurate predictions from the input data it is given, but it is built on a similar framework (AI — Wiki).
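For the curious, the slope-correcting analogy looks like this in code; a toy line y = mx + b is nudged after every trial until its guesses match a hidden target (an illustrative sketch only, with made-up data).

```python
# A toy version of the endnote's analogy: a line y = m*x + b whose slope and
# intercept are nudged after every trial until its guesses improve. Real face
# recognition models apply the same correction across millions of weights.
data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]   # hidden target: m=3, b=1

m, b, lr = 0.0, 0.0, 0.01
for step in range(2000):
    for x, y_true in data:
        error = (m * x + b) - y_true       # how wrong the current guess is
        m -= lr * error * x                # correct the slope...
        b -= lr * error                    # ...and the intercept

print(f"learned m={m:.2f}, b={b:.2f}")     # approaches m=3.00, b=1.00
```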
[iv] “In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.” (“U.S. Constitution — Sixth Amendment | Resources | Constitution Annotated | Congress.gov | Library of Congress.”)