Another key source of racial discrimination in face recognition lies in how it is used. In 18th-century New York, “lantern laws” required enslaved people to carry lanterns after dark so that they remained publicly visible.
Advocates fear that even if face recognition algorithms are made equitable, the technology could be deployed in the same spirit, disproportionately harming the Black community in line with existing racist patterns of law enforcement. Face recognition can also be turned on other marginalized populations, such as undocumented immigrants targeted by ICE or Muslim citizens surveilled by the NYPD.
Racial bias in the facial recognition algorithms themselves is a second significant concern. These algorithms are trained on large datasets that can inadvertently encode biases present in society. Several factors contribute to the disparities in recognizing Black faces:
Data Bias: The training data used to build these algorithms often lacks diversity. If a dataset consists primarily of lighter-skinned individuals, the resulting model may perform poorly on darker-skinned faces; one way to surface this is to compare accuracy per group, as in the first sketch after this list.
Feature Extraction: Facial recognition algorithms extract geometric features from images, such as the distances between the eyes or the shape of the nose. These features may be less accurate for some racial groups because of variation in facial structure; the second sketch after this list shows a minimal geometric feature vector.
Lighting and Contrast: Algorithms struggle under varying lighting conditions or low contrast. Darker skin tones reflect less light, so underexposed, low-contrast images are more common and lead to misclassifications; the third sketch after this list quantifies contrast and applies a standard correction.
Historical Biases: The algorithms inherit historical biases present in their data. For instance, if historical criminal databases are used for training, they may disproportionately include Black individuals, reflecting decades of biased policing.
Lack of Diverse Developers: The lack of diversity among developers and researchers can lead to blind spots in algorithm design and evaluation.
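To make the data-bias point concrete, here is a minimal sketch of a per-group accuracy audit. The group labels, identities, and outcomes below are entirely hypothetical; the point is that a single aggregate accuracy number can hide a large gap between groups.

```python
from collections import Counter

# Hypothetical evaluation records: (skin_tone_group, predicted_id, true_id).
# These values are illustrative, not drawn from any real benchmark.
records = [
    ("lighter", "id_1", "id_1"), ("lighter", "id_2", "id_2"),
    ("lighter", "id_3", "id_3"), ("lighter", "id_4", "id_9"),
    ("darker",  "id_5", "id_5"), ("darker",  "id_6", "id_7"),
    ("darker",  "id_8", "id_2"),
]

counts = Counter(group for group, _, _ in records)
correct = Counter(group for group, pred, true in records if pred == true)

for group in counts:
    share = counts[group] / len(records)
    accuracy = correct[group] / counts[group]
    print(f"{group}: {share:.0%} of data, accuracy {accuracy:.0%}")
```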
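The geometric features mentioned above can be as simple as normalized distances between detected landmarks. The sketch below assumes a detector has already returned five common landmark points; the coordinates are made up for illustration.

```python
import numpy as np

# Hypothetical 2-D landmark coordinates (in pixels) from a face detector.
# Eyes, nose tip, and mouth corners are a common minimal landmark set.
landmarks = {
    "left_eye":    np.array([30.0, 40.0]),
    "right_eye":   np.array([70.0, 40.0]),
    "nose_tip":    np.array([50.0, 60.0]),
    "mouth_left":  np.array([35.0, 80.0]),
    "mouth_right": np.array([65.0, 80.0]),
}

# A simple geometric feature vector: all pairwise Euclidean distances,
# divided by the inter-eye distance so the features are scale-invariant.
names = list(landmarks)
eye_dist = np.linalg.norm(landmarks["left_eye"] - landmarks["right_eye"])
features = [
    np.linalg.norm(landmarks[a] - landmarks[b]) / eye_dist
    for i, a in enumerate(names)
    for b in names[i + 1:]
]
print(np.round(features, 3))
```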
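Low contrast is easy to quantify and partially correct in preprocessing. This sketch computes RMS contrast on a synthetic underexposed image and applies plain histogram equalization, one standard correction; it illustrates the mechanism and is not a fix for the underlying bias.

```python
import numpy as np

def rms_contrast(gray: np.ndarray) -> float:
    """Root-mean-square contrast of a grayscale image with values in [0, 255]."""
    return float((gray.astype(np.float64) / 255.0).std())

def equalize(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization: spread intensities of a low-contrast
    (e.g., underexposed) image across the full dynamic range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[gray] * 255).astype(np.uint8)

# Synthetic underexposed image: intensities squeezed into a narrow dark band.
rng = np.random.default_rng(0)
dark = rng.integers(10, 60, size=(64, 64), dtype=np.uint8)
print(f"contrast before: {rms_contrast(dark):.3f}, "
      f"after: {rms_contrast(equalize(dark)):.3f}")
```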
Efforts are underway to address these issues: researchers are building more diverse training datasets, improving feature extraction methods, and designing fairness-aware algorithms that report and constrain error rates per demographic group, as in the sketch below.
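One building block of a fairness-aware evaluation is simply disaggregating error rates by group, the kind of per-demographic breakdown used in audits of face verification systems. The verification results below are hypothetical.

```python
# Hypothetical verification results: (group, is_genuine_pair, predicted_match).
results = [
    ("lighter", True, True),   ("lighter", False, False),
    ("lighter", False, False), ("lighter", True, True),
    ("darker",  True, False),  ("darker",  False, True),
    ("darker",  True, True),   ("darker",  False, False),
]

# Report false match rate (impostors accepted) and false non-match rate
# (genuine pairs rejected) separately for each group.
for group in ("lighter", "darker"):
    rows = [(genuine, match) for grp, genuine, match in results if grp == group]
    fmr = sum(m for g, m in rows if not g) / sum(not g for g, m in rows)
    fnmr = sum(not m for g, m in rows if g) / sum(g for g, m in rows)
    print(f"{group}: false match rate {fmr:.0%}, false non-match rate {fnmr:.0%}")
```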