But are they? What if, beneath the surface, some AIs are really doing things that are centuries old: reinforcing prejudice, setting human against human, making assumptions about vulnerable groups, and denying others opportunity – while claiming to do the opposite?
A new book, ‘Hidden in White Sight: How AI Empowers and Deepens Systemic Racism’, says they are doing exactly that – and worse. Published this April, it is poised to become a work of epochal importance: a well-timed warning from inside the system itself.
Across more than 200 pages of personal stories, compelling evidence, and case studies, author Calvin D. Lawrence shows that, all too often, AI can be an engine for exclusion, automating and perpetuating deep societal problems.
Worse, he suggests, it gives racism a veneer of computer-generated trust and veracity – one that sits atop the decades of historic data AI needs to function: data pulled from systems that, often without our realising, may have been trained on years of biased human behaviour, by teams that lack any diversity at all.