AI Has a Bias Problem. Could Art Be the Solution?

Researchers and activists are finding creative ways to advocate for technological justice.

Scientist, artist, and activist Joy Buolamwini. Photo courtesy Algorithmic Justice League

When Joy Buolamwini was a student, she worked with a social robot that couldn’t detect her face. The robot’s artificial intelligence had been trained only on white skin tones and facial structures, and Buolamwini is Black. Repeated experiences of not being “seen” by AI inspired Buolamwini, a self-described “warrior artist, computer scientist,” to take creative action against the problem. And she’s not alone. As artificial intelligence creeps further into our lives, artists and scientists are working to creatively expose the ways it favours white people and harms people of colour.

A 2019 photography exhibit, Training Humans, laid bare a source of the problem: the images used since the 1960s to train facial-recognition AIs to see and categorise the world. The exhibit featured thousands of photographs taken from the “training sets” used to teach AI what faces look like. “We wanted to engage with the materiality of AI, and to take those everyday images seriously as a part of a rapidly evolving machinic visual culture,” stated data scientist Kate Crawford, who put the exhibition together with artist Trevor Paglen. “That required us to open up the black boxes and look at how these ‘engines of seeing’ currently operate.”

Ruha Benjamin, a race, justice, and technology researcher, says AI can’t be impartial because machines — even so-called intelligent ones — do not exist in a vacuum. “Social norms, values, and structures all exist prior to any given tech development,” she explained in a 2019 episode of the podcast Data & Society.

One of those social inputs is the systemic racism and gender bias that come from a history of colonialism, imperialism, white supremacy, and patriarchy. These biases, which inevitably favour the rich and powerful, show up in the AI gadgets produced and sold by Amazon, Google, IBM, Microsoft, and others. One of the most glaring examples is the way facial-recognition software discriminates against people of colour, and anyone who isn’t a cisgender man.

Buolamwini’s experience with the robot inspired her to found the Algorithmic Justice League (AJL), which uses research and art to promote equity and accountability in AI. But what could the subjectivity of art possibly have to do with the cold, hard logic of machines and algorithms? As it turns out, everything, says Buolamwini. “Art can explore the emotional, societal, and historical connections of algorithmic bias in ways academic papers and statistics cannot,” she wrote in a Time magazine opinion piece.

Her experiences of being invisible to facial recognition led to Coded Bias, a film that explores the dangers of unchecked AI and the threats it poses to civil rights and democracy. The film tells individual stories of everyday harms caused by technology, especially to women and people of colour, and of Buolamwini’s transformation from scientist to activist.

Artistic expression and artificial intelligence intersect in another way: in how we imagine AI and its role in our future, as seen in pop culture and the media. When researchers Stephen Cave and Kanta Dihal studied this, they found that representations of AI in the public imagination are overwhelmingly white. This, they said, points not just to the whiteness of the AI industry in general, but also to the attributes ascribed to whiteness: intelligence and power. They go on to argue that “AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary.” The fallout is representational harm, including amplified racial prejudice and distorted perceptions of AI’s risks and benefits.

Ruha Benjamin’s Ida B. Wells Just Data Lab explores these intersections between “stories and statistics, power and technology, data and justice.” The lab uses social and scientific research along with art to visualise the complexities that exist when human beings interact with AI. Its projects span mental health, urban housing, surveillance, and the prison system. For example, in Digital IDs & Smart Cities, an interactive graphic of a woman walking down a street shows the many ways in which AI tracks our movements and behaviours.

The connections between art and technology, however, are not always obvious, a gap that the Poetry of Science public art installation in Cambridge, MA, set out to fill. A collaboration between the Cambridge Arts Council and the People’s heArt Project, it paired scientists of colour at the Massachusetts Institute of Technology with local poets of colour. The aim: to create “positive associations between people of color, the arts, the sciences, how nature is perceived, and what it means to generate knowledge.”

One of the pairs consisted of poet Rachel Wahlert and MIT PhD student Huili Chen, an AI scientist who studies how humans and robots interact. Chen said the project was an opportunity to make AI less of an enigma to non-scientific folks. “In daily life, when we talk with each other, we often talk about science…as purely objective, neutral facts or knowledge,” Chen said in an interview. “But we barely pay attention to the process of scientific production, how [it] is produced and who produced [it].”

Chen and Wahlert’s partnership was driven by a shared belief: that demystifying science (in this case, AI) must lie at the heart of an inclusive future, and that art is one medium for achieving it.

Wahlert’s poem based on her interactions with Chen was called “To be Understood.” It contains the following stanza:

I have 10 light sensors for eyes
But can’t perceive tricks or lies
I’m designed in pink, orange and blue
All my undertones are decided by you

The exhibition, comprising 13 poems and photographic portraits of the poets and scientists, is on display at the Mass General Cancer Center until the end of November 2021, then moves to the MIT Rotch Library for exhibition through January 2022.

“It’s like creating a narrative that is alternative to how sci-fi portrays [artificial intelligence],” said Chen. “Only by doing that demystification and democratisation, [can we] empower the public to know more…and [enable] young generations to shape the future of this field.”
