
Dr. Joy Buolamwini drew a passionate crowd of students and community members.
Every superhero has a great origin story, and for Dr. Joy Buolamwini, it started early.
Even as a grade-schooler, she knew she wanted to pursue robotics and computer science. When she arrived at Georgia Tech, she was "enamored by computer science and building robots." She added, "It started me on my path towards algorithmic justice, and I was really into this notion of showing compassion through computation."
A superhero also needs a great team. As founder of the Algorithmic Justice League, Buolamwini, a pioneering computer scientist, poet, and advocate, has dedicated her work to exposing the biases embedded in artificial intelligence (AI) and pushing for more equitable technology. She delivered Sanford's Spring 2025 Rubenstein Lecture, combining personal stories, research findings, and poetry to illuminate the urgent need for AI accountability.
Buolamwini is also the author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines, a widely acclaimed book that delves into the ethical implications of artificial intelligence and the fight for algorithmic justice. Her research and advocacy were further highlighted in the Netflix documentary Coded Bias, which explores the societal impacts of biased AI systems and has brought global attention to the need for more transparent and accountable technology. She has shared her expertise in congressional hearings, and provides guidance to world leaders on the ethics of AI.
The Journey to Algorithmic Justice
Buolamwini traced her path from Georgia Tech to her groundbreaking research at MIT. She described how, as a graduate student, she encountered what she calls “the coded gaze” when a facial recognition system failed to detect her face until she put on a white mask. "I had to literally wear a white mask to have my dark skin detected," she explained, a moment that catalyzed her research into algorithmic bias.
Exposing AI Bias in Facial Recognition

Sharing findings from her MIT research, Buolamwini revealed alarming disparities in how AI recognizes faces. "Some companies didn’t detect my face at all, and the ones that did misgendered me," she noted. Running tests on facial recognition software from major companies, she discovered significant racial and gender biases. "Michelle Obama, Serena Williams, Oprah Winfrey—these iconic women were either misclassified or not detected at all," she said, illustrating how AI systems fail those who are underrepresented in the datasets used to train them.
Her concerns go beyond academic findings. Buolamwini pointed to real-world harms, including wrongful arrests due to flawed AI. She recounted the cases of Porcha Woodruff, "an eight-months-pregnant woman falsely arrested due to facial recognition errors," and Robert Williams, "who was misidentified and arrested in front of his young daughters." She asked, "How many more people have to be harmed before we take these issues seriously?"
Statistical Evidence of Bias
Buolamwini presented findings from her research, demonstrating the stark disparities in AI accuracy across race and gender. "Overall accuracy seemed high," she said, referencing results from the Pilot Parliaments Benchmark, "but when we broke it down, we saw a different story." She found that facial recognition models performed significantly better on lighter-skinned male faces than darker-skinned female faces. "For Microsoft, accuracy for darker-skinned females was as low as 80%, while lighter-skinned males saw nearly perfect results," she revealed.
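The aggregation effect she describes, where a high overall accuracy hides poor performance on smaller subgroups, can be illustrated with a short sketch. The numbers below are invented for demonstration only; they are not her benchmark results:

```python
# Hypothetical illustration: overall accuracy can mask subgroup disparities.
# All numbers are invented for demonstration; they are not benchmark data.

def accuracy(results):
    """Fraction of correct predictions in a list of (correct, subgroup) pairs."""
    return sum(correct for correct, _ in results) / len(results)

# Simulated classifier results: (was_prediction_correct, subgroup).
# The better-served group is also the larger one, which inflates the average.
results = (
    [(True, "lighter male")] * 99 + [(False, "lighter male")] * 1      # 99% correct
    + [(True, "darker female")] * 16 + [(False, "darker female")] * 4  # 80% correct
)

print(f"Overall accuracy: {accuracy(results):.1%}")  # → 95.8%

# Disaggregating by subgroup tells a different story.
for group in ("lighter male", "darker female"):
    subset = [(c, g) for c, g in results if g == group]
    print(f"{group}: {accuracy(subset):.1%}")  # 99.0% vs. 80.0%
```

Because the overall figure is dominated by the larger, better-served group, only the disaggregated breakdown reveals the 19-point gap.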
The problem was not just with one company. "IBM and Face++ also showed these disparities," she explained. When she informed the companies of her findings, the responses varied. "Some took responsibility and worked to improve," she noted, citing IBM’s efforts to replicate her study and invite her to speak with their engineers. Others dismissed the results outright. "One company simply said, ‘We already know about bias, but thanks anyway.’"
The Risks of Flawed AI in the Real World
Buolamwini emphasized the dangers of deploying AI systems with known biases, particularly in law enforcement. "Imagine a recalled car—when one model is defective, they take them all off the road," she said. "But with AI, companies announce they've fixed bias, yet their flawed models remain in use." She illustrated this with her own experience testing a supposedly improved Microsoft model. "I uploaded a new photo, and it still labeled me as male," she recalled. Even a young photo of Michelle Obama was misclassified, showing that bias remained deeply embedded.
Industry Blowback and the Fight for Accountability

Buolamwini’s work has not always been welcomed. "When we included Amazon in our follow-up study, we were shocked to find that their AI was performing at the same level as the worst systems from a year earlier," she said. Amazon pushed back against the findings, demonstrating how corporate interests often resist scrutiny. "It was a critical moment for me," she admitted. "Would I continue this work if it meant risking career opportunities?" Despite opposition, she found support among leading AI researchers who defended the necessity of accountability in AI research.
Her persistence paid off. "By 2020, every U.S.-based company we audited had stopped selling facial recognition technology to law enforcement," she said. "And when my book Unmasking AI became an Amazon Editor’s Pick, it was a full-circle moment I never expected."
The Ethics of AI and Inclusion
Buolamwini explored the ethical complexities surrounding AI inclusion and exclusion. "Inclusion into what?" she asked. "We must interrogate what we are being included into and whether we have a choice." She discussed the case of Worldcoin, a controversial initiative collecting biometric data in exchange for cryptocurrency, particularly in Africa. "Are people truly consenting, or is this coercive consent disguised as opportunity?" she questioned.
She described the problematic nature of unchecked data collection, referencing companies like Clearview AI, which scrapes billions of images without consent. "It's not just about playing with technology—we are not playing in a vacuum," she warned.
AI in Law Enforcement: A Tool for Harm?
Buolamwini addressed a critical question from students: Should facial recognition be used by law enforcement? Her response was unequivocal. "Accuracy is not the end goal," she stated. "Even an accurate system can be abused." She warned against the potential for AI-driven mass surveillance and racial profiling. "Better accuracy doesn’t mean better outcomes—it just makes it easier to target vulnerable communities."
She referenced troubling reports of tech companies collecting diverse datasets under questionable circumstances, such as Google subcontractors targeting homeless individuals for facial data collection. "Who benefits from these systems? And at whose expense?" she asked.
Robyn Caplan (left) moderated a portion of the event. She conducts research at the intersection of platform governance and media policy. Her most recent work examines the history of the verified badge (the blue checkmark) across social media platforms.
A Conversation with Robyn Caplan
Following her lecture, Buolamwini was joined by Robyn Caplan, Assistant Professor in the Sanford School of Public Policy and Senior Lecturing Fellow in the Duke Initiative for Science & Society, for a discussion and audience Q&A.
Hope for a More Just AI Future
Despite the challenges, Buolamwini expressed hope. "We are at a moment where AI regulation is being seriously considered," she said, pointing to global discussions on AI governance. She referenced the Blueprint for an AI Bill of Rights, an initiative led by Dr. Alondra Nelson, as a step toward embedding protections in AI policy.
Reflecting on her meeting with President Biden in June 2023, she acknowledged that political shifts can undo progress but emphasized the importance of legislation. "Executive orders can be reversed—that’s why we need laws to institutionalize protections," she explained.
The Fight Continues
Buolamwini closed with a call to action. "We must continuously fight for the vision of the world we want to see," she urged. "This is not just about technology—it’s about power, policy, and people." As AI becomes more embedded in everyday life, she emphasized that justice must be at the core of its development, ensuring that technology serves humanity rather than exploits it.
Student Experience