Dhivahari Vivekanandasarma (MPP'25)

It was a true honor to attend Sanford’s distinguished lecture with Dr. Joy Buolamwini. After reading her book, Unmasking AI, which I and many of my friends read as students, I was truly excited to see her come to Sanford.

I admire Dr. Buolamwini’s work forcing us all to confront “the coded gaze”: the ways algorithmically biased models make incorrect assumptions about a person’s identity and drive decisions that carry real social, economic, and even life-threatening consequences.

Dr. Buolamwini’s book begins: “I am the daughter of art and science”. It is one of my personal and professional goals to bridge my experience in technology with public policy to study the ways that technology perpetuates injustice. And so it was truly a remarkable experience to hear directly from someone who so intentionally and expertly weaves technology with art and poetry to combat the devastation that can happen to someone’s life because of encoded discrimination.

Yes, it was one thing to read about her experiences and research process in her book. However, Dr. Buolamwini’s presentation of her poem “AI, Ain’t I A Woman?” against a backdrop of facial recognition models that incorrectly classified her gender, wrongly estimated her age, or failed to detect her face is one that will stay with me for a long time. 

In the Q&A that followed her lecture, Dr. Buolamwini shared how she wrestled with the tension between different approaches to addressing algorithmic bias. Mitigating bias is not just a technical challenge; it raises larger questions: How should machine learning be used, and in some situations, should it be used at all? What are the costs of inclusion versus the costs of exclusion? Do we stop using a piece of software altogether once we identify that bias is present, or work toward minimizing it? And what are the costs of excluding minoritized (and often darker-skinned) faces from these systems? In her dialogue with Professor Robyn Caplan, Dr. Buolamwini left me with a lot to think about, and a lot to take with me as I do my best to tackle these issues in my own advocacy and research.

As both a former technologist and a current policy researcher studying the social implications of machine learning models, I also found it fascinating to hear about her experience building her own training dataset from scratch — an often tricky task when you are intent on excising bias.

As Dr. Buolamwini herself told us in her lecture (and in her book), “It is one thing to critique other datasets and point out the shortcomings of prior research; it’s another to try to create one for yourself.”

At the time she was conducting this research, data collection methods that did not result in skewed datasets were scarce at best. As she went on to demonstrate during her lecture, Dr. Buolamwini’s novel approach to creating a balanced dataset — the Pilot Parliaments Benchmark — was one that only a combined technologist and artist could devise, and it displayed a kind of thinking that many in the tech sector do not bring to building machine learning models.

She not only gave me hope that the combination of technology and art could lead to such powerful responses to injustice, but confidence that I myself made the right choice to apply my knowledge in technology to public policy.


In presenting the thought process that led her to the Pilot Parliaments Benchmark, Dr. Buolamwini showed me how.

While reading her book was undoubtedly monumental to my growth as a budding researcher studying these issues, I learned so much from her distinguished lecture. It is one I will remember for a long time, and it has made an indelible mark on my own path to fighting algorithmic injustice.


Dhivahari Vivek (MPP'25) is a Colorado local with Sri Lankan Tamil heritage. She graduated from the University of Colorado Denver with degrees in computer science and political science. Professionally, she comes from a data-heavy background – she served as a data developer consultant for multiple U.S. state/quasi-state agencies, and as a research fellow for a national non-profit. Her experiences have greatly contributed to her interest in cyber civil rights/liberties issues both at home and abroad. She is invested in helping close a severe gap in legislation addressing technological harms. With her MPP, Dhivahari hopes to work alongside tireless advocates to defend cyber civil rights and digital privacy at home, as well as give back to Sri Lanka and address the harmful technological impacts on its people. In her spare time, Dhivahari enjoys never winning at the video game Hades, taking probably too long to read very good Sci-Fi books, and struggling to lift forty pounds over her head.


More about the Rubenstein Lecture with Dr. Joy Buolamwini

As the founder of the Algorithmic Justice League, Dr. Joy Buolamwini is a pioneering computer scientist, poet, and advocate who has dedicated her work to exposing the biases embedded in artificial intelligence (AI) and advocating for more equitable technology.
