Technologists and policymakers too often work within isolated communities, despite the urgent need for collaboration and cooperation. Policymakers sidestep critical technology issues because they lack technical expertise, while technologists operate within dangerously gray areas.
As a tech enthusiast and public policy major, I saw an opportunity to become an expert in two seemingly different fields. I joined various campus organizations, including Ethical Tech, the Cyber Club/Team, and the Duke Cyber Policy Program. My academic research has covered topics such as medical artificial intelligence, consent, a sepsis predictive model, real-world data, and AI-enabled software.
Soon, I no longer wanted to learn about these emerging technologies alone; I felt a strong need to share my passion. So I began organizing events at the intersection of technology, policy, and education.
On January 31, Duke University's Sanford School of Public Policy and Pratt School of Engineering co-sponsored a conference on Designing Ethics into AI. The conference was held to celebrate Data Privacy Day, an event with Duke roots.
As part of the event team, I sincerely hoped that the conference would reach a wide range of students, faculty, and professionals. I believe that diversity of thought and intentional engagement are key to advancing both our technologies and our policies.
Pratt Dean Ravi Bellamkonda and Sanford Dean Judith Kelley made opening remarks. Both deans emphasized the need for collaboration between engineers and policymakers to maximize the benefits of emerging technologies and mitigate their malicious or unintended uses. They also spoke of Duke's tremendous investments in developing campus opportunities on these topics and in equipping students with the best knowledge and tools.
For me, Duke University has provided the ideal spaces to consider the ethics and science behind creating, using, and regulating emerging technologies. Sanford and Pratt have given me several opportunities to train as a future leader in tech policy and to stay informed on the latest innovations.
The conference keynote speaker was Leonardo Cervera Navas, Director of the Office of the European Data Protection Supervisor for the European Union (EU). Cervera Navas co-founded Data Privacy Day in 2008 with Sanford Professor David Hoffman and Duke Law School Professor Jolynn Dellinger. He spoke about the need for technology to advance society and about the EU's history of regulating and innovating while also protecting human rights. His emphasis on privacy as an ethical issue for AI particularly intrigued me, as much of my earlier work explored the best methods for integrating ethics generally, and consent specifically, into AI models.
The first panel, moderated by Duke Computer Science Professor Ashwin Machanavajjhala, addressed engineering AI solutions with an ethical lens and included panelists from Microsoft, Intel, and IBM. The industry professionals discussed the wide array of privacy technologies for AI models, such as homomorphic encryption and federated learning.
Homomorphic encryption lets organizations compute on data while it remains encrypted, so information can be processed and stored without creating privacy and security vulnerabilities. Teams at IBM and Microsoft are working to increase the speed of homomorphic encryption and make it a more practical tool.
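To make the idea concrete, here is a minimal and deliberately insecure sketch of an additively homomorphic scheme in the style of Paillier. The panelists did not present code; the primes, messages, and function names below are toy values chosen purely for illustration. The key property is that multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so a server can aggregate numbers it cannot read.

```python
import math
import random

# Toy primes: hypothetical and far too small for real security.
p, q = 293, 433
n, n_sq = p * q, (p * q) ** 2
g = n + 1                      # a standard, simple generator choice
lam = (p - 1) * (q - 1)        # phi(n) serves as the private exponent here
mu = pow(lam, -1, n)           # modular inverse of lam mod n (Python 3.8+)

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:         # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Recover m using the private key (lam, mu)."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n     # Paillier's L function, then unblind

a, b = 21, 34
c_sum = (encrypt(a) * encrypt(b)) % n_sq  # multiply ciphertexts...
assert decrypt(c_sum) == a + b            # ...to add the plaintexts
```

Even this toy version hints at the performance problem the panelists described: production schemes use keys thousands of bits long and far heavier arithmetic, which is why speed remains the bottleneck.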
Federated learning is a machine learning technique that trains a shared model across multiple devices or servers, each holding its own local data samples, without exchanging the raw data or storing user data in the cloud. However, federated learning cannot secure data that has already been shared in the cloud.
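The idea is easier to see in code. Below is a minimal federated-averaging loop over a toy linear-regression task; the clients, data, and learning rates are hypothetical stand-ins, not anything demonstrated at the conference. Each simulated client trains on its own data, and only model weights travel to the server.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])         # hypothetical ground-truth weights

def make_client_data(n_samples=50):
    """One client's private dataset; in practice this never leaves the device."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    return X, y

clients = [make_client_data() for _ in range(5)]
w = np.zeros(2)                        # global model held by the server

for _ in range(20):                    # communication rounds
    local_models = []
    for X, y in clients:               # training happens where the data lives
        w_local = w.copy()
        for _ in range(10):            # a few local gradient steps
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_models.append(w_local)   # only the weights are sent back
    w = np.mean(local_models, axis=0)  # server averages the client updates

print(np.round(w, 2))                  # converges toward [2.0, -1.0]
```

Note that the raw records created in make_client_data stay inside each client's loop; the server only ever sees the averaged weights, which is the privacy property the panelists highlighted.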
The second panel, moderated by Dellinger, focused on the legal and policy implications of designing ethics into AI; its participants included a privacy advocate, a law firm partner, and a technology policy consultant. They debated vigorously over the degree to which the privacy harms and ethical implications of AI models can be properly mitigated without substantial regulation. Some advocated corporate responsibility, while others supported more government regulation. The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) were discussed as possible models, but there was no clear agreement on the best approach.
“On AI regulation, I don’t think we’re going to have a global consensus,” said panelist Pam Dixon.
The panelists closed with remarks on the growing significance of cross-functional teams and the importance of reflecting on society’s historical reactions to technology.
The audience raised important questions on the ethics of AI and adjacent topics, such as how to build a profitable business model that still protects user privacy, and on the GDPR and its global implications.
I was encouraged by the scale of participation and the depth of the discussions. While neither panel settled on a single technical or policy solution for designing ethics into AI, there was general agreement that more collaborative work is necessary to advance both innovation and human rights. I personally felt charged with responsibility for the ethical issues still to be worked through and the technical solutions still to be engineered.
“Consumers should not have to pay for privacy; it should be guaranteed,” said Hoffman in his closing remarks.
The conference also thoughtfully engaged Duke's science and policy leadership by raising important ethical issues around these emerging technologies. As an audience member, I came away with new information and new ways of thinking about tech and policy. As an organizer, I was pleased to see the relationships fostered and the conversations that continued even after the conference ended.
Most importantly, as a Duke student, I was encouraged to continue paving my own path in tech policy and inspired to help bridge the sciences and the humanities.
Joanne Kim is a public policy major and co-vice president of Ethical Tech. She is also vice president of the Sophomore Class Council.