Darling leads the Robotics, Ethics, and Society research team at the Robotics and AI (RAI) Institute.

 

Kate Darling brings good news. We don't have to fight the robots. In fact, they could be our friends.

“So much of [robot pop culture] is the same story. It’s humanized robots that are gonna come kill us... every time.”

Renowned robotics ethicist Kate Darling pushed back on the familiar pop-culture storyline that has long shaped public fears about artificial intelligence. Too often, she said, conversations about robotics default to visions of a human-versus-machine apocalypse. That framing, Darling argued, misses what actually matters about the technology now entering everyday life.

With laughter, videos, and a steady stream of examples from daily life, Darling brought Reynolds Theater to life during Sanford’s Spring Wilson Lecture. The Wilson Lecture Series brings leading thinkers to Duke to explore ideas at the intersection of public policy, ethics, and society. Darling’s talk examined how humans emotionally relate to robots and artificial intelligence, and why those reactions matter for policymaking. The subject matter was complex, yet her approach was anything but intimidating.

“I don’t usually speak at universities these days,” Darling told the audience with a grin. “So it’s just really nice to be with people who care about the world and want to make it a better place.”

Darling is a leading expert on robotics, ethics, and society and the author of The New Breed: What Our History with Animals Reveals About Our Future with Robots. She leads the Robotics, Ethics, and Society research team at the Robotics and AI (RAI) Institute and has spent more than a decade studying how humans interact with lifelike machines. At Duke, she paired that expertise with an inviting, energetic delivery that kept the audience engaged and intrigued.

Although her research sits at the intersection of law, economics, and advanced technology, Darling’s presentation felt conversational and accessible. She joked with the audience, shared personal anecdotes, and encouraged curiosity rather than fear.

“The interesting thing is not necessarily the technical information,” she said. “The really interesting thing is what happens when you take these automated technologies and put them together with people.”

 

As humans, we have this inherent tendency to anthropomorphize. We do this with animals all the time. We do it with teddy bears. We do it with smiley faces on paper.

Kate Darling

 

Why robots feel different

Darling’s central argument focused on a basic human instinct: people anthropomorphize. We project emotions, intentions, and agency onto non-human things, especially when those things move or respond to us.

“As humans, we have this inherent tendency to anthropomorphize,” she said. “We do this with animals all the time. We do it with teddy bears. We do it with smiley faces on paper.”

Movement, she explained, plays a powerful role. Humans evolved to scan their environment for other agents. When objects move autonomously, our brains treat them differently from static tools.

That instinct helps explain why people name their robot vacuums, feel bad when delivery robots get stuck, or react so strongly to machines like Boston Dynamics’ robotic dog, Spot. Darling showed videos of Spot navigating public spaces, prompting laughter and audible reactions from the audience.

Engineers designed Spot for stability and mobility, not emotional connection. Still, public responses often swing between fear and affection.

“It’s almost impossible to watch something like this move around and not feel like it has agency,” Darling said.

Her enthusiasm was infectious. She delighted in the strangeness of these reactions, not to mock them, but to make a larger point. Humans will respond emotionally to robots whether designers plan for it or not. That reality has consequences.

 


 

Design choices and power

Darling used humor and storytelling to surface serious questions about design, accessibility, and power.

She pointed to sidewalk delivery robots as an example. When people rush to help a stranded robot, they often overlook a deeper problem.

“The world isn’t built for humans,” she said. “It’s built for able-bodied humans. If cities were actually accessible, we’d have cheaper and better robots too.”

Design choices also shape trust. Darling described research showing that anthropomorphic features like eyes, names, or voices can backfire if a robot’s behavior does not align with human expectations. People react negatively when a machine feels social but behaves unpredictably.

“Design choices matter,” she said. “There are no standards that compel designers to think about this.”

Rather than presenting technology as destiny, Darling emphasized agency. Policymakers, designers, and the public all influence how robots enter society.

Emotional manipulation and consumer protection

David Hoffman and Kate Darling in conversation on stage.

That sense of agency took on urgency during Darling’s discussion of emotionally responsive AI systems.

Following the lecture, Darling joined David Hoffman, Steed Family Professor of the Practice of Cybersecurity Policy at the Sanford School of Public Policy, for a moderated Q&A that brought privacy and regulation to the forefront.

Hoffman asked how conversational AI and social robots challenge existing consumer protections. Darling did not mince words.

“People enter into relationships with these agents thinking they’re in a relationship with the AI agent,” she said. “That’s really dangerous and problematic.”

She described how some virtual companion apps have used emotional cues to drive paid upgrades and how users have experienced real distress when those features disappeared.

“The only way to address it is with regulation,” Darling said. “No one else has an incentive to do anything.”

Current laws, she argued, were not designed for technologies that act like friends while quietly collecting data or shaping behavior.

 

The true potential of robotics and AI is not to recreate something that we already have. It’s for us to partner with these technologies in what we’re trying to achieve.

Kate Darling

 

Rethinking intelligence and replacement


Darling also challenged a familiar narrative about automation. Too often, she said, conversations about AI revolve around comparison and replacement.

“We’re constantly comparing artificial intelligence to human intelligence and robots to people,” she said. “I think this is the wrong analogy.”

Machines can outperform humans in narrow tasks, she noted, while struggling with movements humans find effortless. Framing AI as a human substitute, she argued, limits both imagination and policy thinking.

Instead, Darling offered an analogy drawn from history. Humans have long partnered with animals whose abilities differ from our own, not because animals replicate humans, but because their strengths complement ours.

She pointed to examples from popular culture that imagine robots as task-specific partners rather than human stand-ins, including WALL-E, a film she praised for resisting the familiar humanoid trope.

“That analogy helps people think beyond the assumption that robots will or should replace people,” Darling said.

The choice, she emphasized, is political as much as technical.

“We could be choosing to invest in technology that helps people do their jobs better instead of replacing them,” Darling said.

Private meeting with students


Earlier in the day, Darling met with Sanford students in the Rhodes Conference Room for an informal discussion moderated by Anne L. Washington, Rothermere/Harmsworth Duke Associate Professor of Technology Policy. Students pressed her on design ethics, accessibility, education, and global governance.

That same energy carried into the public Q&A, where students asked about AI in classrooms, social isolation, and whether teaching children how technology works changes how much they trust it.

Students experimenting with “Mr. Spaghetti,” Darling’s robot dinosaur.

In one study Darling described, teaching kids about the incentives behind social robots mattered more than explaining the robot’s mechanics.

“The only thing that made a slight difference in them trusting the robot was the ethics and society curriculum,” she said.

She returned to that idea as she closed the lecture.

“The true potential of robotics and AI is not to recreate something that we already have,” Darling said. “It’s for us to partner with these technologies in what we’re trying to achieve.”
