To improve understanding of how artificial intelligence should be managed, digital leaders at the Digital Policy Leadership Retreat 2025, hosted by the IAPP and the Berkman Klein Center for Internet & Society, offered a new perspective. They argued people should reflect on how various AI applications make them feel.
Presenters spoke at length about how governance will shape the future of the digital landscape, including the ever-advancing nature of AI. While governance debates often touch on data management, privacy, and trust and safety, the societal expectations of what technology is meant to do in our lives are of equal importance.
Emotions stemming from the perceived impacts of AI, and the responses those emotions provoke, are playing an increasingly important role in public perception of the technology.
A survey found workers, often worried about job displacement or frustrated by tools they feel do not fit their work needs, are sabotaging their companies’ AI rollout efforts. According to Stanford University’s 2025 AI Index Report, optimism about the technology is growing unevenly across the world, even as investment continues to rise.
That tension can be seen in companion AI rollouts like ElliQ, a robot aimed at curbing loneliness among older adults, said Jonathan Zittrain, a professor at Harvard Law School and a co-founder of the Berkman Klein Center.
During a keynote presentation, he surveyed audience reactions to a video of ElliQ talking with a grandmother. “Creepy,” “sad” and “freaky” were among the responses, demonstrating how such feelings provide initial signposts for how a technology should be managed.
Underneath those reactions are questions about what it means if ElliQ starts making product recommendations or endorses a political candidate. Those interactions might be expected from a website or news channel, but they become less comprehensible when coming from a large language model that provides companionship, Zittrain said.
That complexity may not be addressed by standard transparency measures, such as simple disclosures.
“At the moment, we don’t have the frameworks to think about these things that can be so subtle,” Zittrain said.
But the initial feelings of the public might not align with those of the people developing the technology or its target audience, according to Future of Privacy Forum CEO Jules Polonetsky, CIPP/US.
Speaking during a separate conversation, Polonetsky said ElliQ will likely inspire different feelings in caregivers or activists thinking about the role technology can play in our daily lives. That, he said, shows the importance of having an AI governance team that thinks not just about the law, but also about how the media, consumers, stakeholders and civil society might react.
“Although we have laws and evolving law, there’s this whole set of issues where we don’t understand what the norms are,” Polonetsky said.
To tackle the more amorphous elements of AI, Meta Vice President and Deputy Chief Privacy Officer for Policy Rob Sherman said his company spent time hashing out holistic rules for how its AI products should operate, arguing a business cannot make good decisions if it does not consider how elements like privacy, free expression, safety and security interact. Those rules were then distilled into automated forms product managers can fill out to see where they stand on answering those fundamental questions.
"It avoids the need for humans to spend a lot of time programmatically applying the same decisions and instead focusing on those bigger societal or ambiguous, hard questions,” Sherman said.
How much developers think about these issues does not escape the notice of regulators. Irish Data Protection Commissioner Des Hogan said the thoroughness of an impact assessment examining a product’s effect on society is a clue to how serious a company is about ensuring its technology is well designed. Such assessments are required, in varying degrees and for certain technologies, under the General Data Protection Regulation, the Digital Services Act and the EU AI Act.
“We know very quickly whether the design of the product has been thoughtful enough to answer those questions,” he said.
Caitlin Andrews is a staff writer for the IAPP.