Organizations have never faced a more complex moment in digital regulatory compliance. Governance professionals are working to navigate an evolving technology landscape that includes a surge of impactful data protection enforcement, emerging initiatives to safeguard artificial intelligence development and use, and an expansion of digital rulebooks worldwide.

"We live in this new world where governments are saying, 'How do I enable data-driven innovation? How do I enable economic growth through this AI layer? How do I get there before any other country does?'" Centre for Information Policy Leadership President Bojana Bellamy, CIPP/E, said during a breakout session at the IAPP Global Privacy Summit 2025 in Washington, D.C. "So, companies who are also in this race are doing the same, and so we have to be aware of this. It's not geopolitics; it's just the new reality."

There is no simple solution to the growing balancing act, according to a panel of long-tenured privacy professionals speaking at GPS 2025. However, reconstructing privacy governance and accountability programs with the aim of broader applicability across disciplines is proving to be a common starting point.

Ethical concerns

Practicality is a top priority in compliance and organizational standards, but ethics cannot be overlooked, particularly ethical concerns around data security and consent obligations and their trickle-down effects across the broader digital governance ecosystem if left unaddressed.

While organizations often look to regulators for specific consent requirements, Dun & Bradstreet Chief Ethics and Compliance Officer Hilary Wandall, CIPP/E, CIPP/US, CIPM, FIP, indicated organizations must also focus on transparency and privacy-protective practices that maintain flexibility.

"If we are tied to these very specific rules, we can't solve for these possible greater future scenarios where there's a wonderful outcome that could have been had and can be had if we challenge ourselves to think differently," said Wandall, the former associate vice president of compliance and chief privacy officer at pharmaceutical manufacturer Merck.

Risk assessments remain the most reliable tool for evaluating legal and ethical concerns. They help organizations determine how data collection serves their individual business needs while keeping in mind the potential impacts on customers. Assessments can also help organizations get ahead of potential questions from digital regulators.

"What we try to do in our updated governance program is to think about outcomes," Meta Vice President of Policy and Deputy Chief Privacy Officer Rob Sherman said.

Meta has invested USD 8 billion in its data privacy program since 2019 to bolster its digital security practices. Sherman said organizations should make assessments of data use and technology offerings more frequent and habitual.

"The people who we serve expect us to handle their data thoughtfully and be accountable around them. They expect us to be transparent with them about what's happening," he said. "So, when we make decisions internally, we think in terms of those structured expectations about how data will be handled."

In addition to its data security efforts, Sherman indicated Meta has worked to improve safety online through systems that are able to "automatically analyze people's content and their behavior to see if they're doing something that's harmful." He noted, while these types of systems are "obviously a very sensitive thing from a data protection standpoint," they have proven valuable and necessary toward protecting individuals' safety.

Innovation

AI advances in recent years have paved the way for integration into daily life. The popularity of generative and agentic AI tools is driving calls for innovation aimed at achieving AI's full potential.

According to Sherman, increased innovation is "really a paradigm shift in the way that we think about data and governance and risk, because the role that technology is playing in society is changing."

Organizations are also implementing AI tools that surface relevant data collected about consumers, such as smartwatches that use AI to summarize health information for their wearers. While the insights gleaned from wearables benefit the consumer, companies must "think about intended use of data, especially in the AI systems context, versus unintended uses," Wandall said.

Wandall added that secondary use is just one consideration amplified by AI product development. Considerations around training data sources are also key, especially regarding how data protection and transfer rules might apply.

"We are really having to rethink and force ourselves out of the mindset of, 'I do this data.' I'm thinking bigger, spherical, with all of this regulation coming at me all at once," Wandall said. "How do I, together with my colleagues, help support the best business decision-making when I'm looking at the evolving tech stack?"

Lexie White is a staff writer for the IAPP.