Suddenly, everyone's afraid of artificial intelligence. Geoffrey Hinton, the godfather of AI, used to shrug off concerns about the breakneck pace of AI innovation by quoting Oppenheimer: "When you see something that is technically sweet, you go ahead and do it." Now, though, he has left Google to speak out against the very technologies he helped develop, even as tech bigwigs including Elon Musk and Steve Wozniak have called for a "pause" in generative AI development to give regulators a chance to catch up.
Such concerns reflect the widespread view that innovators and regulators are fundamentally at odds when it comes to AI: one spurs us forward and the other slams on the brakes. There's necessarily a degree of opposition: to avoid tech becoming a Wild West, we need lawmen to maintain order. But the deeper reality is that by preventing disorder and defining the boundaries of what's permissible, smart regulation supports innovation and paves the way for more durable tech breakthroughs.
We've seen this in the broader world of data privacy. Of course, there have been clashes between privacy regulators and tech companies, but as the regulatory rulebook evolves, it becomes easier for organizations to innovate without coloring too far outside the lines. Developing a framework for AI innovation requires a similar approach: clear rules anchored in fairness, transparency and accountability.
We'll also need a commitment to ensuring that implementing those rules isn't left solely to technical teams: to support innovation, the technical wizardry must be guided by seasoned data privacy practitioners who understand how to ensure compliance and how to earn and retain consumers' trust. Here are a few key principles that should guide both regulators and innovators as we chart a path forward.
Regulate outcomes, not algorithms
AI isn't a single monolithic technology: there are massive differences between the generative AI that powers a chatbot, the machine vision behind facial recognition technologies, and the neural networks that flag suspicious transactions on your credit card. That makes creating a single rulebook for AI incredibly challenging, and attempting to define exactly how AI can be deployed risks stifling the development of new AI tools.
The solution is to stop trying to regulate AI in terms of its underlying technology and to focus instead on the things we can regulate more easily: not the algorithms themselves, but the data that goes into them, the purposes for which they're used, and the results they generate. The EU AI Act offers one such approach, applying risk-tiered rules that impose light obligations on low-stakes AI applications and progressively stricter requirements on high-risk uses such as health care and law enforcement. Meanwhile, existing rules that ban fraudulent conduct can be applied to the outcomes of AI technologies irrespective of the underlying algorithms. By focusing on context and results, it's possible to create AI rules that are flexible enough to allow innovation while still providing the protections consumers deserve.
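To make the risk-based approach concrete, here is a minimal sketch of how an organization might triage proposed AI use cases before deployment. The tier names, use cases, and obligations are hypothetical illustrations loosely inspired by the EU AI Act's risk categories, not a statement of what the Act actually requires.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"   # e.g., spam filtering
    LIMITED = "limited"   # e.g., customer-facing chatbots
    HIGH = "high"         # e.g., medical triage, credit scoring

# Hypothetical mapping: each organization would maintain its own,
# based on the context and consequences of each use case.
USE_CASE_TIERS = {
    "email_spam_filter": RiskTier.MINIMAL,
    "marketing_chatbot": RiskTier.LIMITED,
    "loan_approval_model": RiskTier.HIGH,
}

# Hypothetical obligations keyed by tier, not by algorithm type.
TIER_OBLIGATIONS = {
    RiskTier.MINIMAL: ["basic documentation"],
    RiskTier.LIMITED: ["basic documentation", "user disclosure"],
    RiskTier.HIGH: ["basic documentation", "user disclosure",
                    "human oversight", "bias audit"],
}

def obligations_for(use_case: str) -> list[str]:
    """Return the compliance checklist for a proposed AI use case."""
    # Unknown use cases default to the strictest tier.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return TIER_OBLIGATIONS[tier]

print(obligations_for("loan_approval_model"))
```

The point of the sketch is the shape of the logic: obligations attach to the use case and its stakes, not to whether the underlying model is a transformer or a decision tree.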
Support technical solutions
Arguably the biggest challenge in regulating AI is that AI algorithms are built from data: both the training datasets used to sculpt them and the data on which trained algorithms then act to deliver value. Regulators can't simply ban organizations from using data, but they do need to ensure that existing data privacy rules are updated and enforced effectively as organizations begin making broader use of AI technologies.
This will depend significantly on supporting technical solutions, from programmatic privacy tools that ensure training datasets are properly permissioned to techniques such as federated learning, which trains models without centralizing raw data, and differential privacy, which adds calibrated statistical noise so that a model's outputs reveal little about any individual in the training set. Regulators will need to incentivize the development and adoption of such technologies, and privacy leaders will need to work closely with data scientists and developers to ensure that AI inputs and outputs are managed responsibly.
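As one illustration of the kind of technical solution regulators could encourage, here is a minimal sketch of the classic Laplace mechanism from differential privacy, applied to a simple count query. The epsilon value and dataset are hypothetical; a production system would use a vetted library such as OpenDP rather than hand-rolled noise.

```python
import numpy as np

def dp_count(values: list[bool], epsilon: float) -> float:
    """Differentially private count of True entries (Laplace mechanism).

    A count query has sensitivity 1: adding or removing one person
    changes the result by at most 1. Laplace noise with scale
    sensitivity / epsilon makes the released count epsilon-DP.
    """
    true_count = sum(values)
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many users in a dataset opted in to marketing?
opted_in = [True, False, True, True, False, True]  # toy data
print(dp_count(opted_in, epsilon=0.5))  # noisy answer near the true count of 4
```

Smaller epsilon values add more noise and give stronger privacy guarantees; the regulatory question is which mechanisms, and which parameter choices, count as adequate protection in a given context.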
Focus on fairness
Privacy leaders have watched regulation shift away from rigid rules tailored to specific technologies and situations, and toward more flexible stances based on ethical standards and concepts of fairness. The U.S. Federal Trade Commission, for instance, is already using rules banning unfair and deceptive business practices to clamp down on data tracking in sensitive areas such as health care.
Such an approach is powerful in fast-moving sectors because it doesn't require regulators to spell out precisely what is acceptable — instead, it puts the onus on companies to do what's right. A similar approach will likely come to the fore in AI regulation, with regulators holding companies accountable for "unfair and deceptive" practices without waiting to write a whole new rulebook each time some bright young startup founder devises a new way to use AI technologies.
Demand real transparency
To support innovation, regulators need to ensure that consumers clearly understand how their data is used and retained in AI systems. Companies shouldn't expect to be allowed simply to notify consumers that their data will be used "for AI." Instead, they will need to obtain consent for the specific purpose (such as marketing or personalization) for which the AI tool will ultimately be used.
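To illustrate what purpose-specific consent can look like at the engineering level, here is a minimal sketch of a consent check that gates data use by declared purpose. The purpose names and record shape are hypothetical, not drawn from any particular regulation or product.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Purposes a user has explicitly consented to (hypothetical schema)."""
    user_id: str
    granted_purposes: set[str] = field(default_factory=set)

def may_use_data(record: ConsentRecord, purpose: str) -> bool:
    """Allow data use only for a purpose the user specifically granted.

    There is deliberately no blanket "ai" purpose: consent is checked
    against the concrete use, e.g. "marketing_personalization".
    """
    return purpose in record.granted_purposes

alice = ConsentRecord("alice", {"marketing_personalization"})
print(may_use_data(alice, "marketing_personalization"))  # True
print(may_use_data(alice, "model_training"))             # False: never granted
```

The design choice worth noting is the granularity: each new use of the data requires its own grant, so "used for AI" can never silently expand into purposes the consumer never agreed to.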
As privacy leaders know, this kind of transparency goes beyond simply ensuring regulatory compliance. Without responsible consent and disclosure practices, consumers lose trust and wind up refusing to share their data — and over time, that risks starving companies of the data they need to fuel AI research and development. Transparency, in other words, is the only way to inoculate your company against a crisis of consumer confidence that otherwise risks hamstringing AI innovation.
The role of privacy practitioners
With regulators likely to prioritize outcomes and fairness while promoting both technical solutions and transparency, it will be incumbent upon organizations to rise to the challenge and elevate their AI innovation processes. That will require privacy practitioners to take a leadership role from day one and to ensure that AI innovations are developed with a clear commitment to responsible data practices.
Understanding what "fairness" means to regulators and consumers isn't something technical specialists and engineers should be expected to figure out on their own. To bake ethical data practices and basic fairness into their processes of innovation, organizations will have to break down the silos between technical experts and privacy leaders and work to give all stakeholders — including innovators, regulators and consumers — the clarity and confidence needed to unleash the power of AI at scale.
There's no need to fear artificial intelligence. To drive innovation and realize the technology's full potential, however, we need privacy leaders to step up and work alongside regulators and technical innovators to ensure that AI innovations are founded upon real transparency, privacy and accountability.