The last two years have been a sprint to do something about artificial intelligence. In 2026, that sprint will start to look more like a relay, where the baton isn't a comprehensive new AI regulation but the steady handoff of accountability to the people putting AI to work. For those advising companies that buy, configure and deploy AI rather than build foundation models, this is your year to get very practical, very fast.
This shift doesn’t come out of nowhere. It grows from a legal conversation much older than ChatGPT and as perennial as the internet: whether a fast-moving technology demands a law of its own. U.S. Court of Appeals Judge Frank Easterbrook’s famous "Law of the Horse" critique warned against inventing a bespoke body of law every time a new technology arrives. Instead, he argued, we should apply and adapt existing doctrines.
Professor Lawrence Lessig refined that point, noting where targeted adjustments make sense and highlighting that the architecture, markets and norms surrounding a technology can regulate behavior as forcefully as law, so all of these need to be considered. In 2026, AI governance will live precisely in that space. Like cyberspace, AI will sit between mature legal concepts we already know how to use and carefully scoped add-ons where gaps are undeniable.
The practical message for privacy and AI governance professionals is that AI is a technology, not a discipline. There isn't a need for a capital-L law of AI to govern how companies use models in hiring, scoring, underwriting, content moderation, productivity tooling or safety-critical workflows. The laws with teeth in 2026 are the ones already in use every day. Privacy statutes and regulations apply whenever personal data is processed. Civil rights and anti-discrimination regimes apply when AI nudges or determines outcomes for people. Consumer protection and unfairness authority cover deceptive claims and unreasonable practices. Cybersecurity obligations extend to the model interfaces, pipelines and data supply chains that AI introduces. None of these laws need AI in the title to be impactful.
What's changing is not the words on the page so much as where accountability lands. Policymakers across jurisdictions have signaled a preference to keep innovation velocity high for model builders. AI is a technology critical to the future of economies and to national security, so the motivation to keep that momentum going is easy to understand. That said, this approach does not remove obligations; it redistributes them. The gravitational pull is toward the deployer, the organization that decides how and where AI is used. That is where risk becomes concrete. That is where bias emerges in a workflow, and where explainability and documentation either exist or they don't.
For the privacy professional, this rebalancing is a mandate to operationalize. The how is less about inventing new compliance machinery and more about integrating AI into your existing programs. Treat model adoption the same as high-risk data or automated-decisioning initiatives. Fold AI into privacy by design reviews. Extend information security to cover model endpoints, prompt injection surfaces and data provenance. Align fairness and anti-discrimination testing with the contexts in which your tools make or inform decisions. Build vendor oversight that treats AI services as critical infrastructure. Document life cycle controls the same way you would for any system that shapes real outcomes for people.
If you're looking for a mental model to share with your business stakeholders, borrow the cyberspace lesson and translate it for AI. Autonomous and probabilistic systems now intermediate decisions that used to be purely human. That raises questions about foreseeability, intent and accountability that product liability, agency principles and anti-discrimination law can already articulate, even if they need refinement at the edges. In 2026, anticipate more explicit duties of explainability, auditability and control in certain regulated sectors, but expect those obligations to plug into core compliance architecture.
What does good look like in 2026? It looks like procurement playing defense and offense at the same time: insisting on meaningful contractual controls for AI vendors and insisting internally on pre-deployment testing that is fit for purpose. It looks like privacy teams running impact assessments that aren't checkbox theater but actually pressure-test use cases, data flows and model behavior against real-world risks.
It looks like harmonizing your existing program with recognized frameworks, such as ISO 42001 and the NIST AI Risk Management Framework, so regulators, customers and auditors see continuity, not chaos. It looks like horizon scanning for targeted AI-specific legislation without treating every new acronym as a revolution. Above all, document what was done and why. The difference between reasonable and reckless in 2026 will increasingly be a matter of evidence. Transparency is the result of doing a lot of other things right.
Two traps are worth avoiding. First, don't let the AI-requires-new-law chorus become an excuse for paralysis. The compliance tools you have are largely sufficient to make meaningful progress today.
Second, don't mistake light-touch policy for a free pass. Before rolling out a model into a workflow that materially affects people, ask the questions many teams still skip: Have we tested for the bias and errors that matter in this context? Are we capturing explanations proportionate to the decision's significance? Can we show our work?
As a privacy professional, you are uniquely positioned to bring order to AI adoption because you already steward cross-functional programs. You know how to align legal, security, risk and ethics. You know the difference between policy and practice. In 2026, that muscle memory is the asset. Use it to keep AI grounded in the frameworks that have served organizations well (privacy by design, robust security, fair treatment and transparent accountability) while staying nimble as targeted AI-specific obligations mature. If the internet era's arc taught us anything, it's that we can adapt the law's toolkit to new architectures without reinventing the entire toolbox.
So, what should you do now? Treat AI as a technology to be governed through the disciplines you already master. Inventory deployments, scope the ones that affect people, and plug them into existing review and control gates. Expect more to be asked of deployers and prepare to show how controls work. Be what you already are: a practical, accountable steward of data and decisions. That's the job in 2026.
Andrew Clearwater, AIGP, CIPP/E, CIPP/US, CIPM, FIP, is a partner at Dentons.


