Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

The future has arrived — just not quite in the way we imagined.

No flying cars — yet — and your morning commute still involves traffic instead of zooming to work like George Jetson, whose ride conveniently folded into a briefcase.

But while our highways remain frustratingly Earth-bound, artificial intelligence has firmly planted itself in our daily lives through chatty virtual assistants, smart home devices that know your coffee order better than your barista, and algorithms that predict what Netflix show you will binge next.

AI isn't just making life easier at home; it's reshaping entire industries. In finance, it's sniffing out fraud faster than a bloodhound. In medicine, it's spotting diagnoses doctors might miss. And in law? AI is tackling contract analysis and legal research, though it's not quite ready to argue before the Supreme Court.

While AI may be boosting lawyer productivity, it's also raising ethical questions. Are our existing legal ethics rules prepared for AI-assisted lawyering, or are we hurtling toward a future where we need a whole new rulebook?

Bias and fairness

AI in legal work is a bit like an overeager intern — pulling information from everywhere, including the internet, social media, public records and who-knows-what sketchy corner of the web.

The problem? No lawyer has the ability — or sanity — to fact-check every data point AI has gobbled up to make sure it's unbiased or fair.

And let's be honest, if AI is trained on the internet, there's a good chance it has picked up some, let's say, questionable influences. Legal outcomes could end up as biased as a jury made up entirely of your client's in-laws.

Transparency and explainability

AI often operates in a black box — a mysterious void of computerized decision-making where even the experts are left scratching their heads. AI absorbs mountains of data, spots patterns, and spits out answers, but how it gets from point A to point B is anyone's guess.

Remember when your teachers insisted you show your work to get full credit on an exam? Well, AI clearly wasn't paying attention. Unlike traditional software, which follows clear rules and logic, AI simply hands you an answer without explaining how it got there. If AI were your study buddy, it would ace the test but leave you completely unprepared to explain anything. Good luck arguing with a judge when you have no idea why the AI recommended a certain legal strategy.

Accountability

If AI spits out faulty legal advice, who takes the blame — the attorney or the AI developer? Spoiler alert: it's not the algorithm showing up to court. Accountability in the legal profession means lawyers are on the hook for acting ethically, competently, and in accordance with professional standards, legal rules, and client expectations.

Accountability is at the very heart of legal practice. Any legal strategy, advice, or document generated by AI isn't a get-out-of-jail-free card. It must be reviewed, corrected and signed off by an actual human lawyer.

AI is a tool, not a scapegoat. Do we sanction AI for producing fictitious cases to support our argument? No, of course not, because AI doesn't have a law license to suspend — or a bank account to pay fines. But the lawyer who blindly copied and pasted ChatGPT's greatest legal hallucinations into a court filing? That's a different story.

If an attorney submits bogus case law conjured by an overeager AI, it's their name, not the AI's, that goes on the docket. At the end of the day, AI is a powerful tool, but it's not a licensed attorney. And until it passes the bar and figures out how to pay malpractice insurance, the responsibility and consequences land squarely on the shoulders of the human using it.

Client confidentiality

Client confidentiality is the holy grail of the attorney-client relationship. It's not just a rule — it's a sacred, unbreakable vow, and lawyers know that keeping client confidences is non-negotiable. In fact, this duty doesn't just outlive the attorney-client relationship — it outlives the client. Clients spill their deepest, sometimes messiest truths to their attorneys with the absolute expectation that those secrets go nowhere. The second a third party catches wind of that privileged information? Boom — privilege shattered.

Which brings us to AI. Feed privileged information into a chatbot, and hasn't it just been flung into the mysterious, algorithmic ether, out of the lawyer's hands and into the great AI universe? Congratulations, you may have just handed your client's secrets to ChatGPT.

Why traditional legal ethics fall short

So, fasten your seatbelts, stow your tray tables, return your seats to the upright position, and let's jet back in time on George Jetson's briefcase-turned-rocket car to law school — where we all survived on caffeine and studied the Model Rules of Professional Conduct.

"Rule 1.1: Competence" states "A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation."

In other words, lawyers are expected to know what they're doing — and to put in the work to make sure they're doing it right.

But here's the million-dollar question: How does using AI fit into this competence rule? Does a lawyer who relies on AI for legal research or document drafting really possess the skill and knowledge required under Rule 1.1?

The AI's work product was ready the moment the lawyer hit "enter." Sure, the lawyer input the information, but how much do they know about the AI's decision-making process, its sources, or even its limitations? AI may have "skill," but can we really say the lawyer has satisfied their ethical duty of skill just by pressing a button?

"Rule 5.3: Responsibilities Regarding Nonlawyer Assistance" states that "With respect to a nonlawyer employed or retained by or associated with a lawyer: … a lawyer having direct supervisory authority over the nonlawyer shall make reasonable efforts to ensure that the person's conduct is compatible with the professional obligations of the lawyer."

Great. But here's the catch. AI isn't a newly minted associate nervously triple-checking citations. AI doesn't have a law degree, a résumé, or even the common decency to show up late with an overpriced latte. So, how exactly is a lawyer supposed to "supervise" an algorithm?

At the end of the day, AI isn't a human subordinate; it's a tool. And just like any tool, if a lawyer blindly relies on it without oversight, they're the one on the hook when things go sideways.

"Rule 1.7: Conflict of Interest" states that "a lawyer shall not represent a client if the representation involves a concurrent conflict of interest." Such a conflict exists if "the representation of one client will be directly adverse to another client" or if "there is a significant risk that the representation of one or more clients will be materially limited by the lawyer's responsibilities to another client, a former client or a third person or by a personal interest of the lawyer."

But what about AI? Can it accidentally play both sides? Picture this: A law firm uses AI to draft contracts for a corporate giant, while another firm uses the same AI for its fiercest competitor. If the AI pulls from both, even in an anonymized way, is it basically moonlighting as a double agent?

AI models, trained on vast legal datasets, might soak up confidential strategies from one firm and, without meaning to, sprinkle them into work for another. Even without direct data sharing, AI's pattern-spotting habits raise serious conflict of interest concerns. If AI-generated content starts recycling legal arguments like a lawyer with a broken copy-paste button, has it switched sides?

More importantly, do lawyers now need to start conflict-checking their algorithms too? Because let's be honest, explaining to the bar that your AI "didn't mean to" could make for an awkward ethics hearing.

A solution? A new ethical code

Doctors take the Hippocratic Oath, so why shouldn't lawyers using AI take an oath of their own? Its core principles would be transparency, accountability, fairness and confidentiality.

AI is a tool, not a licensed attorney, so supervise it like an overeager intern. Check its work, question its sources, and never assume it knows what it's talking about.

The legal profession can't let AI run wild like an unsupervised summer associate. Law firms, bar associations and AI developers must join forces to craft real, enforceable ethical standards before regulators do it for them.

David Arrick, CIPP/US, is a finance and privacy attorney.