The IAPP's "Profiles in Privacy" series features a monthly conversation with a notable privacy professional to discuss their journey in privacy, challenges and lessons learned along the way, and more.

For 18 years, Kevin Bankston worked as an attorney with organizations like the American Civil Liberties Union, Electronic Frontier Foundation, and Center for Democracy and Technology, litigating a wide variety of issues, like government surveillance, and advocating on behalf of consumer privacy.

His advocacy and legal work often put him "at loggerheads" with large technology companies, like then-Facebook, over consumer privacy issues. But Bankston found he also aligned with companies, including what is now Meta, on issues related to government surveillance and online speech.

After nearly two decades in civil society, Bankston went into the "belly of the corporate beast," joining Meta as AI policy director, founding its artificial intelligence policy team and serving as a founding leader in the company's Responsible AI product organization.

Four years later, following what he described as "one of the most intense learning experiences," Bankston returned to civil society and the Center for Democracy and Technology as senior advisor on AI governance, applying what he's learned to "further responsible AI development in the public interest."

"It definitely gave me a lot of insight that I could then bring back to civil society," Bankston said of his time at Meta. "To put it bluntly, it does give me and people like me, who have now seen both sides, the ability to call B.S. when companies are claiming they can't do something, or that it's out of their reach, while also being able to say, 'No, they are telling the truth,' if and when they are making a real, good, practical point about what is possible and what is not."

Bankston launched his career in 2001 as a Justice William Brennan First Amendment Fellow at the ACLU in New York City, litigating cases involving internet free speech. He spent nearly 10 years at the EFF, where he led advocacy around internet and cellphone surveillance and was lead counsel in lawsuits against the National Security Agency and AT&T challenging the legality of the NSA's warrantless wiretapping program. He first served at CDT in 2012 as director of its Free Expression Project, advocating for greater transparency around foreign intelligence surveillance.

Bankston characterized himself as "an angry but reasonable advocate," not shying away from criticizing tech companies when warranted but also "not hitting them below the belt" and listening "to reason around what was technically possible and not possible."

Through his work, he said, he came to know and trust several of Meta's policy staff, and they had figured him out as well. When the opportunity arose in 2019 to help the company determine a roadmap for how to develop AI responsibly, as well as its external policy advocacy around AI, Bankston said it "was just too fascinating an opportunity to pass up."

"To be able to go to an enormous, globe-spanning company that was a key leader in the development of AI, but at that time was not a leader in demonstrating its responsible development of AI, and to be able to play in both the product-facing side of the work and the policy-maker and civil society and expert side of the work was incredibly appealing to me," Bankston said. "Even if I didn't before, I didn't there, and I don't now always align with all the choices Meta makes."

Another key determinant in accepting the position was the fact he would be working for Rob Sherman, Meta's vice president of policy and deputy chief privacy officer, whom Bankston described as a "smart, honorable, reasonable guy" with "a lot of authority that he's built up over the years inside Meta." Bankston said Sherman was a leader he could trust to have his back in challenging situations, like advocating for something that might be unpopular internally and "knowing that he wouldn't put me in a position where I publicly felt I had to say something I wasn't aligned with."

"I was very lucky that I had a very empowering senior leader as my boss who could back my plays, help me build resources, give me the resources to actually build a team to be a strong internal advocate for responsible AI development practices, and building out clearer principles — we called them pillars — for making decisions, helping clarify processes by which products were reviewed, and clarifying the resources that product teams could resort to within the policy team to help them puzzle out some of the harder AI questions."

Bankston leads the CDT's AI Governance Lab alongside former Meta colleague Miranda Bogen. Launched in the fall of 2023, the lab aims to "advance robust solutions that address the risks and harms of AI systems."

Bankston, the CDT's senior advisor on AI governance, said the lab is less focused on directly impacting policymakers and more on providing expertise and resources around developing best practices, standards and governance strategies that could then support policy proposals. His experience at Meta, which included finding a way to detect and mitigate bias in a system without access to users' demographic data in response to a lawsuit from the U.S. Department of Housing and Urban Development over alleged bias in Meta's ad delivery, shows how challenging these issues can be.

"That's why our AI Governance Lab is so focused on trying to get into specifics, like which risks are you most concerned about and why? What are the available resources, both in terms of data and technology that could be used to mitigate it? How can we get to those mitigations actually being developed and deployed? How do we incentivize that? It's those kinds of particulars that we're really interested in, while we're more skeptical of very broad and vague proposals," he said.

"We don't know what we want or don’t want in this area yet, or more to the point, we don't know what is practical or impractical in terms of detecting and mitigating risks."

Having so many questions without clear, evidence-based answers is a fundamental challenge in the AI governance space, Bankston said.

"Although I am eager to see meaningful, sensible AI regulation as soon as possible, to the extent we are trying to do that very broadly or vaguely, it is likely going to be a waste of a lot of people's time and effort," he said. "That's why so much of our work in the lab is trying to disambiguate between different goals and risks and how to mitigate those specific risks so we can get out of talking about, 'Should AI be regulated,' and start addressing it more in terms of specific questions. How should AI, in this context, be regulated? How should this type of AI model be regulated if it is deployed in this way? Trying to put some meat on that bone."

Bankston said there is a large and growing community of people attempting to do that work and, as companies reorganize and deploy teams to "meet the moment," it presents a "huge opportunity for people in the privacy field."

"The opportunity with AI and the need for practices and structures is commensurate to the previous moments for privacy, content moderation, and trust and safety. This next step in that evolution also highlights how many of those concerns are becoming the same concerns, and it's getting much harder to artificially distinguish between what should necessarily be handled by a privacy team versus a trust and safety team versus a compliance team," he said. "When you're talking about a general-purpose data machine that ingests and spits out content, there's going to be a lot of overlap. It's a really hard organizational question. Companies are grappling with it and all of them need smart people with experience in one or more of those three big areas to help them build those systems."

Among the many murky questions in the AI governance space, Bankston said one thing "everyone can agree on" is "we don't have the workforce necessary to do all of this responsible AI and compliance work … yet."

"We don't have the experts necessary to develop the practices. We don't have the experts necessary to apply them consistently. We don't have the experts necessary to evaluate whether they are being applied consistently," he said. "So, there is enormous opportunity for people who want to work in this space to go ahead and do so."

Jennifer Bryant is an associate editor for the IAPP.