With the U.S. seeing a rush to regulate artificial intelligence at the state and federal levels, policymakers face the difficult task of defining baseline operating standards.

For Miranda Bogen and Kevin Bankston, it is an opportunity to make sure those baselines are ethical. That's one of a few goals for the Center for Democracy and Technology's new AI Governance Lab, which the two Meta veterans launched recently. Bogen is the founding director, and Bankston serves as senior advisor.

The new advocacy unit focuses on mitigating bias and teaching companies how to build safety protocols into AI systems from the earliest stages of development. Bogen said she wants the conversation around AI regulation to move from the high-concept level to the nitty-gritty of how to make those concepts workable for all stakeholders involved.

"Because in this conversation, the details matter. The high-level concepts everyone can kind of agree in theory are really important," she said. "But what that ends up looking like in implementation will actually define whether interventions are effective in truly holding AI systems and their developers accountable."

The push to regulate AI is fueled in part by concerns about how the ever-evolving technology is affecting our everyday lives.

For example, how algorithms can negatively affect the mental health of young people has been the subject of several state-level laws. Creatives are fighting for protections from unauthorized use of their work to train AI systems. Advocates are pressing for lending industries to guard against bias in their sorting algorithms. And competition anxieties have also increased as the EU and China look to finalize their own AI policies.

However, the urgency to find the best way to govern AI does not mean policies should be rushed, according to Bankston. He said the EU's speed in passing its proposed AI Act could mean the U.S. should follow the same path, but it could also mean another regulatory avenue should be explored rather than duplicating that work.

"I don't think it's a race," he said. "It's attempting to find the right answers."

Both Bogen and Bankston have a long history advocating for civil rights and fairness in technology.

Bogen was a fellow at the Internet Law & Policy Foundry and a policy analyst at Upturn before co-leading the Fairness, Transparency, and Accountability Working Group at the Partnership on AI.

Bankston spent almost a decade at the Electronic Frontier Foundation, where he led advocacy around internet and cellphone surveillance. He previously worked at the CDT as senior counsel and free expression director, and served as director of the Open Technology Institute.

They then brought those experiences to the corporate sector. Bogen was the policy lead on fairness and equity and the AI policy manager at Meta, and Bankston was the company's AI policy director.

Bankston said his time at Meta taught him the best ways to support internal AI ethics advocates, who are ultimately tasked with changing leadership's minds. And Bogen said their experience gives them a unique ability to take AI expertise, which she said is often seen as coming only from the industries creating the technology, and bring that knowledge to advocates to help inform their work.

"We want to counterbalance that conversation and empower the broader community," she said.

And while the conversations around AI are wide-ranging and serious, Bankston said regulators need to "walk and chew gum at the same time" when it comes to balancing the existential concerns with the real-life risks AI presents today.

"We need to work to ensure that the focus also remains on the everyday AI systems that people are interacting with now and how they're affecting their lives now," he said.