
United States Privacy Digest | A view from DC: Sectoral rules make US AI governance policy leader


"Government should find those problems that affect everybody but are too big for people to solve on their own."

In a fireside chat hosted by the Brookings Center for Technology Innovation, U.S. Equal Employment Opportunity Commission Chair Charlotte Burrows framed her remarks on artificial intelligence and employment around the EEOC's deeply held regulatory philosophy, which centers on the problem of ensuring a fair and inclusive workforce.

After all, Burrows reflected, the EEOC "is the little agency brought to you by the March on Washington for Jobs and Freedom. We were the jobs part of that. While we came from that in the '60s and still carry that flame and understand our work as rooted in that very clear democratic request, we also understand we have to be nimble enough to do our work now."

When used in the hiring and employment context, automated systems have the potential to perpetuate or even worsen the types of societal biases and unfair outcomes the EEOC was created to address. As Burrows put it, "We have to make sure that those things as a democratic society that we think are important are not undermined inadvertently — I'm not accusing anyone of trying to do it — undermined or changed in ways that we do not even know because it's very closely held, this knowledge of what exactly is happening (with AI)."

There is something to be said for the depth and deliberate nature of the sectoral approach to oversight in the U.S.

In the EEOC's ongoing work to clarify how its existing legal authorities translate to the use of automated systems, the agency has released groundbreaking guidance that will inform the practice of AI governance for years to come. Not only is regulatory scrutiny of employment practices driving innovation toward more trustworthy practices in the sector, but these same innovations will no doubt inform future AI governance best practices across the economy.

After more than a century of building a strong administrative state, the U.S. is home to a wide spectrum of agencies with deep expertise in their respective sectors, powered by lifelong civil servants like Burrows. As AI-powered systems proliferate, each of these agencies is engaged in the work of determining how AI fits within their mission and mandate.

For some, this looks simply like building their own internal AI governance policies and procedures, as the White House has mandated — and is expected to require with more specificity in a forthcoming executive order.

For other agencies, especially those that govern conduct in the private sector, the rise of AI triggered a reckoning. Examining their existing statutory authority, each agency has considered whether it applies to AI. Often the answer has been "yes."

Technology-neutral laws that govern fair, equitable or safe outcomes apply just as clearly to the use of AI systems as they do to human-driven actions. This was reiterated most clearly in the joint statement from the EEOC, the Federal Trade Commission, the Consumer Financial Protection Bureau and the Department of Justice.

IAPP Principal Researcher, Privacy Law and Policy, Müge Fazlioglu, CIPP/E, CIPP/US, has published research on the many facets of AI governance policy at the federal level. But it is worth digging deeper into any one of these agency actions. The hiring and employment context provides a prime example of what we can learn.

Burrows was hosted on stage by Brookings Center for Technology Innovation Director Nicol Turner Lee, who used the opportunity to preview a forthcoming Brookings CTI project called the AI Equity Lab, "which will develop tools and resources to aid in the design and development of anti-racist, and non-discriminatory AI in areas that include employment, education, health care, and criminal justice."

Throughout their conversation, Burrows returned repeatedly to the importance of translation between domains of expertise. Companies often have individuals with deep expertise in civil rights law and, separately, individuals with deep computer science expertise in building and using advanced technologies.

"But they are two different worlds in a lot of places," Burrows said. "As we think about what’s being designed, first and foremost the challenge is making sure those who develop these technologies — who in all good faith are trying to make the world a better place, I truly believe that — understand what their civil rights obligations are. Making sure that the civil rights experts know enough about what’s happening in the technology to advise, and that the folks who are developing the technology understand their obligations, that's the crossover."

To my mind, this crossover, this essential role of translation, is precisely where AI governance professionals come in.

As Burrows put it, "There's a lot of conversation about humans in the loop. If you put someone in the loop, they need training. They need the right kind of position within your organization so that that training can be heard. They need to understand what their role is and to know that you will support them."

Luckily, those who take up this call don't have to start from scratch. To date, the EEOC has thought of this as a bridging exercise, undertaking the work to educate itself and the public on how civil rights rules interact with new technologies. The agency already has a lot to show for this workstream, and its most recent outputs are worth reviewing in full:

  • Guidance for employers on assessing adverse impact in software, algorithms and AI used in employment selection procedures under Title VII of the Civil Rights Act of 1964 (a short sketch of the arithmetic this guidance discusses appears after this list).
  • A technical assistance document for complying with the Americans with Disabilities Act while using software, algorithms and AI to assess job applicants and employees.
  • The 2023-2028 Strategic Enforcement Plan, which includes a priority to recognize "employers' increasing use of technology, including AI and machine learning, to target job advertisements, recruit applicants, and make or assist in hiring and other employment decisions."
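
To make the first item above concrete: the Title VII guidance discusses the longstanding "four-fifths rule" of thumb, under which a selection rate for one group that is less than 80% of the rate for the most-selected group can be an initial indicator of possible adverse impact (the guidance stresses it is a rule of thumb, not a definitive test). A minimal Python sketch of that arithmetic, using invented group names and counts, might look like this:

    # Illustrative sketch of the four-fifths rule of thumb; group names and
    # counts are hypothetical, and this is not a definitive legal test.
    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Map each group to its selection rate: selected / applied."""
        return {g: sel / app for g, (sel, app) in outcomes.items()}

    def four_fifths_check(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
        rates = selection_rates(outcomes)
        top = max(rates.values())
        # Flag any group whose selection rate is below 80% of the highest rate.
        return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

    # Hypothetical counts: (selected, applied) per group.
    print(four_fifths_check({"group_a": (48, 80), "group_b": (12, 40)}))
    # group_b's rate (0.30) is half of group_a's (0.60): ratio 0.5 < 0.8, flagged.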

Also worth a deep read is the recent law review article co-authored by EEOC Commissioner Keith Sonderling on "the promise and the peril" of AI and employment discrimination.

At the same time, industry and civil society stakeholders are not waiting for regulatory fiat to fine-tune AI governance practices in the employment space.

Most recently, the Future of Privacy Forum released a set of best practices for AI employment and workplace assessment based on a working group of AI vendors including ADP, Indeed, LinkedIn and Workday. The Center for Industry Self-Regulation, part of BBB National Programs, earlier published a compatible set of principles and certification protocols guiding the use of AI during the recruitment and hiring process; its drafting process included some of the top U.S. employers, including Amazon, Allegis, Dentsu Americas, Qualcomm and Unilever. Earlier still, the Center for Democracy and Technology, in partnership with leading civil rights organizations, released another set of "civil rights standards for 21st century employment selection procedures." Though these three sets of standards often overlap, there are important differences too.

Just as with data-privacy standards, AI governance best practices are emerging via the parallel efforts of many stakeholders who understand the importance of getting this right. This ongoing conversation between regulatory agencies, industry and individuals will continue to inform what the emerging profession of AI governance looks like in the coming years.

Here's what else I’m thinking about:

  • Committee hearings continue, even when Congress is in shambles. As the U.S. enters a third week without a speaker to oversee the House of Representatives, committees are still moving forward on efforts to understand AI and privacy guardrails. The IAPP reported on two hearings this week, one put on by the Committee on Energy and Commerce and the other by the Committee on Science, Space and Technology. Sidenote: the latter piece was filed by our brand-new seasoned staff writer, Caitlin Andrews. Welcome Caitlin!
  • Speaking of amazing journalists, belated congratulations to privacy reporter Tonya Riley, who joined Bloomberg Law this month and is already filing new pieces on FTC complaints and California's intent to appeal the injunction of its Age-Appropriate Design Code Act.

Upcoming happenings:

  • 23 Oct., 14:00 ET: Brookings hosts a webinar on why the Global South has a stake in dialogues on AI governance (virtual).
  • 24 Oct., 22:00 ET: The Bipartisan Policy Center hosts a discussion on creative industries and the emergence of generative AI (virtual).
  • 25 Oct., 14:00 ET: R Street hosts a discussion on data privacy and security as a national security imperative (virtual).
  • 26 Oct., 13:00 ET: The IAPP hosts a LinkedIn Live welcoming our new AI Governance Center Managing Director, Ashley Casovan (virtual).

Please send feedback, updates and March on Washington pins to cobun@iapp.org.

