AI red teaming strategy and risk assessments: A conversation with Brenda Leong

13 Nov. 2024

AI governance is a rapidly evolving field that faces a wide array of risks, challenges and opportunities. For organizations looking to leverage AI systems such as large language models and generative AI, assessing risk prior to deployment is a must. One technique borrowed from the security space is red teaming. The practice is growing, and regulators are taking notice. Luminos.Law Partner Brenda Leong, AIGP, CIPP/US, helps global businesses manage their AI and data risks. IAPP Editorial Director Jedidiah Bracy recently caught up with her to discuss what organizations should consider when diving into red teaming ahead of deployment.

Jedidiah Bracy is the editorial director of the IAPP.