Microsoft unveils framework for responsible AI

22 June 2022

In a blog post, Microsoft Chief Responsible AI Officer Natasha Crampton outlined the company's "Responsible AI Standard," which eliminates use of automated tools that can infer an individual's emotional state and attributes such as gender, age and other facial features. Crampton said the standard provides goals that teams developing AI systems must meet to uphold values including privacy, security, transparency and accountability. She called the standard "actionable and concrete," with "approaches for identifying, measuring, and mitigating harms ahead of time," and requiring controls "to secure beneficial uses and guard against misuse."