Microsoft unveils framework for responsible AI
22 June 2022

In a blog post, Microsoft Chief Responsible AI Officer Natasha Crampton outlined the company’s “Responsible AI Standard,” which eliminates the use of automated tools that can infer an individual’s emotional state and attributes like gender, age and other facial features. Crampton said the standard provides goals that teams developing AI systems must meet to uphold values, including privacy, security, transparency and accountability. She called the standard “actionable and concrete,” with “approaches for identifying, measuring, and mitigating harms ahead of time,” and requiring controls “to secure beneficial uses and guard against misuse.”