Microsoft unveils framework for responsible AI
22 June 2022

In a blog post, Microsoft Chief Responsible AI Officer Natasha Crampton outlined the company's "Responsible AI Standard," which ends the use of automated tools that can infer an individual's emotional state and attributes such as gender, age and other facial features. Crampton said the standard provides goals that teams developing AI systems must meet to uphold values including privacy, security, transparency and accountability. She called the standard "actionable and concrete," with "approaches for identifying, measuring, and mitigating harms ahead of time," and requiring controls "to secure beneficial uses and guard against misuse."
