Microsoft unveils framework for responsible AI
22 June 2022

In a blog post, Microsoft Chief Responsible AI Officer Natasha Crampton outlined the company’s “Responsible AI Standard,” which phases out use of automated tools that infer an individual’s emotional state, or attributes such as gender and age, from facial features. Crampton said the standard sets goals that teams developing AI systems must meet to uphold values including privacy, security, transparency and accountability. She called the standard “actionable and concrete,” with “approaches for identifying, measuring, and mitigating harms ahead of time,” and said it requires controls “to secure beneficial uses and guard against misuse.”