Microsoft unveils framework for responsible AI

22 June 2022

In a blog post, Microsoft Chief Responsible AI Officer Natasha Crampton outlined the company's "Responsible AI Standard," which eliminates the use of automated tools that can infer an individual's emotional state and attributes like gender, age and other facial features. Crampton said the standard provides goals that teams developing AI systems must meet to uphold values, including privacy, security, transparency and accountability. She called the standard "actionable and concrete," with "approaches for identifying, measuring, and mitigating harms ahead of time," and requiring controls "to secure beneficial uses and guard against misuse."