NIST seeks to quantify user trust in AI
22 June 2021

Wired reports the U.S. National Institute of Standards and Technology is looking to quantify user trust in artificial intelligence. NIST is accepting public comments until 30 July, saying it wants to identify areas of mistrust in AI and promote informed decisions about its use. A user trust score would account for characteristics of the individual using an AI system, such as age, gender, cultural beliefs and experience with AI, while a trustworthiness score would assess the system's technical attributes.