Exploring the privacy and ethical issues of emotion-detection tech

Artificial intelligence and machine learning technologies are developing rapidly across virtually all sectors of the global economy. One nascent field is empathic technology, which, for better or worse, includes emotion detection. It is estimated that emotion-detection technology could be worth $56 billion by 2024. Yet judging a person's emotional state is inherently subjective, and the practice raises a host of privacy, fairness, and ethical questions. Ben Bland has worked in the empathic technology space in recent years and now chairs the Institute of Electrical and Electronics Engineers P7014 Working Group, which is developing a global standard for the ethics of empathic technology. We recently caught up with him to discuss the pros and cons of the technology and his work with the IEEE.