Greetings from Brussels!
This week the European Consumer Organisation (BEUC) published a Consumer AI survey report that I think is worth a mention and some reflection. For background, separate consumer surveys were conducted through nine EU member state consumer organizations (BEUC members) to ascertain how consumers think about AI: more specifically, what they understand about AI deployment, what their concerns are, and their views on value, regulation and consumer rights. A summary report can be found here, and it is well worth the read.
Generally speaking, consumers acknowledged that AI can bring benefits for individuals and society. Even so, when asked about mainstream AI-enabled services, such as home virtual assistants, personalized content or e-commerce advertising, the jury seemed "undecided": 45% of European consumers said such services added no value to their lives at all.
One of the main findings was that consumers were concerned that AI could lead to increased abuse of personal data (perhaps no surprise there). In fact, when it comes to particularly intrusive technologies, such as voice or facial recognition, many consumers (68% in Germany and 71% in Belgium, to name two countries) have little trust that their privacy is protected.
A similar trend emerged when consumers were asked whether they trusted that their privacy would be protected when using AI devices such as wearables. From the north of Europe down to the south, between 45% and 50% expressed a significant lack of trust; this low level of trust seems to prevail regardless of national culture.
As for regulation, a significant number of respondents said they do not think current legislation can effectively regulate AI-based activities. In several EU countries, more than 55% expressed low trust and confidence in national authorities to regulate AI or put oversight mechanisms in place. Just one-fifth of respondents said that current rules protect them from the potential harm AI poses. Where there is broad consensus (more than 75% in most cases) is on the need for transparency and to be informed when exposed to automated decision-making. BEUC concludes that, in terms of the rights consumers think they should have, most respondents want “to be informed and have control over the automated processes that concern them and be free to say ‘no’ to automated decision making.” In many respects, this supports the premise that, as individuals, we would still prefer our decisions to be emotionally and intellectually driven rather than shaped by automated influences.
Monique Goyens, director general of BEUC, summed up her views poignantly, stating: “It is a concern that a majority of consumers do not trust that their privacy is protected when using AI tools such as smart watches or voice assistants. Consumers are worried about the risk that companies and governments can deploy AI to manipulate their decisions and that AI will lead to unfair discrimination. EU legislators need to take these concerns seriously and make sure consumers are protected and can trust this technology.”
Making the case for more consumer protection regulation, she added, “Current consumer protection, privacy and liability rules are simply not fit for purpose to protect consumers from the negative consequences of AI. The EU is planning to propose rules on AI. They are urgently needed. Consumers must be protected from risks such as discrimination or manipulation.”
This is an important piece of consumer research, and it feeds directly into EU plans for a law on AI; work in this area appears to be in line with people’s expectations. The EU has carried out feasibility work and held a public consultation as part of a broad stakeholder consultation process. Following an in-depth analysis and detailed impact assessment, the European Commission will present a regulatory proposal on AI, though it is not known at this stage when that proposal will be presented.