Disappointing: Proposed regulation of Artificial Intelligence protects consumers only very selectively

Statement by Klaus Müller, Executive Director of the Federation of German Consumer Organisations (Verbraucherzentrale Bundesverband – vzbv)

Klaus Müller, Executive Director, vzbv

Credit: Corinna Guthknecht - vzbv

The European Commission has published its long-awaited proposal on the regulation of artificial intelligence (AI). It protects consumers only very selectively, leaving people unprotected in many areas. Contrary to vzbv's hopes, the rules will not increase consumers' trust in AI: the proposal contains hardly any measures to improve transparency and traceability for consumers. Independent audits are foreseen only for a few high-risk AI systems. And the scope of application is so narrow that the rules fail to cover a number of high-risk AI applications. Klaus Müller, vzbv's Executive Director, comments:

vzbv welcomes the European Commission's plan to introduce rules for AI applications. Unfortunately, the approach is too weak and not ambitious enough: it focuses only on a limited set of high-risk AI systems, while all other systems, including "medium" risk systems, are treated negligently. As a result, consumers will remain unprotected from damages caused by AI in many areas. For example, the proposal neglects economic damages by AI systems that systematically deny consumers access to services or exclude them from entire markets on the basis of opaque personality analysis.

In terms of transparency for consumers, the proposal is equally disappointing: it merely envisages labelling requirements that will apply, for instance, to systems for emotion recognition or when AI interacts with people. But for consumers to be able to exercise their rights, traders must provide them with significantly more information. This includes information on the risks, accuracy and robustness of a system, as well as the data set on which a decision is based. Anything less is half-hearted at best.

Instead of checks by independent auditors, providers are largely entrusted with assessing for themselves whether their high-risk AI systems comply with the regulation. This damages trust in and acceptance of AI in general.

Although the proposal bans the use of some AI systems that manipulate, exploit, or physically and psychologically harm the elderly, children, and people with disabilities, all other consumer groups remain unprotected. Yet economic harm, such as that inflicted by forced or overpriced product sales, can affect every consumer.

The European Parliament and the Council of the European Union must now improve this proposal. They must ensure more transparency and comprehensibility for consumers and make independent checks mandatory for all high-risk AI systems.

Learn more

Artificial Intelligence needs real world regulation

Position paper of the Federation of German Consumer Organisations (vzbv) on the European Commission’s proposal for an Artificial Intelligence Act (AIA).

PDF | 634.88 KB



Press office

Service for journalists

presse@vzbv.de +49 30 258 00-525