
To protect consumers against discrimination in the digital world:

Make algorithmic decision-making processes more transparent

Algorithmically controlled processes need to be made transparent and understandable. Consumers should be able to see, for example, how ranking and comparison platforms arrive at their results, or why they are offered a different price than other customers. The European Commission therefore needs to create a binding legal framework that goes beyond the General Data Protection Regulation: consumers must know which data, including non-personal data, is taken into account in relevant algorithm-based decision-making processes and how that data is weighted. This is the only way to empower consumers to defend themselves against discrimination.

To achieve greater transparency and security and to prevent discrimination, an independent control system must be able to review the algorithms in use as well as the results and decisions they produce. Such a review can be designed in a way that protects companies’ trade and business secrets.

A key requirement, and an important element of any AI strategy, is for the European Commission to establish technical standards for the design of algorithm-based decision-making processes. This would make it possible to verify compliance with legal requirements and to ensure that the relevant processes can be audited (‘accountability by design’).
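As a purely illustrative sketch of what ‘accountability by design’ could mean in practice, the following Python example shows a hypothetical pricing decision that records which data points were used and how they were weighted. All names, weights and thresholds are invented for illustration and are not taken from the position paper; the point is only that such a record would allow an independent reviewer to audit a decision without the operator publishing its source code.

```python
# Illustrative sketch only: a hypothetical price decision that keeps an audit
# record of the inputs and weights behind each outcome. All names and values
# are invented for this example.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DecisionRecord:
    """Audit trail for a single algorithmic decision."""
    inputs: Dict[str, float]    # data points actually taken into account
    weights: Dict[str, float]   # how each data point was weighted
    score: float                # resulting score
    outcome: str                # final decision derived from the score
    notes: List[str] = field(default_factory=list)


# Hypothetical weighting scheme; in a regulated setting, the permitted inputs
# and their weighting would have to be disclosed to a reviewing body.
WEIGHTS = {
    "basket_value": 0.5,
    "loyalty_years": -0.3,      # negative weight lowers the surcharge score
    "region_price_index": 0.2,
}


def price_surcharge_decision(inputs: Dict[str, float]) -> DecisionRecord:
    """Compute a surcharge score and record exactly how it was reached."""
    used = {k: v for k, v in inputs.items() if k in WEIGHTS}
    score = sum(WEIGHTS[k] * v for k, v in used.items())
    outcome = "surcharge" if score > 1.0 else "standard_price"
    record = DecisionRecord(inputs=used, weights=dict(WEIGHTS),
                            score=round(score, 3), outcome=outcome)
    ignored = sorted(set(inputs) - set(WEIGHTS))
    if ignored:
        record.notes.append("Inputs outside the permitted list were ignored: "
                            + ", ".join(ignored))
    return record


if __name__ == "__main__":
    decision = price_surcharge_decision(
        {"basket_value": 2.4, "loyalty_years": 1.0, "browser_type": 1.0}
    )
    print(decision)
```

Under this sketch, the decision record (inputs, weights, score, outcome) could be handed to an independent control body for review, while the trade secret, the operator’s full system and code, stays with the company.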
