EU AI Proposal reviewed

Evotegra GmbH
May 4, 2021

--

While ignoring the biggest threat AI poses to European democracy, the EU Commission's proposal for harmonised AI regulation already fails its own objectives and principles and has proved itself wrong. It claims to protect European society from social scoring, algorithmic discrimination and mass surveillance, while in fact paving the way for such technologies. Beyond failing its objectives, it adds a high degree of legal uncertainty that will further slow the already slow adoption of a key economic technology within the European Union.

The intention of the proposal
The intention of the proposal is to protect European citizens from automated mass surveillance, manipulation and discrimination by AI. The regulation is based on the assumption that AI technology will be mass-adopted in the future and is therefore a preventive attempt to avoid potential negative effects.

The current state of European AI industry
Compared to the USA or China, the European AI industry is lagging behind by about a decade, if not two. What probably every AI company in Europe knows is that the combination of high fines, a lack of existing regulatory compliance certifications and unspecific definitions leads to a cacophony of GDPR interpretations all across Europe. As a consequence, the high degree of legal uncertainty induced by GDPR delays most AI projects or stalls them completely.
Investment in AI startups and technology is just a fraction of what the USA or China invest; the technological gap will therefore widen further and lead to a systematic technological dependency in this key technology area.

The proposal proved itself wrong
The regulation is based on the OECD AI definition which includes:
“c.) Statistical approaches, Bayesian estimation, search and optimisation methods” (Annex 1)
But these algorithms have been used in software development for decades. It is therefore evident that the existing regulation already suffices to prevent the dystopian scenarios now used as a pretext for the EU AI regulation proposal: as of today, Article 22 GDPR explicitly forbids any automated decision that has a significant impact on the affected person.
This also shows that the use of AI in HR is in fact not a high-risk use case, as there has been no incident of systematic discrimination by such systems.
The proposal has already proved itself wrong.
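To illustrate how broad that definition is, consider a deliberately mundane sketch: a textbook Beta-Binomial estimate of a conversion rate, the kind of statistics that has shipped in ordinary software for decades. Under Annex I letter (c) even these few lines would arguably count as an “AI system” (the numbers are made up for illustration):

```python
# A deliberately mundane example: Bayesian estimation of a conversion rate
# with a Beta-Binomial model. Under Annex I (c) of the proposal
# ("Statistical approaches, Bayesian estimation, search and optimisation
# methods") even this handful of lines would arguably qualify as an "AI system".
from scipy import stats

clicks, impressions = 42, 1000          # hypothetical observed data
prior_a, prior_b = 1, 1                 # uniform Beta(1, 1) prior

# Posterior is Beta(prior_a + clicks, prior_b + impressions - clicks)
posterior = stats.beta(prior_a + clicks, prior_b + impressions - clicks)

print(f"Estimated conversion rate: {posterior.mean():.3%}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```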

The proposal fails its objective
Under the regulation an arbitrary selection of algorithms is subject to regulation while others are not. By in effect legalising the remaining algorithms for algorithmic discrimination, social scoring and mass surveillance, the regulation fails its objective of protecting European society from these kinds of applications.

The proposal fails its own principles
1.) To measure bias, you first have to define a new European “standard bias” for all use cases; such a standard will itself discriminate and as such contradicts the intention of the regulation (see the sketch after this list).
2.) “Free of discrimination” means the same rights and duties for everyone. But by making an arbitrary choice of which algorithms to regulate, the EU Commission is in fact discriminating against all algorithms covered by the OECD AI definition.
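To make point 1 concrete, here is a minimal sketch with purely hypothetical numbers: “measuring bias” only works once a specific fairness metric has been chosen as the standard, and common metrics applied to the same decisions can point in opposite directions.

```python
# Minimal sketch: "measuring bias" requires choosing a concrete metric first,
# and common metrics can contradict each other. All numbers are hypothetical.

def selection_rate(selected, total):
    return selected / total

# Hypothetical outcomes of an automated decision for two groups
group_a = {"selected": 60, "total": 100, "qualified_selected": 55, "qualified": 70}
group_b = {"selected": 40, "total": 100, "qualified_selected": 38, "qualified": 50}

# Metric 1: demographic parity difference (gap in overall selection rates)
dp_diff = (selection_rate(group_a["selected"], group_a["total"])
           - selection_rate(group_b["selected"], group_b["total"]))

# Metric 2: equal opportunity difference (gap in selection rates among the qualified)
eo_diff = (group_a["qualified_selected"] / group_a["qualified"]
           - group_b["qualified_selected"] / group_b["qualified"])

print(f"Demographic parity difference: {dp_diff:+.2f}")  # +0.20 -> looks biased
print(f"Equal opportunity difference:  {eo_diff:+.2f}")  # +0.03 -> looks nearly fair
```

Whichever metric is picked effectively becomes the new European standard bias.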

Free of bias never existed and never will
No software, no AI and especially no human will ever be free of bias. This can best be seen in the regulation itself, which exposes a significant amount of diffuse technology fear and, as such, bias. Anyone trying to exclude potentially biased systems thereby disqualifies themselves from making such decisions.
Beyond that, there are many use cases where bias cannot be fixed. Data for certain rare minorities or people with rare disabilities, for example, might not even exist. Other systems are trained on enormous amounts of data: GPT-3, for instance, was trained on roughly 45 TB of text, including Wikipedia. How is that inherent bias to be fixed? By writing more Commission-conformant Wikipedia articles to unbias GPT-3? The same problem applies to AI trained on historic data: history cannot be rewritten to meet the bias standards of the EU Commission.

AI reflects society
Data is a product of society and as such contains all of society's latent biases and discrimination. As a statistical method, AI does not inherently discriminate; it reflects whatever bias is contained in the data back at the society the data comes from. Demanding that AI be the better human does not solve the problem, and restricting its use will not prevent discrimination.
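The following minimal sketch, using made-up “historic” decisions, shows the point: a purely statistical model has no opinion of its own; it simply mirrors whatever skew is present in its training data.

```python
# Sketch: a purely statistical model reproduces whatever skew is present in
# its training data. The "historic" decisions below are made up.
from collections import defaultdict

historic_decisions = (
    [("group_a", 1)] * 70 + [("group_a", 0)] * 30 +   # 70% positive outcomes
    [("group_b", 1)] * 40 + [("group_b", 0)] * 60     # 40% positive outcomes
)

# "Training": estimate the positive rate per group from the data
counts = defaultdict(lambda: [0, 0])                   # [positives, total]
for group, outcome in historic_decisions:
    counts[group][0] += outcome
    counts[group][1] += 1

model = {group: pos / total for group, (pos, total) in counts.items()}

# The model simply mirrors the historic skew back at us
print(model)   # {'group_a': 0.7, 'group_b': 0.4}
```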

Regulation does not lead to legal certainty
The fatal effects on the European data economy of yet another regulation based on philosophical terms can already be seen today. A study by IW Köln revealed what most AI companies in Europe experience on a daily basis:
“A total of 85 percent of all companies surveyed describe “gray areas” under data protection law in general as an obstacle to the commercial use of data; the lack of legal certainty in the anonymization of data is cited by 73 percent.”

To this day the EU discriminates against its own enterprises. Under GDPR, for example, the EU Commission writes standard contractual clauses for non-EU companies, and European citizens can assert their rights with any data protection authority. Yet although they are even part of the regulation (Article 42(1) GDPR), not a single one of the compliance certifications meant to give legal certainty to German enterprises exists to this day.
And while GDPR is supposed to harmonise data privacy regulation across Europe, the fact that in Germany alone 16 data protection officers, each driven by their state government, produce a cacophony of GDPR interpretations imposes legal risks on any exchange of data between Munich and Berlin.

Legalizing the biggest threat to democracy
I consider the use of AI for psychological analysis and mass-scale behaviour prediction in social networks the greatest threat AI systems pose to society and democracy today. While it is a proven fact that elections have been manipulated through social networks, the regulation proposal fully ignores this risk.

Matrix regulation
The EU Commission's AI proposal introduces a horizontal, algorithm-specific regulation across all industries, while most potential use cases of AI are already regulated within their specific vertical industry. This creates an inherent conflict of regulations wherever vertical and horizontal rules contradict each other.

A high degree of legal uncertainty
A key chapter of the regulation is the list of prohibited practices (Title II). It starts with:
“Prohibited … the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm”

Such a paragraph in an EU regulatory proposal is simply absurd and a perfect example of what happens when diffuse technology fears are amplified by political pressure and a lack of expertise. Affecting thousands of European key technology companies, this erratic paragraph is irresponsible and enough on its own to disqualify the EU Commission's regulation proposal as a whole.

Solutions
The solution to potential discrimination in automated decision processes is as simple as it is obvious. It is impossible to prevent every kind of discrimination before it occurs; but discrimination can be prevented from taking effect.

To give legal certainty to those affected by potential discrimination, independent of any specific algorithm, every person subject to an automated decision process must be informed. To avoid human bias induced by recommendation systems, this information right must apply regardless of whether a human is in or on the loop. Affected persons must have the right to object to automated decisions and the right to a human revalidation.
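As a minimal sketch of that workflow (recording the decision, informing the affected person, accepting an objection and triggering a human revalidation), the following illustration uses hypothetical names and is in no way part of the proposal:

```python
# Hypothetical sketch of the proposed rights: information, objection and
# human revalidation for every automated decision. Names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecision:
    subject_id: str
    decision: str
    informed_at: Optional[datetime] = None   # information duty
    objection: Optional[str] = None          # right to object
    human_review: Optional[str] = None       # right to human revalidation

    def notify_subject(self) -> None:
        # In practice: send a notice naming the automated process and its outcome
        self.informed_at = datetime.now(timezone.utc)

    def object(self, reason: str) -> None:
        self.objection = reason

    def revalidate(self, reviewer_decision: str) -> None:
        # A human reviewer confirms or overturns the automated outcome
        self.human_review = reviewer_decision

# Usage: decision -> notification -> objection -> human revalidation
d = AutomatedDecision(subject_id="applicant-123", decision="rejected")
d.notify_subject()
d.object("Relevant experience was not considered")
d.revalidate("accepted after manual review")
print(d)
```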

Any algorithm-specific regulation will be arbitrary and fail its objective. Regulation must therefore cover use cases and be independent of specific algorithms. This means that, instead of regulating in a horizontal fashion, the existing vertical regulations must be adapted, where necessary, to cover the potential of AI and future technologies.

Regulation based on abstract philosophical terms is easy and good for impressing voters. Demanding “bias-free” training data from European AI companies, for example, is simple when you never have to say what “bias-free” is supposed to mean or how to measure it. GDPR has also shown us that, in the end, European companies and institutions are left alone with the interpretation of philosophical terms, leading to a wildfire of contradictory interpretations that defeats the intrinsic intention of any European regulation. There must therefore be legal certainty for European companies that follow a risk-dependent “best effort” approach in their specific domain, and, in order not to repeat the mistakes made with GDPR, regulatory compliance certifications must exist from the day this regulation comes into effect.

The author
I am Tobias Manthey, manager of Evotegra GmbH, a provider of customer-specific AI solutions for industry and automotive. While all statements in this article represent my personal opinion alone, I represent the German AI Association as regional director for South-West Germany, where I also lead the working group on the data economy and give presentations on AI ethics and explainability.
