Daniela Tafani, GDPR could protect us from the AI Act. That’s why it’s under attack
Thankfully, the AI Act will not be isolated within the EU law system, but rather, it will coexist with the GDPR and the Charter of Fundamental Rights.
Daniel Leufer, Fanny Hidvegi, Alessia Zornetta
Abstract: So-called ‘AI’ is a derivative of a surveillance business model that allows Big Tech to provide extrajudicial surveillance services for both civil and military purposes. In countries where such generalised and pervasive surveillance is banned, Big Tech has spread a family of narratives – including the myths of technological exceptionalism and legal vacuum – to evade the law and continue to distribute outlawed products. Regulatory capture has produced the AI Act, under which fundamental rights can be violated with impunity as long as there is no foreseeable harm. So Big Tech’s next target is any norm that still protects fundamental rights. Thus, the narrative of an alleged legal vacuum is now being replaced by a narrative that, under the pretext of ‘regulatory certainty’ and vague appeals to competitiveness and so-called ‘innovation’, aims to wipe out those regulations that, like the GDPR, still stand in defence of fundamental rights. It is therefore no coincidence that an open letter from major tech companies and Mario Draghi’s report, The future of European competitiveness, almost simultaneously call for the removal of those parts of regulation – such as certain provisions of the GDPR – on whose systematic violation Big Tech’s business model relies. The attacks on the GDPR are covert attacks on the fundamental rights that the GDPR protects, not attacks on the alleged obstacles that stifle innovation.
Keywords: AI Act ‧ GDPR ‧ fundamental rights ‧ regulatory capture ‧ privacy ‧ surveillance ‧ Big Tech
Reference: Daniela Tafani, GDPR could protect us from the AI Act. That’s why it’s under attack, in «Bollettino telematico di filosofia politica», 2024, https://commentbfp.sp.unipi.it/gdpr-could-protect-us-from-the-ai-act-thats-why-its-under-attack/, doi:10.5281/zenodo.14002329
So-called ‘artificial intelligence’ is “a derivative of surveillance”: machine learning systems – “probabilistic systems that recognize statistical patterns in massive amounts of data” – require massive computational infrastructures and access to constantly updated data streams that only Big Tech can afford. These systems are deeply intertwined with Big Tech’s surveillance business model, which allows tech monopolies to offer public and private actors the promise of algorithmic profiling and extrajudicial surveillance services, for both civil and military purposes.
In countries where generalised and pervasive surveillance is illegal, the business model of large technology companies is based on a ‘legal bubble’, i.e. on the systematic violation of legally protected rights and on the bet that the law will give way. As Marco Giraudo, Eduard Fosch-Villaronga and Gianclaudio Malgieri note, companies are betting that the illegal commodification of all citizens’ personal data and metadata will not lead to sanctions, but to the abandonment of legal protection of the fundamental rights violated by this practice.
To this end, Big Tech has disseminated a number of narratives – ideas conveyed in the form of stories – that shape public perceptions of the relationship between ethics, politics, law and technology. In this way, tech monopolies address the conflict between their private interests and the public interest not by overtly imposing their will, but by making certain narratives part of the conventional wisdom and determining the underlying frameworks and axioms of any public discussion about artificial intelligence. This is a form of regulatory capture that operates in the cultural dimension: by distorting the shared conception of what is in the public interest and suppressing the possibility of conceiving alternatives, it bends public policies in favour of the very industries they are supposed to regulate, to the detriment of the real public interest and without significant dissent or protest. Indeed, conflict is suppressed in advance, through narratives that obscure the interests at stake and produce a general consensus, accompanied by a tendency to dismiss as retrograde or Luddite anyone who does not agree with the established approach.
The myths of technological exceptionalism and legal vacuum belong to the family of narratives spread to protect an illegal business model, ensuring the general acceptance of mass surveillance as inevitable and making people forget that the right to privacy is the right to “be left alone”, i.e. the right to demand that no data at all be collected about oneself in certain spheres of one’s life. The anthropomorphisation of AI systems is promoted by companies because it allows them to market outlawed products and services based on immature, opaque and fragile technologies, claiming that existing laws do not apply to ‘artificial intelligence’ systems because of their novelty and exceptional nature. European regulators believed that new laws written specifically for AI were genuinely needed, and consulted as experts the lobbyists of the very companies they were supposed to regulate, with whom – being captured – they already shared the cultural patterns and basic assumptions about the objectives the regulation itself was to pursue. The AI Act was thus dictated by the “lobbying ghost in the machine” of regulation, and that is why it offers nothing more than ‘motherhood and apple pie’: aspirational “noble principles” whose implementation is delegated “to AI providers themselves, without adequate oversight or redress mechanisms”.
Written in the exclusive interest of tech monopolies, the AI Act is an example of the neoliberal attack on rights. From a legal point of view, as Roberto Caso notes, it is a “normative indecency” which “provides less protection where the risk is higher” and always follows the same recipe:
– Multiply regulatory provisions, write them in a verbose and incoherent manner, and take power away from the ordinary judiciary (increasing that of countless administrative authorities).
– Spread regulatory declamations of love for fundamental rights, democracy and the rule of law all over the place (as a smokescreen).
– Betray the declamations with operational rules based on cosmetic principles and malignant exceptions.
Even the most notorious AI snake oil systems, such as emotion recognition systems, whose pseudo-scientific nature is acknowledged in the AI Act itself, are subject to only “a very limited ban”, as Nathalie A. Smuha notes, “namely in the areas of workplace and education institutions and with exceptions where the system is used for ‘medical or safety’ reasons”, thereby allowing the use of such invasive and non-functioning systems in sensitive areas, such as policing. Thus, as Enrico Pelino notes, “glaring contradictions emerge with regard to the vaunted centrality of the human being”.
In the AI Act, the very concept of fundamental rights is operationally redefined in such a way as to reduce it to an empty shell. The protection of fundamental rights is provided through a “Fundamental Rights Impact Assessment for High-Risk AI Systems” (Art. 27), which merely requires measures to be taken against specific risks of harm to categories of individuals and groups that may be affected by the deployment of each system. Instead of prohibiting the violation of fundamental rights, the AI Act prohibits only those violations of fundamental rights that might cause harm. In this way, any violation ‘for the good’ – no matter how authoritarian, despotic, paternalistic and threatening to fundamental rights – can be considered legitimate.
For example, AI systems that employ subliminal techniques or intentionally manipulative or deceptive techniques with the aim or effect of significantly distorting a person’s behaviour are prohibited only if they are likely to cause significant harm to persons (Art. 5). Subliminal or intentionally manipulative or deceptive techniques are thus permitted if they are ‘for the good’, however defined.
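The conditional structure criticised here can be made explicit in a deliberately reductive sketch (an illustrative reconstruction of the logic described above, not of the legal text itself; the function and parameter names are hypothetical):

```python
def prohibited_under_art_5(uses_manipulative_techniques: bool,
                           likely_significant_harm: bool) -> bool:
    """Illustrative reconstruction of the harm-based test discussed above.

    Hypothetical names: this encodes only the conjunction described in
    the text, not the wording of the AI Act itself.
    """
    # Manipulation alone does not trigger the ban: likely significant
    # harm is a necessary condition for the prohibition.
    return uses_manipulative_techniques and likely_significant_harm

# A manipulative system deemed harmless ('for the good') stays permitted:
assert prohibited_under_art_5(True, False) is False
```

As the sketch makes plain, it is harm, not the violation of a right, that does the prohibitive work.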
The AI Act’s approach to the protection of fundamental rights is based on a profound misconception of the law: for the law to protect a right means to protect against the violation of that right, as Mireille Hildebrandt reminds us, and harm is by no means a condition for the violation of a fundamental right. Under the AI Act’s harm-based approach to fundamental rights impact assessment, fundamental rights can be violated with impunity as long as there is no foreseeable harm. Requiring only a risk impact assessment is like being satisfied with asking whether a despot is benevolent or malevolent: freedom, understood as the absence of domination, is lost whatever the answer.
In a liberal State, interference in the sphere of personal freedoms cannot be justified solely on the grounds that it is harmless or even beneficial, since such deprivations of liberty may benefit someone while at the same time depriving them of their rights to liberty, dignity and autonomy. The legal protection of certain fundamental rights requires that any teleological approach be subordinated to the prior respect of those rights. The AI Act, with its teleological approach based on the risk of harm, should therefore invariably be subordinated to compliance with laws such as the GDPR, whose approach is based on the protection of fundamental rights.
As Gianclaudio Malgieri and Cristiana Santos argue, when assessing impacts on fundamental rights, we should first focus on “how the legislator decided to interpret, implement, and substantiate the specific fundamental rights”:
This first element that we consider to assess whether an interference is serious (i.e. tending towards a proper violation) is the existence of an infringement of some more or less specific rules that implement fundamental rights in practice. To make an example, to understand if there is a proper violation of the fundamental right to data protection (Art. 8 EU Charter) we can first consider if there is an infringement of one of the provisions of the secondary legislation implementing it, i.e. the GDPR.
Because Big Tech’s narratives are instrumental in protecting its business model, new ones can spread as quickly as a marketing campaign is launched. Now that the AI Act has been passed, with an approach that erases fundamental rights, Big Tech is halfway there, and its targets are now the norms that still protect those rights. So the narrative of an alleged legal vacuum is now being replaced by a narrative that, under the pretext of ‘regulatory certainty’ and vague appeals to competitiveness and so-called ‘innovation’ – a buzzword so abused that it is even invoked to defend concentration camps for migrants as ‘innovative solutions’ – aims to wipe out those regulations that, like the GDPR, still stand in defence of fundamental rights.
It is therefore no coincidence that an open letter from major tech companies and Mario Draghi’s report, The future of European competitiveness, almost simultaneously call for the removal of those parts of regulation – such as certain provisions of the GDPR – on whose systematic violation Big Tech’s business model relies:
the EU faces now an unavoidable trade-off between stronger ex ante regulatory safeguards for fundamental rights and product safety, and more regulatory light-handed rules to promote EU investment and innovation, e.g. through sandboxing, without lowering consumer standards. This calls for developing simplified rules and enforcing harmonised implementation of the GDPR in the Member States, while removing regulatory overlaps with the AI Act.
At the moment there seem to be no forces capable of opposing the neoliberal oligarchic drive towards increased surveillance and control, the militarisation of civil systems, the use of predictive systems of a totalitarian nature and the reduction, first de facto and then de jure, of the protection guaranteed by fundamental rights. But even monarchies by divine right once seemed to be eternal.
References
Maurizio Borghi, L’Europa futura, meno diritti per più competitività, in «Centro per la Riforma dello Stato», October 11, 2024, https://centroriformastato.it/leuropa-futura-meno-diritti-per-piu-competitivita/.
Roberto Caso, Dall’età dell’innocenza all’età dell’indecenza (normativa), Le nuove fonti di regolazione della IA: la Convenzione del Consiglio d’Europa del 17 maggio 2024 e il DDL governativo in materia di IA, Scuola di Alta Formazione dell’Università L’Orientale, Procida, October 3-5, 2024, https://www.robertocaso.it/wp-content/uploads/2024/10/Roberto_Caso_Eta-dellindecenza_Procida_2024.pdf.
Corporate Europe Observatory, The lobbying ghost in the machine. Big Tech’s covert defanging of Europe’s AI Act, February 23, 2023, https://corporateeurope.org/en/2023/02/lobbying-ghost-machine.
Marco Giraudo, Eduard Fosch-Villaronga, Gianclaudio Malgieri, Competing Legal Futures – “Commodification bets” all the way from personal data to AI, in «German Law Journal», 2024, https://doi.org/10.1017/glj.2024.29.
Chris Hedges, The Secret History of Neoliberalism (interview with George Monbiot), in «The Chris Hedges Report», October 9, 2024, https://chrishedges.substack.com/p/the-secret-history-of-neoliberalism.
Mireille Hildebrandt, Beyond the GDPR?, September 22, 2023, https://www.cohubicol.com/assets/uploads/response-hildebrandt-purtova.pdf.
Heidy Khlaaf, Sarah Myers West, Meredith Whittaker, Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance, and Targeting, 2024, https://arxiv.org/abs/2410.14831.
Daniel Leufer, Fanny Hidvegi, Alessia Zornetta, The Pitfalls of the European Union’s Risk-Based Approach to Digital Rulemaking, in «UCLA Law Review», 2024, https://www.uclalawreview.org/the-pitfalls-of-the-european-unions-risk-based-approach-to-digital-rulemaking/.
Barry C. Lynn, Max von Thun, Karina Montoya, AI in the Public Interest: Confronting the Monopoly Threat, Open Markets Institute, November 15, 2023, https://www.openmarketsinstitute.org/publications/report-ai-in-the-public-interest-confronting-the-monopoly-threat.
Gianclaudio Malgieri, Cristiana Santos, Assessing the (Severity of) Impacts on Fundamental Rights, June 25, 2024, http://dx.doi.org/10.2139/ssrn.4875937.
Enrico Pelino, Fiducia nelle istituzioni: l’AI Act e i diritti fondamentali, narrazione o realtà?, in «Digeat», September 19, 2024, https://digeat.info/articolo-rivista/digeat32024-pelino/.
Andrea Saltelli, Dorothy J. Dankel, Monica Di Fiore, Nina Holland, Martin Pigeon, Science, the endless frontier of regulatory capture, in «Futures», 2022, 135, https://doi.org/10.1016/j.futures.2021.102860.
Nathalie A. Smuha and Karen Yeung, The European Union’s AI Act: beyond motherhood and apple pie?, forthcoming in Nathalie A. Smuha (ed.), The Cambridge Handbook on the Law, Ethics and Policy of Artificial Intelligence, Cambridge University Press, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4874852.
Nathalie A. Smuha, The paramountcy of data protection law in the age of AI (Acts), in Two decades of personal data protection. What next?, EDPS 20th Anniversary, 2024, pp. 226-239, https://www.edps.europa.eu/data-protection/our-work/publications/book/2024-06-20-two-decades-personal-data-protection-whats-next_en.
Daniela Tafani, Do AI systems have politics? Predictive optimisation as a move away from the rule of law, liberalism and democracy, in «Ethics & Politics», 2024, 26, 2, https://zenodo.org/records/10866778.
Ben Tarnoff, What is Privacy for?, in «The New Yorker», October 5, 2024, https://www.newyorker.com/culture/the-weekend-essay/what-is-privacy-for.
W. Y. Li, Regulatory capture’s third face of power, in «Socio-Economic Review», 2023, 21, 2, https://doi.org/10.1093/ser/mwad002.
Samuel D. Warren, Louis D. Brandeis, The Right to Privacy, in «Harvard Law Review», December 15, 1890, 4, 5, pp. 193-220, https://www.cs.cornell.edu/~shmat/courses/cs5436/warren-brandeis.pdf.