The EU’s artificial intelligence law came into effect last August. However, as Gloria González Fuster explains, there are more questions left than the answers about how the EU’s approach to AI actually works.
If there was a race to regulate artificial intelligence (AI), the European Union won it. Perhaps the race never really existed, or existed only on the political agenda of EU policymakers. Either way, the urgency to regulate AI was being felt in Brussels by 2019, when Ursula von der Leyen, then the newly elected president of the European Commission, announced that she would put forward AI legislation within her first 100 days.
The EU ultimately did push ahead forcefully, adopting pioneering horizontal rules before anyone else. But in 2025 we are still waiting for clarity about what these rules mean, and where exactly they are taking us.
A turning point for AI governance
It was in 2021 that the European Commission put forward its legislative proposal to regulate AI. The proposal marked a turning point in AI governance, explicitly moving it into the legal realm.
Until then, discourses on AI had been characterised by references to ethics and to the need for a vaguely outlined “ethical AI.” With that proposal, the Commission accepted the need for hard law. It acknowledged that even if the EU wanted to promote AI, which it certainly did and still does, AI could have a negative impact on fundamental rights such as privacy, data protection, non-discrimination and freedom of expression, to name a few, and that a way had to be found to square vigorous support for AI with the protection of those rights.
The relationship between the AI Act and fundamental rights is often misunderstood. Looking at the broader picture of EU AI policy, the AI Act stands out as the measure whose primary goal is to anchor AI in EU values, framing and constraining developments by reference to EU fundamental rights.
From data spaces to AI factories, many other European measures pursue other goals centred on supporting AI. At the same time, it is true that the AI Act itself is far from concerned with fundamental rights alone. Its stated objective is to improve the functioning of the internal market, ensuring a high level of protection of “health, safety and fundamental rights” against the harmful effects of AI, while “supporting innovation” and promoting the uptake of “trustworthy” AI.
It is quite a goal, bringing together disparate elements that at times pull in almost opposite directions. This heterogeneity runs through the provisions of the AI Act. Product safety rules sit shoulder to shoulder with the rule of law. A very diverse range of stakeholders are given roles, from international standardisation bodies to national human rights organisations. A fundamental rights impact assessment applies in some (limited) scenarios, waving at the EU Charter. And a “regulatory sandbox” designed to help innovators is also provided, gently conceding the point that innovation is essential.
Deferred clarity
The risk-based approach of the AI Act exemplifies its pragmatism. The premise is that some AI systems simply ought to be banned because they carry risks too great for democratic societies to afford. This is, notably, the case for the use of facial recognition in public places by the police.
These prohibited systems sit at the apex of the now-popular pyramid image of the Act, beneath which lie the AI systems subject to the most detailed regulation: those considered “high risk.” These pose the kind of risk EU lawmakers believe we can actually take, provided deployers and developers comply with a set of rules.
The AI Act’s pragmatism and complexity may seem disconcerting, if not off-putting. Yet they are not the main novelty in EU law. Nor is the fact that the Act appears to take with one hand while protecting fundamental rights with the other (EU data protection law is built on similar foundations, and it is doing reasonably well). What really distinguishes the AI Act from other EU digital laws is, rather, its method of deferring clarity on definitions, rules and solutions to a later point, despite regulating a new field, or perhaps precisely because of it. Sometimes this is intentional, sometimes not.
A marathon, after all
In the spring of 2024, when EU lawmakers reached agreement on the final text of the AI Act, almost everyone in Brussels seemed very pleased with themselves. It was certainly an achievement for the Spanish presidency, which had worked hard to get there, and von der Leyen, keen to portray herself as a digital trendsetter, could be happy to have won her own game of being the first to regulate AI. Negotiators were still patting each other on the back when some of the cracks in the negotiated outcome began to become visible.
The most visible cracks concern the staged application of the Act: parts of it already apply, including the provisions on AI literacy, yet many pending questions remain about how to interpret those provisions, starting with the definition of an AI system itself. These are questions that, arguably, should have been settled already.
A genuine answer to these gaps in legal certainty might emerge, perhaps, once it is clear which authorities are responsible for the implementation and enforcement of the AI Act. For now, patches have appeared in a rather rough way: if hard law ends up being fleshed out through sloppy soft-law initiatives, the point of having rushed to regulate AI through hard law in the first place is called into question.
Today, much of the pressure falls on the European Commission. The Commission, acting partly in the guise of the AI Office, secured important powers under the AI Act. It is not yet entirely clear whether such enthusiasm amounted to over-commitment. Under the AI Act, the Commission not only has the authority to adopt important delegated acts; it is also responsible for issuing certain guidance, notably in the sense of helping everyone understand their rights and obligations in time.
The publication of guidelines on prohibited AI practices only after such practices had already been banned, coupled with the Commission’s insistence that the guidelines may be unilaterally revised or withdrawn, is not a reassuring sign. Hopefully, the pace will improve, so that the EU can also win the game of applying its own rules quickly and effectively, and show that it understands that AI regulation will always be a marathon.
Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: Alexandros Michailidis / shutterstock.com