EU countries adopt a common position on the regulation of artificial intelligence

EU ministers gave the green light to a general approach to AI law at the Telecoms Council meeting on Tuesday 6 December. EURACTIV gives an overview of the main changes.

The Artificial Intelligence Act is a landmark legislative proposal to regulate artificial intelligence technology based on its potential for harm. The Council of the EU is the first co-legislator to complete the first stage of the legislative process, with the European Parliament due to finalize its version around March next year.

“The final compromise text of the Czech Presidency takes into account the main concerns of member states and preserves the delicate balance between protecting fundamental rights and promoting the adoption of AI technology”, said Ivan Bartoš, Czech Deputy Prime Minister for Digitization.

Definition of AI

The definition of AI was a key part of the discussions, as it defines the scope of the regulation.

Member States were concerned that traditional software would be caught by the rules, so they narrowed the definition to systems developed through machine learning and logic- and knowledge-based approaches, a list the Commission can further refine or update through delegated acts.

General Purpose AI

General-purpose AI includes large language models that can be adapted to perform various tasks. As such, it initially fell outside the scope of the AI Regulation, which only covered systems with a specific intended purpose.

However, member states felt that leaving these critical systems out of scope would have crippled AI regulation, while the specifics of this nascent market required some adaptation.

The Czech Presidency solved the problem by instructing the Commission to carry out an impact assessment and consultation with a view to adapting the rules on general-purpose AI by means of an implementing act within a year and a half from the entry into force of the regulation.

Prohibited practices

The AI rulebook prohibits the use of the technology for subliminal techniques, exploiting vulnerabilities, and establishing Chinese-style social scoring.

The ban on social scoring has been extended to private actors to prevent it from being circumvented via a contractor, while the notion of vulnerability has also been extended to socio-economic aspects.

High risk categories

Under Annex III, the regulation lists uses of AI that are considered to present a high risk of harm to persons or property and, therefore, must comply with stricter legal obligations.

Notably, the Czech Presidency has introduced an additional layer, which means that, to be classified as high risk, the system must have a decisive weight in the decision-making process and not be “purely incidental”, a notion the Commission is to define via an implementing act.

The Council removed from the list the detection of deepfakes by law enforcement authorities, crime analytics, and the verification of the authenticity of travel documents. However, critical digital infrastructure and life and health insurance have been added.

In another important change, the Commission will be able not only to add high-risk use cases to the annex, but also to remove them under certain conditions.

In addition, the obligation for high-risk providers to register in an EU database has been extended to users from public bodies, except law enforcement.

Obligations for high-risk systems

High-risk systems will need to comply with requirements such as data set quality and detailed technical documentation. For the Czech Presidency, these provisions “have been clarified and adjusted so that they are technically more feasible and less restrictive for the stakeholders to comply with”.

The general approach also tries to clarify the distribution of responsibilities along the complex AI value chains and how the AI law will interact with existing sectoral legislation.

Law enforcement

Member states have introduced several exclusions for the application of the law in the text, some of which are intended to be “bargaining chips” for negotiations with the European Parliament.

For example, while users of high-risk systems will need to monitor the systems after launch and report to the vendor in the event of serious incidents, this obligation does not apply to sensitive information arising from law enforcement activities.

What EU governments seem less willing to concede is the exclusion of national security, defense and military-related AI applications from the regulation’s scope, and the ability of police services to use “real-time” remote biometric identification systems in exceptional circumstances.

Governance and enforcement

The Council has strengthened the AI Board, which will bring together the competent national authorities, notably by introducing elements already present in the European Data Protection Board, such as the pool of experts.

The general approach also requires the Commission to designate one or more test facilities to provide technical support for enforcement and adopt guidance on how to comply with the legislation.

Penalties for breaching AI obligations have been eased for SMEs, while a set of criteria has been introduced for national authorities to take into account when calculating the penalty.


The AI law provides for the possibility of setting up regulatory sandboxes, controlled environments under the supervision of an authority where companies can test AI solutions.

The Council text allows such tests to be carried out under real-world conditions and, subject to certain conditions, even unsupervised.


Transparency requirements for emotion recognition and deepfakes have been tightened.

[Edited by Nathalie Weatherald]
