EU AI Act

Notes

IMPORTANT: EC Proposes Major Timeline Changes

Important Development

The European Commission published a proposal on November 19, 2025, called the “Digital Omnibus on AI” that would significantly modify the EU AI Act before it fully applies in August 2026.

Key Changes Proposed:

Extended Compliance Deadlines

  • High-risk AI requirements (Annex III use cases): postponed to December 2, 2027 (instead of August 2, 2026)

  • High-risk AI in regulated products: postponed to August 2, 2028 (instead of August 2, 2027)

  • These dates are conditional on standards/guidelines being ready; Commission can bring dates forward if “adequate compliance measures” exist

Grace Periods for Existing Systems

  • Generative AI systems released before August 2, 2026: extra 6 months (until February 2, 2027) to meet transparency obligations (watermarking, metadata)

  • Legacy high-risk systems already on market: can continue unchanged without retrofitting (except public authority use – must comply by August 2030)

Other Significant Changes

  • Registration relief: No need to register “not high-risk” AI systems in EU database

  • Bias mitigation: All AI providers/deployers can now process sensitive data to reduce bias (previously limited)

  • Codes of practice: Become voluntary “soft law” instead of potentially binding

  • SME relief: Extended benefits to small mid-cap companies (< 750 employees)

  • Enforcement: EU AI Office gains exclusive authority over general-purpose AI models and VLOP/VLOSE systems

Critical Timing Issue:

This is only a proposal – it needs Council and Parliament approval. If not adopted before August 2, 2026, the original strict deadlines will apply, potentially before supporting standards are ready.

Bottom line: Businesses face timeline uncertainty and should prepare for multiple scenarios.


Basics of AI

  1. AI consists of self-learning algorithms that evolve their decision-making process based on data.
  2. AI learns by analysing large amounts of data during a training phase. Usually, once the training is complete, we deploy it and enter a second phase – utilisation.
  3. AI should generalise its learnings to unseen data and not just memorise specific examples.
  4. Overfitting and bias are two major areas where AI can go wrong.
    1. E.g. overfitting is like studying only past exam questions, but not the course material itself.
    2. E.g. bias could arise if only 4 players’ games were used as training data for chess. One of those players might be weak, so the AI wouldn’t learn to play chess well; it would learn to imitate those players. If one of them repeatedly made a certain mistake, the AI would inherit that bias and reproduce the mistake. Similarly, if trained on human-generated text, the biases of our society may be transmitted through the text, and the AI model will inherit them.
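The two phases above (training, then utilisation) can be sketched with a toy 1-nearest-neighbour classifier. Everything here is a hypothetical illustration: the data, the pass/fail labels, and the function names are invented for this sketch, not part of the Act.

```python
# Toy illustration of the two phases of a learning system (hypothetical data):
# a 1-nearest-neighbour "exam result" classifier.

def train(examples):
    """Training phase: store labelled examples (hours studied -> pass/fail)."""
    return list(examples)

def predict(model, hours):
    """Utilisation phase: label a new, unseen input by its closest training example."""
    nearest = min(model, key=lambda ex: abs(ex[0] - hours))
    return nearest[1]

training_data = [(1, "fail"), (2, "fail"), (6, "pass"), (8, "pass")]
model = train(training_data)

print(predict(model, 7))    # unseen input: generalises from neighbours -> "pass"
print(predict(model, 1.5))  # -> "fail"
```

The point of the sketch: the model answers correctly for inputs it never saw during training, which is generalisation; a model that only memorised the four examples would be overfitting.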

Summary of EU AI act

  • Is it AI? The definition is a bit blurry. Don’t rely on your intuitive understanding of AI
  • AI model vs AI system
    • Model is a piece of technology that is usually part of something larger that we call an AI system.
  • Any AI that touches EU is regulated
  • Banned practices – strictly avoid (February 2025)
  • High-risk AI – rather explicit classification (e.g. creditworthiness models). If high-risk, then heavily regulated (August 2026).
  • Provider vs deployer. Providers are regulated more. You can be both, depending on a use case.
  • Computationally expensive models are a specific category – General-Purpose Artificial Intelligence (“GPAI”). They can also be classified as “GPAI with systemic risk”.
  • When does it enter into force? Two key deadlines: Feb 2025 and Aug 2026.

Understanding the AI Act

Why do we need an AI Act?

  1. mistakes/errors
    1. e.g. in image generation it might make mistakes a human wouldn’t, like the wrong number of stars on the EU flag. We have to look for mistakes that are specific to artificial intelligence
  2. impacts
    1. with a simple sentence you can make an image to deceive people. Very easy to do and lots of impact. AI is a technology on which applications are built. Because of potential for impacts, it has to be regulated.
  3. automation
    1. Creative work is being automated. Parts of the white-collar workforce are in danger of automation and have not prepared for it, for example by reskilling or relocating.
  4. gap in regulatory space
    1. GDPR helps with consumer data regulation. Copyright law gives authors some tools for claiming rights to their works, for example when those works are used to train an AI model. But AI algorithms themselves have never been directly regulated. E.g. you could obtain consent for some marketing purpose; then your organisational capabilities increase and you can do sophisticated profiling of that person and target them with more ads. There is a gap in the regulatory space for this sort of thing.

How Data shapes AI decisions (and can be manipulated)

Example (in the real world, system would have way more than just 2 data points, but simplified for the example). We are a bank and want to build an automated engine to approve loans, based on (1) Income and (2) Debt Ratio of a customer.

AI Act: danger of data poisoning.

Get 30 or so people to register at the bank and volunteer extra information: “Oh, by the way, I also want to tell you about my income, my debt ratio, and what happened with my loans in the past.” The bank doesn’t object – it is voluntary data – so it records it, and there is no reason to verify it, since these people are only opening accounts, not applying for loans. The attacker has thus poisoned our data, and if it is used to train an AI model, the model will be biased.

Recording the data was not the attack itself. The attack comes now: to obtain a loan at the “red cross” point (in the simplified two-feature picture), where the customer would otherwise not have received a loan.
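The poisoning scenario above can be sketched numerically. This is a hypothetical toy model (a single income feature and an invented threshold rule, not anything prescribed by the Act or used by any real bank): the bank “learns” an approval threshold from historical records, and injected fake records shift it.

```python
# Hypothetical data-poisoning sketch: a bank learns an income threshold from
# historical records; an attacker injects fake "approved at low income" records.

def learn_threshold(records):
    """Approve above the midpoint between mean approved and mean rejected income."""
    approved = [inc for inc, ok in records if ok]
    rejected = [inc for inc, ok in records if not ok]
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

clean = [(20, False), (30, False), (70, True), (80, True)]
t_clean = learn_threshold(clean)  # midpoint of 25 and 75 -> 50.0

# Attacker registers ~30 accounts and volunteers fake loan history.
poisoned = clean + [(35, True)] * 30
t_poisoned = learn_threshold(poisoned)  # approved mean dragged down -> 31.25

applicant_income = 40
print(applicant_income > t_clean)     # False: rejected on clean data
print(applicant_income > t_poisoned)  # True: the poisoned model now approves
```

Same training code, different data, opposite decision: that is why the Act treats data governance as a first-class obligation.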

Can we avoid the AI Act by NOT using AI?

“Ugh, forget all the “learning” algorithms. Let’s build an expert-rule based system. These worked well for ages!”

AI Act: danger of adversarial attack.

Well known from visual recognition tasks.

E.g. a self-driving car should see a stop sign and stop. An adversarial attack could make it think the stop sign means “highway ahead”. This is done through noise construction: a pattern that, to humans, looks like random pixels. If attackers got hold of our model, constructed such noise, and then went around the city putting stickers of it on stop signs, the model could read those signs as “highway ahead”.

Expert system: Create a set of rules, combine them into a single system, and you have an expert-rule based system. E.g. a bank that wants to send email marketing campaigns. If a customer is an active customer AND this customer has not been contacted in the last 3 months AND if there is a possibility that this customer might buy a certain product, send this customer an email.
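The expert-rule system described above is just a fixed conjunction of human-authored conditions. A minimal sketch (field names and thresholds are invented for illustration):

```python
# Sketch of the expert-rule marketing system described above (hypothetical names).

def should_send_email(customer):
    """Fixed, human-authored rules combined with AND - nothing is learned."""
    return (
        customer["active"]
        and customer["months_since_contact"] >= 3
        and customer["purchase_propensity"] > 0.5
    )

alice = {"active": True, "months_since_contact": 4, "purchase_propensity": 0.7}
bob = {"active": True, "months_since_contact": 1, "purchase_propensity": 0.9}

print(should_send_email(alice))  # True
print(should_send_email(bob))    # False: contacted too recently
```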

Adversarial attack: The bank has an online simulator so customers can see if they would be approved or rejected for a loan, with no limit on the number of attempts. An attacker can probe it to find where the edge cases are, and thereby reconstruct the model. Customer data isn’t taken directly – we don’t learn who earns what – but it is taken indirectly: we now know whether someone would be approved given any set of parameters.
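The probing attack above can be made concrete. In this hypothetical sketch the bank’s internal threshold is a single number unknown to the attacker, and the public simulator only answers approve/reject; with unlimited queries, a binary search recovers the hidden boundary:

```python
# Hypothetical model-extraction sketch: binary-searching a black-box simulator.

HIDDEN_THRESHOLD = 52_000  # internal to the bank; never shown to the attacker

def simulator(income):
    """The public loan simulator: only returns approve (True) / reject (False)."""
    return income >= HIDDEN_THRESHOLD

def extract_threshold(lo=0, hi=200_000, tolerance=1):
    """Binary-search the approve/reject boundary using only simulator answers."""
    while hi - lo > tolerance:
        mid = (lo + hi) // 2
        if simulator(mid):
            hi = mid  # mid is approved: boundary is at or below mid
        else:
            lo = mid  # mid is rejected: boundary is above mid
    return hi

print(extract_threshold())  # recovers 52000 without seeing any customer data
```

In a real system the boundary has many dimensions, but the principle is the same: unlimited free queries leak the model, which is why rate limits and monitoring matter.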

Who handles AI Act compliance?

  1. The Scrutinisers: Auditors (Legal & Technical)
    1. Difference compared to GDPR: auditors need to be at least partly technical. E.g. they might need to go into the system and see if there are backup plans, so they might need to inspect Python code.
  2. The Builders: AI Developers & Data Scientists
  3. The Rulemakers & Enforcers: Governmental bodies & regulatory agencies
  4. AI’s Watchdogs: Ethical AI Groups, NGOs, & Civil Society Organisations

Brings positive future

An AI Act brings:

  • Trust (e.g. like you trust the cars you drive are safe)
  • Professionalism (aligns with software dev norms for quality, structure and scalability)
  • Resilience (making them more secure against exploitation)
  • Transparency (means developing mechanisms to trace decision making pathways so less a black box and offer clear insights into AI outcomes)
  • Accountability

These align with what businesses want. It is an opportunity that positions EU institutions at the forefront of responsible and secure AI development, building a reputation for technological prowess. It could potentially create a label, “AI from EU”, which guarantees safety, ethics, and responsible design. Imagine, in 20 years, a device in your living room that can do everything for you: shop, bank, etc. You want to trust a system like that, and thanks to that label, you could.

The Structure of the AI Act

  1. Title I: Scope and Definitions
  2. Title II: Prohibited Artificial Intelligence Practices
  3. Title III: High-Risk AI Systems
  4. Title IV: Transparency Obligations for Certain AI Systems
  5. Titles V, VI, VII: Innovation Support, Governance, & Implementation
  6. Title VIII: General Purpose AI
  7. Titles IX, X, XI, XII: Final “housekeeping”

Title I: General Provisions

Autonomy of AI systems

Recital 12: “… varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention”

Example:

  • Should Robert Get a mortgage > AI > Yes/No
  • Should Robert Get a mortgage > AI > Yes/No > Human final decision

The second one you might not think of as autonomous. But over time, a human might begin to trust the AI based on its successful recommendations. And over time, they may no longer check the decisions or recommendations made by the AI. So in this indirect way, our AI system has gained autonomy, even though the system was designed to not have any autonomy.

There could be other biases too: the human may want to sell a certain product, or may be distrustful and do the opposite of the recommendation. Putting a human after the AI is always tricky.

The EU Commission has now released official Guidelines. The key takeaway from the Guidelines is that the “autonomy” requirement is designed to set a low but distinct bar. Its primary purpose is to exclude systems that operate with “full manual human involvement and intervention”. If a system has even “some degree of independence,” it satisfies the autonomy condition. The guidelines give an example:

A system that requires manually provided inputs to then generate an output by itself is considered a system with “some degree of independence of action”.

The Guidelines directly address our mortgage example above. They state that an expert system that takes input from a human and produces an output on its own, such as a recommendation, fulfills the condition for autonomy. This clarifies that we don’t need to speculate about the “human factor” or whether a person learns to trust the machine over time. From the AI Act’s perspective, the moment the machine can independently generate that recommendation, it is operating with a degree of autonomy.

Finally, the Guidelines clarify the fundamental link between a system’s ability to infer and its autonomy. They state that the two notions “go hand in hand”. It is an AI system’s core capacity to infer—to generate outputs like predictions, content, or recommendations—that is the very thing that “is key to bring about its autonomy”.

To summarize the updated understanding based on the official Guidelines:

  • Autonomy is a necessary condition for a system to be considered AI.
  • However, the threshold is low. It only excludes fully manual systems.
  • If a system can take an input and independently generate an output (like a prediction or recommendation), it has a sufficient level of autonomy.
  • This capacity for autonomy is seen as a direct result of the system’s ability to perform inference.

Therefore, instead of disregarding the autonomy clause as being too complex, we should now see it as a clear—albeit broad—filter. If your system is machine-based and can infer how to generate an output on its own, it will almost certainly be considered to operate with the necessary “varying levels of autonomy.”

Adaptive AI Systems

Recital 12: The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use.

Adaptiveness of a system: the quality of being able to change to suit different conditions.

  • Calculator: not adaptive
  • Website with cookies to remember dark theme: not adaptive
  • Manually constructed system with expert rules: not adaptive
  • Machine learning system with automatically inferred rules: adaptive – most widespread and hence important to clarify/tackle. E.g. ChatGPT
  • Machine learning system with “online” continuous learning: adaptive – less widespread as open to attacks. E.g. a chess game that learns. You could purposely play bad games to “poison” the data.
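The last bullet (“online” continuous learning and its poisoning risk) can be sketched in a few lines. This is a hypothetical toy: the model is just a running average that keeps updating from every new input while deployed, so an attacker who controls the input stream can steer it.

```python
# Minimal sketch of "online" continuous learning and why it invites poisoning
# (all values hypothetical).

class OnlineMean:
    """Continuously learns the average of a stream - it changes while in use."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental mean update

model = OnlineMean()
for honest_value in [10, 11, 9, 10]:  # legitimate usage data
    model.update(honest_value)
print(round(model.mean, 1))           # 10.0

for poison in [100] * 8:              # attacker floods the stream
    model.update(poison)
print(round(model.mean, 1))           # 70.0: the deployed model has drifted
```

The chess analogy from the notes is the same mechanism: deliberately playing bad games feeds poisoned updates into a model that never stops learning.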

Inference

Recital 12: A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling.

  • Calculator: does not infer (basic data processing)
  • Website with cookies to remember dark theme: does not infer
  • Manually constructed system with expert rules: infers (encoded knowledge or symbolic representation of the task to be solved)
  • Machine learning system with automatically inferred rules: clearly infers (machine learning approaches)
  • Machine learning system with “online” continuous learning: clearly infers (machine learning approaches)

Is it AI? The EU Definition and Implications for your technology

‘AI system’ means a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Simplifying:

  • Machine-based
  • (varying levels) of autonomy (even minimum fulfills)
  • (may) exhibit adaptiveness (disregard)
  • Infers to influence environment
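The four-point checklist above can be distilled into a triage helper. This is our own hypothetical aid (the parameter names are invented, and it is not legal advice): adaptiveness is a “may” condition, so the function deliberately ignores it.

```python
# Hypothetical triage checklist for the Article 3(1) definition of "AI system".

def is_ai_system(machine_based, some_autonomy, infers_outputs,
                 adaptive_after_deployment=False):
    """Adaptiveness is a 'may' condition, so it does not affect the answer."""
    return machine_based and some_autonomy and infers_outputs

# A pocket calculator: machine-based, but only executes fixed operations.
print(is_ai_system(True, False, False))  # False

# An expert system producing a recommendation from manual inputs.
print(is_ai_system(True, True, True))    # True

# A ChatGPT-style model: adaptiveness present, but not needed for the answer.
print(is_ai_system(True, True, True, adaptive_after_deployment=True))  # True
```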

Which systems are getting regulated?

  • Calculator: NOT regulated
  • Website with cookies to remember dark theme: NOT regulated
  • Manually constructed system with expert rules: Regulated
  • Machine learning system with automatically inferred rules: Regulated
  • Machine learning system with “online” continuous learning: Regulated

E.g.

  • Smart marketing campaigns
  • Risk/Insurance calculations
  • Website adjustment engines
  • Predicting weather
  • Car assistance systems
  • Fraud detection
  • FaceID recognition
  • Server failure prediction
  • Chatbots

Excluded systems

IN SCOPE: Logic- and Knowledge-Based Approaches. The systems that are considered AI are those that “infer from encoded knowledge or symbolic representation of the task to be solved”. These are the classic “expert systems” we talked about, which use a knowledge base and a reasoning engine to draw conclusions. The medical diagnosis system mentioned in the Guidelines, which infers a diagnosis from a set of symptoms based on encoded expert knowledge, is a perfect example. The warning from above remains highly relevant for these complex systems.

OUT OF SCOPE: Simple Execution of Rules. Crucially, the Guidelines explicitly state that the definition of AI should not cover “systems that are based on the rules defined solely by natural persons to automatically execute operations”. This is considered “basic data processing”. Think of a simple script that executes a pre-programmed IF-THEN command. These systems execute fixed instructions without any “learning, reasoning or modelling”.

This distinction is vital. It means that while complex expert systems are indeed regulated, simpler rule-based automation is not.

Explicitly Excluded Systems – The “Safe Harbors”

This provides clarity and prevents over-compliance for traditional software. Here are the key categories:

1. Systems for Improving Mathematical Optimization. This includes systems that might use machine learning techniques simply to speed up or approximate traditional methods without fundamentally changing the decision-making model. For example, using an ML model to accelerate a traditional physics-based simulation would not, under these conditions, be considered an AI system.

2. Basic Data Processing, Analysis, and Visualization. Systems that perform tasks based on fixed, human-programmed rules are excluded. This covers:

  • Database management systems that sort or filter data based on user criteria.
  • Standard spreadsheet software (without AI-enabled features).
  • Software for descriptive analysis, like a sales dashboard that creates charts to visualize historical sales trends but does not recommend actions to improve sales.

3. Systems Based on Classical Heuristics. These are problem-solving techniques that use experience-based rules or trial-and-error strategies instead of data-driven learning. A classic example is a chess program that uses a minimax algorithm with pre-defined evaluation functions to assess board positions without having learned from data.

4. Simple Prediction Systems. This is a very important exclusion. Systems that use basic statistical rules to establish a baseline or benchmark fall outside the scope. For instance:

  • Predicting a future stock price by simply using the historical average price.
  • Using the average temperature of the last week to predict tomorrow’s temperature.

These systems are considered too simple and do not exhibit the sophisticated modeling of true AI systems.
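The temperature example above is literally a one-line statistical rule, which is exactly why it falls outside the definition. A minimal sketch (temperatures are hypothetical):

```python
# The excluded "simple prediction" baseline: tomorrow's temperature forecast as
# last week's plain average. A fixed statistical rule, no learned model.

def baseline_forecast(last_week):
    """Predict tomorrow as the historical average of the given values."""
    return sum(last_week) / len(last_week)

temperatures = [18, 20, 19, 21, 22, 20, 20]  # hypothetical last-week readings
print(baseline_forecast(temperatures))       # 20.0
```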

The official Guidelines provide indispensable boundaries. They confirm that your complex expert systems likely fall under the Act, but they also give you the confidence to exclude simpler rule-based automation, traditional statistical analysis, and basic software from its scope. This nuanced understanding is essential as you begin to inventory and assess your company’s systems for AI Act compliance.

Publicly Accessible Space (Recital 19)

Definition:

  • Accessible to an undefined number of people
  • Irrelevant of private or public ownership

Examples:

  • Commerce: Shops, restaurants, cafés
  • Services: Banks, hospitals, offices
  • Sports: Gyms, stadiums, pools
  • Transport: Stations, airports
  • Entertainment: Cinemas, theatres, museums
  • Leisure: Parks, roads, playgrounds

Exclusions:

  • Restricted spaces (e.g. offices, factories)
  • Private areas within public spaces (e.g. private residential hallways)
  • Not just unlocked doors (clear restrictions apply)

Not covered:

  • Online spaces

Accessibility determined case-by-case, considering specific situations and usage contexts.

Emotion recognition system (Recital 18)

An AI system identifying or inferring emotions or intentions using biometric data. (e.g. a photo of yourself is biometric data)

Emotions or Intentions:

  • Happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, amusement.

Exclusions

  • Physical states like pain or fatigue (e.g. systems detecting pilot or driver fatigue).
  • Mere detection of expressions, gestures, or movements unless used to infer emotions.

Examples of exclusions:

  • Expressions: Frowns, smiles
  • Gestures: Hand, arm, or head movements
  • Voice characteristics: Raised voice, whispering

Focus on identifying emotions through biometric data, not physical states or obvious expressions alone.

Provider and Deployer

‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;

‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;

Even supply chains are getting regulated: authorised representative, importer, distributor, operator

  • Training and building system from scratch – provider
  • Taking an untrained model from vendor and training it on own data – provider
  • Transfer learning: sending our data to model trained by vendor – provider
  • Transfer learning: Using a pretrained (open-source) model, optionally fine-tuning it with our own data – provider
  • Deploying ChatGPT for our employees on our servers – deployer  

Intended Purpose & Reasonably Foreseeable Misuse

‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;

  • Healthcare AI: Designed to assist doctors in diagnosing diseases using patient data
  • Customer Service AI: Intended to handle customer queries through chatbots
  • Educational AI: Developed to personalise learning experiences for students based on their progress

‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;

  • Healthcare AI: Used by untrained individuals to self-diagnose conditions, leading to incorrect treatments
  • Customer Service AI: Misused by users to generate inappropriate responses or content
  • Educational AI: Students manipulating the system to cheat or bypass learning modules

Special categories of Personal Data

Article 9(1) of Regulation (EU) 2016/679 (GDPR)

  • Racial or ethnic origin
  • Political opinions
  • Religious or philosophical beliefs
  • Trade union membership
  • Genetic data
  • Biometric data for uniquely identifying a natural person
  • Data concerning health
  • Data concerning a natural person’s sex life or sexual orientation

Article 10 of Directive (EU) 2016/680 (Law Enforcement Directive)

  • Personal data revealing racial or ethnic origin
  • Political opinions
  • Religious or philosophical beliefs
  • Trade union membership
  • Genetic data
  • Biometric data for uniquely identifying a natural person
  • Data concerning health
  • Data concerning a natural person’s sex life or sexual orientation

Profiling means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s:

  • Performance at work
  • Economic situation
  • Health
  • Personal preferences
  • Interests
  • Reliability
  • Behaviour
  • Location or movements

Profiling is often used in contexts such as targeted advertising, credit scoring, and assessing risks in various sectors.

Available Open-Source Exceptions

Providers of open-source models will be regulated to a much smaller degree compared to the providers of vendor models:

  • Documentation: Maintain and update technical documentation; available to AI Office and national authorities
  • Free and Open-Source Licenses: Allow free access, use, modification, and redistribution; exempt from regulatory requirements if not monetised, personal data used only for security/compatibility, and open repository provision not monetised
  • Transparency for AI Models: Publicly share model parameters, architecture, summarisation of training data, and usage; ensure original provider credit and distribution terms respected
  • Systemic Risk Exceptions: Open-source models with systemic risks must comply with regulations

Article 4 Obligation for Training of Employees

Article 4 of the EU AI Act states:

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

From Article 4 Q&A:

  • All employees dealing with AI systems shall be trained.
  • My personal expectation of the extent: 1-2 hours for general employees, and up to 1 day (with regular updates) for employees developing or influencing AI systems.
  • IMPORTANT: The Q&A also specify the enforcement date for this Article – 3 August 2026 onwards. I suspect that in the months before this date, companies will invest heavily in compliance with Article 4, for example by developing and running internal trainings, or by seeking external trainers.

Title II: Prohibited Practices

Themes:

  1. Manipulation and deception
  2. Exploiting vulnerabilities
  3. Discriminatory Social Scoring
  4. Mass Surveillance and Biometric Identification

Theme 1: Manipulation and deception

The Principle: Artificial Intelligence must not be designed in a way that tricks or secretly influences people to make decisions harmful to themselves or others.

Article 5 Point (a): 

  • “Subliminal techniques beyond a person’s consciousness…” Think subtle cues in the design of an AI interface, potentially not things a user even consciously notices.
  • “…purposefully manipulative or deceptive techniques…” Focus on intention: The AI is explicitly engineered to mislead or nudge a person in a predetermined direction.
  • “…with the objective to or the effect of materially distorting…” Even unintended consequences matter if they cause serious disruption of decision-making. “…causing…significant harm.” Consider harm as: economic loss, psychological distress, damage to reputation, etc.

What could this look like?

  • Scenario 1: an online gambling platform’s AI subtly alters visuals or sounds to increase a sense of urgency, tricking users into excessive betting behaviour even when they may have intended to stop.
  • Scenario 2: A political bot on social media, designed to look like a genuine person, targets individuals based on their browsing history to stir up anger, spreading misinformation and leading to polarised views.
  • Scenario 3: Automated financial advisor who “nudges” a customer through automated email into a mortgage, whereas the customer does not have sufficient financial literacy.

Key takeaway: even with “good” intentions, if AI undermines our ability to make rational, informed choices, it violates Title II.

Theme 2: Exploiting Vulnerabilities

The Principle: AI can’t be built to take advantage of someone’s age, disability, or socioeconomic circumstances to manipulate them into harmful actions.

Article 5 Point (b): 

  • “…exploits any of the vulnerabilities of a person or a specific group…” These vulnerabilities are inherent traits or temporary situations making someone easier to target.
  • “…the objective to or the effect of… distorting behaviour that causes…significant harm.” Like point (a), intent and outcome both matter.

What could this look like?

  • Scenario 1: Highly targeted ads aimed at children that exploit their developmental age to increase desire for unhealthy products, in-app purchases, or online games.
  • Scenario 2: An AI chatbot for a loan company preys on individuals with a recent medical emergency, knowing their desperation and potentially limited finances.

Key takeaway: AI shouldn’t leverage personal hardship or lack of cognitive development for profit or influence. There’s an inherent power imbalance.

Theme 3: Discriminatory Social Scoring

The Principle: AI cannot be used to build ranking systems that assign scores to individuals or groups based on their social behaviour, characteristics, or any category leading to unjustified discrimination.

Article 5 Point (c): 

  • “…evaluation or classification of natural persons…over a certain period of time based on their social behaviour…”  AI-based ‘reputation’ or ‘risk’ scores become problematic.
  • “…social score leading to either or both…” Here, either type of harm counts:
    • Detrimental treatment in unrelated contexts – a low score impacting opportunities unrelated to the initial reason the score was created.
    • Disproportionate treatment – the score is exaggerated or out of context compared to real-world harm.

What could this look like?

  • Scenario 1: an insurance company uses AI to develop a “responsibility score” based on social media and spending habits, unfairly raising premiums for people in certain demographics
  • Scenario 2: There has been quite some talk in recent years about whether it is fair/ethical to use inputs to our models which natural persons cannot influence, e.g. age. Even if you omit age, you still have strong proxies for age, which might also have to be removed.
  • Scenario 3: We have (or will have) a wide range of unsupervised algorithms, where we often don’t know what they distill out of data. What if we model something we should not through these algorithms and do not realise it?

Key takeaway: even if accurate, these scores reduce individuals to data. If misused, they perpetuate societal biases and inequalities.

Theme 4: Mass Surveillance with Biometric Identification

The Principle: ‘Real-time’ facial recognition and other biometric tracking in public spaces cannot be used without extreme justification and strict oversight due to the erosion of privacy.

Article 5 Point (d): 

  • “‘Real-time’ remote biometric identification systems…” Emphasises not just data collection but real-world tracking at the moment it happens.
  • “…in publicly accessible spaces…” Think broad surveillance not confined to a closed venue for which people had an obvious choice to enter or not.
  • “…for the purpose of law enforcement…” Acknowledges there may be very limited legitimate uses, subject to serious debate.
  • “…strictly necessary for one of the following objectives…” The EU lays out clear limitations:
    • Specific missing persons or abducted children
    • Imminent threats to life or the prevention of a terrorist attack
    • Identifying an actively sought suspect of serious crime (note the severity required)

What could this look like? (Potential Misuse)

  • Scenario 1: Citywide facial recognition system used by law enforcement not with targeted search but to track political dissidents at protests.
  • Scenario 2: A large corporation installs advanced biometric monitoring within shopping malls with the pretext of enhancing safety, but in reality sells consumer data.

Special notes on Point (da): Addresses an equally disturbing AI technique – predictive policing – using profiling to assess someone’s likelihood of committing a crime (think Minority Report). This is outright banned.

Title III: High-Risk AI Systems

Classification rules for high-risk AI systems

High-risk AI systems are identified as those that either function as a:

  1. (IN) safety component of a product regulated by specific EU legislation listed in Annex I, mandated to undergo an external conformity evaluation as per those Annex I regulations (e.g. Machinery, Medical devices, Toys, Lifts, Pressure equipment, Recreational watercraft, Civil aviation security); OR
  2. (IN) are associated with the use cases outlined in Annex III. Derogation: (OUT) AI systems shall not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This is the case if one or more of the following is true:
    • the AI system carries out a narrow procedural task;
    • the AI system is intended to improve the result of a previously completed human activity;
    • the AI system identifies patterns or deviations in decision-making and is not meant to replace or sway the prior human judgement without adequate human review;
    • the AI system performs a preparatory task for an evaluation significant to the purposes outlined in Annex III.
    • Obligation: Providers of AI systems falling under Annex III who consider that their system is not high-risk shall document that assessment before placing it on the market or putting it into service.
      • (BACK IN) Note: AI systems that profile individuals, defined as the automated processing of personal data to evaluate various aspects of an individual’s life such as work performance, financial status, health, personal interests, reliability, behaviour, locations, or movements, are very likely classified as high-risk.

Biometrics (Annex III)

Biometrics, in so far as their use is permitted under relevant Union or national law: 

a) remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be;

b) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;

c) AI systems intended to be used for emotion recognition.

 

Critical Infrastructure (Annex III)

Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating, or electricity.

Education and Vocational Training (Annex III)

a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;

b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;

d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.

Employment & Workers’ Management (Annex III)

a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;

b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

Access to essential services and benefits (Annex III)

Access to and enjoyment of essential private services and essential public services and benefits:

a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;

c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;

d) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.

Law Enforcement (Annex III)

a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences;

b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;

c) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;

d) AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personal traits and characteristics or past criminal behaviour of natural persons or groups;

e) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, in the course of the detection, investigation or prosecution of criminal offences.

Migration, Asylum & Border Management (Annex III)

a) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies as polygraphs or similar tools;

b) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;

c) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence;

d) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.

Administration of Justice & Democratic Processes (Annex III)

a) AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;

b) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.

Future changes to Annex III (Article 7)

The purpose of Article 7 is to allow the European Commission to update the list of high-risk AI systems as technology and societal risks evolve. The Commission can either add new use cases to the list or remove existing ones based on specific criteria.

  • Intended Purpose: What is the AI system designed to do? Is it critical to safety, legal decisions, or health?
  • Extent of Use: How widely is the AI system being used or likely to be used?
  • Data Nature: Does the AI system process sensitive data, like health or personal information?
  • Autonomy: How much control does the AI system have? Can humans easily override decisions if something goes wrong?
  • Past Harm or Concerns: Has the AI system already caused harm or raised significant concerns? Are there reports of issues?
  • Potential Harm: If something goes wrong, how severe could the impact be? Would it affect many people or a vulnerable group disproportionately?
  • Dependence on Outcome: Are people reliant on the AI’s decisions, with no easy way to opt out or correct errors?
  • Power Imbalance: Is there a significant power difference between those deploying the AI and those affected by it? Are the affected people vulnerable?
  • Reversibility: Can any harmful outcomes be easily corrected or reversed?

High-Risk AI Compliance: EU AI Act Requirements Checklist

  1. Risk management system (Article 9)
  2. Data governance (Article 10)
  3. Documentation (Article 11)
  4. Record-keeping (Article 12)
  5. Transparency (Article 13)
  6. Human Oversight (Article 14)
  7. Accuracy (Article 15)
  8. Robustness (Article 15)
  9. Cybersecurity (Article 15)

Risk management system (Article 9)

  1. Establishing a Risk Management System
  2. Continuous and Iterative Process
  3. Steps in the Risk Management Process
    1. Identify and Analyse Risks
    2. Estimate and Evaluate Risks
    3. Evaluate Additional Risks Based on Post-Market Data
    4. Adopt Targeted Risk Management Measures
  4. Focus on Risks that can be mitigated or eliminated
  5. Ensuring Residual Risks are acceptable
  6. Testing the AI system
  7. Special Consideration for vulnerable groups
  8. Integration with other risk management processes

Case study: We are a large organisation that wants to use AI in hiring (the use case has been classified as high-risk)

  1. Appoint a dedicated team
  2. Establish the Risk Management Process
  3. Initial risk identification
  4. Risk analysis and documentation
  5. Implementing design solutions
  6. Continuous monitoring during development
  7. Pre-deployment testing
  8. Defining metrics and thresholds
  9. Deploy the AI system
  10. Regular reviews and updates
  11. Risk mitigation and communication
  12. Consideration for vulnerable groups

Data & Data Governance (Article 10)

The important realisation is that this is data governance from an AI perspective (!). Imagine we are a bank with a creditworthiness model:

  1. Assemble a data governance team
  2. Define data governance protocols
  3. Ensuring data quality and relevance
  4. Analyse and mitigate bias
  5. Processing special categories of data
  6. Rigorous testing of the AI system
  7. Monitor and update post-deployment
  8. Maintain comprehensive documentation
  9. Regular compliance reviews

Technical documentation (Article 11)

The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to-date.

The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements.

Part 1: General System Description of AI system
Effectively document the purpose, provider information, system interactions, forms of distribution, and hardware specifications of your AI system.

Part 2: AI System Development Details
What aspects of your development methodology, design decisions, system architecture, and dataset choices need to be thoroughly documented.

Part 3: Monitoring, Risk Management, and Beyond
Document your AI system’s post-deployment performance monitoring, evolving risk assessments, human oversight safeguards, system updates, and conformity with relevant standards.

Record keeping (Article 12)

Imagine we are a city that is using a high-risk AI system for traffic control:

  1. Implementing logging capabilities (event-logging, timestamping, database references)
  2. Ensure compliance with Article 12(2) Requirements (risk identification, operation monitoring, error logging)
  3. Logging periods of system use
  4. Record the reference database used
  5. Logging input data leading to matches
  6. Identify natural persons involved in verification
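
The steps above can be sketched as a small structured logger. Everything here is illustrative – the system name TrafficFlowAI, the event names, and the field choices are assumptions, not an Article 12 schema.

```python
import json
from datetime import datetime, timezone

def log_event(log: list, event: str, **details) -> dict:
    """Append one timestamped, machine-readable record (Article 12-style logging)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # automatic timestamping
        "event": event,
        **details,
    }
    log.append(record)
    return record

audit_log: list[dict] = []
# Log the period of use, the reference database, and the inputs behind each decision:
log_event(audit_log, "session_start", system="TrafficFlowAI", version="1.4.2")
log_event(audit_log, "inference", input_ref="camera-12/frame-88231",
          reference_db="junction-layouts-2025-03", output="extend_green_phase")
log_event(audit_log, "anomaly", detail="confidence below threshold", confidence=0.41)

print(json.dumps(audit_log[-1]))
```

A real deployment would write these records to append-only, tamper-evident storage rather than an in-memory list.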

Transparency towards deployers (Article 13)

  1. High-Risk AI systems shall be designed and developed in such a way as to ensure their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3.
  2. High-Risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.

Article 11 is technical documentation for us and the regulator. It is internal documentation.

Article 13 applies when we are a provider handing documentation to a deployer (without all the internal detail and secrets that belong in Article 11). It is transparency documentation that can be provided to every customer.

Human oversight (Article 14)

  1. High-risk AI systems shall be designed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
  2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.

Case Study A

Context: A hospital deploys a high-risk AI system, MediScanAI, to assist radiologists in diagnosing diseases from medical images such as X-rays and MRIs. The AI provides recommendations on potential diagnoses, but the final decision is made by a human doctor.

  • Human review: Every diagnosis suggested by MediScanAI must be reviewed by a radiologist. The AI system provides a confidence score with each diagnosis (e.g. “85% confidence this is lung cancer”).
  • Override capability: Radiologists have the ability to override the AI’s suggestion if they believe it is incorrect or if other clinical data suggests a different diagnosis.
  • Double-check: For cases where the AI’s confidence score is below a certain threshold (e.g. less than 70%), the system automatically flags the case for a second radiologist to review independently.
  • Stop button: If the AI behaves unpredictably (e.g. producing results that seem nonsensical), the radiologist can use a “stop” function to halt the AI’s operation and revert to manual analysis.
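
The double-check rule above can be expressed as a small routing function. The 70% threshold is the example figure from the case study; the path names are my own illustrative labels.

```python
def route_diagnosis(confidence: float, threshold: float = 0.70) -> str:
    """Decide which human-oversight path an AI suggestion takes (names illustrative)."""
    if confidence < threshold:
        return "second_radiologist_review"   # low confidence -> independent double-check
    return "single_radiologist_review"       # standard human review of every suggestion

print(route_diagnosis(0.85))  # single_radiologist_review
print(route_diagnosis(0.55))  # second_radiologist_review
```

The point of encoding the rule explicitly is that it becomes testable and auditable, which supports the Article 14 requirement that oversight be designed in rather than bolted on.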

Case Study B

Context: A bank uses an AI system, LoanDeciderAI, to automate the evaluation of loan applications. The AI assesses applicants’ creditworthiness based on financial history, employment status, and other factors.

  • Human Review for Borderline cases: Loan applications that receive a mid-range score from LoanDeciderAI (e.g. between 40 and 60 on a 100-point scale) are flagged for human review. A loan officer must manually assess these applications before a final decision is made.
  • Automated alerts for high-risk decisions: If LoanDeciderAI recommends denying a loan due to high-risk factors, the system automatically alerts a senior loan officer to review the decision. This is particularly important in cases where the denial could have significant social or legal implications.
  • Training and awareness: Loan officers receive regular training on how LoanDeciderAI works, including its limitations and potential biases. This helps them to interpret AI outputs critically and avoid automation bias.

Accuracy (Article 15)

  1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.
  2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.
  3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.

Robustness (Article 15)

High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard.

The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.

Data-centric robustness:

  • High quality input features (NOT just “data”)
  • Data diversity and representation (data augmentation, targeted data collection and stratified sampling)
  • Bias detection (easier said than done)
  • Fairness metrics (e.g. disparate impact)
  • Unexpected data points (data validation)
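
As an example of a fairness metric, disparate impact is simply the ratio of favourable-outcome rates between a protected group and a reference group. The 0.8 cut-off below is the common “four-fifths rule” convention, not a requirement of the Act.

```python
def disparate_impact(rate_protected: float, rate_reference: float) -> float:
    """Ratio of favourable-outcome rates between two groups."""
    return rate_protected / rate_reference

# Illustrative numbers: 30% of the protected group vs 50% of the reference
# group receive a favourable outcome (e.g. a loan approval).
ratio = disparate_impact(0.30, 0.50)
print(round(ratio, 2), ratio >= 0.8)  # a ratio below 0.8 is worth investigating
```

A low ratio does not prove unlawful bias on its own, but it is a cheap, routinely computable signal that triggers the deeper bias analysis the list above calls for.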

Model-centric robustness:

  • Robust model architectures (e.g. ensemble)
  • Cost-sensitive learning (i.e. exposing true costs of errors to the model for training)
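
Cost-sensitive thinking can also be applied at decision time: choose the class whose expected misclassification cost is lower, rather than simply thresholding at 0.5. A minimal sketch with assumed costs:

```python
def expected_cost_decision(p_positive: float, cost_fp: float, cost_fn: float) -> str:
    """Pick the class with the lower expected misclassification cost.

    cost_fp: cost of predicting positive when the truth is negative (false alarm)
    cost_fn: cost of predicting negative when the truth is positive (miss)
    """
    cost_if_predict_positive = (1 - p_positive) * cost_fp
    cost_if_predict_negative = p_positive * cost_fn
    return "positive" if cost_if_predict_positive < cost_if_predict_negative else "negative"

# Assumed costs: missing a genuine positive (FN) is 10x costlier than a false alarm (FP),
# so even a 15% probability is enough to flag the case:
print(expected_cost_decision(p_positive=0.15, cost_fp=1.0, cost_fn=10.0))  # positive
```

The same idea can be pushed into training itself (e.g. per-class weights in the loss), which is what “exposing true costs of errors to the model” refers to.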

Cybersecurity (Article 15)

High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.

The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. 

Training data set (data poisoning)

  • Our desire for “more” data (maybe even because regulation asks for that)
  • Attacks are becoming cheaper due to GenAI tools

Pre-trained components used in training

  • One component that was trained by somebody else. This is not code; it is a learned solution (model weights).
  • What if it has a back door? If specific inputs are encountered, certain behaviour is triggered.

Make a mistake (adversarial examples or model evasion)

Testing against model evasion is not so easy. Imagine for example an observation:

  • Man
  • 32 yo
  • Self-employed

I need to create perturbations of this observation, observe when the prediction changes, and evaluate whether it changes correctly.
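
A toy sketch of such perturbation testing, with a deliberately simple stand-in model – the feature names and the decision rule are invented purely for illustration:

```python
def perturb(observation: dict, feature: str, values) -> list[dict]:
    """Generate one-feature-at-a-time perturbations of an observation."""
    return [{**observation, feature: v} for v in values if v != observation[feature]]

# Hypothetical, trivially simple scoring rule standing in for the real model:
def model(obs: dict) -> str:
    return "approve" if obs["age"] >= 30 and obs["employment"] != "unemployed" else "review"

base = {"sex": "male", "age": 32, "employment": "self-employed"}
for variant in perturb(base, "age", [28, 30, 31, 33]):
    flag = "CHANGED" if model(variant) != model(base) else ""
    print(variant["age"], model(variant), flag)
```

The hard part in practice is the last step: deciding whether each flip in the prediction is justified by the perturbation or reveals a brittle, evadable decision boundary.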

Confidentiality attacks or model flaws

  • Model stealing (maybe this is meant from e.g. national security perspective)
  • Data exfiltration (e.g. ask ChatGPT for its training data – let it violate IP of authors)
  • Inference attacks (extract sensitive data about individuals from a model)

Real-world consequences of the EU AI Act

Current industry standard for AI use case development:

  1. Backlog of raw ideas
  2. List of formulated concrete proposals 
  3. Prototypes and experiments (1-12 months) (if high risk, new obligations)…tricky
  4. Fully deployed use cases (if high risk, new obligations)…understandable

Probably only about 10% of prototypes make it to full deployment, so obligations imposed already at the prototype stage could hurt innovation or prove cumbersome. If the obligations applied only between steps 3 and 4, compliance would be easier.

Note: we will probably see redesign of prototyping processes. E.g. don’t use AI in your first prototype. This can simplify the burden caused by regulation.

Title IV: Transparency Obligations for Providers and Deployers of AI systems

Direct interaction with natural persons

Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence.

Synthetic content and deep fakes

(Recital 133) A variety of AI systems can generate large quantities of synthetic content that becomes increasingly hard for humans to distinguish from human-generated and authentic content.

(Title I) ‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.

Obligations for synthetic content

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.

Marking in machine-readable format:

  • Digital watermarking (images, videos, audio)
  • Metadata embedding (e.g. header of an HTML)
  • Hashing and signatures
  • Steganography
  • File format extensions (e.g. “.aiimg”, or “.aitxt”)
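
A minimal sketch combining two of the options above – a hash tying a record to the content, plus a signature over the record. This is not a real provenance standard such as C2PA, just an illustration of the idea; the key handling is deliberately simplistic.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-secret"  # illustrative only; real systems would use managed keys

def mark_output(content: bytes, model_id: str) -> dict:
    """Build a machine-readable provenance record for a generated artefact."""
    record = {
        "ai_generated": True,                            # explicit machine-readable flag
        "generator": model_id,
        "sha256": hashlib.sha256(content).hexdigest(),   # ties the record to the content
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # Signature lets downstream tools detect tampering with the record itself:
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

meta = mark_output(b"<synthetic image bytes>", "imagegen-v2")
print(meta["ai_generated"], meta["sha256"][:12])
```

Such a record could be shipped as embedded metadata (e.g. in a file header) or as a sidecar file; the Act leaves the concrete mechanism to technical standards.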

 

Deep fakes

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

Title V: General Purpose Artificial Intelligence (GPAI)

General purpose AI models (often referred to as “foundation models”)

  • Definition of general-purpose AI
  • Obligations for providers of general-purpose AI models (Article 53)
  • Classification as general-purpose AI models with systemic risk (Article 51)
  • Obligations of providers of general-purpose AI models with systemic risk (Article 55)

General Purpose AI Definition

Title I (63)

‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;

  1. I have labels (target feature): Supervised learning
  2. I don’t have labels: Unsupervised learning
  3. I can generate labels: Self-supervised learning

Self-supervised learning, high-level examples:

  • The cat sat on the [MASK]
  • Generate the next frame in a video
  • Fill in the missing part of an image
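
The first example above can be sketched in a few lines – the point being that the label is generated from the data itself, with no human annotation:

```python
import random

def mask_one_token(sentence: str, rng: random.Random) -> tuple[str, str]:
    """Turn raw text into a (masked input, label) pair – self-supervision in miniature."""
    tokens = sentence.split()
    i = rng.randrange(len(tokens))    # pick a token to hide
    label = tokens[i]                 # the hidden token becomes the training label
    tokens[i] = "[MASK]"
    return " ".join(tokens), label

rng = random.Random(0)                # fixed seed so the example is reproducible
masked, label = mask_one_token("the cat sat on the mat", rng)
print(masked, "->", label)
```

Real pre-training pipelines do this (or next-token prediction) over billions of sentences, which is exactly the “self-supervision at scale” the definition refers to.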

 

  • What qualifies as “significant generality”? The AI Act’s recitals offer a practical benchmark. Models with at least one billion parameters that are trained on a large amount of data using self-supervision are generally considered to display significant generality.
  • What are typical examples? The EU explicitly points to large generative AI models as a prime example of GPAI. This is because their ability to flexibly generate content like text, audio, images, or video allows them to be adapted to a vast array of tasks.
  • Does it have to be multi-modal? No. A model can be considered GPAI even if it operates within a single modality, such as text or images, provided that modality is flexible enough to perform a wide range of distinct tasks.
  • What about specialized or fine-tuned models? A model that has been further developed, modified, or fine-tuned to excel at a specific task or within a specific domain can still be considered a GPAI model. The underlying generality remains the key factor. If you fine-tune a GPAI model, you may become a provider of a new GPAI model, with your obligations limited to the modifications you made.
  • Are different versions different models? The EU clarifies that different stages of a single model’s development do not constitute different models.

This nuanced understanding is crucial. It shows that the “general-purpose” designation is broad, capturing not just the foundational models but also those adapted for specific functions, and it provides a clearer picture of the types of technologies that fall under these specific rules.

Obligations for General Purpose AI Model Providers

  • Technical Documentation and Evaluation (Article 53(1)(a))
  • Information sharing with AI system providers (Article 53(1)(b))
  • Compliance with copyright and related rights (Article 53(1)(c))
  • Transparency about training data (Article 53(1)(d))
  • Open-Source models (Article 53(2))
  • Cooperation with authorities (Article 53(3))
  • Codes of practice and harmonised standards (Article 53(4))
  • Confidentiality and data protection (Article 53(7))

The draft Code of Practice details the practical steps for fulfilling these obligations, particularly concerning transparency and copyright.

1. Transparency in Practice: The Model Documentation Form

The Code of Practice introduces a tangible tool for compliance: the Model Documentation Form. This standardized form is designed to capture the technical documentation required by the Act.

  • Purpose: It ensures that providers compile the necessary information in a single place to be shared with two key groups: downstream providers who integrate the model, and the AI Office or national competent authorities upon a formal request.
  • Content: The form requires detailed information, including:
    • General Information: The model’s name, unique identifier, and release date.
    • Model Properties: A description of the model architecture (e.g., a transformer architecture), its input and output modalities (e.g., text, images), and the total number of parameters.
    • Training Process: A description of the main training stages, methodologies, and the key design choices made.
    • Training Data: Details on the types of data used (e.g., text, images), their provenance (e.g., web crawling, private datasets), and the data curation methodologies (e.g., cleaning, filtering).
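
Put together, an entry in the form might be structured roughly like this. The field names paraphrase the categories above and are not the official schema; all values are invented for illustration.

```python
# Illustrative skeleton of a Model Documentation Form entry (not the official schema):
model_documentation = {
    "general_information": {
        "model_name": "ExampleLM",                    # hypothetical model
        "unique_identifier": "examplelm-7b-2025-06",
        "release_date": "2025-06-30",
    },
    "model_properties": {
        "architecture": "decoder-only transformer",
        "input_modalities": ["text"],
        "output_modalities": ["text"],
        "parameters": 7_000_000_000,
    },
    "training_process": {
        "stages": ["pre-training", "instruction tuning"],
        "key_design_choices": "…",
    },
    "training_data": {
        "types": ["text"],
        "provenance": ["web crawling", "licensed datasets"],
        "curation": ["deduplication", "quality filtering"],
    },
}
print(sorted(model_documentation))
```

Keeping this as structured data (rather than free-form prose) makes it easy to share selectively: the full record goes to the AI Office on request, while a subset goes to downstream providers.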

2. Copyright Policy in Practice

The “Copyright” chapter of the Code of Practice specifies what this policy must address to demonstrate compliance with EU law.

  • Lawful Access: Providers commit to only using lawfully accessible content. This includes a commitment not to circumvent technological measures like paywalls or subscription models that restrict access.
  • Identifying Rights Reservations: The policy must detail how the provider identifies and respects the reservation of rights by rightsholders. The primary method specified is adhering to machine-readable protocols, most notably the Robots Exclusion Protocol (robots.txt). This means if a website’s robots.txt file prohibits scraping for AI training, the provider must comply.
  • Mitigating Infringing Outputs: The policy must also cover measures to mitigate the risk of the model generating infringing content. This involves implementing appropriate technical safeguards and prohibiting copyright-infringing uses in the model’s acceptable use policy or terms of service.
  • Complaint Mechanism: Providers must designate a point of contact and establish a mechanism for rightsholders to submit complaints regarding non-compliance with these copyright commitments.
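
The Robots Exclusion Protocol check mentioned above is straightforward to honour programmatically; Python’s standard library already ships a parser. The crawler names and robots.txt content below are illustrative examples, not taken from any real site.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Parse an example robots.txt that blocks one AI crawler but allows everyone else:
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
])

print(rp.can_fetch("GPTBot", "https://example.com/article.html"))        # blocked
print(rp.can_fetch("SomeOtherBot", "https://example.com/article.html"))  # allowed
```

Under the Code of Practice, a provider’s crawler would run exactly this kind of check before collecting a page for training, and skip any URL where the rights reservation applies.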

These practical measures move beyond the high-level principles of the Act, giving providers a clearer roadmap for what is expected of them in their day-to-day operations.

Classification of general purpose AI with systemic risk

  1. A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions:
    • A) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;
    • B) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (A) having regard to the criteria set out in Annex XIII.
  2. A general purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (A), when the cumulative amount of computation used for its training, measured in floating-point operations, is greater than 10^25.
  3. The Commission shall adopt delegated acts in accordance with Article 97 to amend the thresholds listed in paragraphs 1 and 2 of this Article, as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to reflect the state of the art.

 

Guidance from the EU AI Office and details within the Act itself provide more concrete anchor points for this classification.

There are two primary ways a GPAI model can be classified as having systemic risk.

Pathway 1: The “High-Impact Capabilities” Threshold

The AI Act presumes a model has “high-impact capabilities” if its training involves a massive amount of computational power.

  • The Specific Threshold: This is met when the cumulative amount of computation used for training is greater than 10^25 floating-point operations (FLOPs).
  • Putting it in Context: This is not a trivial amount of computing. The EU AI Office notes that training a model at this scale is currently estimated to cost tens of millions of Euros. It is fair to assume that the most powerful, state-of-the-art models from major developers have already surpassed this threshold.
  • An Evolving Target: The Commission has the power to adopt delegated acts to amend this threshold in response to technological developments like algorithmic improvements or more efficient hardware.
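For back-of-the-envelope purposes only, training compute is often estimated with the community rule of thumb FLOPs ≈ 6 × parameters × training tokens. This heuristic is not part of the Act, and the model sizes below are illustrative numbers, not real systems:

```python
THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def training_flops(params: float, tokens: float) -> float:
    # Common rule of thumb: ~6 FLOPs per parameter per training token.
    return 6 * params * tokens

def presumed_high_impact(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) > THRESHOLD_FLOPS

# Illustrative: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")                      # 6.30e+24
print(presumed_high_impact(70e9, 15e12))   # False: just below the 1e25 line
```

The example shows why the threshold currently catches only the very largest training runs: even a 70B-parameter model on a huge corpus lands just under 10^25 by this estimate.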

Pathway 2: Commission Designation

Even if a model does not meet the 10^25 FLOPs threshold, it can still be designated as having systemic risk by the European Commission.

  • The Basis for Designation: This decision is made based on a set of criteria outlined in Annex XIII of the AI Act. These criteria allow the Commission to consider a more holistic view of the model’s potential impact.
  • Key Criteria in Annex XIII: The criteria are broad and include:
    • The number of parameters in the model.
    • The quality or size of the data set used for training, potentially measured in tokens.
    • The input and output modalities of the model (e.g., its ability to process text, voice, and video).
    • The number of registered end-users of the model.
    • Its performance on various benchmarks.

This second pathway gives the EU the flexibility to classify a model as systemic-risk even if it was trained with less computation, for instance, if it achieves a very wide reach in the market or has particularly sensitive capabilities. This confirms the video’s conclusion: the EU has significant latitude to bring GPAI models under the systemic risk category.

Obligations for GPAI with systemic risk

  • Model Evaluation and Adversarial Testing (Article 55(1)(a))
  • Risk Assessment and Mitigation (Article 55(1)(b))
  • Incident reporting and corrective measures (Article 55(1)(c))
  • Cybersecurity measures (Article 55(1)(d))
  • Reliance on codes of practice (Article 55(2))
  • Confidentiality obligations (Article 55(3))

 

The Safety and Security Chapter of the Code of Practice transforms these high-level duties into a detailed, continuous, and state-of-the-art operational process. It’s not just a checklist, but a comprehensive risk management system.

Providers of systemic risk models must adopt, implement, and continuously update a Safety and Security Framework. This framework is the provider’s documented plan for managing systemic risks across the model’s entire lifecycle. Here are its key pillars:

1. Comprehensive Risk Assessment and Mitigation

This is an iterative, multi-step process:

  • Systemic Risk Identification: Providers must systematically identify potential systemic risks, drawing from a specified list that includes risks to public security, health, fundamental rights, and society as a whole. This covers risks such as enabling chemical/biological weapons, loss of control over the model, enabling sophisticated cyber-attacks, and enabling harmful manipulation of behavior or beliefs.
  • Systemic Risk Analysis: Each identified risk must be analyzed through rigorous model evaluations, which we’ll detail below.
  • Systemic Risk Acceptance Determination: Providers must define “systemic risk acceptance criteria” within their framework to determine if a model’s risks are acceptable. If the risks are deemed unacceptable, the provider must implement further safety and security mitigations and re-assess the risk before proceeding.
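The accept-or-mitigate loop described above can be caricatured in a few lines. Everything here is a toy: the numeric risk scores, the single acceptance threshold, and the `halve` mitigation are illustrative stand-ins for a provider's real acceptance criteria and safety measures:

```python
from typing import Callable

def assess(risks: dict[str, float], acceptance_threshold: float,
           mitigate: Callable[[str, float], float]) -> dict[str, float]:
    """Re-assess each identified risk, applying further mitigations until it
    falls within the provider-defined acceptance criterion."""
    assessed = dict(risks)
    for name, score in assessed.items():
        while score > acceptance_threshold:
            score = mitigate(name, score)  # apply further safety/security mitigations
        assessed[name] = score
    return assessed

halve = lambda name, score: score / 2  # toy mitigation: each pass halves the score
print(assess({"cyber-offence": 0.8, "loss-of-control": 0.3}, 0.4, halve))
# {'cyber-offence': 0.4, 'loss-of-control': 0.3}
```

The point of the sketch is the control flow: a risk above the acceptance criterion cannot simply proceed; it loops through mitigation and re-assessment until it is acceptable.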

2. Rigorous Model Evaluation

This is a cornerstone of the framework. Evaluations must be state-of-the-art and go beyond simple testing.

  • Internal and External Testing: The framework mandates both internal evaluations and, crucially, evaluations by adequately qualified independent external evaluators.
  • Adversarial Testing: Providers must conduct and document adversarial testing (or “red-teaming”) designed to proactively expose vulnerabilities, biases, errors, or security flaws in the model.
  • Model Elicitation: Evaluations must be conducted with a high level of “model elicitation,” meaning they must use techniques to systematically enhance and measure the model’s full range of capabilities, minimizing the risk that the model is “sandbagging” or hiding its true abilities.

3. Serious Incident Reporting with Clear Timelines

While the video mentioned incident reporting, the Code of Practice provides specific, tiered reporting deadlines based on the incident’s severity. After becoming aware of an incident involving their model, providers must submit an initial report:

  • Within 2 days: For a serious disruption of critical infrastructure.
  • Within 5 days: For a serious cybersecurity breach, including model weight exfiltration.
  • Within 10 days: For the death of a person.
  • Within 15 days: For serious harm to a person’s health, fundamental rights, property, or the environment.

Follow-up and final reports are also required.
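Since the deadlines are fixed day counts from the moment of awareness, they are easy to encode. A minimal sketch; the incident-type keys are my own labels, not terms from the Code of Practice:

```python
from datetime import date, timedelta

# Initial-report deadlines (days after becoming aware of the incident).
DEADLINES = {
    "critical_infrastructure_disruption": 2,
    "serious_cybersecurity_breach": 5,   # incl. model weight exfiltration
    "death": 10,
    "serious_harm": 15,                  # health, fundamental rights, property, environment
}

def initial_report_due(incident_type: str, became_aware: date) -> date:
    """Latest date for the initial report, given the incident category."""
    return became_aware + timedelta(days=DEADLINES[incident_type])

print(initial_report_due("serious_cybersecurity_breach", date(2026, 8, 3)))
# 2026-08-08
```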

4. Robust Cybersecurity

Providers must ensure an adequate level of cybersecurity for the model and its physical infrastructure. The Code of Practice details specific objectives, including:

  • Protecting Unreleased Model Parameters: This involves using a secure internal registry for all copies of model parameters, encrypting them during transport and at rest, and using confidential computing where appropriate.
  • Hardening Interface Access: Limiting and reviewing any software interfaces that have access to model parameters to prevent data leakage or exfiltration.
  • Protecting Against Insider Threats: This includes background checks for personnel with access to model parameters and using sandboxes to reduce the risk of model self-exfiltration.
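A “secure internal registry for all copies of model parameters” can be approximated, at its simplest, as integrity tracking by cryptographic hash. The sketch below is a toy illustration of that single idea; a real deployment would add encryption at rest, access control, and audit logging:

```python
import hashlib

class ParameterRegistry:
    """Toy registry: track every stored copy of model parameters by SHA-256
    digest so unauthorized modification or untracked copies are detectable."""

    def __init__(self) -> None:
        self._entries: dict[str, str] = {}  # copy_id -> sha256 hex digest

    def register(self, copy_id: str, weights: bytes) -> str:
        digest = hashlib.sha256(weights).hexdigest()
        self._entries[copy_id] = digest
        return digest

    def verify(self, copy_id: str, weights: bytes) -> bool:
        # Unknown copy IDs and mismatched digests both fail verification.
        return self._entries.get(copy_id) == hashlib.sha256(weights).hexdigest()

reg = ParameterRegistry()
reg.register("prod-gpu-cluster", b"\x00\x01fake-weights")
print(reg.verify("prod-gpu-cluster", b"\x00\x01fake-weights"))  # True
print(reg.verify("prod-gpu-cluster", b"tampered"))              # False
```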

In summary, the obligations for systemic risk models demand a sophisticated, transparent, and continuous governance and risk management process that is deeply embedded in the provider’s organization.

Title VI+: Regulatory Bodies

EU AI Act Sandboxes: a safe space for innovation and testing – implemented on a national level

The sandbox concept, originating from the fintech sector, found its way into the EU AI Act. The idea is to carve out a supervised and controlled environment within the existing regulatory framework where innovative AI systems can be developed, tested, and validated before full-scale market launch.

  • Primarily meant for providers
  • Some exceptions to support SMEs
  • Some countries already have prototypes (such as the UK)
  • You either propose a real-world experiment to be approved or test within the sandbox
  • Semi-public EU database for high-risk systems

Extra Learnings

EU AI Act Action Plan: Getting your business ready for compliance

  1. Focus on “prohibited AI practices”. Run an informative session with a data/AI community on this topic specifically.
  2. Map systems where you are a deployer and where you are a provider.
  3. Keep informal documentation of everything that happens with regard to this topic. At many points, the AI Act mentions the need for this.
  4. Follow your national regulator. As soon as they form these governing bodies in any form, such as “sandboxes”, get in touch. Already now, ask your AI colleagues to come up with two toy use cases – one low-risk and one high-risk – and apply to this body as soon as possible to get contacts and a feel for how it will work.
  5. Dedicate multiple people (if you are a large company) from among developers and auditors (the personas mentioned) to get thoroughly informed on this topic.
  6. Don’t get defensive, e.g. “we want to avoid the AI Act by not using AI”. That would mean missing out on a huge competitive advantage.

AI Authorship & Ownership: Navigating Copyright in the EU AI Act Era

According to Section 3 of the Copyright Act No. 185/2015 Coll., a work of authorship is defined as follows:

(1) The subject matter of copyright is a work of literature, art, or science which is the unique result of the creative mental activity of the author perceptible to the senses, regardless of its form, content, quality, purpose, form of its expression or degree of its completion.

Co-authorship, as described under Section 15:

(1) Co-authors are two or more authors who have created a single work by creative intellectual activity in such a way that it is impossible to distinguish the individual authors’ creative contributions from each other and use them as separate works.