What is the EU AI Act?

The Artificial Intelligence Act (AI Act) is a landmark regulation introduced by the European Union to govern the development and use of artificial intelligence technologies.

The EU AI Act is a comprehensive regulatory framework that governs how artificial intelligence systems are developed, placed on the market, and used within the European Union. It applies to both AI providers and deployers and establishes clear legal obligations based on how AI systems impact individuals, society, and fundamental rights. The Act promotes innovation while safeguarding health, safety, and fundamental rights by setting requirements for transparency, risk management, human oversight, data governance, and cybersecurity across the AI lifecycle.

EU AI Act risk categories

The EU AI Act uses a risk-based approach, meaning AI systems are regulated according to the level of risk they pose to individuals and society. The higher the potential risk, the stricter the obligations imposed on providers and deployers.

  • Prohibited practices
    AI systems that pose an unacceptable risk are banned outright. These include cognitive behavioural manipulation, social scoring by public authorities, indiscriminate scraping of facial images, and biometric inference of sensitive attributes such as political beliefs or sexual orientation.
  • High-risk AI systems
    High-risk systems are those used in sensitive or regulated areas such as critical infrastructure, employment and HR, creditworthiness assessments, education, healthcare, law enforcement, border control, and the administration of justice. These systems must meet strict requirements, including risk management processes, high-quality training data, technical documentation, human oversight, cybersecurity safeguards, and CE marking before being placed on the market.
  • General-purpose AI models (GPAI)
    This category includes large-scale models such as large language models (LLMs). Providers must meet transparency obligations, respect copyright rules, document training practices, and implement measures to address systemic risks, particularly for more capable or widely deployed models.
  • Limited-risk AI systems
    These systems are subject primarily to transparency obligations. Users must be informed when they are interacting with an AI system, such as a chatbot, or when content is AI-generated, such as deepfakes, unless an exception applies.
  • Minimal or no-risk AI systems
    AI applications that pose little to no risk, such as AI-enabled video games or spam filters, are largely exempt from regulation under the AI Act.
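
To make the tiering concrete, the sketch below shows how an organisation might seed an internal triage tool in Python. The use-case strings and the default-to-high-risk rule are illustrative assumptions, not the Act's legal test, which turns on the prohibited-practice and high-risk definitions summarised above.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act's risk-based approach."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical lookup of use cases to tiers; real classification requires
# legal analysis of the Act, not a dictionary.
USE_CASE_TIERS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "cv screening for recruitment": RiskTier.HIGH,
    "creditworthiness assessment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown systems to high-risk so they get reviewed,
    # not waved through.
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)

print(classify("Spam filter"))                  # RiskTier.MINIMAL
print(classify("Emotion recognition at work"))  # RiskTier.HIGH (unknown -> review)
```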

Violations of the AI Act can result in fines up to EUR 35 million or 7% of global annual turnover, whichever is higher.
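
As a worked example of how that cap behaves, the helper below (a hypothetical function, not part of any official tooling) computes the maximum fine for the most severe tier:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for the most severe violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For EUR 1 billion turnover, 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For a smaller provider, the EUR 35M figure is the binding cap.
print(max_fine_eur(10_000_000))     # 35000000.0
```

The Act also defines lower fine tiers for less severe breaches; the same whichever-is-higher pattern applies with smaller figures.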

EU AI Act and AI Literacy

Since February 2, 2025, providers and deployers must ensure a sufficient level of AI literacy among staff dealing with AI systems. The Act does not mandate formal training, but training is the recommended way to meet this obligation.

Key considerations include the company's role (provider or deployer), staff members' general understanding of AI and its associated risks, and literacy measures tailored to their technical knowledge and the context in which the AI systems are used.

Implementation in Companies

Implementing the EU AI Act requires organisations to translate regulatory obligations into operational controls. Audits play a critical role in this process by helping companies understand how AI systems are used, where risks exist, and what compliance measures are required.

These audits are intended to:

  • Identify and classify AI systems according to the Act’s risk categories
  • Determine whether the organisation acts as a provider, deployer, or both
  • Assess how AI systems process data and generate outputs
  • Identify gaps in transparency, human oversight, cybersecurity, and risk management
  • Inform remediation plans, governance structures, and internal controls needed for compliance

In practice, these assessments form the foundation for AI governance programs and enable organisations to prioritise compliance efforts based on risk and regulatory exposure.
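
A minimal sketch of such an audit inventory, assuming a simple in-house Python record rather than any particular GRC product, could look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI system inventory built during an audit."""
    name: str
    org_role: str                      # "provider", "deployer", or "both"
    risk_tier: str                     # result of the risk classification
    processes_personal_data: bool
    compliance_gaps: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("CV screening tool", "deployer", "high-risk", True,
                   ["no human-oversight procedure", "no DPIA on file"]),
    AISystemRecord("Website chatbot", "both", "limited-risk", True,
                   ["missing AI-interaction disclosure"]),
]

# Prioritise remediation: high-risk systems with open gaps come first.
for record in inventory:
    if record.risk_tier == "high-risk" and record.compliance_gaps:
        print(record.name, "->", record.compliance_gaps)
```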

EU AI Act Timeline

The EU AI Act follows a phased implementation timeline, giving organisations time to adapt their AI governance, risk management, and compliance programs. Key milestones include when the regulation entered into force, when specific obligations apply, and when full compliance is required.

Key EU AI Act dates

  • 1 August 2024: The EU AI Act enters into force, officially becoming EU law.
  • 2 February 2025: Prohibited AI practices are banned, and AI literacy obligations begin to apply.
  • 2 August 2025: Rules for general-purpose AI (GPAI) models and related governance obligations begin to apply, and member states must designate the national authorities responsible for enforcement.
  • 2 August 2026: The AI Act is fully applicable. Compliance obligations apply across all AI risk categories, including high-risk systems.
  • 2 August 2027: High-risk AI systems embedded in regulated products must be fully compliant with EU AI Act requirements.

This phased approach is intended to balance innovation with legal certainty, allowing organisations to progressively implement governance, technical safeguards, and oversight mechanisms based on AI risk.
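
For illustration, the milestones above can be encoded so that internal tooling can report which obligations already apply on a given day. The labels below are shortened paraphrases of the timeline, not legal text:

```python
from datetime import date

# Shortened paraphrases of the milestones listed above.
MILESTONES = [
    (date(2024, 8, 1), "Act in force"),
    (date(2025, 2, 2), "Prohibited practices banned; AI literacy duty applies"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "Act fully applicable, including high-risk systems"),
    (date(2027, 8, 2), "High-risk AI in regulated products fully compliant"),
]

def obligations_in_effect(today: date) -> list[str]:
    """Return the milestone obligations already applicable on a given date."""
    return [label for when, label in MILESTONES if today >= when]

print(obligations_in_effect(date(2025, 6, 1)))
# ['Act in force', 'Prohibited practices banned; AI literacy duty applies']
```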

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is an area of computer science in which systems imitate human cognitive capabilities, for example by identifying patterns in input data and drawing conclusions from them. This intelligence can be based on programmed workflows or created with machine learning.

In machine learning, training data is used to teach the AI to recognise patterns and make predictions. The AI Act defines an AI system as a machine-based system that operates with varying levels of autonomy and generates outputs such as predictions, content, recommendations, or decisions.
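
As a toy illustration of that training loop (assuming scikit-learn is installed, and using invented data), a model learns a pattern from labelled examples and then predicts labels for unseen inputs:

```python
from sklearn.linear_model import LogisticRegression

# Invented training data: one numeric feature per example,
# labelled 0 or 1 (e.g. "not fraud" / "fraud").
X_train = [[0.1], [0.4], [0.5], [0.9], [1.2], [2.0]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)            # learn the pattern from examples

print(model.predict([[0.3], [1.5]]))   # predictions for unseen inputs: [0 1]
```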

Examples of AI systems under the AI Act include emotion recognition, facial recognition, candidate selection, administration of justice, healthcare (e.g. symptom analysis), customer service, chatbots, and generative AI.

Generative AI, such as ChatGPT, refers to AI systems that autonomously generate results based on input data using machine learning and large language models (LLMs). These systems can make mistakes and 'hallucinate'—inventing probable but inaccurate statements.

Data Protection

Any use of AI systems involving personal data must comply with the GDPR, and companies should implement data loss prevention practices. Fines for breaches can reach EUR 20 million or 4% of global annual turnover, whichever is higher.

Companies must ensure lawful processing; respect data minimisation, accuracy, and confidentiality; and fulfil information obligations.

Automated decisions with legal effects must involve human discretion. Technical and organisational measures (TOM) like encryption and pseudonymisation are essential.
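
As one hedged illustration of such a measure, the snippet below pseudonymises a direct identifier with a keyed hash (HMAC-SHA256). The key shown is a placeholder; in practice it must be generated securely and stored separately from the data:

```python
import hashlib
import hmac

SECRET_KEY = b"placeholder-key-managed-outside-the-code"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash. Without the key,
    the mapping cannot be reversed, so the data is pseudonymised
    (not anonymised) in GDPR terms."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("jane.doe@example.com"))
```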

A data protection impact assessment is required for high-risk processing.

Protection of Trade Secrets

Trade secrets must be protected against unlawful acquisition and disclosure. Requirements include confidentiality measures, access restrictions, and NDAs.

AI systems, training data, and output may constitute trade secrets. Companies must regulate input usage and review third-party terms to avoid disclosure risks.
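
One practical way to regulate input usage is to screen prompts before they leave the company for a third-party AI service. The patterns below are fictional placeholders; a real deployment would use the organisation's own classification labels and a dedicated DLP tool:

```python
import re

# Fictional patterns for content that must not be sent to external AI tools.
BLOCKED_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bproject\s+orion\b", re.IGNORECASE),  # invented codename
]

def safe_to_submit(prompt: str) -> bool:
    """Return True only if no blocked pattern appears in the prompt."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

print(safe_to_submit("Summarise this public press release."))    # True
print(safe_to_submit("Draft pricing notes for Project Orion."))  # False
```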

Copyright issues arise on both input and output sides of AI systems. Use of protected content for training is under legal scrutiny.

AI-generated works lack copyright protection under current law, as they are not human creations. This means such output is in the public domain.

Who does the EU AI Act Apply to?

The EU AI Act applies broadly to organisations involved in the development, distribution, or use of AI systems that affect the EU market, regardless of where the organisation is headquartered.

Liable parties include:

  • AI providers, such as companies that develop or place AI systems or general-purpose AI models on the EU market
  • AI deployers, including organisations that use AI systems in business operations, decision-making, or customer-facing services
  • Importers and distributors that bring AI systems into the EU supply chain
  • Product manufacturers that integrate AI into regulated products or services

Within organisations, responsibility typically falls across:

  • Executive leadership responsible for governance and risk oversight
  • Legal and compliance teams managing regulatory obligations
  • IT, security, and data teams responsible for implementation, monitoring, and safeguards

The Act holds these parties accountable through enforcement mechanisms, including fines, market restrictions, and liability exposure for non-compliance or harm caused by AI systems.

Does the EU AI Act apply to the UK?

The EU AI Act does not directly apply in the UK, as the UK is no longer part of the European Union. However, UK-based organisations may still be affected if they develop, sell, or deploy AI systems that are used within the EU or impact EU individuals.

In such cases, UK companies may be required to comply with the EU AI Act, because the regulation applies extraterritorially. This makes alignment with EU AI Act requirements relevant for UK organisations operating internationally or serving EU markets, particularly in regulated or high-risk use cases.

What Should Be Reviewed in AI System Terms of Use?

Companies must review third-party AI system terms, focusing on:

  • Applicable law and jurisdiction

  • Storage and use of input for training

  • Rights to output

  • Indemnification against copyright claims

  • Warranty and liability limitations

What Guidelines Should Companies Follow for EU AI Act Compliance?

Internal AI guidelines help regulate employee use of AI systems. These may include:

  • Descriptions and authorisations of AI systems

  • Instructions for input and output handling

  • Confidentiality and data protection compliance

  • Cybersecurity measures and transparency obligations
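
To show how such a guideline might be enforced in software, here is a minimal sketch built around a hypothetical allow-list; the system names and policy fields are invented for illustration:

```python
# Hypothetical allow-list derived from an internal AI usage guideline.
APPROVED_AI_SYSTEMS = {
    "internal-chatbot":   {"personal_data": False, "confidential_input": True},
    "public-llm-service": {"personal_data": False, "confidential_input": False},
}

def usage_permitted(system: str, personal_data: bool, confidential: bool) -> bool:
    """Check a planned use against the internal guideline."""
    policy = APPROVED_AI_SYSTEMS.get(system)
    if policy is None:
        return False  # unapproved tools are blocked by default
    if personal_data and not policy["personal_data"]:
        return False
    if confidential and not policy["confidential_input"]:
        return False
    return True

print(usage_permitted("public-llm-service", False, True))  # False: no secrets to public tools
print(usage_permitted("internal-chatbot", False, True))    # True
```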

EU AI Act Summary

The EU AI Act will largely apply from August 2, 2026, and must be implemented by companies using AI. It regulates AI providers and deployers through a risk-based approach: the higher the risk of societal harm, the stricter the rules.

  • Compliance with GDPR is mandatory when processing personal data using AI systems.

  • AI systems must be safeguarded against unauthorised access and cyber attacks.

  • Trade secrets must be protected when using AI systems.

  • Copyright issues on both input and output sides are under legal scrutiny.

  • Companies are liable for defects in products and services caused by AI.

  • Terms of use of third-party AI systems must be reviewed carefully.

  • AI literacy among employees should be promoted through internal guidelines.

How does Trend Micro support compliance with the AI Act?

Staying compliant with the EU AI Act means more than just understanding the rules—it requires active governance, risk monitoring, and clear accountability across your AI systems. From copyright and liability to terms of use and internal guidelines, organisations must ensure that every aspect of AI deployment aligns with evolving legal standards.

To support this, companies can use Trend Micro’s Cyber Risk Exposure Management platform, which is designed to help you identify vulnerabilities, manage AI-related risks, and maintain trust across your digital operations while simplifying compliance and reducing exposure to cyber risks.

Frequently Asked Questions (FAQs)

What is the EU AI Act?

The EU AI Act is a regulation governing artificial intelligence systems to ensure safety, transparency, and fundamental rights protection.

When does the EU AI Act come into force?

The EU AI Act entered into force on 1 August 2024, with obligations applying in stages and full application from 2 August 2026 across all EU member states.

Who does the EU AI Act apply to?

The EU AI Act applies to providers, deployers, importers, and distributors of AI systems operating within or targeting the European Union market.

When was the EU AI Act passed?

The EU AI Act was adopted by the European Parliament in March 2024, after extensive negotiations and stakeholder consultations, and entered into force on 1 August 2024.

How to comply with the EU AI Act?

To comply, organisations must classify AI systems by risk, ensure transparency, conduct conformity assessments, and maintain documentation.

What penalties exist for non-compliance with the EU AI Act?

Non-compliance with the EU AI Act can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.