The Artificial Intelligence Act (AI Act) is a landmark regulation introduced by the European Union to govern the development and use of artificial intelligence technologies.
The EU AI Act is a comprehensive regulatory framework that governs how artificial intelligence systems are developed, placed on the market, and used within the European Union. It applies to both AI providers and deployers and establishes clear legal obligations based on how AI systems impact individuals, society, and fundamental rights. The Act promotes innovation while safeguarding health, safety, and fundamental rights by setting requirements for transparency, risk management, human oversight, data governance, and cybersecurity across the AI lifecycle.
The EU AI Act uses a risk-based approach, meaning AI systems are regulated according to the level of risk they pose to individuals and society. The higher the potential risk, the stricter the obligations imposed on providers and deployers.
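As a rough illustration of this tiering, the sketch below maps the Act's four commonly cited risk categories (unacceptable, high, limited, minimal) to the broad level of obligation each carries. The mapping is a simplified summary for orientation, not a restatement of the Act's legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, banned outright
    HIGH = "high"                  # strict obligations before market placement
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no specific obligations under the Act

# Illustrative mapping from risk tier to the broad obligation level;
# the detailed obligations are defined in the Act itself.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Risk management, data governance, documentation, "
                   "human oversight, conformity assessment.",
    RiskTier.LIMITED: "Transparency duties, e.g. disclosing that users "
                      "are interacting with an AI system.",
    RiskTier.MINIMAL: "No mandatory requirements; voluntary codes of "
                      "conduct are encouraged.",
}

print(OBLIGATIONS[RiskTier.HIGH])
```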
Violations of the AI Act can result in fines up to EUR 35 million or 7% of global annual turnover, whichever is higher.
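A quick worked example of that cap: for a company with EUR 1 billion in global annual turnover, 7% is EUR 70 million, which exceeds the EUR 35 million floor, so the higher figure applies. The helper below is a minimal sketch of the "whichever is higher" rule.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine: EUR 35 million or 7% of
    global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# EUR 1 billion turnover: 7% = EUR 70 million > EUR 35 million floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
```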
Since 2 February 2025, providers and deployers have been required to ensure a sufficient level of AI literacy among their staff. Formal training is recommended but not mandatory.
Key considerations include the company's role under the Act, employees' general understanding of AI, the risks associated with the systems in use, and literacy measures tailored to technical knowledge and context of use.
Implementing the EU AI Act requires organisations to translate regulatory obligations into operational controls. Audits play a critical role in this process by helping companies understand how AI systems are used, where risks exist, and what compliance measures are required.
In practice, these assessments form the foundation for AI governance programs and enable organisations to prioritise compliance efforts based on risk and regulatory exposure.
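One hedged sketch of what such an assessment might produce: a simple inventory of AI systems with hypothetical fields (risk tier, personal-data use, EU exposure) that can be sorted to prioritise compliance work. The field names and records are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str          # e.g. "high", "limited", "minimal"
    processes_personal_data: bool
    eu_market_exposure: bool

# Hypothetical inventory gathered during an audit.
inventory = [
    AISystemRecord("cv-screening", "candidate selection", "high", True, True),
    AISystemRecord("support-chatbot", "customer service", "limited", True, True),
    AISystemRecord("log-anomaly", "internal operations", "minimal", False, False),
]

# Prioritise remediation: high-risk, personal-data, EU-facing systems first.
order = {"high": 0, "limited": 1, "minimal": 2}
for rec in sorted(inventory, key=lambda r: (order[r.risk_tier],
                                            not r.processes_personal_data,
                                            not r.eu_market_exposure)):
    print(rec.name, rec.risk_tier)
```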
The EU AI Act follows a phased implementation timeline, giving organisations time to adapt their AI governance, risk management, and compliance programs. Key milestones include when the regulation entered into force, when specific obligations apply, and when full compliance is required.
1 August 2024: The EU AI Act enters into force, officially becoming EU law.
2 February 2025: The ban on prohibited AI practices takes effect, and national authorities responsible for enforcement are appointed.
2 August 2025: Rules for general-purpose AI (GPAI) models and related governance obligations begin to apply.
2 August 2026: The AI Act becomes fully applicable; compliance obligations apply across all AI risk categories, including high-risk systems.
2 August 2027: High-risk AI systems embedded in regulated products must be fully compliant with EU AI Act requirements.
This phased approach is intended to balance innovation with legal certainty, allowing organisations to progressively implement governance, technical safeguards, and oversight mechanisms based on AI risk.
Artificial intelligence (AI) is an area of computer science that imitates human cognitive capabilities, for example by identifying patterns in input data and classifying it. This intelligence can be based on programmed workflows or created with machine learning.
In machine learning, training data is used to teach the AI to recognise patterns and make predictions. The AI Act defines an AI system as a machine-based system that operates with varying levels of autonomy and generates outputs such as predictions, content, recommendations, or decisions.
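To make the machine-learning half of that definition concrete, here is a toy example (using scikit-learn, with entirely hypothetical data) in which a model learns a pattern from labelled training examples and then produces a prediction for unseen input:

```python
# Toy illustration of "training data teaches the model to recognise
# patterns": a classifier learns from labelled examples, then predicts
# a label for unseen input. Requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [years_of_experience, skills_matched]
X_train = [[1, 2], [2, 3], [5, 8], [7, 9], [0, 1], [6, 7]]
y_train = ["reject", "reject", "invite", "invite", "reject", "invite"]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[4, 6]]))  # e.g. ['invite'] - a prediction, not a certainty
```

A production system making decisions like this for candidate selection would typically fall into the Act's high-risk category, as reflected in the examples that follow.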
Examples of AI systems under the AI Act include emotion recognition, facial recognition, candidate selection, administration of justice, healthcare (e.g. symptom analysis), customer service, chatbots, and generative AI.
Generative AI, such as ChatGPT, refers to AI systems that autonomously generate output from input data using machine learning and large language models (LLMs). These systems can make mistakes and 'hallucinate', inventing plausible-sounding but inaccurate statements.
Use of AI systems involving personal data must comply with the GDPR and incorporate data loss prevention practices. Fines for breaches can reach EUR 20 million or 4% of global annual turnover, whichever is higher.
Companies must ensure lawful processing, respect the principles of data minimisation, accuracy, and confidentiality, and fulfil their information obligations.
Automated decisions with legal effects must involve human discretion. Technical and organisational measures (TOMs) such as encryption and pseudonymisation are essential.
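As one minimal sketch of such a measure, the snippet below pseudonymises a direct identifier with a keyed hash using only Python's standard library; proper key management and separate storage of the key are assumed rather than shown.

```python
# Pseudonymisation as a technical measure (TOM): replace a direct
# identifier with a keyed hash. The key must be stored separately so
# the data cannot be trivially re-identified.
import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately"  # hypothetical key management

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("jane.doe@example.com"))  # stable token, no plain identifier
```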
A data protection impact assessment is required for high-risk processing.
Trade secrets must be protected against unlawful acquisition and disclosure. Requirements include confidentiality measures, access restrictions, and NDAs.
AI systems, training data, and output may constitute trade secrets. Companies must regulate input usage and review third-party terms to avoid disclosure risks.
Copyright issues arise on both the input and output sides of AI systems. The use of protected content for training is under legal scrutiny.
Purely AI-generated works lack copyright protection under current law, as they are not human creations. This means such output is generally in the public domain.
The EU AI Act applies broadly to organisations involved in the development, distribution, or use of AI systems that affect the EU market, regardless of where the organisation is headquartered.
Liable parties include providers, deployers, importers, and distributors of AI systems.
Within organisations, responsibility for AI Act compliance typically spans several functions.
The Act holds these parties accountable through enforcement mechanisms, including fines, market restrictions, and liability exposure for non-compliance or harm caused by AI systems.
The EU AI Act does not directly apply in the UK, as the UK is no longer part of the European Union. However, UK-based organisations may still be affected if they develop, sell, or deploy AI systems that are used within the EU or impact EU individuals.
In such cases, UK companies may be required to comply with the EU AI Act because of its extraterritorial reach. This makes alignment with EU AI Act requirements relevant for UK organisations operating internationally or serving EU markets, particularly in regulated or high-risk use cases.
Companies must review third-party AI system terms, focusing on:
Applicable law and jurisdiction
Storage and use of input for training
Rights to output
Indemnification against copyright claims
Warranty and liability limitations
Internal AI guidelines help regulate employee use of AI systems (a simplified policy sketch follows the list below). These may include:
Descriptions and authorisations of AI systems
Instructions for input and output handling
Confidentiality and data protection compliance
Cybersecurity measures and transparency obligations
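Such guidelines can also be captured in machine-readable form so that tooling can check them automatically. The sketch below is hypothetical: the system names, input categories, and rules are illustrative placeholders, not requirements taken from the Act.

```python
# A hypothetical, simplified internal AI-use policy expressed as data,
# so tooling can check whether a given use of an AI system is authorised.
POLICY = {
    "approved_systems": {"internal-chatbot", "code-assistant"},
    "forbidden_inputs": {"personal_data", "trade_secrets"},
    "output_review_required": True,
}

def use_allowed(system: str, input_categories: set[str]) -> bool:
    """Allow use only of approved systems fed with permitted input types."""
    return (system in POLICY["approved_systems"]
            and not input_categories & POLICY["forbidden_inputs"])

print(use_allowed("code-assistant", {"public_docs"}))      # True
print(use_allowed("internal-chatbot", {"personal_data"}))  # False
```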
The EU AI Act will largely apply from 2 August 2026 and must be implemented by companies using AI. It regulates AI providers and deployers through a risk-based approach: the higher the risk of societal harm, the stricter the rules.
Compliance with GDPR is mandatory when processing personal data using AI systems.
AI systems must be safeguarded against unauthorised access and cyber attacks.
Trade secrets must be protected when using AI systems.
Copyright issues on both input and output sides are under legal scrutiny.
Companies are liable for defects in products and services caused by AI.
Terms of use of third-party AI systems must be reviewed carefully.
AI literacy among employees should be promoted through internal guidelines.
Staying compliant with the EU AI Act means more than just understanding the rules—it requires active governance, risk monitoring, and clear accountability across your AI systems. From copyright and liability to terms of use and internal guidelines, organisations must ensure that every aspect of AI deployment aligns with evolving legal standards.
To support this, companies can use Trend Micro’s Cyber Risk Exposure Management platform, designed to help you identify vulnerabilities, manage AI-related risks, and maintain trust across your digital operations.
The EU AI Act is a regulation governing artificial intelligence systems to ensure safety, transparency, and fundamental rights protection.
The EU AI Act entered into force on 1 August 2024, with full application from 2 August 2026 across all EU member states.
The EU AI Act applies to providers, deployers, and importers of AI systems operating within or targeting the European Union market.
The EU AI Act was passed by the European Parliament in 2024 after extensive negotiations and stakeholder consultations.
To comply, organisations must classify AI systems by risk, ensure transparency, conduct conformity assessments, and maintain documentation.