What is the EU AI Act?

The Artificial Intelligence Act (AI Act) is a landmark regulation introduced by the European Union to govern the development and use of artificial intelligence technologies.

What is the EU AI Act?

The AI Act entered into force on August 1, 2024, with full application starting August 2, 2026. Prohibited practices have been banned since February 2, 2025. The Act promotes innovation while safeguarding health, safety, and fundamental rights.

Regulation According to Risk Level

It applies to both providers and deployers of AI systems, with providers bearing the more extensive obligations. The Act takes a risk-based approach (a simplified classification sketch follows the list):

  • Prohibited practices: Includes cognitive behavioral manipulation, social scoring, untargeted scraping of facial images, and biometric inference of sensitive attributes.

  • High-risk AI systems: Used in areas such as critical infrastructure, human resources, credit scoring, or the administration of justice. These must meet strict requirements, including risk management, CE marking, and human oversight.

  • General-purpose AI models (GPAI): Includes the large language models (LLMs) underlying services such as ChatGPT. Providers must ensure transparency, copyright compliance, and cybersecurity.

  • Limited risk AI systems: Subject to transparency obligations, such as labeling AI-generated content and deep fakes.

  • Minimal or no risk: Includes AI-enabled video games and spam filters, which are not regulated under the AI Act.
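
Conceptually, these tiers form a simple decision ladder. The Python sketch below is a minimal illustration, assuming hypothetical internal use-case names and tier assignments; a real classification requires legal review against Article 5 and Annex III of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the AI Act's risk-based approach."""
    PROHIBITED = "prohibited"   # e.g. social scoring, manipulative techniques
    HIGH = "high"               # e.g. credit scoring, candidate screening
    LIMITED = "limited"         # transparency duties, e.g. chatbots, deep fakes
    MINIMAL = "minimal"         # e.g. spam filters, video-game AI

# Hypothetical mapping of internal use cases to tiers; a real classification
# requires legal review, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening": RiskTier.HIGH,
    "credit_rating": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed tier; unknown systems default to HIGH so they
    get reviewed instead of being silently treated as unregulated."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for uc in ("cv_screening", "spam_filter", "internal_copilot"):
    print(f"{uc}: {classify(uc).value}")
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice: it forces a review rather than letting an unclassified system pass as unregulated.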

Violations of the AI Act can result in fines up to EUR 35 million or 7% of global annual turnover, whichever is higher.

AI Literacy

Since February 2, 2025, providers and deployers must ensure a sufficient level of AI literacy among their staff. The Act does not mandate formal training, but training is the recommended way to meet this obligation.

Key considerations include the company's role, general understanding of AI, associated risks, and tailored literacy measures based on technical knowledge and context.

Implementation in Companies

The AI Act comprises 113 articles and 13 annexes, so implementation requires thorough planning and resources. Companies should conduct audits to assess the points below (a minimal inventory sketch follows the list):

  • Existence and categorization of AI systems

  • Operator type and intended purpose

  • Data processed and outputs generated

  • Compliance with prohibited practices and high-risk requirements

  • Transparency, cybersecurity, and human oversight obligations
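
As a hedged sketch of what such an audit could capture, the record type below models one inventory entry; the field names are illustrative assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry of a hypothetical AI-system audit inventory."""
    name: str
    operator_role: str         # "provider" or "deployer"
    intended_purpose: str
    data_processed: list[str]  # categories of input data
    outputs: list[str]         # e.g. "recommendations", "decisions"
    risk_tier: str             # result of the classification step
    human_oversight: bool
    transparency_notice: bool

inventory = [
    AISystemRecord(
        name="candidate-screening-tool",
        operator_role="deployer",
        intended_purpose="pre-selection of job applicants",
        data_processed=["CVs", "assessment scores"],
        outputs=["candidate rankings"],
        risk_tier="high",
        human_oversight=False,
        transparency_notice=True,
    ),
]

# First-pass check: flag high-risk systems missing human oversight.
for rec in inventory:
    if rec.risk_tier == "high" and not rec.human_oversight:
        print(f"REVIEW: {rec.name} lacks human oversight")
```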

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is a field of computer science that imitates human cognitive capabilities, for example by identifying and sorting input data. This intelligence can be based on programmed workflows or created through machine learning.

In machine learning, training data is used to teach the AI to recognize patterns and make predictions. The AI Act defines an AI system as a machine-based system that operates with varying levels of autonomy and generates outputs such as predictions, content, recommendations, or decisions.
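
A minimal example makes the pattern-recognition point concrete. The sketch below uses scikit-learn (an assumption; the Act references no particular library) to fit a classifier on toy labeled data and predict on unseen input.

```python
# Toy pattern-recognition example: learn from labeled training data,
# then predict on unseen input. Requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Training data: [operating_hours, error_rate] -> needs_maintenance (0/1)
X_train = [[10, 0.01], [200, 0.15], [50, 0.02], [300, 0.30]]
y_train = [0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[250, 0.20]]))  # -> [1]: maintenance predicted
```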

Examples of AI systems under the AI Act include emotion recognition, facial recognition, candidate selection, administration of justice, healthcare (e.g. symptom analysis), customer service, chatbots, and generative AI.

Generative AI, such as ChatGPT, refers to AI systems that autonomously generate results from input data using machine learning and large language models (LLMs). These systems can make mistakes and 'hallucinate', inventing plausible-sounding but inaccurate statements.

Data Protection

Use of AI systems involving personal data must comply with GDPR. Fines for breaches can reach EUR 20 million or 4% of global annual turnover, whichever is higher.

Companies must ensure lawful processing, respect data minimization, accuracy, and confidentiality, and fulfill information obligations.

Decisions based solely on automated processing that have legal or similarly significant effects generally require human involvement. Technical and organizational measures (TOMs) such as encryption and pseudonymization are essential.
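
As one illustrative TOM, pseudonymization can be implemented with a keyed hash, so records remain linkable without storing the raw identifier; the key name and record layout below are assumptions for illustration.

```python
import hashlib
import hmac

# Illustrative pseudonymization TOM: replace a direct identifier with a
# keyed hash (HMAC-SHA256). The key must be stored separately from the
# pseudonymized data so the mapping cannot be trivially reversed.
SECRET_KEY = b"keep-this-key-in-a-separate-vault"  # placeholder value

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "score": 0.87}
record["email"] = pseudonymize(record["email"])
print(record)  # the raw e-mail address no longer appears in the record
```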

A data protection impact assessment is required for high-risk processing.

Protection of Trade Secrets

Trade secrets must be protected against unlawful acquisition and disclosure. Requirements include confidentiality measures, access restrictions, and NDAs.

AI systems, training data, and output may constitute trade secrets. Companies must regulate input usage and review third-party terms to avoid disclosure risks.

Copyright issues arise on both input and output sides of AI systems. Use of protected content for training is under legal scrutiny.

Under current law, purely AI-generated works generally lack copyright protection because they are not human creations; such output is therefore typically in the public domain.

Who Is Liable for AI-Related Defects?

Companies are liable for defects in products and services, including those caused by AI.

The reformed EU Product Liability Directive imposes strict liability for defective AI systems and components, covering personal injury, property damage, and data corruption.

What Should Be Reviewed in AI System Terms of Use?

Companies must review third-party AI system terms, focusing on:

  • Applicable law and jurisdiction

  • Storage and use of input for training

  • Rights to output

  • Indemnification against copyright claims

  • Warranty and liability limitations

What Guidelines Should Companies Follow for AI Use?

Internal AI guidelines help regulate employees' use of AI systems (a machine-readable sketch follows the list). These may include:

  • Descriptions and authorizations of AI systems

  • Instructions for input and output handling

  • Confidentiality and data protection compliance

  • Cybersecurity measures and transparency obligations
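
One way to operationalize such a guideline is a machine-readable policy; in the minimal sketch below, all tool names and fields are hypothetical.

```python
# Hypothetical machine-readable AI usage guideline: which tools employees
# may use and how input and output must be handled.
AI_USAGE_POLICY = {
    "internal-chat-assistant": {
        "authorized": True,
        "confidential_input_allowed": False,  # no trade secrets in prompts
        "output_review_required": True,       # human check before reuse
        "label_output_as_ai": True,           # transparency obligation
    },
    "public-image-generator": {
        "authorized": False,
        "reason": "provider's terms grant training rights on input",
    },
}

def may_use(tool: str) -> bool:
    """Unlisted tools are denied by default."""
    return AI_USAGE_POLICY.get(tool, {}).get("authorized", False)

print(may_use("internal-chat-assistant"))  # True
print(may_use("shadow-it-tool"))           # False
```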

EU AI Act Summary

The EU AI Act will largely apply from August 2, 2026, and must be implemented by companies using AI. It regulates AI providers and deployers through a risk-based approach: the higher the risk of societal harm, the stricter the rules.

  • Compliance with GDPR is mandatory when processing personal data using AI systems.

  • AI systems must be safeguarded against unauthorized access and cyber attacks.

  • Trade secrets must be protected when using AI systems.

  • Copyright issues on both input and output sides are under legal scrutiny.

  • Companies are liable for defects in products and services caused by AI.

  • Terms of use of third-party AI systems must be reviewed carefully.

  • AI literacy among employees should be promoted through internal guidelines.

How does Trend Micro support compliance with the AI Act?

Staying compliant with the EU AI Act means more than just understanding the rules—it requires active governance, risk monitoring, and clear accountability across your AI systems. From copyright and liability to terms of use and internal guidelines, organizations must ensure that every aspect of AI deployment aligns with evolving legal standards.

To support this, companies can leverage Trend Micro’s Cyber Risk Exposure Management platform, which is designed to help identify vulnerabilities, manage AI-related risks, and maintain trust across digital operations while reducing exposure to cyber risk.

Frequently Asked Questions (FAQs)

What is the EU AI Act?

The EU AI Act is a regulation governing artificial intelligence systems to ensure safety, transparency, and fundamental rights protection.

When does the EU AI Act come into force?

The EU AI Act entered into force on August 1, 2024, with full application from August 2, 2026, across all EU member states.

Who does the EU AI Act apply to?

The EU AI Act applies to providers, deployers, importers, and distributors of AI systems operating within or targeting the European Union market.

When was the EU AI Act passed?

The EU AI Act was adopted by the European Parliament in March 2024 after extensive negotiations and stakeholder consultations, and entered into force in August 2024.

How to comply with the EU AI Act?

To comply, organizations must classify AI systems by risk, ensure transparency, conduct conformity assessments, and maintain documentation.