EU AI Act Articles

Deepening the Understanding of Key Governance Areas in the EU AI Act

Risk Levels in the EU AI Act

The EU AI Act uses a risk-based approach, categorizing AI systems into four risk levels:

  1. Minimal Risk: AI applications that fall here can be developed and used with minimal compliance requirements.

  2. Limited Risk: AI systems must comply with transparency obligations to inform users when they are interacting with an AI (e.g., chatbots).

  3. High Risk: This category includes AI systems used in sectors such as finance, healthcare, education, employment, law enforcement, migration and border control, and transport. These systems must meet stringent requirements for transparency, data governance, and robustness before they can be deployed.

  4. Unacceptable Risk: Certain uses of AI are banned outright due to the severe risks they pose, such as social scoring systems or AI that manipulates human behavior.
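The four-tier structure above can be sketched as a simple classification lookup. This is a minimal illustration only: the use-case-to-tier mapping below is hypothetical and not an official list from the Act, and real classification requires legal analysis of the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The Act's four risk categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative mapping of example use cases to tiers (NOT an official list).
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier; unknown or prohibited cases are escalated."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"Unclassified use case {use_case!r}: needs legal review")
    if tier is RiskTier.UNACCEPTABLE:
        raise RuntimeError(f"{use_case!r} is a prohibited practice")
    return tier
```

The point of the sketch is that the tier, not the technology, drives the compliance burden: everything downstream (documentation, logging, oversight) keys off this classification.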

High-Risk AI Systems

High-risk AI systems are subject to rigorous compliance obligations under the EU AI Act. These obligations include:

  • Technical Documentation (Article 11)

    • Article 11 requires that each high-risk AI system be accompanied by comprehensive technical documentation. This documentation must provide all necessary details about the system's design, development, and deployment, including descriptions of the algorithm, training methodologies, data used, and functionalities, so that the system is transparent and can be assessed for compliance with the relevant requirements.
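In practice, teams often keep this documentation as a machine-readable record versioned alongside each release. The sketch below is illustrative only: the field names are our own shorthand for a small subset of the required information, not terms defined by the Act.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TechnicalDocumentation:
    """Illustrative subset of technical-documentation fields (names are ours)."""
    system_name: str
    intended_purpose: str
    algorithm_description: str
    training_data_sources: list
    known_limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the record so it can be archived with each release."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example record for a fictional system.
doc = TechnicalDocumentation(
    system_name="loan-scoring-v3",
    intended_purpose="Creditworthiness assessment",
    algorithm_description="Gradient-boosted decision trees",
    training_data_sources=["internal loan book 2015-2023"],
    known_limitations=["Not validated for SME lending"],
)
```

Keeping the record structured means it can be regenerated and diffed on every release, rather than maintained as a free-form document that drifts out of date.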
  • Risk Management (Article 9)

    • Article 9 requires that high-risk AI systems have a comprehensive risk management system in place. This system must be continuously updated and must assess the risks associated with the use, handling, and outcomes of AI systems. The article emphasizes that risk management is an iterative process running across the system's entire lifecycle, reflecting changes in the operational environment and new knowledge about potential risks.
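One common way to operationalise an iterative risk process is a risk register that is re-scored each review cycle. The sketch below assumes a simple likelihood-times-severity scheme with a hypothetical action threshold; the Act does not prescribe any particular scoring method.

```python
# Hypothetical action threshold: scores at or above this need mitigation.
RISK_THRESHOLD = 6

def assess(risks):
    """Score each risk (likelihood x severity, both 1-3) and flag those
    that need mitigation in the current review cycle."""
    flagged = []
    for risk in risks:
        score = risk["likelihood"] * risk["severity"]
        if score >= RISK_THRESHOLD:
            flagged.append({**risk, "score": score})
    return flagged

# Illustrative register; likelihoods are re-estimated every cycle,
# which is what makes the process iterative.
register = [
    {"id": "R1", "desc": "Training data drift", "likelihood": 3, "severity": 2},
    {"id": "R2", "desc": "Mislabelled edge cases", "likelihood": 1, "severity": 3},
]
```

Re-running `assess()` whenever the operational environment changes, rather than once at design time, is what the iterative requirement amounts to in engineering terms.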
  • Logging and Record-Keeping (Articles 12 and 19)

    • Articles 12 and 19 specify the obligations for automatically generating and retaining logs that are detailed enough to reconstruct a high-risk AI system's activities. These logs are vital for the traceability of outputs and for accountability in case of adverse effects. The logs must be managed in a way that respects data protection regulations, ensuring that personal data is handled securely and ethically.
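A structured, append-only event log is one way to make outputs reconstructible while limiting personal-data exposure. The sketch below is an assumption-laden illustration: the field names are our own, and hashing the inputs (rather than storing them) is one possible design choice for the data-protection concern, not something the Act mandates.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")

def log_decision(system_id: str, inputs: dict, output, model_version: str) -> dict:
    """Record one decision with enough context to reconstruct it later.
    Inputs are hashed rather than stored verbatim, limiting exposure of
    personal data while still allowing an input record to be matched."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    logger.info(json.dumps(entry))  # emit as one structured line per event
    return entry
```

Recording the model version alongside each decision is what makes later reconstruction possible: the same inputs against a different version can legitimately produce a different output.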
  • Data Governance (Article 10)

    • Article 10 focuses on the quality and control of data used to train high-risk AI systems. It mandates that training, validation, and testing data sets be relevant, sufficiently representative and, to the best extent possible, free of errors and complete. Additionally, the data must be gathered and processed in full compliance with EU data protection laws, ensuring the protection of fundamental rights and freedoms, particularly regarding personal data privacy.
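Completeness and error-freedom are the most mechanisable of these qualities, so teams typically gate training on automated data checks. The sketch below is a minimal illustration; the 1% missing-rate threshold is our own assumption, not a value the Act specifies, and representativeness would need separate statistical analysis.

```python
def check_dataset(rows, required_fields, max_missing_rate=0.01):
    """Return a list of data-quality issues: rows with missing required
    fields beyond a tolerated rate, and exact duplicate records."""
    issues = []
    missing = sum(
        1 for row in rows if any(row.get(f) is None for f in required_fields)
    )
    if rows and missing / len(rows) > max_missing_rate:
        issues.append(f"{missing}/{len(rows)} rows have missing required fields")
    seen = set()
    dupes = 0
    for row in rows:
        key = tuple(sorted(row.items()))  # hashable fingerprint of the record
        if key in seen:
            dupes += 1
        seen.add(key)
    if dupes:
        issues.append(f"{dupes} duplicate rows")
    return issues
```

Running such checks before every training run, and archiving the results, also feeds the documentation and logging obligations described above.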
  • Accuracy, Robustness, and Cybersecurity (Article 15)

    • Article 15 requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently throughout their lifecycle. This includes implementing strong cybersecurity measures to protect AI systems from unauthorized access and attacks: maintaining data integrity, securing communication channels, and safeguarding against breaches that could compromise the system or its outputs.
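One concrete integrity control is verifying a model artifact's checksum before loading it, so that tampering with the file is detected rather than silently deployed. This is a minimal sketch of that single control, not a cybersecurity programme; the paths and hashes are illustrative.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the model file's checksum does not match the
    value recorded at release time; a mismatch indicates tampering or
    corruption and the artifact must not be loaded."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}")
```

The expected hash would be recorded in the release's technical documentation, tying this control back to the Article 11 record.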

Additional References

  • Transparency and Informing Users (Article 13): This article mandates that high-risk AI systems be designed so that their output can be interpreted by those who deploy them, and that they be accompanied by instructions for use describing the system's capabilities and limitations, ensuring that users understand how decisions are made by the AI.

  • Human Oversight (Article 14): Article 14 requires that high-risk AI systems be designed to allow effective human oversight, ensuring that they do not operate autonomously without the possibility of human intervention, particularly in critical decision-making processes.
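A common engineering pattern for human oversight is a review gate: only high-confidence outputs are applied automatically, and the rest are queued for a person. The sketch below assumes a confidence-threshold design, which is one possible oversight mechanism among several; the threshold value is hypothetical.

```python
# Hypothetical confidence threshold below which a human must review.
REVIEW_THRESHOLD = 0.9

def route_decision(prediction: str, confidence: float) -> tuple:
    """Route a model output: auto-apply only high-confidence predictions,
    queue everything else for manual review by a human operator."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)
```

Even auto-applied decisions would still be logged (per the record-keeping obligations above) so a human can review and reverse them after the fact.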

These references highlight the EU AI Act’s comprehensive approach to regulating AI systems, ensuring they operate safely, ethically, and transparently within the EU. Each article plays a critical role in establishing a framework that balances innovation with fundamental rights and safety.