Artificial Intelligence Systems and Key Requirements of the European Artificial Intelligence Act

The European Artificial Intelligence Act (AI Act) comes into force on 1 August 2024. Its requirements will be phased in over the following years, with most provisions applying from August 2026. The AI Act regulates the development, deployment, and use of artificial intelligence systems (AI systems) placed on the market, put into service, or used in the European Union. It sets out multiple requirements for providers and their authorised representatives, importers, distributors, and deployers of such AI systems.

In this article, we focus on the scope of the AI Act and the requirements for high-risk AI systems. Future articles will cover other aspects of the Act.

Scope of AI Systems Concerned

  • Prohibited AI systems. The AI Act bans AI practices deemed to pose unacceptable risk: systems that manipulate human behaviour to the detriment of users, exploit vulnerabilities of individuals, perform social scoring, use real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), or carry out indiscriminate surveillance and certain forms of predictive policing.
  • High-risk AI systems. High-risk AI systems include those used in biometric identification and categorisation, management and operation of critical infrastructure, education and vocational training, employment, worker management, access to essential private and public services, law enforcement, migration, asylum and border control management, and administration of justice and democratic processes. These systems face stringent requirements for risk management, data governance, transparency, human oversight, and robustness.
  • Limited-risk AI systems. Limited-risk AI systems are not subject to the stringent requirements applied to high-risk systems, but they carry transparency obligations: users must be made aware that they are interacting with an AI system (a chatbot, for example) or that content has been generated by AI.
  • General-purpose AI models and systems. General-purpose AI models are designed to perform a wide variety of tasks across different domains; large language models are the most prominent example, as they can be adapted and integrated into many applications beyond their original purpose. Providers of such models face specific obligations under the AI Act, including transparency and technical documentation requirements, with additional duties for models that pose systemic risk.

Requirements for High-Risk AI Systems

  • Risk management. Providers must establish a continuous risk management process covering the AI system’s entire lifecycle, in which risks are identified, analysed, evaluated, and mitigated.
  • Data governance. High-quality datasets are essential to minimise bias. Providers must ensure that data used for training, validation, and testing is relevant, representative and, to the best extent possible, complete and free of errors (a minimal data-quality check is sketched after this list).
  • Technical documentation. Providers need to prepare detailed technical documentation that includes information on the system’s design, development, and operational processes.
  • Information to users. Providers must ensure that AI systems are transparent. Users should be informed that they are interacting with an AI system. Additionally, providers must furnish clear and comprehensible instructions for use.
  • Human oversight. AI systems must be designed so that humans can step in and intervene when necessary. This means setting up clear protocols and tools that allow human operators to monitor the AI system’s decisions and actions in real time. If something goes wrong or the AI behaves unexpectedly, humans must be able to override or shut down the system (see the oversight-gate sketch after this list).
  • Accuracy, robustness, and cybersecurity. AI systems must be designed to achieve accuracy, reliability, and security throughout their lifecycle. This includes protection against attacks and ensuring consistent performance.
  • Post-market monitoring. Providers must establish a post-market monitoring system to track the AI system’s performance and compliance once it is in use (see the monitoring sketch after this list). Any serious incidents or malfunctions must be reported to the relevant authorities.
  • Compliance and conformity assessment. High-risk AI systems must undergo a conformity assessment before they are placed on the market or put into service. Providers may need to obtain CE marking and must maintain compliance documentation for inspection by authorities.
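
In practice, the data governance obligation is often supported by automated checks on training data. The snippet below is a minimal sketch of such a check in Python; the pandas-based approach, column names, and thresholds are illustrative assumptions, not methods prescribed by the Act.

```python
# Illustrative sketch only: a minimal pre-training data-quality check.
# Column names and thresholds are hypothetical assumptions.
import pandas as pd

def check_dataset(df: pd.DataFrame, group_col: str, min_share: float = 0.05) -> list[str]:
    """Return data-quality findings for human review before training."""
    findings = []

    # Completeness: flag columns with missing values.
    missing = df.isna().mean()
    for col, share in missing[missing > 0].items():
        findings.append(f"column '{col}': {share:.1%} missing values")

    # Representativeness: flag groups barely present in the data.
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares[shares < min_share].items():
        findings.append(f"group '{group}': only {share:.1%} of records")

    # Duplicates can bias both training and evaluation.
    dup_share = df.duplicated().mean()
    if dup_share > 0:
        findings.append(f"{dup_share:.1%} duplicate rows")

    return findings
```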
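
Human oversight is commonly realised as a gate between the model and the final decision: low-confidence cases are deferred to a human reviewer, and an operator can halt the system entirely. The following is a hedged sketch of one possible design; the model interface, confidence threshold, and review queue are assumptions for illustration.

```python
# Illustrative sketch only: a human-in-the-loop gate around a model.
# The model interface, threshold, and queue are hypothetical assumptions.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class OversightGate:
    # The wrapped model returns a (decision, confidence) pair.
    model: Callable[[Any], tuple[Any, float]]
    threshold: float = 0.90            # below this, a human decides
    halted: bool = False               # operator-controlled stop switch
    review_queue: list = field(default_factory=list)

    def decide(self, case: Any) -> Any:
        if self.halted:
            raise RuntimeError("system halted by human operator")
        decision, confidence = self.model(case)
        if confidence < self.threshold:
            # Defer low-confidence cases to a human reviewer.
            self.review_queue.append((case, decision, confidence))
            return None                # pending human review
        return decision

    def halt(self) -> None:
        """Emergency stop: no further automated decisions are issued."""
        self.halted = True
```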
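
Post-market monitoring typically combines structured logging of automated decisions with alerting when performance degrades. The sketch below assumes a JSON log format and an error-rate threshold; both are illustrative choices, and serious incidents would additionally trigger the Act’s formal reporting duties.

```python
# Illustrative sketch only: minimal post-market logging and alerting.
# The log schema and error-rate threshold are hypothetical assumptions.
import json
import logging
import time

logger = logging.getLogger("ai_system.monitoring")

def log_decision(case_id: str, decision: str, confidence: float) -> None:
    """Record each automated decision for later audit and analysis."""
    logger.info(json.dumps({
        "ts": time.time(),
        "case_id": case_id,
        "decision": decision,
        "confidence": confidence,
    }))

def check_error_rate(errors: int, total: int, threshold: float = 0.02) -> None:
    """Escalate when the observed error rate exceeds an agreed threshold."""
    if total and errors / total > threshold:
        # In practice this would open an incident; serious incidents must
        # also be reported to the relevant authorities.
        logger.warning("error rate %.2f%% exceeds threshold", 100 * errors / total)
```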

Correlation with the GDPR

The AI Act applies in parallel with the General Data Protection Regulation (GDPR), enhancing personal data protection in AI applications. Both regulations stress transparency, accountability, and individual rights. However, the AI Act specifically addresses the nuances of AI systems, such as algorithmic transparency and potential risks to individuals. This dual framework requires businesses to comply with the AI Act and GDPR when dealing with AI technologies and personal data. Companies must ensure their AI systems are transparent, fair, and protective of personal data, adhering to the standards set by both regulations.

Conclusion

The European AI Act introduces new requirements for developing and using AI systems. Similar to the GDPR, the AI Act impacts businesses outside Europe. Since many AI applications involve personal data, both the AI Act and GDPR will often apply. Staying informed is crucial as we approach the full implementation of the AI Act. If you have questions, we offer a complimentary 20-minute call. Book your session with our lawyers.


Anna Levitina

Partner

anna.levitina@loganpartners.com
