
Inside the AI Act: What Companies Need to Know

On December 8, 2023, the Council of the European Union and the European Parliament reached a provisional agreement on the Artificial Intelligence Act, commonly referred to as the ‘AI Act’. This proposal is poised to set a global standard for AI regulation, akin to the General Data Protection Regulation’s (GDPR) influence on data privacy. In this article, we’ll delve into the critical aspects of the proposed act and examine its implications for affected businesses.

Understanding the AI Act’s Core Principles  

The AI Act is the EU’s proposed regulation governing AI systems. Its objective is to ensure the safety of AI systems used in the EU and their adherence to fundamental rights and EU values. The core principle is to regulate AI based on its potential to cause harm to society, employing a ‘risk-based’ approach where higher risks warrant more stringent rules.

Key Features of the Provisional Agreement

  • Risk-Based Classification: AI systems are classified into high-risk and low-risk categories. High-risk systems, which could significantly impact safety or fundamental rights, face strict regulations. For these systems, specific obligations have been established, including a mandatory assessment of their impact on fundamental rights, obligations to address potential bias (ensuring that systems are non-discriminatory and respect fundamental rights), and thorough documentation demonstrating compliance with the act, among other requirements.
  • Scope and Definitions: The agreement aligns the definition of AI systems with the OECD’s approach, ensuring clarity in distinguishing AI from simpler software systems. It also specifies that the regulation does not apply to military or defence purposes and exempts AI systems used solely for research, innovation, or non-professional reasons. The act also introduces provisions to make requirements for high-risk AI systems technically feasible and to lighten the compliance burden, especially for small and medium-sized enterprises (SMEs).
  • Prohibited AI Practices: Certain AI applications considered excessively dangerous are banned outright. Noteworthy bans include cognitive behavioural manipulation, untargeted scraping of facial images for facial recognition, emotion recognition in workplaces and educational institutions, social scoring based on social behaviour or personal characteristics, biometric categorisation systems using sensitive data (e.g. political, religious or philosophical beliefs, sexual orientation, race), systems that exploit people’s vulnerabilities (e.g. due to age, disability, or social or economic situation), specific predictive policing applications, and law enforcement use of real-time biometric identification in public (apart from limited, pre-defined and authorised situations).
  • General Purpose AI Systems: General purpose AI systems (GPAIs) are AI systems that can be used for many different purposes. To accommodate the wide range of tasks these systems can accomplish, it was agreed that GPAIs, along with the foundation models they are built on, must adhere to specific transparency requirements. These include drawing up technical documentation, complying with EU copyright law, and publishing detailed summaries of the content used for training.

Penalties

Fines are set as either a percentage of global annual turnover or a fixed amount, with lower caps for SMEs and start-ups. Depending on the nature of the violation, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. The act also allows any individual to lodge a complaint about non-compliance.

Next Steps

While a provisional agreement has been reached, the final text has not yet been settled, and technical work to finalise the details is expected over the coming weeks. The Presidency will then present the consensus text to the Member States’ representatives for endorsement, and upon approval, the agreed text will be formally adopted by the Parliament and the Council. The AI Act is expected to become applicable two years after its entry into force, with exceptions for specific provisions. Given recent developments, we anticipate it entering into force in the coming months and becoming applicable by the first half of 2026.

We encourage all businesses that may be covered by the law to start preparing. If your company uses or plans to deploy AI systems in the EU, now is the time to take the first steps towards compliance and seek legal advice to avoid surprises. We’re ready to assist with any questions regarding the legal considerations and practical implications of existing and upcoming regulatory frameworks for AI technologies. Get in touch with us for a free consultation.

 

Image by vectorjuice on Freepik

Isadora Werneck

Partner

isadora.werneck@loganpartners.com

Anna Levitina

Senior Associate

anna.levitina@loganpartners.com

