European Parliament Adopts AI Act

The European Parliament has approved the Artificial Intelligence Act, a law intended to ensure safety and compliance with fundamental rights while boosting innovation. The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against, and 49 abstentions.

The AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while boosting innovation and establishing Europe as a leader in the field. The regulation imposes obligations on AI systems according to their potential risks and level of impact.

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI may be deployed only if strict safeguards are met; permitted uses include, for example, a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact is considered a high-risk use case, requiring judicial authorization linked to a criminal offense.

Clear obligations are also foreseen for other high-risk AI systems, owing to their significant potential to harm health, safety, fundamental rights, the environment, democracy, and the rule of law. Citizens will have the right to submit complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Additionally, artificial or manipulated images, audio, or video content (“deepfakes”) need to be clearly labeled as such.