Parliament Adopts Sweeping Artificial Intelligence Act to Safeguard Rights and Foster Innovation

Parliament today passed groundbreaking legislation, the Artificial Intelligence Act, ushering in a new era of technological regulation. With an overwhelming majority of 523 votes in favor, 46 against, and 49 abstentions, the Act aims to safeguard fundamental rights, democracy, the rule of law, and the environment while fostering innovation and positioning Europe as a global leader in the field of AI.

The Act introduces a comprehensive set of obligations for AI systems based on their potential risks and impact levels. By outlining prohibitions on certain applications that infringe on citizens’ rights, the legislation takes a proactive stance toward protecting individual freedoms. For example, the Act outlaws the use of biometric categorization systems relying on sensitive characteristics and the indiscriminate collection of facial images from the internet or surveillance footage for the creation of facial recognition databases.

Likewise, the Act prohibits the use of AI for emotion recognition in workplaces and schools, social scoring systems, predictive policing based solely on profiling, and AI that manipulates human behavior or exploits people's vulnerabilities.

While maintaining a commitment to protecting citizens’ rights, the legislation allows narrow exceptions for law enforcement authorities. Biometric identification systems may be used under strict safeguards in specific, well-defined situations, such as the targeted search for a missing person or the prevention of a terrorist attack. Their real-time use is limited in time and geographic scope and requires prior judicial or administrative authorization. Retrospective use of such systems is considered high risk and requires judicial authorization linked to a criminal offense.

The Act also imposes clear obligations on high-risk AI systems, which have the potential to significantly harm health, safety, fundamental rights, the environment, democracy, and the rule of law. From critical infrastructure to education, vocational training, and essential public and private services such as healthcare and banking, these systems must assess and reduce risks, maintain usage records, ensure transparency and accuracy, and incorporate human oversight. Individuals affected by such systems also have the right to lodge complaints and receive explanations of decisions based on them.

To ensure accountability and transparency, the Act establishes requirements for general-purpose AI systems and the models on which they are based. It mandates compliance with EU copyright law, the publication of detailed summaries of the content used for training, and clear labeling of artificial or manipulated images, audio, and video content.

In a move to support innovation and the growth of small and medium-sized enterprises (SMEs), the Act calls for controlled testing environments, or regulatory sandboxes, to be established at the national level. These enable SMEs and startups to develop and train their AI solutions under real-world conditions before placing them on the market.

The adoption of the Artificial Intelligence Act represents a pivotal moment in the regulation of AI technology. By striking a delicate balance between safeguarding fundamental rights and boosting innovation, Europe paves the way for ethical and responsible AI development. The Act sets a global precedent, ensuring that AI remains a tool for progress while safeguarding the values that underpin our societies.

Frequently Asked Questions

What is the purpose of the Artificial Intelligence Act?

The Artificial Intelligence Act aims to protect fundamental rights, democracy, the rule of law, and the environment while fostering innovation and positioning Europe as a global leader in AI technology.

What applications are prohibited under the Act?

The Act prohibits AI applications that infringe on citizens’ rights, including biometric categorization systems based on sensitive characteristics, indiscriminate collection of facial images for facial recognition databases, emotion recognition in workplaces and schools, social scoring systems, predictive policing based solely on profiling, and AI that manipulates human behavior or exploits vulnerabilities.

Are there any exceptions for law enforcement authorities?

Yes, law enforcement authorities are granted exceptions under strict safeguards. For example, the targeted search for missing persons or the prevention of terrorist attacks may involve the use of real-time biometric identification systems. However, prior judicial or administrative authorization is required, and retrospective use of such systems is subject to judicial approval.

What obligations apply to high-risk AI systems?

High-risk AI systems, which can significantly impact various aspects of society, must assess and mitigate risks, maintain usage records, ensure transparency and accuracy, incorporate human oversight, and allow individuals to lodge complaints and receive explanations regarding decisions made by these systems.

What requirements exist for transparency in AI systems?

General-purpose AI systems and their underlying models must meet transparency requirements, including compliance with EU copyright law, disclosure of training data summaries, and clear labeling of artificial or manipulated images, audio, or video content.

What measures support innovation and SMEs?

The Act provides for regulatory sandboxes, controlled testing environments established at the national level, allowing SMEs and startups to develop and train innovative AI solutions under real-world conditions before commercialization.

For more information, visit the official website of the European Union:
europa.eu
