European Parliament Approves New Regulations for Artificial Intelligence

The European Parliament has approved the Artificial Intelligence Act, introducing a set of rules to govern the use of AI. The regulations aim to protect citizens’ rights while fostering innovation and ensuring compliance with fundamental principles.

The act, agreed upon in negotiations with member states in December 2023, received overwhelming support from MEPs, with 523 votes in favor, 46 against, and 49 abstentions. Its measures include safeguards on general-purpose AI, limits on the use of biometric identification systems by law enforcement, and bans on AI used to manipulate or exploit user vulnerabilities.

One of the key provisions of the new regulations relates to the ban on certain AI applications that pose a threat to citizens’ rights. This includes the unauthorized collection of facial images from the internet or CCTV footage to create facial recognition databases. Such practices, which can compromise individual privacy, will be strictly prohibited under the act.

According to Deirdre Clune, an MEP from Ireland South, the act defines specific high-risk categories where additional requirements must be met to ensure compliance. These high-risk areas encompass various domains such as education, training, employment, and healthcare. For instance, the use of AI in healthcare, where it might assist in making treatment decisions, falls under the high-risk category due to potential risks to patient safety.

Clune emphasized that companies using AI systems in high-risk areas will be required to share the data upon which their algorithms are based. Additionally, they will have to engage with regulators and adhere to a code of conduct established by the regulatory authorities. The act also establishes a central AI office that will collaborate with member states, providing consultation, assistance, and support to developers and deployers.

The ban extends to several practices that have raised concerns regarding the ethical use of AI. Facial image scraping from the internet, social scoring, and the exploitation of individuals through subliminal techniques will all be explicitly prohibited under the new law. However, certain exceptions are made for law enforcement, which can utilize biometric identification in specific cases such as child abduction or terrorism, subject to judicial approval and time limitations.

It is important to note that the regulation is still subject to a final lawyer-linguist check and needs formal endorsement from the European Council. Once these steps are completed, the act will come into force 20 days after its publication and be fully applicable within 24 months from that date.

The approval of these regulations marks a significant milestone in the governance of AI in Europe. By addressing the potential risks and ensuring the responsible use of AI technologies, the European Parliament aims to foster innovation while upholding the principles of safety, privacy, and fundamental rights for its citizens.

FAQ

1. What is the purpose of the Artificial Intelligence Act?

The purpose of the Artificial Intelligence Act is to establish regulations that govern the use of AI, ensuring the protection of citizens’ rights, promoting innovation, and enforcing compliance with fundamental principles.

2. What are some of the key measures introduced by the act?

The act includes safeguards on general-purpose AI, restrictions on biometric identification systems, and bans on AI applications that could manipulate or exploit user vulnerabilities. It also prohibits certain practices such as facial image scraping, social scoring, and the use of subliminal techniques for exploitation.

3. What qualifies as a high-risk category under the regulations?

High-risk categories include areas like education, training, employment, and healthcare. For example, the use of AI in healthcare, where it influences treatment decisions, falls under the high-risk category due to potential risks to patient safety.

4. What obligations do companies using AI systems in high-risk areas have?

Companies operating in high-risk areas must comply with additional requirements, including sharing the data on which their AI systems are based, engaging with regulators, and adhering to a code of conduct established by the regulatory authorities.

5. Who will provide support and assistance to developers and deployers?

A central AI office will work in collaboration with member states to provide consultation, support, and assistance to developers and deployers of AI systems.

Sources:

– [European Parliament](https://www.europarl.europa.eu/news/en/press-room/20220106IPR24413/artificial-intelligence-act-meps-back-new-rules-governing-the-use-of-ai)


Source: scimag.news
