New International Treaty on AI Aims to Uphold Human Rights

A landmark international treaty on Artificial Intelligence, the first of its kind to be legally binding, has been approved by the Council of Europe. Its principal aim is to ensure that the use of AI systems respects human rights, upholds the rule of law, and preserves democratic standards.

The achievement comes on the heels of the European Union’s adoption of the AI Act, a groundbreaking piece of legislation governing the development, placement on the market, and use of AI systems.

Both the treaty and the AI Act take a risk-based approach to the potential negative impacts of AI deployment, with the shared goal of harnessing the technology’s potential while safeguarding human rights and democracy.

The treaty, endorsed during the annual meeting of the foreign ministers of the Council of Europe’s 46 member states in Strasbourg, is the culmination of two years of work by the Committee on Artificial Intelligence (CAI). The CAI is an intergovernmental body that brings together EU nations and 11 non-member countries, along with contributors from the private sector, academia, and civil society.

The Secretary-General of the Council of Europe described the treaty as a response to the global need for an international legal standard, grounded in values shared across continents, to ensure that AI is integrated in a way that respects human rights.

Nevertheless, the treaty’s adoption has sparked controversy. Criticisms, such as those from the United Nations High Commissioner for Human Rights, have centered on concerns over national security exemptions and the near-total exclusion of private-sector obligations from the treaty.

These objections did not result in textual amendments, and the convention maintains a disparity in how its rules apply: public entities are obligated to comply directly, while private entities may take alternative measures to adhere to the treaty. The Council of Europe justifies this asymmetry by pointing to the variety of legal systems worldwide.

Another contentious issue is the national security exemption, which allows states to bypass treaty obligations for activities protecting national security interests, provided that such activities conform to international law and democratic processes.

The treaty also establishes a monitoring mechanism through the Conference of the Parties, tasked with ensuring effective implementation.

The treaty is set to be open for signature in Vilnius, Lithuania, on September 5th, coinciding with a conference of ministers of justice.

Relevant facts not mentioned in the article:

The Universal Declaration of Human Rights established by the UN, though not legally binding, provides a foundational framework for many human rights treaties. It could influence the understanding and interpretation of human rights in AI.

Governance of AI is a multi-stakeholder challenge that includes tech companies, governments, and international bodies, and the private sector has shown some willingness to self-regulate through initiatives like the Partnership on AI.

Geopolitical tensions can affect international agreements. The relationship between different blocs (like the US and China) may influence the adoption and enforcement of such treaties outside the Council of Europe’s member states.

Data protection regulations, such as the General Data Protection Regulation (GDPR) in the EU, are already in place in various jurisdictions which impose specific obligations on AI relating to personal data.

Public understanding and trust in AI are crucial for its adoption, and international treaties may play a role in ensuring transparency and accountability.

Important questions and their answers:

What is the Council of Europe’s role in AI governance?
The Council of Europe promotes cooperation among European countries to protect human rights, democracy, and the rule of law, and it is now extending its reach to establish rules governing AI that are consistent with these values.

How does the new treaty on AI differ from the EU’s AI Act?
While both the treaty and the AI Act pursue similar goals, the treaty is an international instrument that is legally binding on its signatories, whereas the AI Act is EU legislation that applies within the bloc’s member states.

Will private companies be legally bound by the treaty?
The treaty maintains a disparity in the application of its rules: public entities are obligated to comply directly, whereas private entities are encouraged but not strictly bound to adhere, allowing flexibility given the diversity of legal systems worldwide.

Key challenges or controversies:

Enforcement: How to enforce the treaty, especially among private companies and in countries with different legal systems and values, remains a major challenge.

Technological pace: AI technology evolves rapidly, which means that treaty provisions must be adaptable to future developments without requiring frequent renegotiations.

National security exemptions: These may be seen as loopholes that allow states to undermine the rule of law or human rights under the guise of protecting security.

Advantages and disadvantages:

Advantages:
– Sets an international standard that could spearhead the development of similar laws globally.
– Strengthens protection for individual rights in the face of advancing technology.
– Provides a mechanism for monitoring compliance and fostering international cooperation.

Disadvantages:
– Exemptions and flexibility for private entities could lead to uneven application of the treaty’s principles.
– Possible conflicts with national laws and interests may hinder the effectiveness of the treaty.
– The rapid evolution of AI technology may outpace the treaty’s provisions.

Suggested related links:
Council of Europe
United Nations
European Union

The opening for signature in Vilnius on September 5th also carries symbolic weight, given Lithuania’s role in tech innovation and its stance on human rights within the EU and the Council of Europe.
