The Council of Europe has adopted the first-ever international legally binding treaty aimed at ensuring that the use of artificial intelligence (AI) systems respects human rights, democracy and the rule of law.
The treaty, which is also open to non-European countries, sets out a legal framework that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation.
The convention adopts a risk-based approach to the design, development, use, and decommissioning of AI systems, requiring careful consideration of any potential negative consequences of their use.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted in Strasbourg during the annual ministerial meeting of the Council of Europe's Committee of Ministers, which brings together the Ministers for Foreign Affairs of the 46 Council of Europe member states.
Council of Europe Secretary General Marija Pejčinović Burić said: “The Framework Convention on Artificial Intelligence is a first-of-its-kind, global treaty that will ensure that Artificial Intelligence upholds people’s rights. It is a response to the need for an international legal standard supported by states in different continents which share the same values to harness the benefits of Artificial Intelligence, while mitigating the risks. With this new treaty, we aim to ensure a responsible use of AI that respects human rights, the rule of law and democracy.”
The convention is the outcome of two years' work by an intergovernmental body, the Committee on Artificial Intelligence (CAI), which brought together the 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay) to draft the treaty, as well as representatives of the private sector, civil society and academia, who participated as observers.
The treaty covers the use of AI systems in the public sector – including companies acting on its behalf – and in the private sector. The convention offers parties two ways of complying with its principles and obligations when regulating the private sector: parties may opt to be directly obliged by the relevant convention provisions or, as an alternative, take other measures to comply with the treaty's provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law. This approach is necessary because of the differences in legal systems around the world.
The convention establishes transparency and oversight requirements tailored to specific contexts and risks, including identifying content generated by AI systems. Parties will have to adopt measures to identify, assess, prevent, and mitigate possible risks and assess the need for a moratorium, a ban or other appropriate measures concerning uses of AI systems where their risks may be incompatible with human rights standards.
They will also have to ensure accountability and responsibility for adverse impacts and that AI systems respect equality, including gender equality, the prohibition of discrimination, and privacy rights.
Moreover, parties to the treaty will have to ensure that legal remedies are available to victims of human rights violations related to the use of AI systems, along with procedural safeguards, including notifying any persons interacting with AI systems that they are interacting with such systems.
As regards the risks for democracy, the treaty requires parties to adopt measures to ensure that AI systems are not used to undermine democratic institutions and processes, including the principle of separation of powers, respect for judicial independence and access to justice.
Parties to the convention will not be required to apply the treaty's provisions to activities related to the protection of national security interests but will be obliged to ensure that these activities respect international law and democratic institutions and processes. The convention will not apply to national defence matters nor to research and development activities, except when the testing of AI systems may have the potential to interfere with human rights, democracy or the rule of law.
In order to ensure its effective implementation, the convention establishes a follow-up mechanism in the form of a Conference of the Parties.
Finally, the convention requires each party to establish an independent oversight mechanism to oversee compliance with the convention, raise awareness, stimulate an informed public debate, and carry out multi-stakeholder consultations on how AI technology should be used.
The framework convention will be opened for signature in Vilnius (Lithuania) on 5 September 2024, on the occasion of a conference of Ministers of Justice.