PARIS, France – The 2024 OECD Ministerial Council Meeting (MCM) has adopted revisions to the landmark OECD Principles on Artificial Intelligence (AI). In response to recent developments in AI technologies, notably the emergence of general-purpose and generative AI, the updated Principles more directly address AI-associated challenges involving privacy, intellectual property rights, safety, and information integrity.
With 47 adherents, now including the EU, and a general scope that ensures applicability to AI developments around the world, the OECD AI Principles provide a blueprint for policymakers on how to address AI risks and shape AI policies. As the first intergovernmental standard on AI, they advocate for AI that is innovative and trustworthy, and that upholds human rights and democratic values.
Tracking developments since the Principles were first adopted in 2019, the OECD AI Policy Observatory shows that venture capital investments in generative AI startups have increased nine-fold, demand for AI skills has soared by 130 percent, and the share of large firms using AI across the OECD has, on average, nearly doubled and now stands at more than four times that of their smaller counterparts. These developments coincide with significant policy attention and action, evidenced by more than 1,000 AI initiatives in over 70 countries and jurisdictions.
The imperative is growing to develop and deploy AI systems to boost productivity, accelerate scientific research, promote environmental sustainability, and improve healthcare and education while upholding human rights and democratic values. But risks to privacy, security, fairness and well-being are developing at an unprecedented speed and scale, turning into real-world harms such as the perpetuation of bias and discrimination, the creation and dissemination of mis- and disinformation, and the distortion of public discourse and markets.
Key elements of the OECD revisions, which ensure that the Principles remain relevant, robust and fit-for-purpose, include:
- Addressing safety concerns, so that if AI systems risk causing undue harm or exhibit undesired behaviour, robust mechanisms and safeguards exist to override, repair, and/or decommission them safely;
- Reflecting the growing importance of addressing mis- and disinformation, and safeguarding information integrity in the context of generative AI;
- Emphasising responsible business conduct throughout the AI system lifecycle, involving co-operation with suppliers of AI knowledge and AI resources, AI system users, and other stakeholders;
- Clarifying the information regarding AI systems that constitutes transparency and responsible disclosure;
- Explicitly referencing environmental sustainability, a concern that has grown considerably in importance over the past five years;
- Underscoring the need for jurisdictions to work together to promote interoperable governance and policy environments for AI, as the number of AI policy initiatives worldwide surges.
“The OECD has helped shape digital policy agendas for decades, through evidence-based recommendations and extensive multilateral and multi-stakeholder co-operation,” OECD Secretary-General Mathias Cormann said. “The OECD AI Principles are a global reference point for AI policymaking, facilitating global policy interoperability and promoting innovation with humans at the centre. The revised OECD AI Principles will provide a blueprint for global interoperability on AI policy and for policymakers to keep pace with technology, by addressing general-purpose and generative AI and their effects on our economies and societies.”
The Recommendation of the OECD Council on Artificial Intelligence, which includes the OECD AI Principles, contains definitions that underpin and encourage international interoperability; the Recommendation’s definitions of an AI system and its lifecycle are used around the world, including in the European Union, Japan and the United States. The definitions also inform the work of the United Nations and the EU-US Trade and Technology Council.