WASHINGTON, USA – As the AI Seoul Summit opened on 21 May, US Secretary of Commerce Gina Raimondo released a strategic vision for the US Artificial Intelligence Safety Institute (AISI), describing the Department’s approach to AI safety under President Biden’s leadership.
At President Biden’s direction, the National Institute of Standards and Technology (NIST) within the Department of Commerce launched the AISI, building on NIST’s long-standing work on AI. In addition to releasing a strategic vision, Raimondo also shared the Department’s plans to work with a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices, and to convene the institutes later this year in the San Francisco area, where the AISI recently established a presence.
“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly. That is the focus of our work every single day at the US AI Safety Institute, where our scientists are fully engaged with civil society, academia, industry, and the public sector so we can understand and reduce the risks of AI, with the fundamental goal of harnessing the benefits,” said US Secretary of Commerce Gina Raimondo.
“The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety. Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”
Commerce Department AI Safety Institute Strategic Vision
The strategic vision released today outlines the steps that the AISI plans to take to advance the science of AI safety and facilitate safe and responsible AI innovation. At the direction of President Biden, NIST established the AISI and has since built an executive leadership team that brings together some of the brightest minds in academia, industry, and government.
The strategic vision describes the AISI’s philosophy, mission, and strategic goals. It is rooted in two core principles: first, that beneficial AI depends on AI safety; and second, that AI safety depends on science. Guided by these principles, the AISI aims to address key challenges, including a lack of standardized metrics for frontier AI, underdeveloped testing and validation methods, and limited national and global coordination on AI safety issues, among other challenges.
The AISI will focus on three key goals:
1. Advance the science of AI safety;
2. Articulate, demonstrate, and disseminate the practices of AI safety; and
3. Support institutions, communities, and coordination around AI safety.
To achieve these goals, the AISI plans to, among other activities, conduct testing of advanced models and systems to assess potential and emerging risks; develop guidelines on evaluations and risk mitigations, among other topics; and perform and coordinate technical research. The US AI Safety Institute will work closely with a diverse set of partners across the AI industry, civil society, and the international community to achieve these objectives.
Launch of International Network of AI Safety Institutes
Concurrently, Secretary Raimondo announced today that the Department and the AISI will help launch a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices that are focused on AI safety and committed to international cooperation.
Building on the foundational understanding achieved by the Republic of Korea and our other partners at the AI Seoul Summit through the Seoul Statement of Intent toward International Cooperation on AI Safety Science, this network will strengthen and expand on the AISI’s previously announced collaborations with the AI Safety Institutes of the UK, Japan, Canada, and Singapore, as well as the European AI Office and its scientific components and affiliates. It will catalyze a new phase of international coordination on AI safety science and governance.
This network will promote safe, secure, and trustworthy artificial intelligence systems for people around the world by enabling closer collaboration on strategic research and public deliverables.
To further collaboration within this network, the AISI intends to convene international AI Safety Institutes and other stakeholders later this year in the San Francisco area. The AISI recently established a Bay Area presence and will leverage the location to recruit additional talent.