The United Nations should create a new international body to help regulate the use of artificial intelligence as the technology gradually reveals its potential risks and benefits, according to the Secretary-General of the UN, António Guterres.
The United Nations has an opportunity to set the rules of the road around the world for monitoring and controlling AI, Guterres said Tuesday at the first-ever meeting of the United Nations Security Council focused on AI governance.
Just as the UN convened similar bodies to manage the use of nuclear energy, boost aviation safety and address the challenges of climate change, Guterres said, the UN has a unique role in coordinating the international response to AI.
Already, the United Nations is using artificial intelligence in its own operations to monitor ceasefires and identify patterns of violence, he said, and hostile actors using AI for malicious purposes are also targeting United Nations peacekeeping and humanitarian operations, “causing great human suffering.”
“The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale,” Guterres warned. “Generative AI has enormous potential for both good and evil at scale. Its creators themselves have warned that far greater risks, potentially catastrophic and existential, lie ahead. Without action to address these risks, we are abandoning our responsibilities to current and future generations.”
By 2026, the UN should develop a legally binding agreement banning the use of AI in fully automated weapons of war, Guterres said. He also pledged to convene an advisory council that will develop proposals to regulate AI more broadly by the end of the year, and discussed an upcoming policy brief with recommendations for governments on how to approach the technology responsibly.
Heading Tuesday’s meeting was UK Foreign Secretary James Cleverly, who called for international AI governance to be linked to principles that stand for freedom and democracy; respect for human rights and the rule of law; security, including physical security as well as protection of property rights and privacy; and reliability.
“We’re here today because AI will impact the work of this council,” Cleverly said. “It could enhance or disrupt global strategic stability. It challenges our basic assumptions about protection and deterrence. It raises moral questions about accountability for lethal decisions on the battlefield…. AI could aid in the reckless search for weapons of mass destruction by both state and non-state actors. But it could help us stop the spread.”
The Chinese government, meanwhile, has argued that UN rules should reflect the views of developing countries as it seeks to prevent the technology from becoming a “running wild horse.”
The international laws and norms around AI should be flexible enough to give countries the freedom to establish their own national regulations, said Chinese Ambassador Zhang Jun, who also blasted unnamed “developed countries” for trying to dominate AI development.
“In order to seek technological superiority, some developed countries make efforts to build their exclusive little clubs and maliciously hinder the technological development of other countries and artificially create technological barriers,” said Zhang. “China firmly opposes these behaviors.”
Zhang’s comments come on the heels of reports that the US government may try to restrict the flow of powerful artificial intelligence chips to China.
An official representing the United States at the meeting did not directly address the Chinese government’s allegations but said that “no member state should use AI to censor, restrict, prevent or disable people” – an apparent reference to alleged Chinese use of the technology to surveil ethnic minorities.
The meeting also included some voices from the technology industry.
Speaking to the security council via teleconference, Jack Clark, co-founder of the AI company Anthropic, urged member states not to allow private companies to dominate the development of artificial intelligence.
“We cannot leave the development of artificial intelligence to private sector actors alone,” Clark said. “The governments of the world must come together to develop a safe resource and further develop powerful AI systems as a joint effort across all parts of society, rather than one dictated entirely by a small number of firms competing with each other in the market.”