The United Nations Security Council held its first session on Tuesday on the threat posed by artificial intelligence to international peace and stability, and Secretary-General António Guterres called for a global watchdog to oversee a new technology that has raised at least as much fear as hope.
Mr Guterres warned that AI could pave the way for criminals, terrorists and other actors intent on “causing death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale”.
Last year’s launch of ChatGPT – which can create texts from prompts, imitate voice and generate photos, diagrams and videos – raised concerns about disinformation and manipulation.
On Tuesday, diplomats and leading experts in the field of AI outlined to the Security Council the risks and threats, as well as the scientific and social benefits, of the emerging technology. Much is still unknown about it even as its development speeds ahead, they said.
“It’s like we’re building engines without understanding the science of combustion,” said Jack Clark, co-founder of the AI safety research company Anthropic. Private companies, he said, should not be the sole creators and controllers of AI.
Mr Guterres said a UN watchdog should act as a governing body to regulate, monitor and enforce AI rules, much as other agencies oversee aviation, climate and nuclear energy.
The proposed agency would consist of experts in the field who would share their expertise with governments and administrative agencies that may lack the technical know-how to address AI threats.
But the prospect of a legally binding resolution governing AI remains remote. Most diplomats, however, endorsed the concept of a global governance mechanism and a set of international rules.
“No country will be untouched by AI, so we must involve and engage the broadest coalition of international actors from all sectors,” said Britain’s foreign secretary, James Cleverly, who led the meeting because Britain holds the Council’s rotating presidency this month.
Russia, departing from the majority view of the Council, expressed doubts that enough was known about the risks of AI to treat it as a threat to global stability. And China’s ambassador to the United Nations, Zhang Jun, pushed back against creating a set of global laws, saying international regulatory bodies must be flexible enough to allow countries to develop their own rules.
China’s ambassador, however, said his country was opposed to the use of AI as “a means to create military hegemony or undermine a country’s sovereignty.”
Military use of autonomous weapons, whether on the battlefield or in another country for assassinations, was also raised, such as the satellite-controlled, AI-assisted robotic weapon Israel reportedly used in Iran to kill a top nuclear scientist, Mohsen Fakhrizadeh.
Mr Guterres said the United Nations must come up with a legally binding agreement by 2026 banning the use of AI in automated warfare.
Professor Rebecca Willett, director of AI at the Data Science Institute at the University of Chicago, said in an interview that it was important, while controlling the technology, not to lose sight of the people behind it.
The systems are not fully autonomous, and the people who design them must be held accountable, she said.
“This is one of the reasons why the UN is looking at this,” said Professor Willett. “There really needs to be international consequences so that a company based in one country cannot destroy another country without violating international agreements. Real, enforceable regulation can make things better and safer.”