
The United Nations Wants to Treat AI With the Same Urgency as Climate Change

A United Nations report released today proposes that the international body oversee the first truly global effort to monitor and govern artificial intelligence.

The report, produced by the UN secretary-general’s High-Level Advisory Body on AI, recommends the creation of a body similar to the Intergovernmental Panel on Climate Change to gather up-to-date information on AI and its risks.

The report calls for a new policy dialogue on AI so that the UN’s 193 member states can discuss risks and agree on actions. It further recommends that the UN take steps to empower poorer nations, especially those in the global south, to benefit from AI and contribute to its governance. These steps, it says, should include creating an AI fund to back projects in such nations, establishing AI standards and data-sharing systems, and providing resources such as training to help countries with AI governance. Some of the report’s recommendations could be facilitated by the Global Digital Compact, an existing plan to address digital and data divides between nations. Finally, the report suggests creating a dedicated AI office to coordinate existing efforts across the UN toward these goals.

“You’ve got an international community that agrees there are both harms and risks as well as opportunities presented by AI,” says Alondra Nelson, a professor at the Institute for Advanced Study who served on the UN advisory body at the recommendation of the White House and State Department.

The remarkable abilities demonstrated by large language models and chatbots in recent years have sparked hopes of a revolution in economic productivity but have also prompted some experts to warn that AI may be developing too rapidly and could soon become difficult to control. Not long after ChatGPT appeared, many scientists and entrepreneurs signed a letter calling for a six-month pause on the technology’s development so that the risks could be assessed.

More immediate concerns include the potential for AI to automate disinformation, generate deepfake video and audio, replace workers en masse, and exacerbate societal bias on an industrial scale. “There is a sense of urgency, and people feel we need to work together,” Nelson says.

The UN proposals reflect high interest among policymakers worldwide in regulating AI to mitigate these risks. But they also come as major powers, especially the United States and China, jostle to lead in a technology that promises huge economic, scientific, and military benefits, and as these nations stake out their own visions for how it should be used and controlled.

In March, the United States introduced a resolution to the UN calling on member states to embrace the development of “safe, secure, and trustworthy AI.” In July, China introduced a resolution of its own that emphasized cooperation in the development of AI and making the technology widely available. All UN member states backed both resolutions.

“AI is part of US-China competition, so there is only so much that they are going to agree on,” says Joshua Meltzer, an expert at the Brookings Institution, a Washington, DC, think tank. Key differences, he says, include what norms and values AI should embody and how privacy and personal data should be protected.

Differences among wealthy nations’ approaches to AI are already causing market fissures. The EU has introduced sweeping AI regulations, with controls on data usage that have prompted some US companies to limit the availability of their products there.

The hands-off approach adopted by the US federal government has led California to propose its own AI rules. AI companies based in the state criticized earlier versions of those rules as too onerous, for example because they would have required firms to report their activities to the government, and the regulations were subsequently watered down.

Meltzer adds that AI is evolving at such a rapid pace that the UN will not be able to manage global cooperation alone. “There is clearly an important role for the UN when it comes to AI governance, but it needs to be part of a distributed kind of architecture,” with individual nations also working on it directly, he says. “You’ve got a fast-evolving technology, and the UN is clearly not set up to handle that.”

The UN report seeks to establish common ground between member states by emphasizing the importance of human rights. “Anchoring the analysis in terms of human rights is very compelling,” says Chris Russell, a professor at Oxford University in the UK who studies international AI governance. “It gives the work a strong basis in international law, a very broad remit, and a focus on concrete harms as they occur to people.”

Russell adds that there is a great deal of duplication in the work governments are doing to evaluate AI with an eye toward regulation. The US and UK governments, for example, have separate bodies probing AI models for misbehavior. The UN’s effort could help prevent further redundancy. “Working internationally and pooling our efforts makes a lot of sense,” he says.

Although governments may see AI as a way to gain a strategic edge, many scientists are aligned in their concerns about AI. Earlier this week, a group of prominent academics from the West and China issued a joint call for more collaboration on AI safety following a conference on the subject held in Vienna, Austria.

Nelson, the advisory body member, says she believes government leaders can work together on important issues, too. But she says much will depend on how the UN and its member states choose to follow through on the blueprint for cooperation. “The devil will be in the details of implementation,” she says.
