Existential Risks and Global Governance Issues Around AI and Robotics
Author
Key Points
Continuous communication, coordination and cooperation—among countries, developers, professional bodies and other stakeholders—will be essential in heading off risks as they develop and in creating and reinforcing ethical norms.

Media Contact
For general and media inquiries and to book our experts, please contact: pr@rstreet.org
Executive Summary
There are growing concerns that lethal autonomous weapons systems, artificial general intelligence (or “superintelligence”) and so-called “killer robots” could give rise to new global existential risks. Continuous communication and coordination among countries, developers, professional bodies and other stakeholders offer the most important strategy for addressing such risks.
Although global agreements and accords can help address some malicious uses of artificial intelligence (AI) or robotics, proposals to control these technologies through a global regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI development are also futile because many nations would never agree to forgo developing algorithmic capabilities while adversaries advance their own. The U.S. government should therefore continue to work with other nations to address threatening uses of algorithmic or robotic technologies while also taking steps to ensure that it possesses the same technological capabilities as adversaries or rogue nonstate actors.
Nongovernmental international bodies and multinational actors can play an important role as coordinators of national policies and conveners of ongoing deliberation about various AI risks and concerns. Soft law (i.e., informal rules, norms and agreements) will also be central to addressing AI risks. Professional institutions and nongovernmental bodies have developed ethical norms and expectations about acceptable uses of algorithmic technologies, and they play an essential role in highlighting algorithmic risks and in ongoing efforts to communicate and coordinate global responses to them.