This article is part of a series of written products inspired by discussions from the R Street Institute’s Cybersecurity and Artificial Intelligence Working Group sessions. Additional insights and perspectives from this series are accessible here.

As artificial intelligence (AI) use cases become increasingly prevalent across industries and governments, it is hard to overstate the need for robust standards that cultivate a security-first culture. Advancements in AI highlight the significant challenge of balancing benefits against potential risks, including AI’s inherent vulnerabilities. The resulting debate over the right balance between innovation and regulation is vigorous and often contentious. In response, lawmakers, academics, experts, industry leaders, government officials, and concerned citizens are actively exploring potential solutions and seeking opportunities for collaboration.

Our working group brought together experts from government, the private sector, academia, and civil society. Two key areas that emerged from our efforts and that require further exploration are crafting cohesive standards and regulations for AI security and promoting international regulatory harmonization and collaboration. Such efforts have the potential to favorably shape where and how AI is developed and operated, with the objective of matching rapid technological advancements with clear, comprehensive, and coordinated ethical, legal, and safety guidelines. This examination of AI governance also builds upon ongoing cybersecurity considerations, tailoring these strategies to a global, multi-stakeholder landscape.

1. Crafting Cohesive Standards and Regulations for AI Security

Navigating the domestic regulatory and policy landscape for AI reveals the challenge of adapting existing laws and crafting new legislation to leverage the transformative benefits of these emerging technologies while balancing local economic priorities and accounting for myriad social contexts. Clear and consistent AI security standards are essential to establish a secure landscape for AI development and deployment across sectors and borders.

Current U.S. legal frameworks can serve as a foundation for addressing AI-related issues, but significant ambiguities remain—particularly regarding permissible actions for researchers in security-centric AI development. For instance, some policies within AI companies can discourage independent evaluation due to the risks of account suspension or legal reprisal, reflecting a broader absence of supportive legal frameworks for advancing AI security research. This ambiguity impedes the development of robust, secure AI systems and highlights the importance of explicit legal protections that foster a safe and innovative AI ecosystem.

To address this ambiguity, agencies like the Federal Trade Commission and the Cybersecurity and Infrastructure Security Agency, along with sector-specific regulators like the Food and Drug Administration, could use their existing authorities to explore oversight mechanisms for safe and secure AI research and development. This involves crafting clear and coordinated guidelines that emphasize ethical practices and the advancement of secure AI technologies. Moreover, developing risk-based frameworks that define and document organizational risk tolerances concerning AI security and safety is key to enhancing the security of AI systems and applications.

Even as federal frameworks are in development, state and local jurisdictions have been quick to introduce their own AI regulations. For example, Colorado lawmakers recently passed the Colorado Artificial Intelligence Act, which requires developers and deployers of AI systems to implement risk management, transparency, and governance measures when those systems are used to make consequential decisions that affect consumers. Notably, the Colorado law also recognizes areas where AI tools bring considerable benefits to consumers, such as fraud prevention and cybersecurity, and does not treat those particular use cases as high risk. Many other states have enacted data privacy laws that affect AI both directly and indirectly, such as rules governing data collection and automated decision-making systems that use large language models and machine learning. While well-intentioned, state and local efforts to address these risks can lead to overlapping or conflicting standards among key provisions or to compliance challenges for businesses.

The variation in state approaches highlights the need for overarching federal guidance to create a more standardized baseline from which state and local officials can better evaluate AI security risks and safety concerns. In the absence of comprehensive federal solutions, sector-specific frameworks are needed to address the distinct challenges and opportunities AI presents within different industries. These frameworks must balance sector-specific needs with overarching national security and economic priorities, ensuring that governance remains adaptable and responsive to the diverse requirements of various industries and organizations. While federal action would help reduce the patchwork effect created by multiple, and often conflicting, state and local laws, overly intrusive federal regulations could stifle state and local innovation. The challenge will be finding an appropriate balance between centralized and decentralized regulatory efforts.

2. Promoting International Regulatory Harmonization and Collaboration

The international arena offers a dizzying variety of AI policies and regulations, underscoring how challenging it will be to align varied approaches to AI governance so that organizations and businesses need not comply with multiple frameworks. Nearly every country is currently debating the impact of AI, and the European Union (EU), the United States, and 37 other countries, including China, Japan, and India, have already proposed AI-related legal frameworks.

Approaches by the United States, the EU, and China represent contrasting models that exemplify these challenges. The U.S. approach, highlighted by the Biden administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, favors open collaboration and emphasizes innovation, national security, economic competitiveness, and the development and use of safe and secure AI. Conversely, the EU’s AI Act establishes a comprehensive, risk-based regulatory framework that categorizes AI systems by risk level and applies corresponding regulatory measures to ensure safety and protect citizens’ fundamental rights, seeking the safe and responsible advancement of AI technologies while upholding human dignity, privacy, nondiscrimination, and ethical standards. Some critics consider this approach overly burdensome. Although the EU and U.S. approaches to AI governance differ in their regulatory frameworks, they align closely on values such as ethical use, privacy, and democratic principles.

In contrast, China’s Interim Measures for the Management of Generative AI Services illustrate a model of state control in which AI is leveraged as a strategic tool. Beijing’s measures align AI development with Chinese Communist Party (CCP) values like social control under the auspices of the “core values of socialism” and provide political cover for exerting jurisdiction over foreign AI services capable of interacting with users in China. Measures like these demonstrate the CCP’s protective stance over China’s digital ecosystem and its desire to maintain technological sovereignty.

Not only do these approaches reflect a divergence in each nation’s priorities and values, but they also complicate efforts to harmonize and collaborate, particularly in critical areas like cybersecurity. In May, the United States and China held their first-ever closed-door talks on AI, where the two countries’ incongruous perspectives came to the fore. Similarly, ongoing dialogues and collaborative efforts with the EU will be essential to navigating these diverse regulatory landscapes. Though we expect debates over AI utilization and governance to continue, we hope this dialogue narrows, rather than widens, the gap in perspectives.

Public-private partnerships have the potential to promote international regulatory collaboration and could bolster integration among government, industry, and academia, creating a unified front to address emerging AI and cybersecurity challenges. Partnerships between companies, such as the recent Tech Accord pledge, could also positively influence behavioral norms around AI risk. Furthermore, facilitating information sharing among allies is key, especially regarding threats facing AI systems and actors leveraging AI for malicious purposes. This enhanced cooperation will facilitate the development of coherent global strategies that share best practices and enhance ethical oversight. Finally, encouraging public-private collaboration on data integrity is essential to ensure the consistent application of data protection standards across borders, improving the overall integrity and security of AI applications worldwide.

Looking Forward

Our series has highlighted AI’s crucial role in bolstering cybersecurity and national security, emphasizing the need for a coordinated and balanced regulatory approach that promotes U.S. innovation and secures our technological leadership. We also underscore the global imperative of AI governance, which demands innovative strategies that transcend local frameworks.

As AI integrates more fully into everyday life, governance must be designed and updated to manage potential risks and legal hurdles while maximizing the technology’s transformative potential. AI governance frameworks must be flexible and adaptable, anticipating ongoing technological advancements and facilitating international collaboration. Transparent and robust U.S. leadership will reinforce the AI community’s emphasis on adaptability and collaboration and can help ensure that AI governance is not only effective but also resilient, capable of evolving with the technology itself. This approach will allow AI to fulfill its promise as a transformative force across all aspects of society while safeguarding against potential risks.

