This article is part of a series of written products inspired by discussions from the R Street Institute’s Cybersecurity and Artificial Intelligence Working Group sessions. Additional insights and perspectives from this series are accessible here.

Over the last several months, the working group assessed how best to integrate artificial intelligence (AI) with cybersecurity, finding areas of profound benefit, potential, and security risk. We homed in on understanding and exercising risk tolerance within evolving governance approaches in a way that balances AI’s risks and rewards. We believe this approach also enables the creation of holistic, resilient solutions that can effectively address the complexities of our dynamic, AI-enhanced cybersecurity and digital ecosystems.

As the working group looked toward governance solutions at the nexus of AI and cybersecurity, three critical areas emerged: securing AI infrastructure and development practices, promoting responsible AI applications, and enhancing workforce efficiency and skills development. This exploration evaluates progress and identifies persistent challenges, offering tailored recommendations for policymakers charged with navigating these intricacies to responsibly promote AI advancement and harness its full potential.

1. Securing AI Infrastructure and Development Practices

Effective security measures and practices for AI systems are multi-layered, encompassing the protection of data, models, and networked systems from incidents like unauthorized access and cyberattacks. Given the potential security issues in this area, AI development practices must prioritize security and adhere to ethical standards throughout the AI lifecycle. The growing awareness among organizations, users, and policymakers of the need to implement comprehensive cybersecurity strategies covering both physical and cyber defenses is a positive trend.

However, challenges remain in better securing AI infrastructure and development. One primary challenge is comprehensively auditing and evaluating AI system capabilities. The absence of universally adopted auditing standards and reliable metrics creates potential inconsistencies in AI evaluations, which are crucial for identifying vulnerabilities and ensuring robust cybersecurity.

We have several recommendations for addressing this challenge and supporting ongoing governance efforts. First, we recommend securing government and private sector support for research and standardization initiatives in AI safety and security. Focused efforts to develop reliable metrics for assessing the security of data, protection of models, and robustness against attacks would provide a foundation for more consistent auditing practices. Ongoing efforts, such as those by the U.S. AI Safety Institute Consortium to “[develop] guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content,” should be encouraged and appropriately funded. Such investments could also facilitate the creation and widespread adoption of balanced, comprehensive standards and frameworks for AI risk management, building upon existing initiatives like the AI Risk-Management Standards Profile for General Purpose AI Systems and Foundation Models and the National Institute of Standards and Technology’s AI Risk Management Framework.
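To make the notion of “reliable metrics” concrete, consider one simple measure: the rate at which adversarial red-team prompts elicit policy-violating output from a model. The Python sketch below is a minimal illustration under stated assumptions; the prompts, the `run_model` call, and the `is_policy_violation` classifier are hypothetical stand-ins rather than elements of any consortium guideline.

```python
# Minimal sketch: scoring a model's robustness against a red-team prompt set.
# All inputs (prompts, run_model, is_policy_violation) are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class RedTeamResult:
    prompt: str
    response: str
    violated: bool

def evaluate_red_team(prompts, run_model, is_policy_violation):
    """Run each adversarial prompt and record whether the model's response
    violates policy; return per-prompt results and the aggregate attack
    success rate (lower is better)."""
    results = []
    for prompt in prompts:
        response = run_model(prompt)               # model under test
        violated = is_policy_violation(response)   # external safety classifier
        results.append(RedTeamResult(prompt, response, violated))
    attack_success_rate = sum(r.violated for r in results) / len(results)
    return results, attack_success_rate

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    prompts = ["<adversarial prompt 1>", "<adversarial prompt 2>"]
    run_model = lambda p: "refused"                # placeholder model call
    is_policy_violation = lambda r: r != "refused"
    _, asr = evaluate_red_team(prompts, run_model, is_policy_violation)
    print(f"Attack success rate: {asr:.0%}")
```

In practice, a metric like this would run over large, curated adversarial corpora and sit alongside capability evaluations and data- and model-security assessments.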

Furthermore, evaluating the security risks associated with both open- and closed-source AI development is necessary to promote transparency and robust security measures that mitigate potential vulnerabilities and ethical violations. Understanding the risks and opportunities of combining large language models with other AI and legacy cybersecurity capabilities will refine the development of informed security strategies. Finally, developing AI security frameworks tailored to different industries’ unique needs and vulnerabilities can account for sector-specific risks and regulatory requirements, ensuring that AI solutions are secure and flexible.

2. Promoting Responsible AI Use

The promotion of responsible AI use encourages organizations and developers to adhere to voluntary best practices in the ethical development, deployment, and management of AI technologies, ensuring compliance with security standards and proactively counteracting potential misuse. Integrating ethical practices throughout the lifecycle of AI systems builds trust and accountability as AI applications continue to expand across critical infrastructure sectors.

Despite significant expansions in AI-driven cybersecurity applications, ongoing challenges have hindered the responsible use of AI. The absence of clear definitions and standards, particularly with key terms like “open source,” results in varied security practices that can make compliance efforts burdensome or impossible. Outdated legacy systems often cannot support emerging AI security solutions, leaving them vulnerable to exploitation. Furthermore, as cloud computing becomes increasingly integral to AI system deployment due to its scalability and efficiency, ensuring that AI applications on these platforms maintain robust cybersecurity practices has proven challenging. For instance, security vulnerabilities in AI-generated code have emerged as a top cloud security concern.
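To make that concern tangible, the Python sketch below contrasts a vulnerability pattern frequently flagged in AI-generated code (SQL built by string interpolation, which permits injection) with the standard parameterized fix. It is a generic, hypothetical example rather than code drawn from any specific incident or report.

```python
# Illustration only: a common vulnerability pattern in AI-generated code.
import sqlite3

def find_user_unsafe(conn, username):
    # Anti-pattern often produced by code assistants: string interpolation
    # lets an attacker inject SQL through the username parameter.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver escapes the value, closing the hole.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'a@example.com')")
    # A crafted input that dumps every row from the unsafe version:
    payload = "x' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # leaks all users
    print(find_user_safe(conn, payload))    # returns nothing, as intended
```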

To overcome these challenges, we encourage a multifaceted approach that includes in-depth security standards and processes. Developing clear, widely accepted definitions and guidance would lead to more consistent and ethical security practices across all AI applications in the cybersecurity sector and beyond. Modernizing legacy systems to accommodate responsible AI principles will ensure these systems can support both emerging security updates and responsible use standards. Because AI security is a nascent field, monitoring discoveries of new security issues and novel threat-actor techniques for attacking AI systems will help organizations remain ready to protect their systems. Moreover, encouraging cloud security innovations that leverage AI for enhanced threat detection, posture management, and secure configuration enforcement will further strengthen cloud security measures. Implementing these recommendations will promote responsible AI applications in cybersecurity that mitigate both deliberate and unintentional risks and misuse.
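As a hedged illustration of what AI-assisted threat detection can look like, the Python sketch below fits an isolation forest to synthetic “normal” login features and flags an outlying event. The features, values, and parameters are all invented for illustration; a production detector would draw on far richer telemetry and tuning.

```python
# Minimal sketch of AI-assisted threat detection: flag anomalous login
# events with an isolation forest. All features and values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" logins: [hour_of_day, failed_attempts, mb_downloaded]
normal = np.column_stack([
    rng.normal(13, 3, 500),   # business-hours activity
    rng.poisson(0.2, 500),    # occasional failed attempts
    rng.normal(20, 5, 500),   # typical download volume
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new events: one ordinary, one suspicious (3 a.m., many failures, bulk pull).
events = np.array([[14, 0, 22], [3, 9, 900]])
print(model.predict(events))  # 1 = normal, -1 = flagged as anomalous
```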

3. Enhancing Workforce Efficiency and Skills Development

Ongoing talent shortages reflect a notable deficit of people who can understand and employ AI technologies in cybersecurity. Substantial progress has already been made in leveraging AI to enhance cybersecurity awareness, workforce efficiency, and skills development. For example, AI-driven simulations and educational platforms now provide dynamic, real-time learning environments that adapt to the learner’s pace and highlight areas that require additional focus. These advancements have also made training more accessible, allowing for a broader reach and facilitating ongoing education on the latest threats and AI developments.

Although this progress is encouraging, additional education and awareness can improve organizational leaders’ understanding of when and how to guide AI’s integration within the cyber workforce as well as across organizational practices, considering the varying recommendations and regulations that govern these implementations. This is especially the case for small- and medium-sized businesses, where resource constraints and regulatory compliance challenges can limit the ability to implement AI efficiently compared to larger entities.

We recommend several solutions to respond to these challenges. Comprehensive workforce development and training on the intersection of cybersecurity laws, ethical considerations, and AI should ensure that all levels of the workforce (especially those in government and military roles, as well as contractors and vendors servicing these sectors) understand the implications of deploying AI solutions within legal, ethical, and security boundaries. AI-driven training and upskilling for the cybersecurity workforce should also be promoted to expedite the training process and prepare the workforce for current and future challenges. Finally, organizations should learn to leverage AI to transform cybersecurity practices through modeling, simulation, and innovation. The development and use of AI for cybersecurity applications, such as digital twins for analyzing cyber threats, should be encouraged and supported through continued investments. These complementary recommendations ensure the cybersecurity workforce is equipped with cutting-edge AI-driven solutions and remains responsive to emerging cybersecurity threats.
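To illustrate the digital-twin idea mentioned above, the Python sketch below replays a hypothetical compromise across a simplified model of a network to estimate how far an attacker could spread. The topology, host names, and infection probability are invented for illustration only.

```python
# Hedged sketch of a "digital twin" idea: replay a hypothetical compromise
# over a simplified copy of a network to estimate its blast radius.
import random

random.seed(1)

# Adjacency list standing in for the real network's twin (invented topology).
network = {
    "laptop": ["web"],
    "web": ["app"],
    "app": ["db", "cache"],
    "db": [],
    "cache": [],
}

def simulate_spread(start, p_infect=0.5, trials=1000):
    """Monte Carlo estimate of how many hosts a compromise reaches."""
    total = 0
    for _ in range(trials):
        infected, frontier = {start}, [start]
        while frontier:
            host = frontier.pop()
            for neighbor in network[host]:
                if neighbor not in infected and random.random() < p_infect:
                    infected.add(neighbor)
                    frontier.append(neighbor)
        total += len(infected)
    return total / trials

print(f"Expected hosts reached from 'laptop': {simulate_spread('laptop'):.2f}")
```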

The Road Ahead

Clearly, AI regulations are still taking shape, even as our technological capabilities in both AI and cybersecurity continue to advance rapidly. In the next decade, we anticipate the emergence of autonomous AI agents and more sophisticated AI capability evaluations (among other developments) that will generate both optimism and a need for ongoing preparation.

Significant progress has been made in AI-cybersecurity governance to secure AI infrastructure and development practices, promote responsible AI applications, and enhance workforce efficiency and skills development. These efforts have laid a strong foundation for AI’s integration into cybersecurity. However, there is still a long road ahead. Collaborators across government, industry, academia, and civil society should pursue an appropriate balance between security principles and innovation. Policymakers and cybersecurity leaders, in particular, must stay proactive in updating governance frameworks and approaches to ensure the safe and innovative integration of AI technologies. By prioritizing adaptability and ongoing education in our strategic AI-cybersecurity governance approaches, we can effectively harness AI’s transformative potential to secure our technological leadership and national security.
