Artificial intelligence is advancing at a pace that could fundamentally alter how our society operates. Unsurprisingly, it has caught the attention of many members of the U.S. Congress, mostly because of fears about how the technology might be misused and the risks it carries, rather than its potential benefits. One key fear centers on data privacy and security.

Recent action on this front came 21 June, when U.S. Sen. Chuck Schumer, D-N.Y., revealed his SAFE Innovation framework for AI, which he believes will set the stage for bipartisan regulations that allow industry to deploy AI safely without stifling innovation. Schumer’s calls to protect innovation and to solicit a range of perspectives before diving into AI regulation are encouraging. While the framework is more a high-level vision than a substantive policy proposal, he briefly mentioned privacy as an issue to be explored through “Insight Forums.”

Schumer is correct that data privacy intersects with AI and should not be ignored. However, data privacy and security risks exist well beyond AI, across other technologies and everyday activities like grocery shopping or driving to work. That makes broad action to protect privacy critical, rather than searching for solutions only in the context of AI. The logical foundational step is to act on comprehensive data privacy and security legislation grounded in established privacy principles.

AI’s privacy and security risks

As with any emerging technology, AI carries risks and concerns but also enormous promise. As automobiles advanced, safety features progressed to make driving safer, and the same approach should apply to AI. To avoid stifling innovation, tailored regulations and guidelines should be developed with desired outcomes in mind, for example, responsible and effective AI that promotes economic growth and fosters scientific progress.

Ultimately, there are several main areas of concern to consider as both AI-specific and broader privacy actions move forward.

The need for a comprehensive federal privacy and security law has become more urgent with the emergence of AI technology

Addressing privacy only in the context of AI ignores other important areas. AI technologies, and large language models in particular, rely on immense amounts of data, including sensitive data, scraped from across the internet or supplied by users, to create powerful tools like generative AI. However much AI improves our society, these tools make it imperative that the U.S. protect all Americans’ data by passing a comprehensive federal privacy and security law rather than taking a piecemeal approach.

Currently, 12 states have passed comprehensive privacy laws, creating a complex patchwork that leaves Americans in other states unprotected and overburdens industry, particularly small and medium-sized businesses. A comprehensive federal data privacy and security law, like the American Data Privacy and Protection Act (ADPPA) proposed in the 117th Congress, is one of the best ways to mitigate data privacy risks before data is collected and used to train AI. The ADPPA made significant progress in 2022 but ultimately stalled, leaving that gap unfilled.

AI could benefit from a comprehensive federal privacy and security law 

NIST’s AI Risk Management Framework notes that AI regulation should leverage outcome-based privacy regulatory frameworks to promote trustworthy and transparent AI technologies. Comprehensive privacy legislation would help address the privacy risks AI presents by applying general data privacy principles. However, data privacy and security legislation should not try to regulate AI directly; that task is best left to AI-specific frameworks and actions.

The ADPPA implicated AI in several ways. For example, Section 102 would require a covered entity to obtain “affirmative express consent” from the user before transferring sensitive covered data to a third party, such as an AI chatbot. The bill also incorporated essential privacy principles. One is data minimization, which limits what data an entity may collect, process or transfer, reducing the amount of data gathered in the first place. Another is a data retention and disposal schedule requiring “… the deletion of covered data when such data is required to be deleted by law or is no longer necessary.”
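To see how such provisions might map onto engineering practice, the sketch below shows one way a covered entity could enforce consent gating and purpose-based data minimization. It is a minimal illustration only: the field names, the purpose allowlist and the consent flag are all hypothetical, and nothing here reflects actual bill text or an existing compliance library.

```python
# Minimal illustrative sketch, not a compliance implementation.
# All field names, purposes and helpers below are hypothetical.

SENSITIVE_FIELDS = {"health_status", "precise_geolocation", "biometric_id"}

# Hypothetical purpose allowlist: collect/transfer only what each purpose needs.
PURPOSE_ALLOWLIST = {
    "order_fulfillment": {"name", "shipping_address", "email"},
    "fraud_detection": {"email", "ip_address"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Data minimization: keep only the fields necessary for the stated purpose."""
    allowed = PURPOSE_ALLOWLIST.get(purpose, set())
    return {field: value for field, value in record.items() if field in allowed}

def transfer_to_third_party(record: dict, purpose: str, user_consented: bool) -> dict:
    """Refuse to transfer sensitive covered data absent affirmative express consent."""
    minimized = minimize(record, purpose)
    if SENSITIVE_FIELDS & minimized.keys() and not user_consented:
        raise PermissionError("Sensitive covered data requires affirmative express consent")
    return minimized  # in practice, this is what would be handed to the third party
```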

The provision specifically addressing AI was ADPPA’s Section 207, which required large data holders to conduct an algorithm impact assessment whenever collecting, processing or transferring covered data poses a “consequential risk of harm to an individual or group of individuals.” However, “consequential risk” is undefined, which could leave businesses uncertain whether an algorithm is covered. Similarly, before deployment, an algorithm design evaluation is required when an entity develops a covered algorithm to process data “in furtherance of a consequential decision.” Defining these terms would prevent ambiguities that might chill innovation or cause confusion.

It is essential not to let the latest technological buzz around AI distract from the importance of broadly applicable data privacy and security protections, which would not only help address concerns with AI but also protect Americans through current and future advancements. Until federal action arrives, companies should lean on privacy values to produce responsible and effective AI technology, innovating while protecting privacy across all of their products.