Artificial intelligence (AI) is increasingly the focus of heated social and political debates, and it is poised to become an all-encompassing policy concern in the coming months and years. A veritable “computational revolution” is underway, as AI, machine learning (ML), robotics and quantum computing come to affect every facet of our lives. As this technological revolution unfolds, legislative and regulatory interest in algorithmic systems will grow rapidly.

AI policy includes many distinct issues, each with its own nuances. This piece offers a brief sketch of seven algorithmic policy issues that will attract considerable political and regulatory attention in 2023 and beyond.

Two Types of Potential AI Regulation: Broad-based or Targeted

Before outlining some major AI policy concerns, it is worth highlighting how algorithmic regulation could take two forms: broad-based or targeted. Broad-based algorithmic regulation would address the use of these technologies in a holistic fashion across many sectors and concerns. For example, Congress considered an Algorithmic Accountability Act last year that would have imposed restrictions on any larger company that “deploys any augmented critical decision process.” The act also would have required developers to file “algorithmic impact assessments” with a new Bureau of Technology within the Federal Trade Commission (FTC). By contrast, targeted algorithmic regulation addresses specific AI applications or concerns. An example is the autonomous vehicle legislation considered in the last several sessions of Congress but never passed.

It is possible that both types of AI regulation will advance, but targeted policy efforts likely have a greater chance of passing, at least in the short term. Broad-based measures face more hurdles, including a slow and often dysfunctional legislative process that struggles to keep pace with fast-moving sectors. At this time, however, neither broad-based nor targeted federal laws have advanced. Instead, AI governance is mostly taking the form of “soft law” initiatives: informal, iterative and collaborative solutions to governance issues. Notable soft law mechanisms include multi-stakeholder processes; “sandboxes” or experimental test-beds; industry best practices or codes of conduct; technical standards; agency workshops and guidance documents; and education and awareness-building efforts. The courts and common law solutions also supplement these informal mechanisms.

More formal algorithmic regulations may be coming to the United States, and they have already arrived across the Atlantic. The European Union (EU) has implemented a wide variety of data collection mandates that have restricted innovation and competition across the continent. These regulatory burdens have left the EU with few homegrown information technology firms. As a result, the EU now mostly focuses on exporting its mandates globally, primarily in an effort to regulate U.S. tech leaders. The EU is also pushing a new AI Act that would comprehensively regulate algorithms, adding still more red tape.

Here in the United States, many states—led by California—are advancing a variety of tech regulations and algorithmic “fairness” rules. America’s AI innovators thus run the risk of being squeezed between costly and conflicting mandates driven by a “Brussels Effect” (EU efforts to export regulation extraterritorially) and a “California Effect” (state-by-state algorithmic rules, many of which will originate in California). This problem may accelerate federal interest in legislating on this front to counter or complement those regulations. As measures advance, some of the issues or concerns discussed here will likely drive them.

Seven AI Policy Fault Lines

1) Privacy and Data Collection

Perhaps the most important AI policy fault line is also one of the oldest issues in the field of information policy: data collection practices and privacy considerations. Concerns about how collected data might be used by private or government actors have driven calls for privacy legislation for over a decade, but a comprehensive bill has not yet passed.

Because algorithmic systems depend on massive data sets—and because so many of the connected “smart” devices that make up the Internet of Things (IoT) are powered by AI and ML capabilities—concerns about widespread data collection will likely expand. Together, AI, big data and the IoT mean we will live in a world of ambient computing, in which algorithms are ubiquitous: utilized in our homes and workplaces, and even on our bodies to monitor health and fitness. Most Americans already carry an algorithmic supercomputer with them at all times in the form of their smartphones.

The tracking and sensor capabilities of these and other connected devices will introduce continuous waves of policy concerns—and regulatory proposals—as new applications develop and more data is collected. Of course, that data collection is what ultimately makes algorithmic systems capable and effective. Heavy-handed regulation could, therefore, limit the potential benefits of algorithmic systems. Last year’s major privacy proposal, the American Data Privacy and Protection Act (ADPPA), already included provisions demanding that large data handlers divulge information about their algorithms and undergo algorithmic design evaluations based on amorphous fairness concerns.

2) Bias and Discrimination

Other policy concerns flow from this first issue. For example, broader data collection and ubiquitous computing lead some to fear potential discrimination and bias in sophisticated algorithmic systems. Measures like the Algorithmic Justice and Online Platform Transparency Act have been introduced to “assess whether the algorithms produce disparate outcomes based on race and other demographic factors in terms of access to housing, employment, financial services, and related matters.” Last August, the FTC proposed a new rule on commercial surveillance and data security that incorporates provisions to address algorithmic error and discrimination. In October, the Biden administration also released a framework for an AI Bill of Rights that warns algorithmic systems can be “unsafe, ineffective, or biased,” and recommended a variety of oversight steps.

Bias, however, can mean different things to different people. Luckily, a large body of law and regulation already exists that could handle some of these claims, including the Civil Rights Act, the Age Discrimination in Employment Act and the Americans with Disabilities Act. Targeted financial laws that might address algorithmic discrimination include the Fair Credit Reporting Act and Equal Credit Opportunity Act. It remains to be seen how regulators and the courts will seek to enforce these statutes or supplement them.

3) Free Speech and Disinformation

Beyond discrimination, there are more amorphous concerns about how the growth of algorithmic systems might affect free speech, social interactions and even the future of deliberative democracy. There are currently very heated debates about how algorithms are being used for online content moderation, but conservatives and liberals disagree about the nature of the problem. Some conservatives believe social media algorithms are biased against their political views, while some liberals feel that social media algorithms fuel hate speech and misinformation. The Biden administration ignited a firestorm of controversy last year with its Disinformation Governance Board, which would have created a bureaucracy within the Department of Homeland Security to police some of these issues. The growth of large language models such as ChatGPT is giving rise to still more concerns about how AI tools can be used to deceive or discriminate, even as many people use such tools to find or generate beneficial new services.

It is unclear how legislation could be crafted to balance these conflicting perspectives, but the proposed Protecting Americans from Dangerous Algorithms Act would have regulators oversee how “information delivery or display is ranked, ordered, promoted, recommended, [and] amplified” using algorithms. This debate is linked to the push by many on both the left and right to reform or abolish Section 230 of the Communications Decency Act of 1996, the law that shields digital platforms from liability for content posted by their users. At root, Section 230 protects the editorial discretion of tech platforms, including the ways they configure their algorithms for content moderation purposes. Section 230 has generated enormous economic benefits but also considerable controversy, as many blame it for any number of social problems. Major Supreme Court cases are pending that involve how social media operators use algorithms either to disseminate or screen content on their sites.

4) Kids’ Safety

Algorithms would also be regulated under many current kids’ safety bills. Online child safety is one of the oldest digital policy debates and an area that has produced a near endless flow of regulatory proposals and corresponding court cases. Some of the most important internet court cases involved First Amendment challenges to legislative efforts to regulate online content in the name of child protection.

Today, critics on both the left and right accuse technology companies of creating algorithmic systems that are intentionally addictive or funnel inappropriate content to children. Last year, California passed an Age-Appropriate Design Code that would regulate algorithmic design in the name of child safety, and many states are following California’s lead with similar proposals. Meanwhile, Congress has considered the Kids Online Safety Act, a bill that would require audits of algorithmic recommendation systems alleged to target or harm children. Many additional algorithmic regulatory efforts premised on protecting children will likely be introduced this year. Child safety measures are the most likely to advance, but also the most likely to face protracted constitutional challenges, like earlier internet regulatory efforts.

5) Physical Safety and Cybersecurity

Another broad category of concern about AI and ML involves the physical manifestations or uses of algorithmic systems—especially in the form of robotics and IoT devices. AI is already baked into everything from medical diagnostic devices to driverless cars to drones. Regulatory agencies are already considering how their existing statutory authority might cover algorithmic innovations in medicine (Food and Drug Administration) and autonomous vehicles and drones (Department of Transportation). Agencies with broader authority, like the FTC and Consumer Product Safety Commission, have also considered how algorithmic systems might be covered through existing statutes and regulations.

The National Institute of Standards and Technology (NIST) also recently released a comprehensive Artificial Intelligence Risk Management Framework, which is “a guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies.” This soft law effort built upon an earlier NIST Cybersecurity Framework that similarly crafted best practices for connected digital systems.

6) Industrial Policy and Workforce Issues

While most of the policy concerns surrounding AI involve whether governments should limit or restrict certain uses or applications, another body of policy seeks to promote the nation’s algorithmic capabilities and ensure that the United States is prepared to meet the challenge of global competition—especially from China. Both the Obama and Trump administrations took steps to promote the development of AI technologies.

Last year, Congress passed a massive industrial policy measure—the CHIPS and Science Act—that was often described as an “anti-China” bill. Additional programs and spending have been proposed. This type of algorithmic policymaking is probably easier to advance than most regulatory initiatives.

Another class of promotional activities involves AI-related workforce issues. The oldest concerns about automation involve fears about the displacement of jobs, skills, professions and entire industrial sectors. Fear of technological unemployment is what drove the Luddites to smash machines, and similar fears persist today. For example, the Teamsters Union, which represents truck drivers, has worked for years to stop progress on federal driverless vehicle legislation. Organized opposition to other algorithmic innovations could lead to formal restrictions on automation in additional fields. Even writers and artists are expressing concern about the potential disruption posed by large language models like ChatGPT and other generative tools, such as AI-enabled art generators.

7) National Security and Law Enforcement Issues

There is a close relationship between the national security considerations surrounding AI and the industrial policy initiatives floated to bolster the nation’s computational capabilities in this field. Beyond promotional activities, however, there are growing concerns about how the military or domestic law enforcement officials might use algorithmic or robotic technologies.

A decade ago, the Campaign to Stop Killer Robots launched to pursue a multinational treaty that would ban lethal autonomous weapons systems. Hundreds of organizations and thousands of individual experts have also signed the Future of Life Institute’s Lethal Autonomous Weapons Pledge, which “call[s] upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons.”

Global control of AI risks is far more challenging than the control of previous global technological risks, such as nuclear and chemical weapons. Those arms control efforts faced serious international coordination challenges, but algorithmic controls are far more difficult due to the intangible and quicksilver nature of digital code. Regardless, this issue will attract more attention as China and other countries make strides in military AI and robotic capabilities, creating what some regard as dangerous existential risks to global order.

On the law enforcement front, the specter of automated justice and predictive policing raises fears about how algorithms might be used by police or the courts when judging or sentencing people. Governmental uses of algorithmic processes will always raise greater concern and require broader oversight because governments possess coercive powers that private actors do not.

Conclusion

This list only scratches the surface of the universe of AI policy issues. Algorithmic policy considerations are now being discussed in many other fields, including education, insurance, financial services, energy markets, intellectual property, retail and trade, and more. AI is the ultimate disruptor of the status quo, both culturally and economically. Eventually, almost every sector of the economy and every facet of society will be touched by the computational revolution in some fashion. That process will accelerate, and the list of AI-related policy concerns will expand rapidly as it does.

AI risks deserve serious attention, but an equally serious risk exists that an avalanche of fear-driven regulatory proposals will suffocate life-enriching algorithmic innovations. There is a compelling interest in ensuring that AI innovations are developed and made widely available to society. Policymakers should not assume that important algorithmic innovations will just magically come about; our nation must get its innovation culture right if we hope to create a better, more prosperous future.

The most sensible policy default for algorithmic systems is permissionless innovation, or the general freedom to innovate without prior restraint. This policy vision has fueled America’s stunning success in the digital revolution, and it can do the same for the computational revolution. This does not mean government has no role to play; it simply means that, generally speaking, AI innovators should not be considered guilty until proven innocent based on hypothetical worst-case fears about algorithms.

A huge body of law and many different ex-post remedies exist that can help us address algorithmic problems as they develop. And many other iterative and flexible governance solutions are being developed to address AI risks without resorting to heavy-handed, top-down controls as a first-order solution. That sort of flexible governance approach should be America’s priority when it comes to AI policy. There is no use worrying about the future if we cannot even invent it first.
