Artificial Intelligence, Participatory Democracy, and Responsive Government


Image: AI and Democracy (Credit: Chris Burnett)

Editor’s Note: Though framed in terms of the American experience, many of the issues raised in this post are relevant for Canada and other parts of the world.

Government must implement safeguards against malicious uses of AI that could misrepresent public opinion and distort policymaking.

In 2017, during a Federal Communications Commission (FCC) public comment period, bots flooded the agency with more than a million comments from fake constituents calling for the repeal of net neutrality rules. Regulators uncovered this covert attempt to subvert the policy process, but only because hundreds of thousands of the comments were uncannily similar. In the six years since, artificial intelligence (AI) tools have only grown more sophisticated, and similar efforts to deceive policymakers promise to become increasingly difficult to detect and prevent. This troubling risk and its implications for participatory government call for urgent action.
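To make the detection problem concrete: the tell in the FCC episode was wording repeated across huge numbers of submissions. The sketch below (a simplified illustration, not the FCC’s actual methodology) flags pairs of comments whose wording is suspiciously similar, using TF-IDF vectors and cosine similarity:

```python
# Illustrative sketch: flag near-duplicate public comments by pairwise text similarity.
# This is a simplified stand-in for how "uncannily similar" submissions might be surfaced,
# not a description of the FCC's actual methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_near_duplicates(comments, threshold=0.9):
    """Return index pairs of comments whose TF-IDF cosine similarity exceeds the threshold."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
    sims = cosine_similarity(vectors)
    pairs = []
    for i in range(len(comments)):
        for j in range(i + 1, len(comments)):
            if sims[i, j] >= threshold:
                pairs.append((i, j, round(float(sims[i, j]), 3)))
    return pairs

comments = [
    "Please repeal the net neutrality rules; they burden providers.",
    "Please repeal the net neutrality rules, which burden providers.",
    "I support keeping the open internet protections as they are.",
]
print(flag_near_duplicates(comments))  # the first two comments should pair up
```

As discussed below, generative AI undercuts exactly this kind of screen by producing unique wording for every comment, which is why similarity-based detection alone no longer suffices.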

The practice of open and responsive government is integral to the American tradition. Underlying it is the principle that the public should have ample access to information and opportunity to weigh in on important policy decisions. New technologies, from the invention of the telegraph to the explosion of social media, have produced new pathways for public democratic engagement. Yet these same technologies often create new tools that can be used to misrepresent public opinion and distort policymaking. AI is no exception.

As with prior technological revolutions, developments in AI hold promise: in time, they could enhance government’s ability to understand what the public wants and to help citizens participate in policy decisions. In particular, automated summation and analysis could be deployed to augment officials’ capacity to digest public feedback. But these potential benefits can only be realized alongside guardrails to ensure that AI systems are accurate and fit for the purposes at hand, to prevent system biases from reflecting sociohistorical inequities and diminishing the voices of specific communities, and to mitigate the effects when such outcomes occur.

The FCC episode exemplifies how antagonists can use AI tools to undermine authentic public input on government decision-making. New developments in generative AI enable vastly more sophisticated mass deception efforts that effectively mimic individuals’ feedback on public policy, begetting the danger that officials will be duped or diverted by hard-to-detect AI-generated content. Using generative AI, fraudsters can produce mountains of seemingly genuine policy submissions from fake stakeholders to manipulate perceptions of public sentiment. If maliciously deployed at sufficient scale, such efforts can erode government’s ability to be responsive to the American people — and ultimately undermine the people’s trust in government. Public officials who administer elections, meanwhile, face acute risks that bad faith actors will exploit AI tools to amplify baseless concerns about the election process, distract from voter needs ahead of (or on) Election Day, or drown offices in bot-delivered document requests that are difficult to identify as AI-generated.

These hazards require targeted policy intervention. To address these concerns, government institutions should take the following steps:

  • Implement high-accuracy systems to verify human activity where feasible, with special attention to accessibility and data privacy.
  • Expand accessible opportunities for in-person participation in government decision-making, including open public hearings and town halls.
  • Consistently implement surveys directed to known constituents, such as those who have utilized government services or received public benefits, to get feedback on program implementation.
  • Modify applicable laws and rules so that agencies have the authority to disregard document requests and submissions on proposed regulations shown to have been fraudulently transmitted by bots or automated systems, including those powered by generative AI.
  • Establish guardrails to regulate governmental use of AI to analyze, summarize, or aggregate public comments or substantially assist in the drafting of rules or public policy, including by setting requirements to guard against bias and ensure accuracy.
  • Engage regularly with civic organizations to better understand how stakeholders and constituents are using AI to interact with and advocate to government.

Citizen Participation and Government Responsiveness: AI as a Tool to Distort Perceptions of Public Sentiment

As the political theorist Robert Dahl famously observed, “a key characteristic of democracy is the continuing responsiveness of the government to the preferences of its citizens.” Politicians, pundits, and scholars alike have long debated whether American government fulfills this promise. Historically, much of the focus has been on forces that undermine citizens’ ability to choose their leaders — like outright mass disenfranchisement, legislative malapportionment, and gerrymandering — along with forces that shape incentives for public officials, like the outsize role of wealthy campaign donors and the corollary risk of corruption. Yet a basic prerequisite for responsiveness is the ability of officials to assess what the public actually wants. Before policymakers can address constituent preferences, they must first be able to glean what those preferences are.

Of course, policymakers have a range of tools to help them understand what the public wants. Some of these tools, like opinion polls, are relatively insulated from distortion by AI (at least for now), particularly when they rely on methods such as address-based recruitment. Other methods are far more vulnerable to AI manipulation. And polling’s limitations — including its meager ability to capture nuanced positions and views rendered in detail — are well known, which is why direct engagement with the public and solicitation of public input remain essential. Here, we briefly consider two vulnerable tools for learning citizens’ inclinations: correspondence directed to elected officials and online public comments submitted to regulatory agencies.

Elected officials from presidents to county commissioners have long relied on written communications to keep a finger on the pulse of their constituents, especially the subset of informed voters attuned to a particular issue. In the past, outside groups have tried to skew assessments of constituent sentiment through astroturfing, the process of recruiting individuals to send often-prewritten form letters to give the false appearance of bona fide grassroots sentiment. However, generative AI allows malicious actors to create, with ease and on a massive scale, unique communications from fake constituents ostensibly advocating policy positions, which greatly complicates the task of detecting efforts to skew perceptions of popular preferences.

Can elected officials discern AI-generated content from authentic, human-authored constituent mail? Recent research suggests not. In 2020, researchers conducted a field experiment in which they sent roughly 35,000 emails advocating positions on a range of policy issues to more than 7,000 state legislators. Half were written by humans; the other half were written by GPT-3, the then-cutting-edge generative AI model. To explore whether legislators could recognize AI-generated content, the study compared response rates to the human-written and AI-generated correspondence (on the theory that legislators would not waste their time responding to appeals from AI bots). On three issues, the response rates for AI- and human-written messages were statistically indistinguishable. On three other issues, the response rates to AI-generated emails were lower — but only by 2 percent on average.
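For readers who want a concrete sense of what “statistically indistinguishable” means here, the comparison at issue is a test of whether two response rates differ by more than chance would predict. The sketch below uses invented counts, not the study’s data, and assumes the statsmodels library:

```python
# Hypothetical illustration of comparing response rates to human- vs. AI-written emails.
# The counts below are made up for demonstration; they are not the study's data.
from statsmodels.stats.proportion import proportions_ztest

responses = [620, 590]      # replies received: [human-written, AI-written]
emails_sent = [2900, 2900]  # emails sent in each arm

stat, p_value = proportions_ztest(responses, emails_sent)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# A large p-value means the observed gap in response rates is consistent with chance,
# i.e., legislators replied to AI-written mail about as often as to human-written mail.
```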

This finding suggests that an antagonist capable of easily generating thousands of unique communications could potentially skew legislators’ perceptions of the issues most important to their constituents and of constituents’ views on any given issue. Moreover, many of the inconsistencies and problems that some legislators reported as helping them identify suspicious content are becoming less prevalent with more advanced AI models. For example, GPT-3 struggled with logical consistency in writing an email taking a conservative position on gun control legislation, but GPT-4’s updated algorithms are better equipped to avoid such issues. State-of-the-art open-source models, too, are now advanced enough for these writing tasks, making it difficult — if not impossible — to put the genie back in the bottle.

Generative AI could also threaten democratic responsiveness by undermining the main mechanism for incorporating public input into the regulatory process: the public comment period. Administrative agencies play a vital role in policymaking — particularly at the federal level — but their relative insulation from electoral politics can raise concerns about accountability to the public. The notice-and-comment process, while far from perfect, is one of the principal tools that unelected agency officials use to assess public preferences during agency rulemaking. Technology, especially the advent of e-rulemaking in 2002, has long been heralded as a way to address concerns about democratic accountability by “enhanc[ing] public participation . . . so as to foster better regulatory decisions.” The Regulations.gov website, the official clearinghouse for information related to agency rulemaking, invites ordinary citizens to “Make a difference. Submit your comments and let your voice be heard.”

But as the bot-driven disruption of the FCC’s net neutrality rulemaking shows, this process is susceptible to subversion. Whereas a crush of comments with suspiciously similar language patterns unmasked the net neutrality scheme, new generative AI technologies can easily overcome this problem by creating millions of unique and varied comments advancing a given policy position. As a result, agencies will find it much more difficult to determine which comments genuinely represent prevailing public sentiment, or even the views of the most directly interested stakeholders.

Recognizing the threat that malicious AI use poses, technology firms are racing to produce tools that can identify AI-generated text. However, the challenges to doing so effectively are difficult and protean. In July 2023, OpenAI discontinued its AI classifier tool because of its limited accuracy. Given the requirements regarding public comments enshrined in the Administrative Procedure Act, only a highly accurate screening tool would likely pass muster in the context of federal rulemaking processes. And even if future advances succeed in producing reliable and effective tools, access to and ability to use them may vary across offices and levels of government.

Election-Specific Challenges: AI as a Tool to Divert Resources and Distort Perceptions of Public Need

Generative AI’s potential to disrupt the work of election offices poses a major risk to democracy. Open records laws (also called sunshine laws) are vital transparency tools that let the public peer into the inner workings of government and hold officials accountable. But in past election cycles, election deniers have weaponized such laws to distract and overload officials at some election offices, supplanting the important work of election administration at crucial junctures. In the lead-up to the 2022 midterm elections, citizens mobilized by well-funded and conspiracy-addled national groups — with connections to the January 6, 2021, attack on the U.S. Capitol — inundated local election offices with document requests. Some such requests sought sensitive information like cast vote records, election security protocols, or voting machine serial numbers. While state public records laws may rightly prevent election offices from disclosing information that endangers election security, local officials do not universally know what information must or must not be disclosed in response to requests for records. In the future, election deniers and other malicious actors could step up such efforts by deploying bots powered by generative AI. State laws typically do not limit the use of bots to submit mass-produced requests, nor do they permit officials to decline to respond to automated mass-produced requests — even if the intent is to divert officials’ attention from the critical work of administering elections ahead of Election Day.

Election offices also depend on public input to deliver relevant information about elections and to prioritize tasks in the face of extremely limited resources. Election officials must know about the needs of people with disabilities, minority language groups, and other disadvantaged communities — particularly when those groups encounter issues attempting to register to vote or cast a ballot on Election Day. Officials must also be aware of common areas of confusion about how to vote and be able to effectively address concerns about election security among target groups.

Malicious use of generative AI could put those essential roles at risk. Deceptive AI-generated mass comments that distort or drown out genuine public questions and feedback could leave voters in need without adequate support and misdirect voter education efforts. Such comments could also fuel restrictive voting rules that erect new barriers to the ballot by creating the impression that baseless fears about the election process are more prevalent than they really are; in the past, disinformation purveyors have exploited unfounded fears about election integrity to push through policies that restrict voting access without legitimate justification. AI-assisted distortion of perceived public need could thus make election officials less responsive to voters, less able to communicate vital election information, and less likely to give voters information about the matters they care about most.

Cause for (Tempered) Optimism: AI’s Potential Benefits for Participatory and Responsive Government and Needed Guardrails

If generative AI and other AI tools pose risks for participatory and responsive government, they also present opportunities — most notably their potential to help policymakers better manage and respond to comments, feedback, and questions from the public. But governmental use of AI to substantially assist in sensitive tasks also warrants standards to prevent improper, biased, or inaccurate results. (Substantially assist means employing an AI system to perform a task more complex than, say, alphabetizing documents or sorting them by date or file size.)

Government offices already use non-generative AI to provide information to the public and to respond in real time to constituents seeking information or services. Many government bodies in the United States and elsewhere use AI-powered chatbots to provide 24/7 constituent assistance. Typically not powered by generative AI, these tools tend to be rule-based chatbots that recognize keywords and churn out pre-vetted responses, or conversational agents (similar to Apple’s Siri) that use machine learning and natural language processing to assist constituents. Although chatbots often face public resistance — more than half of respondents expressed negative views about interacting with them in one survey — as they improve, these non-generative-AI technologies could help governments conserve resources and be more responsive if offices implement adequate safeguards to ensure their proper use.
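A rule-based chatbot of the kind described above can be quite simple. The sketch below, with invented keywords and canned answers, matches a constituent’s message against keyword lists and returns only pre-vetted text; nothing is generated:

```python
# Minimal sketch of a rule-based constituent chatbot: match keywords, return pre-vetted answers.
# Keywords and responses here are invented examples, not any agency's actual content.
RESPONSES = {
    ("trash", "garbage", "pickup"): "Curbside collection runs weekly; check your zone's schedule online.",
    ("permit", "parking"): "Residential parking permits can be renewed at the city clerk's office or online.",
    ("register", "vote", "voting"): "Voter registration information is available from your local election office.",
}
FALLBACK = "Sorry, I couldn't find an answer. A staff member will follow up with you."

def reply(message: str) -> str:
    words = message.lower()
    for keywords, answer in RESPONSES.items():
        if any(k in words for k in keywords):
            return answer  # pre-vetted response, no text generation involved
    return FALLBACK

print(reply("When is garbage pickup on my street?"))
```

Because every possible reply is written and vetted by staff in advance, this design avoids the accuracy and bias risks of generative systems, at the cost of flexibility.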

Policymakers at every level of government take in a large volume of outreach from the public that strains the ability of already overstretched staff to respond. These challenges have been thoroughly documented in the context of federal rulemaking. Agencies can receive tens of thousands or even hundreds of thousands of comments on a particular proposed policy, most of which receive perfunctory treatment. Local administrative bodies similarly struggle to respond to citizen communication requests, feedback, and questions. For example, in an audit of responsiveness to requests for information about how to apply for public housing, around 40 percent of requests, on average, received no reply. Fielding outreach from the public is also a major focus for Congress and state and local legislatures. The Congressional Management Foundation has estimated that many congressional offices dedicate roughly 50 percent of staff resources to managing and responding to constituent communications.

Responding to public outreach imposes opportunity costs on government offices, but neglecting to do so is also problematic. Failure to receive a response from government officials can erode citizens’ perceptions of officials’ political efficacy and cause constituents to become more disengaged from the political process. In some cases, it can have even more immediate and tangible consequences, such as citizens not finding information or assistance needed to receive benefits they are entitled to. It can be especially troubling when nonelected government officials ignore or pass over messages from the public. In the federal rulemaking context, the well-documented tendency of regulators to discount most comments that focus on policy preferences or values in favor of a few relatively detailed technical comments (often from industry or other established stakeholders) arguably undercuts the democratic legitimacy of administrative decisions affecting millions of people’s lives. It also tends to skew policy outcomes in favor of more powerful constituencies with the resources to amass the sort of technical information that agency staff are likely to respond to.

AI could potentially address these shortcomings and improve agency rulemaking processes. Several recent articles have explored how new AI tools could help agency staff review and summarize the thousands or even millions of public comments received in high-profile rulemakings. For instance, regulators could use language models trained on the corpus of materials and comments relevant to rulemakings to assist in processing, synthesizing, and summarizing information provided by the public. This strategy could nudge regulators to consider more value-focused comments in crafting policy. AI tools could improve other aspects of the notice-and-comment process as well — for example, by helping regulators detect and screen out automated, misattributed, or otherwise spurious comments (though significant challenges with detection of AI-generated content persist, as described above). And, although American commentators have mostly focused on the use of AI in regulatory processes, national legislatures in other countries are also experimenting with ways to use AI to collect and organize citizen input.
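As a rough stand-in for the language-model-assisted synthesis described above, the sketch below clusters a handful of invented comments into themes so that reviewers could sample each theme rather than read submissions in arrival order:

```python
# Illustrative sketch: group public comments into rough themes for review.
# A simplified stand-in for the LLM-assisted synthesis discussed in the text;
# the comments below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "The proposed rule will raise compliance costs for small farms.",
    "Small family farms cannot absorb these new compliance costs.",
    "Please strengthen the water quality protections in section 3.",
    "Section 3's water protections should go further, not be weakened.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for cluster in range(2):
    members = [c for c, label in zip(comments, kmeans.labels_) if label == cluster]
    if members:
        print(f"Theme {cluster}: {len(members)} comments, e.g., '{members[0]}'")
```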

Generative AI could also facilitate citizen engagement with government officials during the notice-and-comment process. Some analysts have suggested that AI could be used to alert potential commenters to topics of interest, help them review rule text, summarize technical literature, and even compose comments. Such tools could also help members of the public who would otherwise lack the time or confidence to participate in the public comment process. Again, however, issues related to the accuracy, bias, and quality of information produced by generative AI must be addressed.

The potential benefits notwithstanding, the use of more sophisticated machine-learning and language-processing AI tools — even those not powered by generative AI — carries significant risks. AI systems need adequate testing, vetting of training data quality, and human oversight to ensure that they provide accurate, accessible, and beneficial information to the public. Bias is a major concern. For instance, an AI bot employed by a local government might provide different answers to constituents writing in about city services based on the neighborhoods they live in. Or an AI conversational agent might struggle to process certain accents or vernacular languages — scenarios that could differentially affect poor communities, communities of color, and non-English-dominant communities.
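One way to make such bias concerns measurable is to audit logged interactions for accuracy gaps across communities. The sketch below uses invented records and a deliberately crude accuracy metric purely to illustrate the idea; a real audit would need vetted labels and far more careful group definitions:

```python
# Illustrative bias check: compare a chatbot's answer accuracy across neighborhoods or language groups.
# The records below are invented; a real audit would use logged interactions and vetted labels.
from collections import defaultdict

interactions = [
    # (group, answered_correctly)
    ("neighborhood_a", True), ("neighborhood_a", True), ("neighborhood_a", False),
    ("neighborhood_b", True), ("neighborhood_b", False), ("neighborhood_b", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in interactions:
    totals[group] += 1
    correct[group] += ok

rates = {g: correct[g] / totals[g] for g in totals}
print(rates)
print("accuracy gap:", max(rates.values()) - min(rates.values()))
# A large gap across groups would be a red flag requiring mitigation before deployment.
```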

Guardrails governing AI’s use for these purposes are imperative. Using AI to review, summarize, and screen public comments presents the same accuracy and bias concerns as its use in other contexts. People are justifiably squeamish at the thought of AI replacing human deliberation in government decision-making. One concern is that “no one at the agency [will] actually read the comments and grapple[] with their arguments.” And AI tools could be used not only to review and summarize comments but also to generate responses to justify an agency’s predetermined approach. Enforceable standards are needed to ensure that AI tools review and summarize comments accurately and without bias, and also to keep human decision-makers active in the thinking work of responding to those comments and, where warranted, incorporating them into final policy decisions.

Policy Solutions

Implement Accurate, Effective Systems to Verify Human Activity, with Special Attention to Accessibility and Data Privacy

Governing bodies should implement policies that guard against the malicious use of bots to lodge mass AI-generated comments intended to warp officials’ perceptions of public sentiment. Systems that verify human activity can achieve this in part, but they can also add friction for users, create data privacy concerns, and decrease access for Americans with disabilities. Most federal agencies invite public comments on rulemakings via Regulations.gov, which employs the reCAPTCHA human verification system to bolster the integrity of the comment submission process. (CAPTCHA stands for “completely automated public Turing test to tell computers and humans apart.”) Although not foolproof, reCAPTCHA helps distinguish between human and bot activity. Regulations.gov uses a new generation of reCAPTCHA that increases accessibility over prior generations, presenting users with a simple “Are you a robot?” checkbox as a first-order challenge while analyzing user behavior to identify signs of bot activity. However, at least one study has found that this version of reCAPTCHA still introduces obstacles for people with visual impairments; it also collects some user data in determining whether users are human.
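For context on what this verification involves on the back end, the sketch below checks a client-supplied reCAPTCHA token against Google’s published siteverify endpoint before a comment is accepted. The secret key, score threshold, and surrounding submission logic are placeholders for illustration:

```python
# Illustrative server-side check of a reCAPTCHA token accompanying a comment submission.
# Endpoint and parameters follow Google's published siteverify API; the secret key,
# score threshold, and surrounding submission logic are assumptions for this sketch.
from typing import Optional
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-recaptcha-secret-key"  # placeholder; never hard-code in production

def is_probably_human(token: str, remote_ip: Optional[str] = None, min_score: float = 0.5) -> bool:
    """Verify the client-supplied reCAPTCHA token before accepting a public comment."""
    payload = {"secret": SECRET_KEY, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=10).json()
    if not result.get("success", False):
        return False
    # Score-based reCAPTCHA returns a score from 0.0 to 1.0; checkbox reCAPTCHA omits it.
    return result.get("score", 1.0) >= min_score
```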

Government entities — including Congress, state agencies, and departments that administer open records laws — should incorporate human verification systems into their processes for inviting public comments and receiving document requests. Officials should continually assess CAPTCHA and other verification methods as system capabilities evolve, selecting those that maximize security and accessibility. Officials should pay close attention to the evolving use of generative AI to defeat existing CAPTCHA systems and the need for novel systems to replace them. Laws and regulations should mandate minimal, task-necessary data collection and retention, forbid offices from sharing collected data, and compel periodic deletion of captured constituent data. Government offices and verification system vendors should also be required to disclose details about data collection and storage (without substantially compromising system efficacy by revealing to would-be fraudsters how to circumvent defenses). And government offices should provide alternate submission and verification methods for people with disabilities and those without access to technology or high-speed internet.

Create Opportunities for In-Person Participation and Direct Surveys to Known Constituents

As the problem of deceptive bot-generated public comments becomes more disruptive, governing bodies should explore modes of participation that are relatively impervious to distortion through generative AI. Opportunities for in-person participation could include public hearings, town halls, and other similar forums. Public hearings and town halls should be held in locations, within regions, and at times that allow a broad cross section of the public to participate. Governing institutions should implement best practices to increase access for people with disabilities, communities of color, and immigrant communities. Such best practices should include offering simultaneous interpretation by interpreters familiar with relevant policy terminology where non-English-dominant communities constitute a threshold percentage of the constituent public; providing American Sign Language interpreters and other appropriate public accommodations; and ensuring, where possible, that hearings are reachable by public transportation.

Because town halls and public hearings typically lack representativeness and are limited in scope, agencies and governing bodies should also consistently incorporate tools like surveys directed to known constituents — for example, those who have utilized government services or applied for public benefits — to capture public insight, sentiment, and feedback. Such surveys would offer valuable feedback on program implementation and service delivery. Officials should conduct survey outreach in a way that enhances the integrity and inclusiveness of the information-gathering process while not unduly burdening recipients. To mitigate the risk of distortion through deceptive AI use, online surveys should employ effective, accessible, and privacy-protecting human verification systems, and they could be offered on a uniform cross-agency platform.

Authorize Government Entities to Disregard Input on Proposed Regulations Transmitted via Fraudulent Use of Bots or Automated Systems

The federal Administrative Procedure Act, state administrative laws, and open records laws require that governing bodies consider submissions from the public on proposed rules within defined time spans and provide timely responses to document requests, with certain exceptions. Currently, however, no exception exists for comments and requests submitted using AI tools with the intent to distort policymaking or divert resources. When evidence strongly indicates that comments on proposed rules or records requests have been transmitted duplicitously using bots or automated systems, federal and state laws should allow agencies to decline to consider those submissions and requests.

Regulations and policies governing fraudulent AI use should cover the impersonation or replication of human activity using bots or other automated systems that significantly misrepresents the scale of human involvement behind their output. Such a standard should not capture, for instance, the use of generative AI by an individual or organization to assist in the drafting of comments, or the use of form letters endorsed by actual humans — capturing those uses might negatively affect disadvantaged communities. Rather, it should implicate the use of bots or automated systems, powered by generative AI or otherwise, to transmit numerous policy submissions intended to skew official perceptions of the number of humans involved in submitting such content or to convey open records requests from nonexistent people. Ample notice and opportunity for appeal should be given for any such determination. Given the current limitations of detection capabilities for generative AI content, and detection tools’ documented history of misclassifying content from non-native English speakers, detection tools should not be deployed unless they meet rigorous standards.

Establish Guardrails for Governmental Use of AI to Analyze Public Comments or Substantially Assist in Drafting Rules or Public Policy

While AI presents opportunities to enhance policymakers’ responsiveness to their constituents, it can also introduce risks of bias, inaccuracy, and unreliability that must be addressed before such systems are put into use. Congress, state legislatures, and federal and state agencies should implement safeguards for governmental AI use to mitigate these risks, including enforceable requirements that apply to government use of AI to analyze public comments, interact with constituents, and substantially assist in the drafting of regulations, laws, and policies. For federal use of AI, Congress should direct the Office of Management and Budget (OMB) to promulgate regulations setting out minimum thresholds for quality of training data and AI system accuracy — including standards that address bias deriving from reinforcement learning from human feedback, prohibit algorithmic discrimination as defined by regulation, and require a baseline level of human involvement in and supervision of government decision-making and public communications substantially aided by AI. Lawmakers should also limit the types of information that AI systems can examine when assessing public comments as part of rulemakings governed by the Administrative Procedure Act (for example, restricting the use of actual or predicted race to attempt to correlate sentiment with race).

Congress should also:

  • facilitate the development of testing methods to assess AI systems against the aforementioned requirements for the specific use of AI in analyzing public comment and drafting public policy;
  • mandate documentation by government bodies and AI system vendors of compliance with these requirements;
  • obligate government bodies to institute mitigation measures where needed, and prohibit the deployment of AI systems, including generative AI systems, for the purpose of analyzing public comment or drafting public policy, when they fail to meet established requirements and their flaws cannot be reasonably mitigated;
  • compel government bodies to continually monitor the use of AI systems after they are tested and implemented;
  • set transparency mandates for the use of AI to substantially assist in federal government decision-making and processes; and
  • require government bodies to disclose their use of AI systems, the purpose for their use, the main system parameters for sorting information or making predictions, the data sources used to train them, and the level of human involvement in and review of important AI-assisted decisions and public communications.

Consistent with President Biden’s recent executive order on AI, OMB and the Office of Personnel Management should incorporate similar directives into guidance for federal agencies. And state legislatures should adopt comparable requirements for state government use of AI systems.

Seek to Better Understand How Constituents Use Generative AI to Compose Comments on Rulemakings and Other Policy Submissions

Agencies should continually engage with civic organizations to ascertain how stakeholders from diverse communities are using AI to interface with government and to identify any gaps in public education and awareness. Civic organizations should provide guidance to constituents who wish to use generative AI to assist in composing comments on proposed rules and other policy submissions. They should offer advice on the necessary specificity of prompts, the need to review comments ahead of submission to ensure that they accurately reflect the commenters’ preferences, the prudence of fact-checking information produced by AI tools, the cultural biases inherent to some AI systems, and the possibility that AI tools will produce false or misdirected content or content that fails to reflect the constituents’ genuinely held views.

 

Mekela Panditharatne and Daniel I. Weiner
  Mekela Panditharatne serves as counsel for the Brennan Center’s Democracy Program, where her work focuses on election reform, election security, governance, voting, truth and information. Daniel I. Weiner serves as director of the Brennan Center’s Elections and Government Program, where he leads work on money in politics, voting and elections, government ethics, and other democracy and rule of law issues. He has authored a number of nationally recognized reports and law review articles, and writes and comments regularly in outlets such as the New York Times, the Washington Post, the Los Angeles Times, the Wall Street Journal, Politico, Slate, the Daily Beast and National Public Radio.