Written by Milan Wiertz (external submission) and edited by Tommaso Filippini

Forget the fears that Artificial Intelligence (AI) would be used to meddle in democracy during elections: we now have good reason to fear for our democratic institutions even in between elections! With the launch of Large Language Models (LLMs) such as OpenAI’s ChatGPT, we are being flooded by publicly accessible tools with the potential to thwart democratic governance as we know it. Yet, although much attention has been paid to how AI might affect elections, we have only scratched the surface of its possible consequences for citizen participation in democratic processes.

Although our modern conception of democracy is generally understood to be representative, i.e. characterised by elected individuals governing in the interest of the electorate, the last several decades have seen a push for increased citizen involvement between elections (Irvin & Stansbury, 2004). This includes the introduction of Public Consultation and Freedom of Information (FOI) Acts, compelling the bureaucratic state to request public input on proposed regulation and to make internal documents and communications concerning the citizenry available upon request. These changes ensure that citizens have a voice in regulatory processes and can access critical information to keep their elected representatives in check.

But what happens when digital technologies become pervasive? As pointed out by Prof. James Fishkin of Stanford University, early developments promised a world of digital governance, allowing citizens to participate in decision-making from the comfort of their own home (Fishkin, 2011). We can think of the European Union’s (EU) recent Conference on the Future of Europe, which relied heavily on online tools to receive feedback from citizens across the Union, as one such successful implementation of digital public participation. However, opening up to this new digital frontier also made our democratic institutions vulnerable to technological developments that risk severing the ties between citizens and government. This threat has now materialised in the form of AI, and our institutions are not ready to face it.

Whilst public consultations might appear mundane, and perhaps even boring, they are of vital importance whenever the issue being discussed is of great significance for the population. For instance, the 2017 battle for net neutrality in the United States saw a massive influx of public comments from concerned citizens, fearing that ending the policy would endanger their freedom to access the internet (Reardon, 2017). The battle for net neutrality, however, also illustrates how nefarious actors can abuse online public consultation to benefit their interests. The Guardian reported that, together with genuine comments, the US Federal Communications Commission (FCC) received as many as 450,000 fraudulent comments opposing the regulation, likely in an attempt to overwhelm the FCC and misrepresent the level of public support for the repeal (Rushe, 2017).

At the time, these comments were easy to identify because they were virtually identical, but today’s would-be fraudsters have a far more potent tool in their arsenal: armed with LLMs, they can endlessly rephrase the same message, creating the illusion of thousands of distinct, human-written comments. What happens if thousands of unique-seeming comments are submitted by unscrupulous interest groups or malicious foreign actors? How can we tell real submissions from fraudulent ones? As of right now, we can’t (Edwards, 2023).
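To see why the 2017-style fraud was catchable while LLM-generated comments are not, consider a minimal sketch of near-duplicate detection (illustrative only, not the FCC’s actual method): Jaccard similarity over word trigrams scores copy-pasted form letters close to 1.0, while a paraphrase of the very same message scores near zero despite expressing an identical position. The example comments below are invented for illustration.

```python
# Sketch: near-duplicate detection over public comments.
# Form letters share almost all their word trigrams; an LLM
# paraphrase of the same message shares almost none.

def shingles(text: str, n: int = 3) -> set:
    """Lowercased word n-grams of a comment."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Overlap of shingle sets: ~1.0 = duplicates, ~0.0 = unrelated wording."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical comments, for illustration only.
template = "I oppose net neutrality because it stifles investment in broadband networks."
copy_paste = "I oppose net neutrality because it stifles investment in broadband networks!"
paraphrase = ("Ending these rules is vital: heavy regulation discourages "
              "companies from building out internet infrastructure.")

print(jaccard(template, copy_paste))   # high: flagged as a duplicate
print(jaccard(template, paraphrase))   # near zero: passes as an independent comment
```

The same limitation applies to more sophisticated surface-level techniques (hashing, fuzzy matching): they all measure wording, not intent, which is precisely what an LLM can vary at no cost.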

FOI submissions face a similar danger. Although not frequently used by individual citizens, FOI requests are of critical importance to journalists, particularly for uncovering political scandals, a vital function for our democracy (Stein & Camaj, 2018). In the UK, for instance, the BBC recounts how FOI was used to uncover the 2009 expenses scandal, which forced 7 cabinet ministers as well as 19 MPs to resign, with 2 members of the House of Lords convicted of false accounting (Rosenbaum, 2015). The scandal ultimately prompted the formation of the Independent Parliamentary Standards Authority to monitor Members’ expenses, an important step for governmental accountability. Scandals uncovered by journalists frequently lead to institutional change, but even when they don’t, they remain an important tool for ensuring citizens are aware of any potential wrongdoing by public officials (Stein & Camaj, 2018).

The biggest danger to FOI is time and capacity. In an interview with Le Monde, the head of the French Administrative Documents Access Commission (CADA), which oversees French FOI requests, bemoaned how growing interest in government documents from individuals and civil society groups had left French administrative offices overwhelmed with requests (Ferrer, 2019). The British NGO openDemocracy similarly reported that in the UK requests often took over a year to complete (Amin, 2020), and in Canada, The Globe and Mail reported that FOI requests are frequently backlogged for several years (Doolittle & Cardoso, 2023). Such long waiting periods mean that vital information is often withheld until it is no longer actionable. Add to this the possibility of nefarious actors using automated systems to file thousands of spurious requests, flooding these systems whether to protect information about themselves or to disrupt democratic functioning, and we have a dangerous cocktail that current systems are not equipped to face.

What are our governments doing to address this? For now, it doesn’t appear they’re even aware of the challenge. The Dutch platform for public consultation, for instance, accepts comments from anyone who can muster a real-looking email address along with initials, a last name and a city. Very open to citizen participation, but also very open to abuse.

Possible intermediate fixes might include additional verification steps, such as a CAPTCHA or a login process, as the EU’s own “Have your say” platform already requires. Such choices must be weighed carefully against the risk of deterring citizens from participating if friction becomes too high: a delicate balance between enabling citizen participation and discouraging fraudsters.

In the long term, however, more structural approaches will be needed to safeguard our institutions and ensure that citizens can defend the public interest when the government does not. Whilst creating a system capable of identifying AI-generated text would of course be ideal, other solutions need to be considered in the likely event that such systems prove unreliable.

At a time when trust in democratic governance is at an all-time low, we cannot afford to lose these crucial links between government and the population. Safeguarding our institutions will necessitate short-term fixes as well as long-term solutions, but this will require governments to be aware of the dangers looming, for the soundness of our democratic institutions depends on it.

___

Bibliography

Amin, L. (2020). Art of Darkness: How the Government is Undermining Freedom of Information. openDemocracy. https://www.documentcloud.org/documents/20415987-art-of-darkness-opendemocracy

Doolittle, R., & Cardoso, T. (2023, June 9). How Canada’s FOI system broke under its own weight. The Globe and Mail. https://www.theglobeandmail.com/canada/article-canada-freedom-of-information-laws/

Edwards, B. (2023, September 8). OpenAI confirms that AI writing detectors don’t work. Ars Technica. https://arstechnica.com/information-technology/2023/09/openai-admits-that-ai-writing-detectors-dont-work/

Ferrer, M. (2019, January 18). La très difficile transparence des administrations en France [The very difficult transparency of French administrations]. Le Monde. https://www.lemonde.fr/les-decodeurs/article/2019/01/18/la-cada-la-transparence-au-rabais_5411293_4355770.html

Fishkin, J. S. (2011). When the People Speak: Deliberative Democracy and Public Consultation. Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199604432.001.0001

Irvin, R. A., & Stansbury, J. (2004). Citizen Participation in Decision Making: Is It Worth the Effort? Public Administration Review, 64(1), 55–65. https://doi.org/10.1111/j.1540-6210.2004.00346.x

Reardon, M. (2017, July 20). FCC gets more than 10 million comments on net neutrality. CNET. https://www.cnet.com/tech/services-and-software/fcc-gets-more-10-million-comments-on-net-neutrality/

Rosenbaum, M. (2015, January 2). 10 things we found out because of Freedom of Information. BBC. https://www.bbc.co.uk/news/magazine-30645383

Rushe, D. (2017, May 26). “Pretty ridiculous”: Thousands of names stolen to attack net neutrality rules. The Guardian. https://www.theguardian.com/technology/2017/may/26/fcc-net-neutrality-open-internet

Stein, L. L., & Camaj, L. (2018). Freedom of Information. Oxford Research Encyclopedia of Communication. https://doi.org/10.1093/acrefore/9780190228613.013.97
