AI and Elections, Misinformation, and Human Rights

Illustration: people looking stressed in front of computers, faces overlaid on one another, squashed emojis, and other motifs. Credit: Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0

There’s a Lot We Can Do To Support Responsible AI. The Time To Act Is Now.

AI isn’t going anywhere, and now, as the technology rapidly matures, is the moment to support organizations and activists working to defend democracy and build a responsible AI future. Here are four reasons why funders must act to resource the organizations protecting elections, fighting misinformation, combating bias, and defending human rights in a changing digital landscape.

1. AI-Generated Content Is Already Being Used To Impact Elections.

Around half of the world’s population has gone or is going to the polls this year. That includes the US, where an AI-generated robocall impersonated President Biden’s voice and urged voters to abstain from voting, and where the FBI warned that foreign adversaries could use AI to spread disinformation and influence voters. In the UK, India, Brazil, and Mexico, voters have grappled with voice clones and fake videos spreading false information. Adopting policies and regulatory frameworks that support the responsible development and use of AI is critical to limiting bad actors’ ability to use AI-generated content to spread misinformation and undermine democracy.

Fortunately, several great organizations are doing just this. Aspen Tech Policy Hub, a tenant in Tides’ San Francisco office, teaches experts in STEM fields, including AI, how to develop solutions and advocate for policy changes that support social good. They’ve developed materials like policy 101 guides on AI and misinformation. Their partner program, Aspen Digital, also launched the AI Elections Advisory Council, a non-partisan group of civil society and technology leaders taking steps to build democratic resilience in the face of AI. The council offers helpful resources for those working in AI to minimize the technology’s potential harms, for example by informing communities of the ways AI may be used to suppress or influence their vote.

This work, alongside the research of other nonprofit AI experts, supports policies and regulatory frameworks that keep pace with the technology, helping safeguard our democracy against bad actors.

2. There’s a Lot We Can Do To Fight Misinformation.

AI-generated content can distort how we engage online and with one another, and can even make us question what’s real. Funders can make it easier for individuals to trust the information they consume and share in two ways: (1) funding the creation and evolution of tools that distinguish AI-generated content from real content, and (2) training journalists and activists to use those tools.

One Tides grantee at the forefront of this issue is WITNESS, which works closely with human rights defenders, journalists, and technologists to protect and defend human rights through video and technology. As part of this mission, WITNESS trains journalists and human rights defenders to use deepfake detection tools that can identify AI-generated content before it takes root. Armed with these tools, journalists and activists can work to halt the spread of AI-generated media or quickly inform the public that a trending video or audio clip has been fabricated or modified. In a quickly evolving media landscape, WITNESS’s work to equip journalists and activists with this skill set is vital.

3. The Grassroots Activists Who Could Shape Responsible AI Aren’t Getting the Resources They Need.

Activists and human rights organizations are all navigating this emerging landscape, and almost all of them need support. The Center for Nonviolent Action and Strategies (CANVAS) works with social justice and human rights activists around the world to build more effective movements by sharing cutting-edge knowledge and highlighting emerging issues via workshops and university programs. This includes the annual People Power Academy, which will feature discussions on AI, digital security, navigating misinformation, and more.

Eventually, there will be more programs to train activists on AI, just as there are now programs to help activists combat disinformation or strengthen digital security. The challenge is that grassroots organizations, especially BIPOC-led ones, are often the most underfunded. Philanthropy must play a larger role in supporting these organizations as they advance responsible AI. That’s why, in 2023, Tides Center supported the launch of the Center for Artificial Intelligence and Human Rights, a grassroots initiative in Asia working to protect human rights in the quickly evolving digital landscape.

4. To Fight Biased AI, We Must Amplify the Voices of the Communities Most Likely To Experience Harm.

AI is only as good as the data it is trained on, raising concerns that its algorithms may inadvertently perpetuate or exacerbate existing systemic biases, leading to discriminatory outcomes in criminal justice, healthcare, and more. But funders can support efforts that make AI more inclusive and representative of society at large while also strengthening AI accountability.

The New Media Ventures Education Fund at Tides Foundation, which supports startups tackling challenges facing democracy, is a prime example of that duality. Long-term investors in AI, they are one of the leading funders centering ethical AI in their investment strategy, working to mitigate bias and systemic harms at the outset of a technology’s development. They have invested in groups like Algorithmic Justice League, which raises awareness about the impacts of AI, builds the voice and choice of the most impacted communities, and galvanizes key stakeholders to mitigate AI harms and biases. They’ve also invested in Reliabl, which helps organizations develop localized, community-driven AI models, leading to more accurate algorithms, less biased AI, and greater community trust. These are just two of the many New Media Ventures-backed organizations developing AI tech with systems-impacted communities in mind, proof that mitigating harm is largely a matter of intention and proximity and, more importantly, that it’s possible.

Our Recommendations for Funders Who Care About AI and Social Good:

1. Get Involved Now.

If you want AI to be a tool for social good rather than harm, the time to act is now. It’s much more difficult to roll back harmful practices and platforms than it is to support just and equitable structures from the start. 

2. Invest in Grassroots Organizations.

Invest in grassroots organizations that are proposing policies and frameworks for responsible AI, leading AI-for-good initiatives, mitigating misinformation and bias, and contributing to greater diversity in tech. Consider supporting organizations like Aspen Tech Policy Hub, WITNESS, CANVAS, Algorithmic Justice League, Reliabl, and others at the forefront of responsible AI.

3. Support Policy Efforts Focused on Ethical and Just Regulation of AI Tools.

Funders can both fund this work directly and use their influence to advance policy change.

Tides works with social change leaders and funders across our ecosystem who are advancing responsible AI from multiple angles. Contact us to learn more about these partners and their work.
