Podcast Summary
Impact of AI on 2024 Democratic Elections: Social Media and Information Warfare: The 2024 democratic elections will see significant AI manipulation in social media and information warfare, increasing societal polarization and distrust, and posing deeper risks to democratic institutions.
The 2024 democratic elections will witness a significant impact from the new wave of AI, particularly in the realms of social media and information warfare. With over 2 billion people voting in democratic elections across 70 countries, including major democracies like the US and the UK, the potential for AI manipulation is vast. Social media has seen a proliferation of new entrants and decentralization, spreading more people across more platforms. Layered on top, sophisticated generative AI has sharply reduced the cost of content creation, making it cheap to disseminate unreality at scale. These shifts intersect with increased societal polarization and distrust, creating a volatile situation. While misinformation and disinformation are concerns, Carl Miller emphasizes deeper risks, including the manipulation of public perception and the erosion of trust in democratic institutions. The concepts of information warfare and influence operations are crucial to understanding these risks, which extend well beyond false information on the internet.
Information warfare: A new and sophisticated form of conflict: Military and state actors, for-profit entities, and consultants manipulate information to influence outcomes using cognitive psychology, economic inducements, coercion, and physical assets. The effects reach elections, geopolitical advantage, and beyond; threat actors include states and non-states alike; deepfakes are a concern, though their impact remains unclear; and strategies and tactics are evolving rapidly.
Information warfare is a new theater of war, and it's becoming increasingly sophisticated and multi-faceted. Military and state actors, as well as for-profit entities and consultants, are using a wide range of tactics, including cognitive psychology, economic inducements, coercion, and even physical assets, to manipulate information and influence outcomes. This is not just a social media phenomenon; it's a global trend that affects elections, geopolitical advantage, and more. The threat actors behind these influence operations are diverse and include both state and non-state actors. Deepfakes and other forms of manipulated media are a growing concern, though it remains unclear how much impact they're having. The strategies and tactics of information warfare are evolving rapidly, and defenders must track and adapt to these changing realities just as quickly.
Determining the provenance of AI-generated content is a challenge: Watermarking AI-generated content is a step forward, but it doesn't establish the authenticity or origin of manipulated content. The human element of meaning and identity remains crucial in influence operations.
While deepfakes and other AI-generated content may subtly change the landscape of influence operations by making fake images and videos cheaper and slightly more convincing, the real game-changer lies in the social connections and meaning behind human interactions. Watermarking AI-generated content, as proposed by President Biden, is an important step but not a panacea; it may introduce an intermediate period of confusion as both good and bad actors circulate a mix of watermarked and unwatermarked content. The harder, ongoing challenge is determining the provenance of content, especially once it has been edited on phones or created with open-source tools, as the sketch below illustrates. The weaponization of friendship through AI is a further concern, with the potential for manipulation in elections and other areas, yet the human element of meaning and identity remains the most significant factor in influence operations.
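To make the provenance problem concrete, here is a minimal sketch in Python (using Pillow) of a naive invisible watermark. Nothing here reflects any real watermarking scheme; the image size, the bit string, and the least-significant-bit encoding are all illustrative assumptions. The point is simply that a fragile mark survives a lossless copy but is typically destroyed by the kind of lossy re-encoding a phone performs on every edit or share.

```python
# A toy least-significant-bit (LSB) watermark. This is NOT any real
# provenance scheme; it only illustrates why naive marks are fragile.
import io
from PIL import Image

def embed(img: Image.Image, bits: str) -> Image.Image:
    """Hide one watermark bit in the LSB of each red channel value."""
    out = img.convert("RGB").copy()
    px = out.load()
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)
    return out

def extract(img: Image.Image, n: int) -> str:
    """Read the first n watermark bits back out of the red channel."""
    px = img.convert("RGB").load()
    return "".join(str(px[i % img.width, i // img.width][0] & 1) for i in range(n))

mark = "1011001110001111"
marked = embed(Image.new("RGB", (64, 64), "gray"), mark)
print(extract(marked, len(mark)) == mark)    # True: a lossless copy keeps the mark

buf = io.BytesIO()
marked.save(buf, format="JPEG", quality=90)  # one lossy re-save, as a phone edit would do
buf.seek(0)
print(extract(Image.open(buf), len(mark)) == mark)  # typically False: the mark is gone
```

More robust watermarks exist, but the same cat-and-mouse dynamic applies: any mark designed to survive benign edits also becomes a target for deliberate removal, which is why watermarking alone cannot settle provenance.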
AI-powered 'friends' could manipulate beliefs and behaviors: AI is becoming capable of sustaining direct one-on-one relationships, posing the risk of influence that is indistinguishable from a genuine relationship and can reshape individuals' beliefs and behaviors.
The future of illicit influence through AI may not lie in broadcasting messages to large groups, but rather in establishing multitudes of direct, one-to-one relationships with target audiences. These relationships, which could be automated or semi-automated, would function as "perfect friends" that sympathize, celebrate, and subtly suggest ideas over time. The cost of creating these fake friendships is rapidly approaching zero, making them an increasingly powerful tool for influence operators. This development is particularly concerning because, as Renee mentioned, our most impactful relationships are personal ones: parents, best friends, romantic partners. As that kind of intimate influence is outsourced to machines, we risk being unable to distinguish genuine relationships from artificial ones, potentially leading to significant and lasting changes in individuals' beliefs and behaviors. The trend was already evident in 2020, with actors building one-on-one relationships on platforms like Discord and Twitch. As these technologies continue to evolve, it will be increasingly challenging for researchers and users to detect and counteract the influence of these AI-powered "friends."
Russian IRA used social media to form relationships and influence activists: The Russian IRA effectively used social media to build online relationships with activists, guiding their beliefs and confirming their biases. This highlights both the need for caution in democratic debate and the slippery nature of truth online.
The Internet Research Agency (IRA) in Russia used social media platforms like Facebook and Messenger to engage with activists and form relationships, positioning itself as helpful and supportive. These online relationships were influential even when some activists suspected something was off. The tactic is not unique to the digital age, but the ease of online interaction makes it far more effective. The problem is not only disinformation; it is the confirming of beliefs and the guiding of people in a chosen direction. Truth in democratic debate is inherently slippery and will always be contested, so we should be cautious about letting think tanks define what is true or false. The IRA's playbook of forming online relationships to shape beliefs has emerged as a significant concern of the digital age.
Exposing covert influence operations: Collaboration between online researchers and investigative journalists, state and platform action, and data access are crucial to combat hidden, covert, professionalized, and sustained influence operations by autocratic military or intelligence bureaucracies.
Instead of focusing solely on disinformation, we need to shift our attention to identifying and exposing hidden, covert, professionalized, and sustained influence operations by autocratic military or intelligence bureaucracies. These actors pose a significant threat to democratic information environments and require a multi-faceted response. Online researchers and investigative journalists must collaborate to uncover the organizational and financial realities behind these information maneuvers. States and platforms, in turn, need to work together to reveal the identities behind these accounts and to require more information during account setup. One intervention identified as effective is removing the reshare button after content has been shared once (a toy sketch follows this paragraph), since content that spreads virally is disproportionately likely to carry harmful information. Friction at account setup, and possibly tighter restrictions during election periods, could also blunt the impact of these operations. The most crucial need identified is access to data for advanced analytics, which is increasingly out of reach or prohibitively expensive for researchers.
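As a back-of-envelope illustration of the reshare-friction intervention mentioned above, here is a toy sketch in Python. The Post class, the one-hop limit, and all names are assumptions for illustration, not any platform's actual data model or policy.

```python
# Toy sketch of "remove the reshare button after one share".
from dataclasses import dataclass

MAX_SHARE_DEPTH = 1  # assumed policy: content may travel one hop from its author

@dataclass
class Post:
    author: str
    share_depth: int = 0  # number of reshares separating this copy from the original

def can_reshare(post: Post) -> bool:
    """The reshare button is shown only while the post is under the hop limit."""
    return post.share_depth < MAX_SHARE_DEPTH

def reshare(post: Post, by: str) -> Post:
    if not can_reshare(post):
        raise PermissionError("reshare disabled: content already at maximum depth")
    return Post(author=by, share_depth=post.share_depth + 1)

original = Post(author="alice")
hop1 = reshare(original, "bob")  # allowed: first-hand reshare
try:
    reshare(hop1, "carol")       # blocked: the button is gone after one hop
except PermissionError as err:
    print(err)
```

The design intuition is that each additional hop both widens reach and weakens any connection to a verifiable source, so capping share depth trades a little convenience for a large reduction in blind amplification.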
Regulatory gap due to lack of accessible data for researchers: Without accessible data, researchers cannot support informed decision-making or effective protection against online threats during elections, leaving us in a regulatory gap just as offensive measures grow more sophisticated.
The absence of accessible data for researchers could significantly hinder informed decision-making and evidence-based responses to online harms and election interference in the coming year. Regulatory remedies, such as the Digital Services Act, are on the way but may not come into effect in time for the upcoming elections. Losing this researcher-access ecosystem could leave us in a regulatory gap, less protected against increasingly sophisticated offensive measures. Alternative mechanisms like Twitter's community notes have their own limitations and can themselves be manipulated. It is crucial to acknowledge these challenges and work to address them so that protection against online threats during elections remains effective.
The danger of viral rumors and AI-generated fake information: Malicious actors can use AI to spread false information, producing a crisis of trust and deepening social divides. Addressing the root causes is crucial: rethinking how knowledge is verified and recognizing that these elections genuinely matter.
The gap between a viral rumor and the truth that eventually catches up with it can produce a crisis of trust and deepen social divides, and technology widens that gap. Malicious actors can use AI to generate fake information which, even when quickly discredited, leaves a residue of doubt that feeds the crisis of trust. This is particularly effective at creating deep, identity-based communities and reinforcing social divides. The gravest fear raised here is that these actors will weaponize relationships and exploit people's alienation and loneliness to drag them into parallel, conspiratorial worlds that undermine the legitimacy of elections. The focus should be on addressing the root causes of these issues rather than simply trying to detect and push out lies. That requires a shift in how we verify knowledge and manage public life, and a recognition that these elections matter and are not foregone conclusions.
Think like an attacker to counteract information threats: Policymakers should use imagination, consider unconventional techniques, and focus on offensive measures such as denying access to technology, financial sanctions, and criminal laws to counteract information threats effectively.
Policymakers should think like attackers to understand and counteract information threats effectively. That means considering unconventional techniques and every vector of influence, from Wikipedia and local media outlets to the bribing of influencers. Defenders are generally worse than attackers at understanding influence across all its dimensions, so imagination and a comprehensive approach are essential. The focus should be on offensive measures, including state actions such as denying access to technology, financial sanctions, and criminal laws, to make these operations less effective and less profitable. While data access is crucial, the main solutions lie in asymmetric, non-informational responses. The current balance favors the influence operators, and changing it requires a diverse range of actors, including states, think tanks, and law enforcement agencies.
Exposing Propaganda: A Forgotten Art: Historically effective strategies against disinformation involve transparently exposing operations and educating the public on propaganda techniques. Prioritize these methods over technological solutions to build trust and understanding.
As democracies grapple with the formidable threat of disinformation and propaganda, there is an urgent need to levy serious costs against those who engage in such activities. Historically effective strategies exist, such as those employed by the Active Measures Working Group in the 1980s and the Institute for Propaganda Analysis in the 1930s, which transparently exposed these operations to the public and taught people how to recognize propaganda techniques. Such strategies, which focus on building trust and understanding, are critical in the face of rapidly evolving technology. Technological solutions like watermarking are important, but they alone cannot solve the problem. We must also prioritize transparent programs that explain the emotional resonance of disinformation and propaganda, helping individuals recognize and respond effectively. This lost art of rhetoric education, once considered a patriotic duty, is crucial in our current moment.
Increasing transparency and reducing harmful engagement on social media: Platforms should open APIs to researchers and journalists, implement digital blackout periods, pause engagement on uncontrolled features during elections, and explore circuit breakers to reduce engagement during sensitive periods.
While we face greater threats in the digital realm due to the advance of AI and reduced protections, there are practical steps we can take to increase transparency and reduce harmful engagement on social media platforms. Platforms like Twitter and Facebook should open their research feeds and ads APIs to academic researchers and journalists for scrutiny. Digital blackout periods could be instituted during sensitive moments to slow the spread of potentially harmful content, and platforms could pause or reduce engagement on uncontrolled features during elections or other sensitive times. Circuit breakers, which pause trading during stock market crashes, could serve as a model for reducing engagement during such periods; a sketch of the idea follows below. It is crucial that these efforts be bipartisan, transparent, and aimed at taming the "engagement monster" that creates an unwinnable game. The Center for Humane Technology, the non-profit behind the podcast Your Undivided Attention, is working towards a humane future in the field of AI.
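To make the circuit-breaker analogy concrete, here is a minimal sketch, assuming a simple sliding-window engagement counter. The class name, thresholds, window, and cooldown values are invented for illustration and do not describe any platform's actual mechanism.

```python
# Toy "circuit breaker" for engagement, by analogy with stock-exchange
# trading halts. All numbers are illustrative assumptions.
from collections import deque

class EngagementCircuitBreaker:
    def __init__(self, max_events: int = 1000, window_s: float = 60.0,
                 cooldown_s: float = 600.0):
        self.max_events = max_events  # events allowed per window before tripping
        self.window_s = window_s      # sliding-window length in seconds
        self.cooldown_s = cooldown_s  # how long amplification stays paused
        self.events = deque()         # timestamps of recent engagement events
        self.tripped_at = None

    def record(self, now: float) -> None:
        """Log one engagement event (share, like, reply) at time `now`."""
        self.events.append(now)
        while self.events and self.events[0] < now - self.window_s:
            self.events.popleft()
        if len(self.events) > self.max_events:
            self.tripped_at = now     # halt amplification, like a trading halt

    def amplification_allowed(self, now: float) -> bool:
        """Ranking and recommendation boosts stay off until the cooldown elapses."""
        return self.tripped_at is None or now - self.tripped_at > self.cooldown_s

# Simulated spike: 1500 events in 1.5 seconds trips the breaker.
breaker = EngagementCircuitBreaker()
for t in range(1500):
    breaker.record(now=t / 1000.0)
print(breaker.amplification_allowed(now=2.0))    # False: paused during cooldown
print(breaker.amplification_allowed(now=700.0))  # True: cooldown has elapsed
```

As with trading halts, the goal is not to judge any individual post but to slow the system down when velocity itself becomes the risk, buying time for verification during sensitive periods such as elections.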