Podcast Summary
Addressing tech issues requires collective action, especially from tech companies: Individual-targeting practices continue to spread while tech companies profit from them, so understanding and addressing these issues is essential to changing the system
The issues the Center for Humane Technology addresses, such as election integrity, social isolation, and the toxification of the information environment, require collective action from everyone, especially those inside technology companies. The podcast, though growing rapidly, is just a small part of the solution. The methods of persuasion Cambridge Analytica demonstrated, like targeting individuals with customized messages, are now accessible to any candidate with a Facebook account. These practices will continue to spread regardless of data security, and Facebook, as the arms dealer, profits from their deployment. It's crucial for everyone to understand and address these issues in order to change the system.
Revolutionizing Political Engagement with Data-Driven Social Media: The Obama campaign's use of social media for one-to-one interaction led to exponential growth in political engagement, and AI systems are now taking it further with real-time diarization and matching of sales calls to successful strategies.
Data-driven targeted messaging through social media revolutionized political engagement and outreach during the Obama campaign. Before this, campaigns relied primarily on email for fundraising and speeches for reaching voters. The Obama campaign's use of social media led to exponential growth in engagement, with thousands or tens of thousands of people attending rallies instead of just tens or hundreds. This was achieved through one-to-one interaction platforms, such as real-time text Q&A during debate watch parties. AI systems are now taking this a step further with real-time diarization of sales calls and matching of those calls to historically successful strategies, creating an asymmetric power advantage for the salesperson. This technology, already in use in B2B sales, is likely to be adopted by political campaigns in the future. Overall, data and targeted messaging have fundamentally transformed how campaigns reach and engage voters.
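The call-matching idea above can be sketched in miniature: after a diarization step separates the speakers, a system compares the live transcript against a library of historically successful scripts and surfaces the closest "playbook." The snippet below is a toy illustration using bag-of-words cosine similarity; the playbook names and text are invented, and a real system would use speech-to-text plus learned embeddings rather than raw word counts.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_playbook(live_snippet: str, playbooks: dict[str, str]) -> str:
    """Return the name of the winning script most similar to the live call."""
    live = Counter(live_snippet.lower().split())
    return max(playbooks,
               key=lambda name: cosine(live, Counter(playbooks[name].lower().split())))

# Hypothetical playbooks distilled from past "successful" calls.
playbooks = {
    "price_objection": "too expensive budget cost discount price value",
    "feature_question": "integration api feature support setup onboarding",
}
print(best_playbook("this is way too expensive for our budget", playbooks))
# -> price_objection
```

The asymmetry the episode describes comes from one side of the conversation having this retrieval loop running in real time while the other side does not.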
Using Data to Expand Political Reach: Advanced data tools enabled political campaigns to proactively find new audiences and expand their reach by identifying individuals with similar behavioral patterns and interests to their existing supporters.
The use of data and targeting in political campaigns has evolved significantly over the years. In 2010, campaigns relied on manual methods to target messages to supporters based on their known interests and past engagement. This was a reactive approach: campaigns could only reach people who had already shown support. With the development of tools like lookalike modeling and the Friends API, campaigns gained the ability to proactively find new audiences and expand their reach. These tools identify individuals whose behavioral patterns and interests resemble those of existing supporters, effectively finding "doppelgangers" in the crowd. The power of these tools grew even greater during the 2012 elections, when over 40,000 developers were given access, via the Friends API, to the personal data of most people on the platform. The exponential growth in advertising tools over the following years enabled campaigns to target audiences by various categories, including race and religion. The key difference between 2012 and 2016 was the intent of the messaging: the 2012 Obama campaign focused on speaking to supporters it already had, while the 2016 campaigns used these advanced tools to find new supporters and shape their perceptions.
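Lookalike modeling as described above can be illustrated with a minimal sketch: represent each user as a feature vector, average the seed supporters into a centroid, and rank everyone else by similarity to it. The user names and interest vectors below are invented; production systems use thousands of behavioral features and learned models rather than plain cosine similarity.

```python
import math

def centroid(vectors: list[list[float]]) -> list[float]:
    """Element-wise mean of the seed audience's feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalikes(seed: set[str], population: dict[str, list[float]], k: int = 2) -> list[str]:
    """Rank non-seed users by similarity to the seed audience's centroid."""
    c = centroid([population[u] for u in seed])
    candidates = [u for u in population if u not in seed]
    return sorted(candidates, key=lambda u: cosine(population[u], c), reverse=True)[:k]

# Hypothetical interest vectors: [news, sports, volunteering]
population = {
    "alice": [0.9, 0.1, 0.8],    # seed supporter
    "bob":   [0.8, 0.2, 0.9],    # seed supporter
    "carol": [0.85, 0.15, 0.7],  # behavioral "doppelganger"
    "dave":  [0.1, 0.9, 0.0],    # poor match
}
print(lookalikes({"alice", "bob"}, population, k=1))
# -> ['carol']
```

The proactive shift the paragraph describes is exactly this: instead of waiting for carol to show support, the campaign finds her because her vector resembles the supporters it already has.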
Negative campaigning and voter suppression tactics in 2016 elections were more advanced and pervasive: Negative messaging was the focus of entire organizations in 2016, leading to a vulnerable democratic process. Tech companies have the power to mitigate negative effects with transformative decisions.
The use of negative messaging, counter-campaigning, and voter suppression tactics in political campaigns, particularly during the 2016 U.S. election and Brexit, was significantly more advanced and pervasive than in 2012. The Obama campaign had a strict policy against negative messaging; every message it pushed out was positive and encouraging. In 2016, by contrast, negative messaging was the focus of entire organizations, leading to a lack of accountability and a vulnerable democratic process. The lack of enforcement of election laws on technology platforms allowed widespread abuse, with consequences that could sway election outcomes. Companies like Facebook and Twitter have the power to make transformative decisions, such as banning political advertising or implementing blackout periods before elections, to mitigate the negative effects of unregulated, algorithmically optimized toxic speech.
Tech companies fear acknowledging the potential dangers and instability of their ad systems: They hesitate to halt political ads, even briefly, because doing so would imply the system itself is flawed; meanwhile, people underestimate their own susceptibility to persuasion, with real consequences for public perception and behavior.
Tech companies' reluctance to turn off political ads during critical periods, even briefly, stems from a fear of admitting the potential danger and instability of their advertising systems. Acknowledging the exponential number of targeted ads and the machines behind them could imply a fundamental flaw in the entire system. Furthermore, individuals tend to underestimate their own susceptibility to persuasion, which makes it hard for them to notice when messaging campaigns are shifting their biases or behaviors. The impact of persuasive messaging can be measured by observing changes in people's activities and searches after exposure to specific ads, and over time the biases such campaigns shape can change behavior, making it crucial for companies to consider the consequences of their advertising practices. The "Defeat Crooked Hillary" campaign, for instance, which originated from Trump's phrasing and was developed by Cambridge Analytica, demonstrates how cognitively binding a candidate to a negative phrase can significantly shift public perception and support.
Exploiting Biases in Political Campaigns: Political campaigns can manipulate people's biases, particularly fear, to influence their perceptions and behaviors. This was demonstrated during the 2016 US elections with targeted messaging based on personality traits.
Our biases, particularly in the context of political campaigns, can be exploited in profound ways to influence people's perceptions and behaviors. Fear-based messaging, for instance, can be particularly effective on individuals high in neuroticism, a personality trait characterized by emotional instability and susceptibility to fear. This was demonstrated during the 2016 US election, when the Crooked Hillary campaign targeted individuals high in neuroticism with fear-based messaging while using hopeful and assertive messages for other personality types. The tactic, which amounts to relentlessly hammering on a person's perceived weakness, raises serious concerns about the ethics of such manipulation in our democracies. That the same speaker can deliver two completely contradictory messages to different audiences is itself a sign of untrustworthiness and sociopathic behavior.
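The trait-based message selection described here reduces to a simple dispatch from a personality profile to a message framing. The thresholds, trait names, and message texts below are purely illustrative assumptions, not Cambridge Analytica's actual rules; the point is how mechanically the contradiction between audiences can be produced.

```python
def pick_variant(traits: dict[str, float], variants: dict[str, str]) -> str:
    """Select a message framing from a (hypothetical) trait profile.

    High neuroticism -> fear framing; high extraversion -> assertive framing;
    otherwise a hopeful default. Thresholds are illustrative, not empirical.
    """
    if traits.get("neuroticism", 0.0) > 0.7:
        return variants["fear"]
    if traits.get("extraversion", 0.0) > 0.7:
        return variants["assertive"]
    return variants["hopeful"]

variants = {
    "fear": "Don't let them take what you have.",
    "assertive": "Stand up and lead the change.",
    "hopeful": "Together we can build something better.",
}

# The same "speaker" delivers contradictory framings to different profiles.
print(pick_variant({"neuroticism": 0.9}, variants))
print(pick_variant({"extraversion": 0.8}, variants))
```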
Automated sociopathy in digital advertising leads to societal incoherence: Digital advertising algorithms prioritize content that generates clicks, often inflammatory or false, leading to societal confusion and division
The mass infrastructure for automated sociopathy in digital advertising, as described in the discussion, leads to societal incoherence and an inability to agree on basic truths, because contradictory messages are delivered constantly. Microtargeting, or "human targeting" as it might more honestly be called, uses supercomputers and vast amounts of data to find the right brains to target, selling the bullets to the highest bidder. The algorithms prioritize content that generates more clicks, which is often inflammatory or fear-based, producing a surge of false or harmful information. One example was Facebook's attempt to automate trending topics, which ended up promoting fake news articles. With millions of advertisers and trillions of ad combinations running daily, the machines have no way to determine whether the ads are true, good, or helpful to society. This creates an unmanageable problem that breeds societal confusion and division.
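The core problem, that the ranking objective contains no notion of truth, can be shown in a few lines: a feed ranked purely on predicted click-through puts a false but outrageous item first, because accuracy never enters the score. The field names and numbers below are invented for illustration.

```python
def rank_by_engagement(items: list[dict]) -> list[dict]:
    """Order content purely by predicted clicks. Engagement is the only
    objective; nothing in the score asks whether the content is true."""
    return sorted(items, key=lambda it: it["predicted_ctr"], reverse=True)

feed = [
    {"headline": "Dry policy analysis", "predicted_ctr": 0.01, "accurate": True},
    {"headline": "Outrageous (false) claim", "predicted_ctr": 0.12, "accurate": False},
]
print(rank_by_engagement(feed)[0]["headline"])
# -> Outrageous (false) claim
```

Note that the `accurate` field is carried along but never consulted, which is precisely the failure mode the Facebook trending-topics example exposed.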
Understanding the limitations of AI in making moral decisions: AI can't understand context, morality, or societal impact, but creating trustworthy moral decision-making capabilities is crucial to ensure safe and ethical AI use in politics and other areas.
While machines can process vast amounts of data and make decisions based on patterns, they lack the ability to understand context, morality, or societal impact. The push for automation and machine decision-making in areas like politics and children's health comes down to profitability, yet if we deny machines the capacity to make critically important decisions, we limit their use. The challenge is to create trustworthy trending-topic curation and, more broadly, moral decision-making capabilities for AI, which could serve as a "unit test" of its ability to approximate good human judgment. The Cambridge Analytica scandal, which involved manipulating political campaigns in over 50 countries, illustrates the potential dangers of unchecked machine decision-making in politics. It's crucial to recognize these limitations and work toward safe and ethical AI systems.
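The "unit test" metaphor can be made concrete: encode a moral constraint (here, never promote content a fact-check pipeline has flagged as false) as an assertion against the decision-making code. The picker, field names, and data below are hypothetical; the point is only that such constraints can be checked mechanically before a system is trusted.

```python
def pick_trending(items: list[dict], k: int = 1) -> list[dict]:
    """Candidate trending-topic picker: engagement-ranked, but refuses to
    promote anything a (hypothetical) fact-check pipeline has flagged."""
    eligible = [it for it in items if not it["flagged_false"]]
    return sorted(eligible, key=lambda it: it["engagement"], reverse=True)[:k]

def test_never_promotes_flagged() -> str:
    """A 'unit test' for one moral constraint on the picker."""
    items = [
        {"topic": "viral hoax", "engagement": 9000, "flagged_false": True},
        {"topic": "local election results", "engagement": 1200, "flagged_false": False},
    ]
    picked = pick_trending(items, k=1)
    assert all(not it["flagged_false"] for it in picked)
    return picked[0]["topic"]

print(test_never_promotes_flagged())
# -> local election results
```

A system that cannot pass even simple constraint tests like this has no business making the "critically important decisions" the discussion warns about.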
Manipulating voter behavior through psychological operations research: Psychological operations research can create targeted campaigns that significantly influence voter behavior on a large scale by identifying people's motivations, cultural backgrounds, and levers of persuasion.
Psychological operations (psyops) research, used by companies like Cambridge Analytica, can significantly influence voter behavior on a large scale. This research identifies people's motivations, cultural backgrounds, and levers of persuasion to create targeted campaigns. For instance, in Trinidad and Tobago, a youth movement called "Do So" was constructed to discourage voting, particularly among persuadable youth. The movement spread through memes, videos, graffiti, and demonstrations, leading to a significant decrease in youth voter turnout. However, this tactic had an unexpected consequence: the Indian youth, who were culturally inclined to vote with their families, still went to the polls, ensuring the winning party's victory. This example illustrates how psychological manipulation can spill out into the real world, leading to significant political consequences. Additionally, the SCL Group, Cambridge Analytica's parent company, has been involved in destabilizing governments and installing corrupt leaders in countries like Indonesia. The unifying marker of these manipulative movements is a precise, rapid message that quickly turns into protests, often spreading faster than organic movements.
The power of memes and organic movements: Memes and organic movements can lead to rapid mobilization and impact, but ethical considerations are crucial to prevent manipulation and respect individuals' values.
The power of memes and organic movements can lead to rapid mobilization and widespread impact, but ethical considerations are crucial to ensure respect for individuals' values and prevent manipulation. The example of Trinidad and Tobago's crossing hands meme illustrates this, as it spread virally and inspired various forms of expression without external influence. However, the conversation around ethical persuasion raises concerns about the potential for persuaders to impose their values on the persuadees, leading to a loss of individual autonomy. This issue is particularly relevant in the context of social media platforms, where the lack of legal and regulatory frameworks and the absence of consistent enforcement of content standards create opportunities for abuse. The situation is further complicated by the fact that political figures, including those with a history of spreading disinformation, are not held to the same standards as individuals. As we look ahead to 2020, it is essential that technologists and policymakers address these challenges and find ways to promote ethical persuasion and protect individuals from harmful content.
Balancing Free Speech and Productive Political Discourse on Social Media: Social media platforms must address concerns of misuse and manipulation in political messaging while ensuring equal opportunities for users, requiring significant changes in laws, regulations, education, and technology. AI reliance could lead to privacy concerns and manipulation, necessitating responsibility from platforms and individuals.
Social media platforms are at a crossroads: Facebook and Twitter currently allow largely unfettered political messaging, while the potential for misuse and manipulation remains a significant concern. One proposed solution is a "mass fairness doctrine," under which each platform would provide equal speaking opportunities for users, keeping political discourse productive and free from manipulation. That, however, would require significant changes in laws, regulations, education, and technology. Another concern is the increasing reliance on AI for communication and information, which raises privacy issues and opens the door to manipulation by actors with malicious intent. It's crucial for social media platforms to take responsibility for the social world they have constructed and to introduce measures that promote fairness, transparency, and consent. Individuals, in turn, must prioritize owning their data and getting the education needed to navigate the digital landscape effectively.
Manipulating messages with machine learning: Machine learning can learn and mimic individual writing styles to create persuasive, targeted messages, potentially leading to socially divisive consequences.
Machine learning technology, including style transfer for text, has the potential to create personalized messages that can manipulate individuals, leading to a socially divisive and unsustainable business model. This technology, which can learn and mimic an individual's writing style, can be used to create targeted messages that are uniquely persuasive to each person. This is akin to Iago in Shakespeare's Othello, who strategically gossips to manipulate people and create distrust. If left unchecked, this technology could divide society by spreading misinformation and making it impossible for individuals to compare notes and realize the artificial nature of the divide. Therefore, it is crucial to be aware of this technology and its potential dangers, and to advocate for transparency and ethical use of such advanced AI capabilities.
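Text style mimicry can be illustrated with a deliberately tiny stand-in: a word-bigram model learns which words an author tends to chain together, then continues a prompt in that voice. Real style-transfer systems use large neural language models; this deterministic toy, with an invented sample corpus, only shows the principle of learning and replaying someone's verbal habits.

```python
from collections import defaultdict, Counter

def learn_style(corpus: str) -> dict:
    """Build a word-bigram model of an author's writing from sample text."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def mimic(model: dict, seed: str, length: int = 5) -> str:
    """Deterministically continue `seed` with the author's most frequent
    next word at each step -- a toy stand-in for neural style transfer."""
    out = [seed.lower()]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# Hypothetical writing sample with a recognizable verbal tic.
corpus = "honestly I think we should act now because honestly I think change matters"
model = learn_style(corpus)
print(mimic(model, "honestly", length=3))
# -> honestly i think we
```

Scaled up from bigrams to modern language models, the same loop is what makes a message feel like it came from a trusted voice, which is the Iago-style danger the paragraph describes.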