
    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    December 21, 2023

    Podcast Summary

    • Impact of AI on 2024 Democratic Elections: Social Media and Information Warfare
      The 2024 democratic elections will see significant AI manipulation in social media and information warfare, increasing societal polarization and distrust, and posing deeper risks to democratic institutions.

      The 2024 democratic elections will be shaped by the new wave of AI, particularly in the realms of social media and information warfare. With more than 2 billion people voting in elections across some 70 countries, including major democracies like the US and the UK, the potential for AI manipulation is vast. Social media has seen a proliferation of new entrants and growing decentralization, spreading more people across more platforms. The addition of sophisticated generative AI has dramatically reduced the cost of content creation, enabling the dissemination of unreality. These shifts intersect with increased societal polarization and distrust, creating a volatile situation. While misinformation and disinformation are concerns, Carl Miller emphasizes deeper risks, including the manipulation of public perception and the erosion of trust in democratic institutions. The concepts of information warfare and influence operations are crucial to understanding these risks, which extend well beyond false information on the internet.

    • Information warfare: A new and sophisticated form of conflict
      Military, state, and for-profit actors, including consultants, manipulate information to influence outcomes using cognitive psychology, economic inducements, coercion, and physical assets. This affects elections, geopolitical advantage, and more. The threat actors are diverse, spanning states and non-states. Deepfakes are a concern, though their impact remains unclear, and strategies and tactics are evolving rapidly.

      Information warfare is a new theater of war, and it's becoming increasingly sophisticated and multi-faceted. Military and state actors, as well as for-profit entities and consultants, are using a wide range of tactics, including cognitive psychology, economic inducements, coercion, and even physical assets, to manipulate information and influence outcomes. This is not just a social media phenomenon; it's a global trend that affects elections, geopolitical advantage, and more. The threat actors behind these influence operations are diverse and include both state and non-state actors. Deepfakes and other forms of manipulated media are a growing concern, but it's unclear how much impact they're having. Overall, the strategies and tactics of information warfare are evolving rapidly, and it's essential to stay informed and adapt to these changing realities.

    • Determining the provenance of AI-generated content is a challenge
      While watermarking AI-generated content is a step forward, it doesn't solve the issue of determining the authenticity and origin of manipulated content. The human element of meaning and identity remains crucial in influence operations.

      While deepfakes and other AI-generated content may change the landscape of influence operations subtly by making fake images and videos cheaper and slightly more convincing, the real game-changer lies in the social connections and meaning behind human interactions. Watermarking AI-generated content, as proposed by President Biden, is an important step, but not a panacea. It may add an intermediate period of confusion as both good and bad actors use watermarked and unwatermarked content. The ongoing challenge lies in determining the provenance of content, especially when it's edited on phones or created using open-source tools. The weaponization of friendship through AI is also a concern, with the potential for manipulation in elections and other areas. However, the human element of meaning and identity remains the most significant factor in influence operations.
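
      To make the provenance problem concrete, here is a toy Python sketch of why metadata-based provenance labels are fragile. It uses the Pillow imaging library; the "ai_provenance" tag name is invented for illustration, and this is not any real watermarking or C2PA implementation. A label embedded in a PNG's text chunks simply disappears when the image is edited and re-saved, which is roughly what happens when content passes through a phone or another tool.

        # Toy demonstration: metadata "provenance" does not survive a simple re-save.
        # Requires Pillow (pip install Pillow). Illustrative only; the tag name
        # "ai_provenance" is hypothetical, not a real standard.
        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        # 1. Create an "AI-generated" image and tag it with a provenance label.
        img = Image.new("RGB", (64, 64), color="gray")
        meta = PngInfo()
        meta.add_text("ai_provenance", "generated-by:hypothetical-model")
        img.save("marked.png", pnginfo=meta)

        # 2. The label is readable as long as nobody touches the file.
        print(Image.open("marked.png").text)  # {'ai_provenance': 'generated-by:hypothetical-model'}

        # 3. Any edit-and-re-save (a crop, a screenshot, a format conversion)
        #    silently drops the tag, because only the pixels are carried over.
        edited = Image.open("marked.png").crop((0, 0, 32, 32))
        edited.save("edited.png")  # no pnginfo passed, so the metadata is gone
        print(Image.open("edited.png").text)  # {}

      More robust schemes, such as pixel-level invisible watermarks or cryptographically signed manifests, raise the bar but face the same basic problem once a determined actor re-encodes the content.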

    • AI-powered 'friends' could manipulate beliefs and behaviors
      AI is becoming capable of creating one-on-one relationships, posing a risk of indistinguishable influence on individuals' beliefs and behaviors.

      The future of illicit influence through AI may not lie in broadcasting messages to large groups, but rather in establishing multitudes of direct, one-to-one relationships with target audiences. These relationships, which could be automated or semi-automated, would function as "perfect friends" that sympathize, celebrate, and subtly suggest ideas over time. The cost of creating these fake friendships is rapidly approaching zero, making them an increasingly powerful tool for influence operators. This development is particularly concerning because, as Renee mentioned, our most impactful relationships often come from personal connections, such as parents, best friends, or romantic partners. By outsourcing this influential technology, we risk being unable to distinguish between genuine and artificial relationships, potentially leading to significant and lasting changes in individuals' beliefs and behaviors. This trend was already evident in 2020, with actors building one-on-one relationships on platforms like Discord and Twitch. As these technologies continue to evolve, it will be increasingly challenging for researchers and users to detect and counteract the influence of these AI-powered "friends."

    • Russian IRA used social media to form relationships and influence activists
      The Russian IRA effectively used social media to build online relationships with activists, guiding beliefs and confirming biases, highlighting the importance of being cautious in democratic debate and the slippery nature of truth online.

      The Internet Research Agency (IRA) in Russia used social media platforms like Facebook and Messenger to engage with activists and form relationships, positioning themselves as helpful and supportive. These online relationships were influential, even if some activists suspected something was off. This is not unique to the digital age, but the ease of online interactions makes it more effective. The problem is not just about disinformation, but about confirming beliefs and guiding people in a certain direction. The slippery nature of truth in democratic debate means it will always be contested, and we should be cautious about think tanks defining what's true or false. The IRA's tactics of forming online relationships and influencing beliefs have emerged as a significant concern in the digital age.

    • Exposing covert influence operations
      Collaboration between online researchers and investigative journalists, state and platform action, and data access are crucial to combat hidden, covert, professionalized, and sustained influence operations by autocratic military or intelligence bureaucracies.

      Instead of focusing solely on disinformation, we need to shift our attention to identifying and exposing hidden, covert, professionalized, and sustained influence operations by autocratic military or intelligence bureaucracies. These actors pose a significant threat to democratic information environments and require a multi-faceted approach to combat. Online researchers and investigative journalists must collaborate to uncover the organizational and financial realities behind these information maneuvers. Additionally, states and platforms need to work together to reveal the identities behind these accounts and require more information during account setup. One effective intervention identified is removing the reshare button after content has been shared once, as viral content is more likely to spread harmful information. Friction in account setup and potential restrictions during election periods could also help mitigate the impact of these operations. The most crucial need identified is access to data for advanced analytics, which is increasingly becoming unreachable or expensive for researchers.
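
      As a rough illustration of the reshare-friction idea, here is a minimal Python sketch of a one-hop share rule: content can be reshared from its original author, but once it has been reshared, the reshare option disappears. The Post model and function names are hypothetical, invented for this example.

        # Minimal sketch of "remove the reshare button after one share".
        # The Post model and the depth rule are hypothetical, for illustration only.
        from dataclasses import dataclass

        MAX_SHARE_DEPTH = 1  # original post = depth 0; one reshare hop allowed

        @dataclass
        class Post:
            author: str
            text: str
            share_depth: int = 0  # how many reshare hops from the original

        def can_reshare(post: Post) -> bool:
            """Show the reshare button only while the hop limit isn't reached."""
            return post.share_depth < MAX_SHARE_DEPTH

        def reshare(post: Post, new_author: str) -> Post:
            if not can_reshare(post):
                raise PermissionError("Reshare disabled: content was already reshared once")
            return Post(author=new_author, text=post.text, share_depth=post.share_depth + 1)

        original = Post("alice", "Breaking news...")
        hop1 = reshare(original, "bob")  # allowed: depth 0 -> 1
        print(can_reshare(hop1))         # False: the button is gone, the chain stops

      The point is not the code but the friction: forcing a manual step after one hop slows the kind of frictionless virality the guests worry about.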

    • Regulatory gap due to lack of accessible data for researchers
      The absence of accessible data for researchers could hinder informed decision-making and effective protection against online threats during elections, leaving us in a regulatory gap where we are less protected against increasingly sophisticated offensive measures.

      The absence of accessible data for researchers could significantly hinder informed decision-making and evidence-based responses to online harms and election interference in the coming year. Regulatory actions, such as the Digital Services Act, are expected but may not come into effect in time for the upcoming elections. The loss of this research ecosystem could leave us in a regulatory gap, where we are less protected against increasingly sophisticated offensive measures. Additionally, alternative solutions like Twitter's community notes have limitations and can be susceptible to manipulation. It is crucial to acknowledge these challenges and work towards addressing them to ensure effective protection against online threats during elections.

    • The danger of viral rumors and AI-generated fake information
      Malicious actors can use AI to spread false information, leading to a crisis of trust and social divides. Addressing the root causes is crucial, focusing on verification and recognizing the importance of elections.

      The gap between viral rumors and the truth can lead to a crisis of trust and social divides, exacerbated by technology. Malicious actors can use AI to generate fake information; even when it is quickly discredited, the churn of rumor and correction deepens the crisis of trust. This can be particularly effective in creating deep, identity-based communities and reinforcing social divides. The biggest fear is that these actors will weaponize relationships and exploit people's sense of alienation and loneliness to drag them into parallel, conspiratorial worlds that undermine the legitimacy of elections. The focus should be on addressing the root causes of these issues, rather than simply trying to detect and push out lies. This requires a shift in how we verify knowledge and manage public life, and a recognition that these elections matter and are not foregone conclusions.

    • Think like an attacker to counteract information threats
      Policymakers should use imagination, consider unconventional techniques, and focus on offensive measures like denying access to technology, financial sanctions, and criminal laws to counteract information threats effectively.

      Policymakers should think like attackers to effectively understand and counteract information threats. This means considering unconventional techniques and varied vectors of influence, such as Wikipedia, local media outlets, and the bribing of influencers. Defenders are less effective at understanding influence across all its dimensions, so imagination and a comprehensive approach are essential. The focus should be on offensive measures, including state actions like denying access to technology, financial sanctions, and criminal laws, to make these operations less effective and less profitable. While data access is crucial, the main solutions lie in asymmetric, non-informational responses. The current balance favors influence operators, and changing it requires a diverse range of actors, including states, think tanks, and law enforcement agencies.

    • Exposing Propaganda: A Forgotten Art
      Historically effective strategies against disinformation involve transparently exposing operations and educating the public on propaganda techniques. Prioritize these methods over technological solutions to build trust and understanding.

      As democracies grapple with the formidable threat of disinformation and propaganda, there is an urgent need to levy serious costs against those who engage in such activities. Historically, there have been effective strategies, such as those employed by the Active Measures Working Group in the 1980s and the Institute For Propaganda Analysis in the 1930s, which involved transparently exposing these operations to the public and educating them on how to recognize propaganda techniques. These strategies, which focus on building trust and understanding, are critical in the face of rapidly evolving technology. While technological solutions, like watermarking, are important, they alone cannot solve the problem. Instead, we must prioritize transparent programs that explain the emotional resonance of disinformation and propaganda, helping individuals to recognize and respond effectively. This lost art of rhetoric education, which was once a patriotic duty, is crucial in our current moment.

    • Increasing transparency and reducing harmful engagement on social media
      Platforms should open APIs to researchers and journalists, implement virality blackouts, pause engagement on uncontrolled features during elections, and explore circuit breakers to reduce engagement during sensitive periods.

      While we face greater threats in the digital realm due to the advancement of AI and reduced protections, there are practical steps we can take to increase transparency and reduce harmful engagement on social media platforms. Platforms like Twitter and Facebook should open their research feeds and ads APIs to academic researchers and journalists for scrutiny. Virality blackouts could be implemented during sensitive periods to prevent the spread of potentially harmful content, and platforms could pause or reduce engagement on uncontrolled features during elections or other sensitive times. The circuit breakers that pause trading during stock market crashes could serve as a model for reducing engagement on social media during sensitive periods. It's crucial that these efforts are bipartisan, transparent, and aimed at taming the "engagement monster" that creates an unwinnable game. The Center for Humane Technology, the non-profit that produces this podcast, is working toward a humane future in the field of AI.
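
      A hedged sketch of what such a circuit breaker might look like in code, with all thresholds and names invented for illustration: if a post's share rate spikes past a limit during a sensitive window, resharing is paused for a cooldown period, much as exchanges halt trading after a sudden crash.

        # Illustrative "virality circuit breaker"; thresholds and names are
        # hypothetical, loosely modeled on stock-market trading halts.
        import time
        from collections import deque

        class ViralityCircuitBreaker:
            def __init__(self, max_shares_per_minute=1000, cooldown_seconds=3600):
                self.max_shares_per_minute = max_shares_per_minute
                self.cooldown_seconds = cooldown_seconds
                self.share_times = deque()  # timestamps of recent share events
                self.tripped_until = 0.0    # epoch time when sharing resumes

            def record_share(self, now=None):
                now = time.time() if now is None else now
                self.share_times.append(now)
                # Keep only the last 60 seconds of share events.
                while self.share_times and now - self.share_times[0] > 60:
                    self.share_times.popleft()
                if len(self.share_times) > self.max_shares_per_minute:
                    self.tripped_until = now + self.cooldown_seconds  # trip the breaker

            def sharing_allowed(self, now=None):
                now = time.time() if now is None else now
                return now >= self.tripped_until

        breaker = ViralityCircuitBreaker(max_shares_per_minute=3, cooldown_seconds=600)
        for t in [0.0, 1.0, 2.0, 3.0]:            # four shares inside one minute
            breaker.record_share(now=t)
        print(breaker.sharing_allowed(now=4.0))   # False: breaker tripped, sharing paused
        print(breaker.sharing_allowed(now=700.0)) # True: cooldown has elapsed

      In a real platform the hard questions are policy, not code: who declares the sensitive window, what counts as a spike, and how to keep the mechanism transparent and bipartisan.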

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks, and they lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson

    Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI?

    Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu

    The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self-harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents, and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around, with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Related Episodes

    E23: Robby Soave and Nate Hochman CLASH on big tech and the role of government in the 21st century

    It's a Republic if you can keep it. Not if Mark Zuckerberg can.

    On a new "Right Now with Stephen Kent," Stephen sits down with Robby Soave of Reason magazine and writer Nate Hochman for a conversation about how much of a threat Facebook, Twitter, and Google pose to our democracy; if and how we should amend Section 230; the murky area of who gets to censor what on social media; if Donald Trump lost his influence after he was de-platformed; and whether big tech's influence on our day-to-day lives will worsen in the future.

    Subscribe to Rightly and catch more details about the episode below. Make sure to sign up for Unfettered, our new newsletter, available now.

    Newsletter signup:
    https://www.getrevue.co/profile/rightlyaj/issues/right-now-unfettered-7-23-699749

    ---- Content of This Episode ----
    00:00 Episode start
    00:05 Terms of Service
    01:48 Welcome Robby Soave
    03:20 Nate Hochman joins the fray
    04:25 Robby wrote the book on Tech Panic
    06:30 Nate’s concerned, but not panicking
    11:51 Are social media companies publishers?
    14:15 Proposals for Section 230 reforms
    17:37 So what is censorship exactly?
    19:45 The decider of who gets to say what
    22:40 Bring back the Fairness Doctrine?
    30:00 The slippery slope of social media regulation
    34:30 Drawing the line on unchecked power
    41:50 What the future holds
    44:45 Good news on books, moving to The Swamp, and getting out of the house

    ---- Reading List ----

    Tech Panic: Why We Shouldn't Fear Facebook and the Future By Robby Soave https://www.simonandschuster.com/books/Tech-Panic/Robby-Soave/9781982159597

    Executive Order on Promoting Competition in the American Economy https://www.whitehouse.gov/briefing-room/presidential-actions/2021/07/09/executive-order-on-promoting-competition-in-the-american-economy/

    Federal judge blocks Florida’s new social media law targeting ‘big tech’ companies (Miami Herald) https://www.miamiherald.com/news/politics-government/state-politics/article252492548.html

    GOP-sponsored bill to stop Big Tech companies from censoring users dies (Austin Business Journal)
    https://www.bizjournals.com/austin/news/2021/06/03/tech-censorship-bill-in-texas-fails.html

    Biden’s Antitrust Team Signals a Big Swing at Corporate Titans (The New York Times)
    https://www.nytimes.com/2021/07/24/business/biden-antitrust-amazon-google.html

    Trump's Class Action Lawsuit Against Facebook, Twitter, and YouTube Is an Absurd Farce (Reason)
    https://reason.com/2021/07/07/trump-class-action-lawsuit-facebook-twitter-youtube/

    The Government Should Stop Telling Facebook To Suppress COVID-19 'Misinformation' (Reason)
    https://reason.com/2021/07/15/covid-19-vaccines-misinformation-jen-psaki-white-house-biden/

    Conservative courts could rescue tech (Axios)
    https://www.axios.com/conservative-courts-tech-antitrust-c9eab980-7f7d-4d78-81f9-2e60f606ba83.html

    Right or Left, You Should Be Worried About Big Tech Censorship (Electronic Frontier Foundation)
    https://www.eff.org/deeplinks/2021/07/right-or-left-you-should-be-worried-about-big-tech-censorship

    Top House antitrust Republican forms 'Freedom from Big Tech Caucus' (The Hill) https://thehill.com/policy/technology/563344-top-house-antitrust-republican-forms-freedom-from-big-tech-caucus

    Facebook blocks woman’s ‘why are men so dumb’ comment as ‘hate speech’ (New York Post)
    https://nypost.com/2021/07/26/facebook-blocks-womans-men-are-dumb-comment-as-hate-speech/

    ---- Plugs for our guests ----

    Follow Robby Soave on Twitter:
    https://twitter.com/robbysoave

    Follow Nate Hochman on Twitter:
    https://twitter.com/njhochman

    Watch the latest episode of "Cancel This!", a new Right Now series:
    https://youtu.be/-IMhg1A_Wd4

    #Facebook #Twitter #Google

    Tech News: NPR Says X Isn't Worth It

    Last April, National Public Radio (NPR) closed out its Twitter accounts. Six months later, NPR leadership says that the impact on NPR has been negligible. Could this mean more organizations will say goodbye to Musk's platform? Plus, news about a massive DDoS attack, why we'll be seeing a lot more of AI in the near future, and an update on Sam Bankman-Fried's court case.

    Frances Haugen: How One Whistle Makes a Difference

    Just as Facebook was on the verge of becoming Meta Platforms, Inc. in late 2021, a scathing series of articles was published by the Wall Street Journal. The reporting was based on internal documents that detailed the ways Facebook’s platforms “are riddled with flaws that cause harm, often in ways that only the company fully understands.” The source for these internal documents — some tens of thousands of pages — became known as The Facebook Whistleblower.  The name behind these revelations is ex-Facebook product manager Frances Haugen. 

    On this episode, Haugen reveals why she came forward, what she hopes to accomplish with her new book, The Power of One, and what she sees as the perils — and promise — of an ever-changing technology landscape that requires transparency to keep itself honest.

    Chatbots for Civic Engagement, with Simon Day

    Don't forget to sign up for the free Axios newsletter, and tag your best and worst examples of government social media posts with #SMandPwins and #SMandPfails on Twitter!

    Simon Day, co-founder of Apptivism, discusses how chatbots are used to increase civic engagement. By interacting with a chatbot on Facebook Messenger, citizens can give their opinion on policies from their computers or smartphones. Policymakers can then analyze the data from chatbot interactions to better shape policy. Simon breaks down how these chatbots work and describes how Apptivism is helping governments use this new technology.

    Tech News: SpaceX's Starship Gets Off the Ground, Then Explodes

    The test launch of SpaceX's Starship was a success despite the fact that the launch vehicle later exploded high above the Gulf of Mexico. There are more battles happening around the subject of artificial intelligence. And if you used Facebook between 2007 and 2022, you can file a claim to get a share in a massive class action lawsuit.
