
    Podcast Summary

    • History of new technologies used for good and bad purposes: Understanding the historical context of new technologies and their potential dangers is crucial for navigating the intersection of centralized and decentralized power.

      While decentralization can bring about innovative technologies and seemingly positive changes, it also comes with potential dangers, particularly when it comes to dangerous technologies falling into the wrong hands. In her book "Power to the People," Audrey Kurth Cronin explores the history of new technologies and how they have been used for both good and bad purposes. From the invention of dynamite to the rise of social media, individuals and small groups have harnessed new technologies intended for good, only to use them for violent and dangerous purposes. It's essential to understand this historical context to navigate the intersection of centralized and decentralized power in today's world. While the conversation around the dangers of new technologies can be dark, it's crucial to acknowledge these risks and find ways to mitigate them, ultimately working towards a positive future.

    • Alfred Nobel's Regret: The Invention of Dynamite. Innovators should consider the potential negative consequences of their inventions and strive to mitigate harm.

      The inventor of dynamite, Alfred Nobel, experienced a profound sense of guilt and reckoning when he realized the devastating consequences of his invention. Before dynamite, building infrastructure was a labor-intensive and dangerous process. Nobel's invention revolutionized construction but also brought new dangers. He came from a background of poverty and worked on military weaponry. After witnessing the horrors of war and the increasing use of dynamite, Nobel became deeply troubled and founded the Nobel Peace Prize as a way to make amends. Today, we can draw parallels to the tech industry where founders may initially see only the benefits of their inventions, but eventually come to terms with the negative consequences. It's important for innovators to acknowledge and address the potential downsides of their creations.

    • Tech Outpaces Society's Ability to Regulate It: Tech companies have the power to cause harm but lack responsibility, and tech leaders could use their wealth and influence to rebuild social fabric instead.

      The rapid advancement of technology outpaces society's ability to regulate it, leading to potential harm or misuse. The example given is the early sale of dynamite, which anyone could buy without question, with potentially dangerous consequences. Similarly, tech companies today, driven by maximizing shareholder value, have the power to divide society and cause harm, but lack responsibility. The speaker suggests that tech leaders could use their wealth and influence to rebuild the social fabric instead. The accelerating deployment of technologies, some of which may not appear harmful but can be repurposed for harm, presents a meta problem. The situation is compared to a bowling alley with two gutters, where one gutter is catastrophe: the decentralized capacity for anyone to cause exponential damage, such as a viral meme inciting violence. The challenge is to manage this accelerating deployment of technologies and prevent them from being used for harm.

    • Navigating the Balance Between Decentralized Technologies and Surveillance in a Digital Open Society: Understanding the intersection of social media, drones, AI, and decentralized technologies is crucial for creating a digital open society that recognizes the potential of decentralized tech while avoiding a closed or authoritarian surveillance state.

      We are facing a challenge in navigating the digital world, where decentralized technologies are becoming more accessible to individuals, posing a risk of catastrophic consequences, while governments are increasingly monitoring and surveilling to prevent such outcomes. It's crucial to find a balance between these two extremes and create a digital open society that recognizes the acceleration of decentralized tech capacities while avoiding a closed or authoritarian surveillance state. This requires understanding the intersection of mobilization through social media and communication, increased reach by technologies like drones, and systems integration using AI and other tools that give unprecedented power to small groups. These concepts, while abstract, are essential in finding the middle ground and governing in this new era.

    • Technology's Impact on Power and Conflict: Technology enables individuals and non-state groups to mobilize, reach, and operate autonomously, leading to increased violence and political impact. It's crucial to use these powers wisely to prevent self-termination.

      The use of technology in mobilization, reach, and autonomy has significantly changed the dynamics of power and conflict. Mobilization through social media and propaganda can inspire individuals to carry out violence, leading to an increase in anger and violence. Reach through technology like drones and AI allows non-state actors to project lethal force and have political impact, even without large armies. Autonomy and integration of technologies enable non-state groups to operate under the radar and have a political impact that hollows things out from within. It's important to remember that the decentralization of these godlike powers into everyone's hands comes with the responsibility to wield them wisely and prevent potential self-termination of our species.

    • The history of dynamite and its regulation: Europe's proactive response to dynamite's risks through regulation serves as a model for addressing tech's harms. Both governments and tech companies must work together to understand and mitigate tech's downsides.

      The history of dynamite serves as a cautionary tale about the importance of regulating new technologies before they cause widespread harm. Europe's response to the danger posed by dynamite involved regulation, which helped to mitigate the risks. In contrast, the United States initially focused on xenophobia and immigration control, missing the root cause of the problem. It wasn't until the railroad industry stepped in that progress was made. This history may be relevant to our current situation, where new institutions are needed to regulate technology effectively. However, both government and tech companies have a responsibility to work together to understand the downsides of technology and mitigate them. Tech companies, in particular, have a duty to conduct research on the potential harms of their platforms and take steps to address them, even if it means sacrificing short-term competitiveness. Effective government institutions and tech companies working in partnership are essential for navigating the challenges of the future.

    • Discussing the potential for decentralized catastrophic destruction in the digital age: Tech companies must prioritize public interest over profit and consider regulatory requirements, externalities funds, and shifting corporate priorities to prevent catastrophic consequences from decentralized technologies like social media.

      As technology advances, the potential for decentralized catastrophic destruction grows. This is not limited to military-grade weapons; it also includes accessible tools like social media platforms. The discussion highlighted the need for tech companies to understand their responsibilities towards the public interest and prioritize it over profit. A historical parallel was drawn between the social upheaval of the industrial revolution and the current digital age, where technologies can be used in dangerous ways during periods of political and social unrest. Possible solutions include regulatory requirements, externalities funds, and a shift in corporate priorities. A platform like TikTok, for example, could lead to catastrophic consequences when a viral meme spreads misinformation or incites violence. The hope is that we can find solutions without resorting to violence, as eventually happened after the upheaval of the industrial era when FDR came into power and brought about change.

    • The democratization of technology brings destructive power to more hands: Individuals now have easier access to destructive technologies, requiring awareness and safeguards to mitigate risks while maximizing benefits.

      The democratization of technology has given more people the capacity to cause catastrophic harm. This is not just limited to states or organizations, but also individuals with access to tools like hacking software, drones, and facial recognition technology. The ease of access to these technologies has increased exponentially, making it harder to contain potential threats. This decentralization of destructive power can be challenging to process and can lead to psychological pitfalls. It's important for individuals to be aware of the dangers and take steps to protect against them, such as educating ourselves and advocating for regulations and safeguards. However, it's also crucial to remember that technology can be used for positive purposes as well. The challenge is to find a balance and ensure that the benefits outweigh the risks.

    • Creating a caring, compassionate society in a decentralized world: Focus on developing prudence, build a society of sousveillance, and prioritize caring for those in need to create a win-win game in a decentralized world.

      In a decentralized world where anyone can cause catastrophic damage, it's essential to ensure no one is left behind. This means creating a caring, compassionate society where everyone has agency and dignity. We cannot play a win-lose game; instead, we must create a win-win game that benefits as many people as possible. The Department of Defense should focus on helping us develop prudence, rather than creating a totalitarian surveillance state. This can be achieved through a society of sousveillance, where people watch out for each other within their communities. The media played a crucial role in mitigating the spread of dangerous ideas in the past, and they will need to do so again in the future. They went through an upgrade process involving ethical standards and social contagion theory, and we can learn from their experience. Ultimately, caring for those in our own lives who are struggling is a step towards building a more resilient and compassionate society.

    • Media's responsibility to report factually and ethically: The lack of a responsibility framework for individuals with large social media followings can lead to dangerous and contagious memes and ideas.

      The responsibility of media outlets to report factually and ethically began to be institutionalized in response to the wave of violent incidents, such as hijackings and assassinations, that were found to inspire copycat attacks due to their widespread publicity. However, in today's decentralized social media landscape, where individuals with large followings have the power to influence masses, the lack of a responsibility framework for these individuals can lead to dangerous and contagious memes and ideas. The decoupling of rights from responsibilities in the context of social media is a philosophical error that needs to be addressed to prevent potential catastrophes.

    • Navigating the complex relationship between technological innovation and potential misuse by terrorists: Ensure a balanced approach to technological innovation, using it for good while mitigating risks, and maintain ongoing dialogue and collaboration between various stakeholders.

      The key lesson from this conversation with Audrey Kurth Cronin is the urgent need to navigate the complex relationship between technological innovation and potential misuse by terrorists. While open technological innovation offers many benefits, it also poses significant risks. On one hand, there's the potential for incredible progress and the empowerment of individuals and communities. On the other hand, there's the danger of overreach, power abuse, and dystopian outcomes. It's crucial that we strive for a balanced approach, ensuring that technological innovation is used for good while mitigating the risks. Audrey's book, "Power to the People," provides valuable historical perspective and insights into these issues. As a global security expert, she emphasizes the importance of understanding the context and implications of technological innovations, particularly as they relate to terrorism. Overall, this conversation highlights the need for ongoing dialogue and collaboration between various stakeholders to ensure a humane and secure future.

    Recent Episodes from Your Undivided Attention

    How to Think About AI Consciousness With Anil Seth

    Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

    In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

    RECOMMENDED MEDIA

    Frankenstein by Mary Shelley

    A free, plain text version of Shelley’s classic of gothic literature.

    OpenAI’s GPT4o Demo

    A video from OpenAI demonstrating GPT4o’s remarkable ability to mimic human sentience.

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

    The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

    What It’s Like to Be a Bat

    Thomas Nagel’s essay on the nature of consciousness.

    Are You Living in a Computer Simulation?

    Philosopher Nick Bostrom’s essay on the simulation hypothesis.

    Anthropic’s Golden Gate Claude

    A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

    RECOMMENDED YUA EPISODES

    Esther Perel on Artificial Intimacy

    Talking With Animals... Using AI

    Synthetic Humanity: AI & What’s At Stake

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows few signs of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can We Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu. The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

     

     

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

     

     

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deep fake porn and what we can do about it. 

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Pelosi Boosts CCP, Athletes Debase Themselves in China, Jobs Blowout; Guests Dr. Gorka and John Carney

    On this Hawaiian Shirt Friday edition of Breitbart News Daily, we discuss more American athletes debasing themselves and disgracing our country during the Genocide Games in Communist China. While the people seem to increasingly understand how problematic China is, our institutions are still acting like it's business as usual. Nancy Pelosi, for example, suggested that athletes should not speak out against the CCP's atrocities; perhaps she'd prefer they save their breath for criticizing the Bad Orange Man Donald Trump. L.A. Mayor Eric Garcetti claims he held his breath when walking around maskless at the Rams game last weekend, apparently for hours. An ISIS leader is dead and Big Joey wants credit, but we won't give it to him. Facebook is getting slaughtered on Wall Street. And, the President of the United States refers to black people as "colored children," apparently to prove he is literally prehistoric. Two guests today: Dr. Sebastian Gorka on the importance of boycotting China, why he thinks South Dakota Gov. Kristi Noem should not be the Vice Presidential pick in 2024, and why the Department of Homeland Security's re-importation of illegal aliens into America's heartland is the biggest scandal no one is talking about.  Then, Breitbart News Economics and Finance Editor John Carney joins the show to discuss the blowout jobs numbers that no one saw coming, especially us.

    Nobel Prize winner Maria Ressa on how social media is pushing journalism—and democracy—to the brink

    The Nobel Committee has awarded its 2021 Peace Prize to Maria Ressa for being a fearless defender of independent journalism and freedom of expression in the Philippines, and particularly for her work exposing the human rights abuses of authoritarian President Rodrigo Duterte. But the prize is also a de facto acknowledgement that Ressa has become something of a one-woman personification of the struggles, perils, and promise of journalism in the age of social media. 

    A longtime investigative reporter and bureau chief for CNN, she began thinking about how social networks could be used for both good and evil while covering terrorism, seeing how they were used both to drive radicalism and to build movements for positive change. She originally founded Rappler, her Manila-based online news organization, as a Facebook page, but now she says that one-time Harvard student Mark Zuckerberg’s dominance as a worldwide distributor of news has become a boon to repressive regimes and a threat to democracy worldwide.

    Rappler’s mission statement is to speak truth to power and build communities of action for a better world—but for Ressa, speaking truth to power has come at a high personal cost. She has been subjected to harassment, criminal and civil legal action, and even arrest, even as she has refused to back off even an inch. When we spoke for this interview, Ressa was just finishing a visiting fellowship at the Kennedy School, where she was affiliated with both the Shorenstein Center on Media, Politics, and Public Policy and the Center for Public Leadership. 

    About our Guest:

    Maria Ressa has been a journalist in Asia for 35 years and co-founded Rappler, the top digital only news site that is leading the fight for press freedom in the Philippines. For her courage and work on disinformation, Ressa was named Time Magazine’s 2018 Person of the Year, was among its 100 Most Influential People of 2019, and has also been named one of Time’s Most Influential Women of the Century. She was also part of BBC’s 100 most inspiring and influential women of 2019 and Prospect magazine’s world’s top 50 thinkers. In 2020, she received the Journalist of the Year award, the John Aubuchon Press Freedom Award, the Most Resilient Journalist Award, the Tucholsky Prize, the Truth to Power Award, and the Four Freedoms Award.

    Before founding Rappler, Maria focused on investigating terrorism in Southeast Asia. She opened and ran CNN’s Manila Bureau for nearly a decade before opening the network’s Jakarta Bureau, which she ran from 1995 to 2005. She wrote Seeds of Terror: An Eyewitness Account of al-Qaeda’s Newest Center of Operations in Southeast Asia and From Bin Laden to Facebook: 10 Days of Abduction, 10 Years of Terrorism.

    PolicyCast is a production of Harvard Kennedy School and is hosted by Staff Writer and Producer Ralph Ranalli

    PolicyCast is edited by Ralph Ranalli and co-produced by Susan Hughes. Natalie Montaner is our webmaster and social media strategist. Our designers are Lydia Rosenberg and Delane Meadows.

    For more information please visit our web page or contact us at PolicyCast@hks.harvard.edu.

    Radicalization and Foreign Fighters: The Story of Lukas, with Karolina Dam

    Karolina Dam, founder of the NGO Sons and Daughters of the World, joins the podcast this week to tell the story of her son, Lukas. Lukas is a Danish citizen who became radicalized in Copenhagen, fled to Syria, and joined ISIS. We discuss how Facebook groups are used to recruit potential terrorists, the role that social media can play in deradicalization, and the types of communication that take place between a foreign fighter and his mother.

    How a pro-Trump platform became an extremist haven
    GETTR was supposed to be a Trump world alternative to Twitter. But it’s being inundated with extremist content, including beheading videos and terrorist propaganda. POLITICO’s Mark Scott reports. Plus, Florida shatters its Covid hospitalization record. And the Biden administration hits its vaccination goal … a month late. Mark Scott is POLITICO's Chief Technology Correspondent. Jeremy Siegel is a host for POLITICO Dispatch. Irene Noguchi is the executive producer of POLITICO audio. Jenny Ament is the senior producer of POLITICO audio. Raghu Manavalan is a senior editor for POLITICO audio. Read more: Jihadists flood pro-Trump social network with propaganda