
    Podcast Summary

    • The importance of expressing doubts and questioning technology's impact
      Founder of Apture, Tristan Harris, shares how the Doubt Club inspired him to publicly challenge the social media industry and create the Center for Humane Technology, emphasizing the need for open dialogue about technology's impact on society.

      Expressing doubts and questioning the impact of technology on society is essential for creating positive change. The founder of Apture, Tristan Harris, shares his experience of starting a company with a noble mission but recognizing the need for a safe space to express doubts. He talks about the inspiration he drew from the Doubt Club, which provided a platform for startup founders to openly discuss their concerns. Later, Harris, along with his co-founders, launched the Center for Humane Technology to publicly challenge the social media industry and address the implications of technology on society. The experience of hearing Daniel Schmachtenberger on the Future Thinkers podcast was a pivotal moment for Harris, reinforcing the importance of questioning the narratives we tell ourselves about technology and its role in the world.

    • Perverse incentives in economic systems lead to negative outcomes
      Understanding the interconnected nature of societal costs and corporate profits reveals the need for a new paradigm prioritizing people and planet over profit.

      Our current economic systems create perverse incentives that lead to negative outcomes for the environment, health, technology, and more. These issues are not separate, but rather part of interconnected "generator functions of existential risk." Companies profit privately while societal costs are externalized. For example, social media platforms generate revenue by extracting attention, but the societal costs of broken relationships, polarization, and mental health issues do not appear on their balance sheets. This realization can be disenchanting, as it highlights the challenges of addressing these issues, but also clarifying and empowering, as it reveals the root causes. The Center for Humane Technology responded to this insight by creating the Ledger of Harms project, which aims to accumulate the unaccounted costs of technology on society's balance sheet. Ultimately, understanding this system helps us see the interconnected nature of the challenges we face and the need for a new paradigm that prioritizes the well-being of people and the planet over profit.

    • Abstracting, extracting, and commodifying complex systems
      Reducing complex systems to a single metric or number can lead to unintended consequences, such as depletion and harm to individuals and society. Instead, recognizing the infinite potential and interconnectedness of complex systems can help us make more thoughtful decisions and create sustainable solutions.

      We often undervalue complex systems by reducing them to a single metric or number, leading to unintended consequences. This was discussed in relation to a podcast episode about the value of a tree, which was abstracted, extracted, and commodified, resulting in depletion and pollution. Similarly, in the context of technology, our attention is abstracted, extracted, and commodified, leading to depletion and potential harm to individuals and society as a whole. The challenge is to shift our perspective and recognize the infinite potential and interconnectedness of complex systems, including ourselves and the natural world. This perspective can help us make more thoughtful decisions and create sustainable solutions for the long term.

    • The commodification of attention on social media
      Social media companies profit from our attention, leading to a system that incentivizes negative content and harms individuals' well-being and societal norms. We need to address the root causes and create solutions that align with human values.

      The way we consume content online, particularly on social media, is being abstracted and commodified into predictable units of attention, leading to both depletion and pollution of individuals' well-being and societal norms. Social media companies profit from our attention, selling it to advertisers at a predictable rate, creating a system that incentivizes highly engaging, often negative content. This realization can be terrifying, but also clarifying. However, understanding the issue is not enough; we need to address the root causes. It was during this period of awakening that the idea for the Center for Humane Technology (CHT) began to take shape. I remember feeling torn between recognizing the problem and not knowing how to effectively address it. I continued to work on my product, believing that I could make a bigger impact than just criticizing from the sidelines. However, it became clear that a broader conversation was needed, leading us to create the CHT podcast and explore the interconnected systems at play. Ultimately, it became clear that if we care about technology and its impact on society, we must be aware of these underlying systems and work towards creating solutions that align with human values.

    • The Narrow Focus on Tech Startups Overshadows Other Important Pursuits
      The drive to start tech companies as the only path to success can limit our ability to address complex issues like technology, climate change, and inequality. Embracing a more holistic approach and considering various pursuits can lead to a more effective impact.

      Our current paradigm, shaped by institutions like Stanford, pushes individuals towards starting tech companies as the only path to success, often overshadowing other important pursuits like nonprofits or personal interests. However, when addressing complex issues like technology, climate change, and inequality, this narrow focus can leave us feeling cynical and uncertain about how to make a difference. Tristan Harris, the founder of the Center for Humane Technology, faced this tension as he worked on tech reform while grappling with the realization that even solving specific tech issues wouldn't address the larger, interconnected crises. This emotional and intellectual struggle highlights the need for a broader perspective and a more holistic approach to addressing these pressing issues.

    • Understanding the generator functions of the world's problems
      Recognizing systemic issues and working towards fundamental solutions can be empowering and community-building, even when that understanding feels isolating.

      Understanding the underlying mechanisms or "generator functions" of the world's problems can help us feel more agency and tackle issues at their root. This perspective, which was discussed between Tristan Harris and Daniel Schmachtenberger, can be isolating as it challenges common ways of viewing the world. However, recognizing that many people share this understanding can help build community and provide a sense of validation. By acknowledging the existence of systemic issues like growth imperative tied to abstraction, extraction, depletion, and pollution, we can work towards solutions that fundamentally change the way these systems operate. This shift in perspective can be empowering and help bridge the feeling of alienation or isolation that comes with gaining a deeper understanding of complex global issues.

    • Technology and mental health: A complex ecosystem
      Addressing technology issues and mental health requires recognizing our limited knowledge and collective action as part of a larger ecosystem of change.

      Technology issues and mental health are interconnected, and addressing them requires acknowledging and addressing the fundamental systems that underlie both. The speaker, inspired by systems theorist Donella Meadows, emphasizes the importance of recognizing our limited knowledge and the need for collective action to push in the same direction. Changing complex systems is a continual process, and there is no easy solution or master plan. Through this podcast, the Center for Humane Technology invites listeners to join them on this journey of understanding and tackling the interconnected issues of technology and mental health as part of a larger ecosystem of change.

    • Acknowledging doubts can lead to new insights and perspectives
      Expressing doubts as a leader or expert can lead to new ideas and solutions, despite the uncertainty, and fosters open communication and learning within teams and organizations.

      Expressing doubts and uncertainties, even as a leader or expert, can be valuable and lead to new insights and perspectives. Founders, guests on podcasts, or anyone in a position of authority may not always share their doubts with their teams or investors, but acknowledging and discussing doubts openly can serve everyone. The world is constantly changing, and it's impossible to predict everything that will happen. It's important to remember that expressing doubts doesn't equate to a lack of knowledge or certainty. In fact, embracing doubt can lead to new ideas and solutions. Aza, for instance, works on the Earth Species Project, which aims to decode the languages of nonhuman species using machine learning. This project is inspired by Donella Meadows' call for a paradigm shift in our relationship to the planet. Aza admits that he doesn't know if the project will succeed, but the uncertainty and the potential for change are what make the endeavor worthwhile. It's a delicate dance to express doubts responsibly, but it's a necessary conversation to have. By sharing our doubts, we can learn from each other and navigate the uncertainty together.

    • Transformative moments driving sustainability and environmental awareness
      Embrace doubt, form a "doubt club", and advocate for humane technology to navigate complexities and make informed decisions for sustainability and environmental awareness.

      Transformative moments in history, such as the distribution of images from space or the moon landing, have galvanized shifts towards sustainability and environmental awareness. These moments challenged humanity's self-image and led to the creation of organizations like the EPA and NOAA, as well as the passage of significant environmental legislation. Physicist Richard Feynman emphasized the importance of embracing doubt and uncertainty in scientific progress. To strengthen this capacity, Feynman suggested forming a "doubt club" with trusted peers. While we may not have all the answers, we can recommend resources and ask good questions to help make informed decisions. Tristan Harris, a magician turned persuasive technology expert, and his colleagues at the Center for Humane Technology advocate for humane technology that respects human attention and well-being. By acknowledging our limitations and working together, we can navigate the complexities of our world and make a positive impact.

    • Learn to build technology that prioritizes well-being
      The Center for Humane Technology offers a free course for product teams to equip them with the knowledge to create technology that prioritizes humanity's needs and addresses pressing challenges.

      Technology, particularly in the realm of existential risk, requires a new kind of problem-solving mindset. To help prepare for this, the Center for Humane Technology is launching a free course called Foundations of Humane Technology. This course aims to equip product teams with the knowledge to build technology that prioritizes well-being and contributes to addressing humanity's most pressing challenges. By signing up for updates at humanetech.com, individuals can join this important initiative. The Center for Humane Technology, a nonprofit organization, produces the podcast "Your Undivided Attention" to catalyze a humane future. The podcast is supported by various generous lead supporters, including the Omidyar Network, Craig Newmark Philanthropies, and the Evolve Foundation. So, in essence, this discussion underscores the need for a shift in perspective when it comes to technology development, and the new course offers a practical way to begin this transformation.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth


    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns


    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany


    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma


    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett


    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deep fake porn and what we can do about it. 

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma


    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Related Episodes

    Episode 14: What's going on with "The Social Dilemma?" Part 2

    Behind the Curtain on The Social Dilemma — with Jeff Orlowski-Yang and Larissa Rhodes

    How do you make a film that impacts more than 100 million people in 190 countries in 30 languages?

    This week on Your Undivided Attention, we're going behind the curtain on The Social Dilemma — the Netflix documentary about the dark consequences of the social media business model, which featured the Center for Humane Technology. On the heels of the film's one-year anniversary and its two Emmy Award wins, we're talking with Exposure Labs' Director Jeff Orlowski-Yang and Producer Larissa Rhodes. What moved Jeff and Larissa to shift their focus from climate change to social media? How did the film transform countless lives, including ours and possibly yours? What might we do differently if we were producing the film today?

    Join us as we explore the reverberations of The Social Dilemma — which we're still feeling the effects of over one year later. 

    A Conversation with Facebook Whistleblower Frances Haugen

    We are now in social media's Big Tobacco moment. And that’s largely thanks to the courage of one woman: Frances Haugen.

    Frances is a specialist in algorithmic product management. She worked at Google, Pinterest, and Yelp before joining Facebook — first as a Product Manager on Civic Misinformation, and then on the Counter-Espionage team. But what she saw at Facebook was that the company consistently and knowingly prioritized profits over public safety. So Frances made the courageous decision to blow the whistle — which resulted in the biggest disclosure in the history of Facebook, and in the history of social media.

    In this special interview, co-hosts Tristan and Aza go behind the headlines with Frances herself. We go deeper into the problems she exposed, discuss potential solutions, and explore her motivations — along with why she fundamentally believes change is possible. We also announce an exciting campaign being launched by the Center for Humane Technology — to use this window of opportunity to make Facebook safer.

    WhatsApp-ening in the Netherlands? Social Media, GroenLinks, and the 2018 Dutch Local Elections, with Hanneke Bruinsma


    Hanneke Bruinsma, local politician for the green party GroenLinks in the Netherlands, joins the show to discuss how her party is using social media in the upcoming Dutch municipal elections. We discuss how GroenLinks party members in the Overbetuwe municipality are using Facebook and Twitter to campaign, and in particular we focus on WhatsApp as a new medium to encourage activism - or "Apptivism" - among local residents.