
    Podcast Summary

    • Metacrisis: Our Collective Inability to Solve Complex Problems
      The metacrisis, a term used to describe our collective inability to effectively solve complex problems, is a root cause of many pressing issues such as addiction, disinformation, polarization, and climate change. Reframing these issues as problems with our problem-solving capabilities could be a key leverage point for intervention.

      Many of the world's pressing issues, such as addiction, disinformation, polarization, and climate change, may be symptoms of a larger, underlying problem, which some call the metacrisis. This metacrisis is rooted in our collective inability to effectively solve complex problems. As inventor and engineer Charles Kettering famously said, "a problem well stated is a problem half solved." Therefore, reframing these issues as problems with our problem-solving capabilities could be a key leverage point for intervention. Daniel Schmachtenberger, a problem-solving expert and my dear friend, joins us on this episode to explore this idea further. In the coming weeks, we will be releasing two versions of the episode and hosting a podcast club where Daniel and I will engage in a live dialogue with our audience about this topic. Stay tuned for details in the show notes.

    • Unintended Consequences of Well-Intended Solutions
      Well-intended solutions can have unintended negative consequences. It's crucial to consider long-term effects and strive for solutions that minimize harm and promote positive outcomes.

      Well-intended solutions to problems can inadvertently create new, and sometimes even worse, issues. This was evident in the example given about trying to protect elephants from poaching in Africa, which led poachers to target other endangered animals instead. Similarly, efforts to solve hunger through unsustainable commercial agriculture can harm the environment and worsen the problem in the long run. In the digital world, the goal of organizing information and making it accessible through search engines has led to unintended consequences, such as breaking the social solidarity and epistemic capacity necessary for democracy. These examples illustrate the importance of considering the potential long-term effects of our actions and striving for solutions that minimize harm and promote positive outcomes.

    • Three primary sources of existential risk
      Rivalrous dynamics, environmental degradation, and exponential technology pose significant risks to human civilization's long-term survival and require proactive management.

      Existential risks to human civilization stem from three primary sources: rivalrous dynamics, the subsuming of our substrate, and exponential technology. Rivalrous dynamics, such as arms races and the tragedy of the commons, create a race for power and resources that can lead to self-destruction. The subsuming of our substrate refers to the erosion of the environments that support human civilization: rivalrous dynamics degrade shared resources such as the natural environment, attention, and social trust. Exponential technology, which grows and improves at an exponential rate, creates ever-greater risks as its power and capabilities outpace our ability to manage them. Together, these three generator functions have produced the current level of unmanaged global existential risk, even though catastrophic risk was not a significant concern for most people before World War II. Understanding these underlying drivers is crucial for addressing and mitigating existential risks to ensure the long-term survival and flourishing of human civilization.

    • From local to global catastrophic risk
      The technological advancements after World War II led to a shift from local to global catastrophic risk, necessitating the creation of a globalized world system to prevent another major war.

      The world before and after World War II was fundamentally different because of the shift from local to global catastrophic risk. Before the war, individual kingdoms faced existential risks, but these were limited to specific regions. With the technological advancements during and after World War II, catastrophic risk became a global concern for the first time. This led to the creation of a globalized world system, including mutually assured destruction, globalization, and economic interdependence, designed to prevent another major war. But the post-war solutions have limitations: the exponential growth of the economy and the availability of new, difficult-to-monitor catastrophe weapons have created new challenges. The old solutions have run their course, and new ones must be found to address current and emerging catastrophic risks.

    • A new cultural enlightenment for managing global risks
      Daniel proposes a new societal model to manage global risks caused by exponential technologies, focusing on individual rights and responsibilities, increased access to education, and civic engagement through technology.

      According to Daniel, the current global institutions, including the Bretton Woods agreements and the United Nations, are no longer sufficient to manage existential risks caused by exponential technologies. Instead, we are heading towards two undesirable outcomes: oppression or chaos. To avoid these, Daniel proposes a new cultural enlightenment where individuals have both rights and responsibilities. This can be achieved by increasing access to education and incentivizing civic engagement through technology. The goal is to create a society where individuals are maximally informed and engaged in their own governance. This new attractor would enable us to effectively manage global risks without succumbing to oppression or chaos.

    • Embracing logic, science, and rationality with exponential technologies
      Personalize education, increase media literacy, foster intrinsic motivation, reimagine economics and education, and develop creativity, critical thinking, and emotional intelligence in a post-AI world.

      As we move from a culture rooted in superstition and irrationality to one that embraces logic, science, and rationality, we must harness exponential technologies to build new systems of collective intelligence and social technology. This includes using attention tech to personalize education, increase media literacy, and foster intrinsic motivation. With the impending automation of jobs, we must reimagine economics and education, shifting from a labor-focused economy to one where people don't need jobs. The role of humans in a post-AI, robotic automation world lies in fostering creativity, critical thinking, and emotional intelligence – skills that machines cannot replicate.

    • Embracing technology while maintaining transparency and accountability
      Using incorruptible ledgers like blockchain for government spending and contracts can increase transparency, reduce corruption, and lead to more efficient use of resources.

      As technology continues to evolve, it's essential to reimagine education, work, and social systems within this new context. Daniel's perspective emphasizes the need to answer existential questions about what a meaningful human life looks like, and what a good civilization that maximizes opportunity for all would be. While China is answering these questions for a closed society, Daniel encourages a digital open-society solution. However, we must approach these challenges with humility and recognize that we don't have all the answers. To mitigate risks, we need humane technology that supports the cultural wisdom needed to manage exponential tech. Daniel proposes using incorruptible ledgers like blockchain for government spending and contracts to increase transparency and reduce corruption. This could lead to more efficient use of resources, better national security, and lower taxes. Incorruptible ledgers can also provide provenance on supply chains, making them closed-loop and internalizing externalities. Overall, embracing technology while maintaining transparency and accountability is crucial for creating a good civilization in this rapidly changing world.
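      The episode doesn't specify an implementation, but the tamper-evidence property behind "incorruptible ledgers" can be sketched as a hash-chained append-only log, where each entry commits to the hash of the one before it. This is a toy illustration of the idea, not any real blockchain or government system; the function names and record fields are invented for the example.

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash for the first entry's predecessor

def append_entry(chain, record):
    """Append a record, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain

def verify(chain):
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev_hash = GENESIS_HASH
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"payee": "VendorA", "amount": 100})
append_entry(ledger, {"payee": "VendorB", "amount": 250})
print(verify(ledger))            # True: untouched ledger checks out
ledger[0]["record"]["amount"] = 999
print(verify(ledger))            # False: quiet edits are detectable
```

      The point of the sketch is the auditability Daniel describes: once entries are published, altering past spending records invalidates every subsequent hash, so corruption leaves visible fingerprints.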

    • Harnessing Technology for a More Equitable and Transparent World
      Blockchain prevents corruption; open data ensures transparency; AI generates deep fakes but also propositions based on shared values; the Consilience Project inspires innovation and tackles global problems; use technology responsibly for a more open, efficient, and equitable society.

      Emerging technologies, while they have the potential to create significant risks and challenges, can also be harnessed to address complex issues and build a more equitable and transparent world. The use of blockchain technology to prevent corruption, for instance, can lead to real justice and historical accuracy. In the realm of science, an open data platform ensures transparency and accountability, enabling automatic correction of errors. Artificial intelligence, on the other hand, can generate deep fakes and undermine the epistemic commons, but it can also be used to analyze sentiment and generate propositions based on shared values. The Consilience Project, with its focus on interconnectivity and underlying drivers, exemplifies the potential for technology to inspire innovation and tackle global problems. Ultimately, the key is to recognize the potential of these technologies and use them responsibly to create a more open, efficient, and equitable society.

    • Understanding the causes of societal issues
      Examining the core drivers behind societal challenges can lead to progress and innovative solutions. Examples include using transparent technology and collective voting systems.

      Understanding the underlying causes of complex societal issues is essential for finding solutions. The current state of the world can seem overwhelming with numerous problems, but by examining the core drivers behind them, we can make progress. The breakdown in sense-making and the manipulation of human behavior, as highlighted by the documentary "The Social Dilemma," is a significant contributor to many of the challenges we face. By increasing our cultural understanding of these issues, we can begin to innovate and employ technology in a conscious and inspiring way. The goal is not for a small group of people to build the future, but to establish design constraints and engage in meaningful conversations. Examples of progress include the use of transparent blockchain technology and collective voting systems. As Charles Kettering famously said, "A problem well stated is a problem half solved."

    • A call to cultural enlightenment to address technological dangers
      Embrace the challenges of technological dangers and reorient your life for a better future.

      The current technological landscape requires a cultural shift toward understanding the potential dangers and reorienting our lives to create a better future. Daniel Schmachtenberger, a founding member of the Consilience Project, encourages listeners to view this process as a form of cultural enlightenment. His personal encounter with the core drivers of these issues led him to dedicate his life to preventing them. Schmachtenberger urges the audience to gain the strength to face these challenges head-on and, if they take the risks seriously, to orient their lives accordingly. Your Undivided Attention, produced by the Center for Humane Technology, is a podcast that aims to inspire listeners to do just that. The team behind the podcast includes Stephanie Lepp as executive producer, Natalie Jones and Noor Al-Samarrai as senior and associate producers, Dan Kedmey as editor-at-large, and original music and sound design by Ryan and Hays Holladay. The podcast is supported by various foundations and individuals, and Tristan Harris, the host, expresses gratitude to the listeners for their undivided attention.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar


    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn


    This week, a group of current and former employees from Open AI and Google Deepmind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers. 

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre


    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones.  From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure and the development of fully autonomous weapons shows little signs of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons and he helped the Department of Defense write a lot of its policy on the use of AI in weaponry. 

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu


    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu. The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis


    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller


    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


    Future-proofing Democracy In the Age of AI with Audrey Tang


    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?


    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet


    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deep fake porn and what we can do about it. 

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei


    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast,  says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care.  He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. 

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions


    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    [Unedited] A Problem Well-Stated is Half-Solved — with Daniel Schmachtenberger


    We’ve explored many different problems on Your Undivided Attention — addiction, disinformation, polarization, climate change, and more. But what if many of these problems are actually symptoms of the same meta-problem, or meta-crisis? And what if a key leverage point for intervening in this meta-crisis is improving our collective capacity to problem-solve?

    Our guest Daniel Schmachtenberger guides us through his vision for a new form of global coordination to help us address our global existential challenges. Daniel is a founding member of the Consilience Project, aimed at facilitating new forms of collective intelligence and governance to strengthen open societies. He's also a friend and mentor of Tristan Harris. 

    This insight-packed episode introduces key frames we look forward to using in future episodes. For this reason, we highly encourage you to listen to this unedited version along with the edited version.

    We also invite you to join Daniel and Tristan at our Podcast Club! It will be on Friday, July 9th from 2-3:30pm PDT / 5-6:30pm EDT. Check here for details.

    Spotlight — The Facebook Files with Tristan Harris, Frank Luntz, and Daniel Schmachtenberger


    On September 13th, the Wall Street Journal released The Facebook Files, an ongoing investigation of the extent to which Facebook's problems are meticulously known inside the company — all the way up to Mark Zuckerberg. Pollster Frank Luntz invited Tristan Harris along with friend and mentor Daniel Schmachtenberger to discuss the implications in a live webinar. 

    In this bonus episode of Your Undivided Attention, Tristan and Daniel amplify the scope of the public conversation about The Facebook Files beyond the platform, and into its business model, our regulatory structure, and human nature itself.

    A Facebook Whistleblower — with Sophie Zhang


    In September of 2020, on her last day at Facebook, data scientist Sophie Zhang posted a 7,900-word memo to the company's internal site. In it, she described the anguish and guilt she had experienced over the last two and a half years. She'd spent much of that time almost single-handedly trying to rein in fake activity on the platform by nefarious world leaders in small countries. Sometimes she received help and attention from higher-ups; sometimes she got silence and inaction. “I joined Facebook from the start intending to change it from the inside,” she said, but “I was still very naive at the time.” 

    We don’t have a lot of information about how things operate inside the major tech platforms, and most former employees aren’t free to speak about their experience. It’s easy to fill that void with inferences about what might be motivating a company — greed, apathy, disorganization or ignorance, for example — but the truth is usually far messier and more nuanced. Sophie turned down a $64,000 severance package to avoid signing a non-disparagement agreement. In this episode of Your Undivided Attention, she explains to Tristan Harris and Aza Raskin how she ended up here, and offers ideas about what could be done at these companies to prevent similar kinds of harm in the future.

    Against the tide: tech for social cohesion

    It’s no secret that digital technology, in particular social media, stokes division in society and sometimes provokes violent conflict. Toxic polarization prevents us from solving problems, from making decisions together, from being constructive in our approach. 

    In this episode, we’ll explore the dangers of social media, but we’ll also talk about ways technology can be used to build bridges and promote social cohesion.

    Guest Shamil Idriss is the CEO of Search for Common Ground (SFCG), the largest peacebuilding organization in the world, with a long history of using media in reconciliation efforts. Almost fifteen years ago, Shamil established a virtual exchange program connecting young adults in Europe and North America with their peers in the Middle East, and he’s been working at the intersection of peacebuilding and tech ever since. In February, he helped launch the Council on Technology and Social Cohesion to foster collaboration between peacebuilders and tech workers. Shamil says it’s crucial for the peacebuilding field to understand technology’s dangers AND to harness its potential for good.

    Follow Shamil Idriss on Twitter @ShamilIdriss.

    HOW TO RATE AND REVIEW MAKING PEACE VISIBLE

    In Apple Podcasts on iPhone 

    Tap on the show name (Making Peace Visible) to navigate to the main podcast page.

    Scroll down to the "Ratings and Reviews" section

    To leave a rating only, tap on the stars

    To leave a review, tap "Write a Review"

     

    In Spotify

    (Note: Spotify ratings are currently only available on mobile.)

    Tap on the show name (Making Peace Visible) to navigate to the main podcast page.

    Tap on the star icon under the podcast description to rate the show.

     

    In Podcast Addict

    (Note: you may need to sign in before leaving a review.)

    From the episode page: On the top left above the show description, click "Post review."

    From the main podcast page

    Tap "Reviews" on the top left.

    On the Reviews page, tap the icon of a pen and paper in the top right corner of the screen.

     

    ABOUT THE SHOW

    Making Peace Visible is a project of War Stories Peace Stories. Our mission is to bring journalists and peacebuilders together to re-imagine the way the news media covers peace and conflict, and to facilitate expanded coverage of global peace and reconciliation efforts. Join the conversation on Twitter: @warstoriespeace. Write to us at jsimon@warstoriespeacestories.org

    Making Peace Visible is hosted by Jamil Simon, and produced by Andrea Muraskin, with help from Faith McClure. Music in this episode is by Blue Dot Sessions, Meavy Boy, and Bill Vortex. 

    Learn more at warstoriespeacestories.org.

    We want to learn more about our listeners. Take this 3-minute survey to help us improve the show! 

    Support this podcast and the War Stories Peace Stories project

    Stranger than Fiction — with Claire Wardle

    How can tech companies help flatten the curve? First and foremost, they must address the lethal misinformation and disinformation circulating on their platforms. The problem goes much deeper than fake news, according to Claire Wardle, co-founder and executive director of First Draft. She studies the gray zones of information warfare, where bad actors mix facts with falsehoods, news with gossip, and sincerity with satire. “Most of this stuff isn't fake and most of this stuff isn't news,” Claire argues. If these subtler forms of misinformation go unaddressed, tech companies may not only fail to flatten the curve — they could raise it higher.