    Podcast Summary

    • Technology's transformative effect on us and ethical implications: Technology alters who we are and what we value, raising ethical concerns about irreversible transformations and whose preferences should be prioritized.

      Technology's influence goes beyond steering our attention and time; it fundamentally transforms who we are and what we value, raising significant ethical questions. Philosopher LA Paul's metaphor of becoming a vampire illustrates this: the decision to transform is irreversible and alters our preferences. The challenge lies in evaluating such transformative experiences beforehand, especially when the persuader is the one transforming us. Social media, for instance, may be turning us into attention-seeking vampires, which raises the question of whose preferences we should prioritize: the person before the transformation or after. Persuasion is only ethical when the persuader's goals align with the persuadee's, but what if the persuader transforms us into someone who desires the very thing we were persuaded into? We must critically evaluate the testimony of others who have undergone such transformations, considering how the transformation itself might have altered their perspective. This is a complex issue, and it's crucial to recognize the profound impact technology has on our identities and values.

    • Deciding on Major Life Changes: Parenting and Vampirism. Making significant life decisions involves considering uncertainty, potential self-transformation, and the influence of technology and social media. There's no clear rule for choosing between current and future selves.

      Making significant life decisions, such as becoming a parent (or a vampire), presents unique challenges because of the uncertainty of the experience and the potential for personal transformation. Research suggests that parents may experience lower moment-to-moment happiness than non-parents, but this doesn't necessarily mean they're unhappy; the value of the suffering and growth involved in becoming a parent may simply not be captured by the "happier" label. Similarly, the decision to become a vampire, or to undergo any other major life change, lacks clear guidelines, leaving each person to make a self-interested choice. The lack of information about the experience, and the possibility of ending up with a significantly different self-identity, make informed decision-making difficult. The question then becomes: which self should make the choice, the current self or the future self? There is no clear, principled decision rule for these situations, which adds to the complexity. New experiences can also be shaped by technology and social media, complicating the decision further.

    • How Technology Influences Our Preferences and Beliefs: Technology use can shape our preferences and beliefs, leading to transformative changes in who we are. It's important to be aware of these influences and consider their potential impact on society.

      Our preferences and beliefs can be influenced in ways we don't fully understand, both by our own experiences and by external stimuli. This is particularly relevant to technology use: we engage with platforms voluntarily, but we are also shaped by the fact that the entire world is using them. The result can be transformative change in who we are, making us value things differently than we once did. Such change is a natural part of personal growth, but it's important to be aware of these influences and to consider their potential impact on society as a whole. The speaker uses professional development as an example, noting how intensive training can produce a fundamentally different self with new preferences. When it comes to technology, however, we often don't think carefully enough about how it's shaping us, which can lead to unintended consequences.

    • Technology's Unexpected Transformations: Recognize that technology's impact on individuals and society is not neutral, and consider potential consequences before blindly accepting changes.

      Technology, particularly social media, has the power to significantly transform individuals and society as a whole, often in unexpected ways. Examples like the Facebook Like button, LinkedIn profiles, and Instagram influencers show how these platforms have altered the way we seek validation, define professionalism, and value influence. It's crucial to recognize that technology is not neutral and that these transformations are not always positive. As Jaron Lanier points out, even small changes can compound over time into significant effects. It's therefore essential to consider the potential consequences of new technologies and the values that emerge with them, and to strive for clear-headed decision-making. We should not blindly accept these changes without careful thought, as they can add up to a society-wide version of the vampire transformation.

    • The subtle influence of new experiences and technologies: Understanding and acknowledging the power of new technologies to change our preferences and thinking is crucial, as well as exploring philosophical questions and promoting transparency to ensure individual control.

      As we navigate new experiences and technologies, we underestimate their ability to subtly influence and change our preferences and thinking. Pre-commitment strategies, like avoiding certain technologies, may not be effective given how seductive those technologies are. It's crucial for individuals and societies to take responsibility for these changes and to explore the philosophical questions involved: what counts as a good or bad influence, who decides, and why transparency matters. Tech companies should identify and understand these structural facts about action and influence, and provide transparent processes that give individuals control over how they are influenced.

    • Informed consent and ethical use of persuasive technology: Persuasive tech can positively influence behavior, but ethical concerns arise when it's used deceitfully or without informed consent. Establishing clear regulations and transparency is crucial for ethical use and informed consent.

      While persuasive technology can be used to positively influence behavior through transparent nudges, it becomes problematic when it's used deceitfully or without informed consent. The concept of informed consent raises complex questions, particularly when it comes to understanding the potential consequences of certain actions and trusting the expertise of those making the suggestions. However, the lack of regulation in the tech industry regarding social impact is concerning, as the potential for large-scale behavioral changes is significant. It's essential to establish a proportional level of responsibility and transparency for those designing and implementing persuasive technologies to ensure informed consent and ethical use.

    • Ethical concerns of manipulating users on social media without informed consent: Technology's ability to manipulate and experiment on social media users without informed consent raises ethical questions, impacting individuals and requiring external vetting and modeling.

      While people may give consent to use social media platforms like Facebook, there are ethical concerns regarding the manipulation and experimentation that can occur without informed consent. The impact of these experiments on individuals is not always clear, and there is a need for external vetting and modeling before using people in this way. As technology advances and becomes more capable of understanding our preferences and behaviors, it raises philosophical questions about what it means to have informed consent and what we should optimize for in a world where technology knows us better than we know ourselves. The idea of optimizing for lifelong development and increased self-awareness is an interesting proposition, but it raises questions about who makes the decisions and on what basis. Ultimately, it's important to consider the ethical implications of technology's role in shaping our lives and to ensure that it aligns with our values and goals.

    • The outcomes of technology are unpredictable: Technology's impact on us is ongoing and complex, with potential for both positive and negative consequences. Awareness and active engagement are crucial to assess and mitigate potential risks.

      Technology and interactions with it can significantly change us, but the outcomes are not predetermined or simple. The process is ongoing and unpredictable, and it's essential to be aware of this to assess the consequences and avoid potential negative outcomes. The Zen story of "Maybe" illustrates the unknowability of complex systems and the constant change that can bring both good and bad fortune. Another related example is the transformation brought by cognitive decline, which can lead to a loss of self-awareness and potentially a more contented life in ignorance. Ultimately, we need to recognize that technology is not a simple tool for maximizing our utility but a continuous interaction that requires our active engagement and assessment.

    • Is Happiness Worth the Cost of Cognitive Decline? The ethical discussion surrounding cognitive change should consider the value of cognitive capacities and the distinction between pre- and post-change selves, not just the transformed agent's perspective.

      While there are clear negative consequences to cognitive decline or change brought about by various factors, such as technology or becoming a parent, it's not always clear whether these changes are inherently bad. The speaker raises the hypothetical example of undergoing a frontal lobotomy, which would bring about happiness and contentment but also result in a significant loss of cognitive capacities. The question then becomes: is it worth sacrificing intellectual abilities for happiness? The speaker suggests that there is no straightforward answer, and that the ethical discussion must shift from focusing solely on the transformed agent's perspective to considering other factors, such as the value of cognitive capacities and the principled distinction between the pre- and post-change selves. Ultimately, the speaker argues that there is no clear-cut way to determine which way of being is better, and that the ethical discussion must be nuanced enough to account for these complexities.

    • Consider the long-term impacts of our actions on complex systems: Avoid transformative changes that limit the sustainability of complex systems, and strive for choices that promote their continuity and health.

      We have a responsibility to consider the long-term impacts of our actions on complex systems, such as nature or civilization. Transformative changes that limit the sustainability of these systems, even if the exact outcomes are uncertain, should be avoided. Being omni-considerate, or able to make distinctions and considerations for various stakeholders and consequences, can help build trust and make better decisions. Ultimately, we must strive to make choices that promote the continuity and health of these systems, rather than causing irreversible damage. The world is complex and chaotic, and we cannot know everything, but making more considerate choices is a step in the right direction. Humility in the face of the unknown and the complexity of the world is essential.

    • Exploring the ethical implications of technology's impact on ourselves and society: Reflect on the value of technology in our lives, strive for a humane approach to its development and use, and consider the ethical implications of tech companies' decisions.

      Our use of technology can have profound impacts on ourselves and society, and it's crucial to consider the ethical implications of these impacts. The decision-making processes of tech companies may not always have individuals' best interests at heart, but the technology itself can also bring about positive change. The philosopher LA Paul, who explores questions about the self, decision making, and essence, emphasizes the complexity of this issue. The Center for Humane Technology is hiring for two full-time roles, Director of Mobilization and Digital Manager, for those passionate about promoting responsible and humane technology. Ultimately, we as individuals must reflect on the value of technology in our lives and strive for a humane approach to its development and use.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast,  says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care.  He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. 

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    You Will Never Breathe the Same Again — with James Nestor

    When author and journalist James Nestor began researching a piece on free diving, he was stunned. He found that free divers could hold their breath for up to 8 minutes at a time, and dive to depths of 350 feet on a single breath. As he dug into the history of breath, he discovered that our industrialized lives have led to improper and mindless breathing, with cascading consequences from sleep apnea to reduced mobility. He also discovered an entire world of extraordinary feats achieved through proper and mindful breathing — including healing scoliosis, rejuvenating organs, halting snoring, and even enabling greater sovereignty in our use of technology. What is the transformative potential of breath? And what is the relationship between proper breathing and humane technology?

    Stranger than Fiction — with Claire Wardle

    How can tech companies help flatten the curve? First and foremost, they must address the lethal misinformation and disinformation circulating on their platforms. The problem goes much deeper than fake news, according to Claire Wardle, co-founder and executive director of First Draft. She studies the gray zones of information warfare, where bad actors mix facts with falsehoods, news with gossip, and sincerity with satire. “Most of this stuff isn't fake and most of this stuff isn't news,” Claire argues. If these subtler forms of misinformation go unaddressed, tech companies may not only fail to flatten the curve — they could raise it higher. 

    Behind the Curtain on The Social Dilemma — with Jeff Orlowski-Yang and Larissa Rhodes

    How do you make a film that impacts more than 100 million people in 190 countries in 30 languages?

    This week on Your Undivided Attention, we're going behind the curtain on The Social Dilemma — the Netflix documentary about the dark consequences of the social media business model, which featured the Center for Humane Technology. On the heels of the film's one-year anniversary and its two Emmy Award wins, we're talking with Exposure Labs' Director Jeff Orlowski-Yang and Producer Larissa Rhodes. What moved Jeff and Larissa to shift their focus from climate change to social media? How did the film transform countless lives, including ours and possibly yours? What might we do differently if we were producing the film today?

    Join us as we explore the reverberations of The Social Dilemma — which we're still feeling the effects of over one year later. 

    A Renegade Solution to Extractive Economics — with Kate Raworth

    When Kate Raworth began studying economics, she was disappointed that the mainstream version of the discipline didn’t fully address many of the world issues that she wanted to tackle, such as human rights and environmental destruction. She left the field, but was inspired to jump back in after the financial crisis of 2008, when she saw an opportunity to introduce fresh perspectives. She sat down and drew a chart in the shape of a doughnut, which provided a way to think about our economic system while accounting for the impact to the world around us, as well as for humans’ baseline needs. Kate’s framing can teach us a lot about how to transform the economic model of the technology industry, helping us move from a system that values addicted, narcissistic, polarized humans to one that values healthy, loving and collaborative relationships. Her book, “Doughnut Economics: Seven Ways to Think Like a 21st Century Economist,” gives us a guide for transitioning from a 20th-century paradigm to an evolved 21st-century one that will address our existential-scale problems.

    A Facebook Whistleblower — with Sophie Zhang

    In September of 2020, on her last day at Facebook, data scientist Sophie Zhang posted a 7,900-word memo to the company's internal site. In it, she described the anguish and guilt she had experienced over the last two and a half years. She'd spent much of that time almost single-handedly trying to rein in fake activity on the platform by nefarious world leaders in small countries. Sometimes she received help and attention from higher-ups; sometimes she got silence and inaction. “I joined Facebook from the start intending to change it from the inside,” she said, but “I was still very naive at the time.” 

    We don’t have a lot of information about how things operate inside the major tech platforms, and most former employees aren’t free to speak about their experience. It’s easy to fill that void with inferences about what might be motivating a company — greed, apathy, disorganization or ignorance, for example — but the truth is usually far messier and more nuanced. Sophie turned down a $64,000 severance package to avoid signing a non-disparagement agreement. In this episode of Your Undivided Attention, she explains to Tristan Harris and Aza Raskin how she ended up here, and offers ideas about what could be done at these companies to prevent similar kinds of harm in the future.