    Podcast Summary

    • Social Media's Harmful Consequences and the Urgent Need for Regulation: The lack of regulation in social media platforms has led to harmful consequences, particularly for young people, highlighting the urgent need for societal and legal intervention in the rapidly advancing field of AI.

      The lack of regulation in social media platforms has led to harmful consequences, particularly for young people and their families. This issue is of great concern as we navigate the rapidly advancing field of AI, which some argue was introduced to society through social media. The ongoing lawsuits against social media companies provide valuable insights for potential regulation of AI tools and applications. Laura Marquez-Garrett, an attorney at the Social Media Victims Law Center, shares her personal journey into this work, which began with the revelations from the Facebook whistleblower and the tragic case of a 9-year-old girl who died by suicide after using social media. The realization that the creators of these platforms would not let their own children use them underscores the urgent need for societal and legal intervention.

    • Social media companies' responsibility for third-party content evolving from Section 230 to product liability: Historically shielded from liability for third-party speech, social media companies are increasingly being held accountable for promoting harmful content to users, particularly through targeted content and design features.

      The debate over social media companies' responsibility for third-party content has shifted from Section 230 of the Communications Decency Act to product liability. Historically, Section 230 has shielded these companies from liability for third-party speech. But recent concerns, such as those raised in the documentary "The Social Dilemma," have highlighted the addictive nature of these platforms and their role in promoting harmful content to users, especially vulnerable ones. The product-liability framing argues that these companies are not neutral platforms but are actively harming users through targeted content and design features. The burden of proof in these cases varies, but the core argument is that users are not seeking out harmful content; the platforms are pushing it onto them.

    • New challenges for regulating social media companies: Despite claims of Section 230 immunity, social media companies' use of real-time technology and algorithms, their poor data retention, and their ever-changing products make it difficult to hold them accountable for past actions.

      The real-time technology and algorithms used by social media companies pose new challenges for regulation and accountability. These companies have long operated as though Section 230 grants them blanket immunity, but that is not the case. Their failure to retain data, and their ability to change how their products function at any given moment, make it difficult to hold them accountable for past actions. For instance, Snap's Quick Add feature, which the company claims requires mutual friends or contacts, has reportedly exposed young users to predatory behavior. The inconsistency between companies' statements and users' experiences highlights the need for transparency and regulation. Ultimately, the addictive nature of social media and its potential harms require deeper examination and action.

    • Social media addiction's devastating consequences for children, including suicide: Social media addiction can lead to suicide in children, even those without prior mental health issues. Companies have avoided regulation, leaving litigation as the last resort for parents seeking change.

      Social media addiction among children can have devastating consequences, including suicide, even for those with no prior mental health issues. Dr. Anna Lembke, a doctor in Orange County, shares that taking away a child's social media access can trigger such a strong sense of loss, and such deep dependence, that some children see death as the only option. This addiction not only worsens preexisting conditions but also affects children without any prior issues. The harms of social media to children have been known for years, yet the companies have managed to avoid regulation. Litigation is the last resort for many parents seeking change, as it holds these companies accountable for the damage their products do to children's lives.

    • Tech Companies Prioritize Profits Over People's Wellbeing: Without regulation, tech companies may prioritize profits over people's wellbeing, leading to harmful content or practices, especially for vulnerable populations. Litigation can make it more expensive for these companies to cause harm, making harm a moral issue rather than just a cost of doing business.

      Without regulation, tech companies may prioritize profits over people's wellbeing. Left to self-regulate, they can allow harmful content and practices that cause significant harm, especially to vulnerable populations like children. The use of Section 230 as a shield from accountability only exacerbates the issue. A vivid example is the case of Meta (formerly Facebook) and its role in promoting disordered-eating content to young girls, like Alexis Spence. Meta's algorithms send users content based on their interests, and without proper parental controls, young girls can access harmful material. In Alexis's case, she was exposed to "thinspo" and pro-anorexia content, leading her down a dangerous path of self-harm and body dysmorphia. The companies' actions are morally wrong, but in a values-blind market economy, they see only costs and dollar signs. Litigation is an effective way to make it more expensive for these companies to cause harm, turning it into a moral issue rather than just a cost of doing business.

    • Social media's dark side: Delivering harmful content to vulnerable users. Social media platforms deliver harmful and extreme content to young users despite their search history indicating a preference for positive content, leading to tragic consequences.

      Social media platforms like TikTok have been accused of delivering harmful and extreme content, including suicide-related material, to young users whose search histories indicated a desire for uplifting content. This shift in content, often following a traumatic event or breakup, can have tragic consequences. The cases of Chase Nasca and Mason Edens are sobering reminders. After experiencing breakups, both young men were fed a constant stream of violent and suicidal content, despite search histories showing a preference for motivational speeches and workout tips. Tragically, both took their own lives: Chase in 2021 and Mason in 2022. That these young people received such content without seeking it out raises serious concerns about the role social media companies play in delivering harmful material to vulnerable users. Content moderators, tasked with reviewing this extreme content, are also harmed by constant exposure to it. The issue is not what users post on the internet, but the targeted delivery of harmful content to young users.

    • Social media platforms show users content they can't resist: Social media algorithms can be addictive, potentially leading to harms like addiction and even death for young users. Parental involvement and advocacy are crucial to addressing these issues.

      Social media platforms like TikTok and Snapchat are not simply giving users what they want; they are showing users content they can't help but engage with, with consequences that can include addiction and even death. Terms like "likes" and "engagement" can be misleading, and the algorithms that drive these platforms are difficult for parents to control, especially for younger users. The case of Snapchat and the fentanyl crisis is particularly alarming: overdose deaths among kids aged 13 to 18 have risen to unprecedented levels, and drug deals have reportedly taken place on the app. The speaker calls for a stronger movement of parental involvement and advocacy to end the harmful effects of social media on young lives.

    • Snapchat's Data Destruction Policies Enable Drug Deals: Snapchat's unique features and deliberate data destruction policies make it a significant platform for selling counterfeit drugs, particularly fentanyl, to unsuspecting teenagers, with potentially fatal consequences.

      Snapchat, due to its unique features and policies, has become a significant platform for the sale and distribution of deadly counterfeit drugs, particularly fentanyl, to unsuspecting teenagers. Over 70% of known cases involve this social media app. Snapchat's ephemeral messaging is not the primary issue; instead, it's the company's deliberate destruction of data on the back end that enables dealers to evade detection. Product features like the ability to delete saved messages from another user's account and Snap Maps, which can be used for verification, contribute to this problem. Moreover, Snap's My Eyes Only data vault, which is inaccessible to law enforcement and parents, further complicates matters. Dealers exploit these features to sell drugs to children without fear of being caught. In some cases, children unknowingly encounter drug deals on Snapchat, and the app seems to promote engagement with such content. The consequences can be fatal, with thousands of teenagers reportedly dying from fentanyl poisoning after purchasing counterfeit drugs through the app.

    • Social Media and Illegal Drug Sales: A Harmful Trend. Social media platforms like Snapchat enable illegal drug sales, particularly fentanyl, leading to harm and even death among young people. Legal loopholes hinder accountability, and a multi-faceted approach is needed to address this complex issue.

      The use of social media platforms like Snapchat in the sale and distribution of illegal drugs, particularly fentanyl, is causing harm and even death among young people. Parents have reportedly met with Snap executives, who have attempted to use legal loopholes to avoid accountability. A progression from marijuana and vaping to harder drugs is a common pattern. Litigation has limits as a tool here: it primarily addresses harm to vulnerable individuals and may not be sufficient for the complexities of 21st-century technology. Upgrading to a 21st-century approach would likely require a combination of legal, technological, and societal solutions. The continued existence of social media platforms that contribute to harm without offering clear benefits is a concern that requires further discussion and action.

    • Speak up against harmful tech practices for children: Parents and concerned citizens should be vocal and push for regulation of tech companies' potentially harmful practices for children, using platforms like social media, letters, and protests.

      When children are being harmed, society must take notice, ask questions, and demand answers. The current situation with tech companies and their potentially harmful practices is complex, but regulation could be part of the solution; these companies, however, have historically resisted it. Laura Marquez-Garrett, a guest on the podcast, emphasizes the importance of parents and concerned citizens taking action. She encourages everyone to be "overprotective moms" and speak up through social media, letters, and protests. The influence of these tech companies in DC, and their attempts to shift blame to parents, are unacceptable. It's crucial for all of us to raise our voices and push for change. As the Center for Humane Technology's podcast Your Undivided Attention emphasizes, we need to get loud and make our concerns heard.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI's policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast,  says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care.  He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. 

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions


    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    The Social Dilemma: A Group Discussion

    On Oct. 6, 2020, host Prerna Manchanda led a virtual conversation with the Technically Spiritual community to discuss the documentary ‘The Social Dilemma’. The group discusses the impacts of social media on our mental health and contemplates solutions that promote individual and collective healing. This episode presents the highlights from that discussion.


    Frances Haugen: How One Whistle Makes a Difference

    Just as Facebook was on the verge of becoming Meta Platforms, Inc. in late 2021, a scathing series of articles was published by the Wall Street Journal. The reporting was based on internal documents that detailed the ways Facebook’s platforms “are riddled with flaws that cause harm, often in ways that only the company fully understands.” The source for these internal documents — some tens of thousands of pages — became known as The Facebook Whistleblower.  The name behind these revelations is ex-Facebook product manager Frances Haugen. 

    On this episode, Haugen reveals why she came forward, what she hopes to accomplish with her new book, The Power of One, and what she sees as the perils — and promise — of an ever-changing technology landscape that requires transparency to keep itself honest.


    A Conversation with Facebook Whistleblower Frances Haugen

    We are now in social media's Big Tobacco moment. And that’s largely thanks to the courage of one woman: Frances Haugen.

    Frances is a specialist in algorithmic product management. She worked at Google, Pinterest, and Yelp before joining Facebook — first as a Product Manager on Civic Misinformation, and then on the Counter-Espionage team. But what she saw at Facebook was that the company consistently and knowingly prioritized profits over public safety. So Frances made the courageous decision to blow the whistle — which resulted in the biggest disclosure in the history of Facebook, and in the history of social media.

    In this special interview, co-hosts Tristan and Aza go behind the headlines with Frances herself. We go deeper into the problems she exposed, discuss potential solutions, and explore her motivations — along with why she fundamentally believes change is possible. We also announce an exciting campaign being launched by the Center for Humane Technology — to use this window of opportunity to make Facebook safer.

    How kids are marketed to on social media platforms, and prioritizing well-being in childhood
    In this week's episode of the Where Parents Talk podcast, Lianne Castelino speaks to Josh Golin, father of one and Executive Director of Fairplay, about the commercialization of childhood through social media and other platforms, and what technology companies, government, lawmakers, parents, and educators must do to prioritize well-being among kids.