
    Podcast Summary

    • Facebook's Prioritization of Profits Over Public Safety

      Facebook whistleblower Frances Haugen revealed the company's focus on profits, sparking debates about social media's role in society and tech firms' responsibility to prioritize the public good.

      Facebook whistleblower Frances Haugen exposed the company's prioritization of profits over public safety, raising concerns about the ethical implications of social media. Haugen, a former product manager, witnessed these conflicts firsthand and decided to blow the whistle despite the potential consequences. Her actions have sparked debates about the role of social media in society and the responsibility of tech companies to prioritize the public good. Despite the backlash, Haugen remains hopeful that change is possible and believes her actions can help Facebook make necessary improvements. The conversation around her whistleblowing is a reminder of the power and influence of social media, and of the importance of factual information and ethical business practices.

    • Facebook's global responsibility for online safety and integrity

      Facebook's expansion into new regions requires addressing unique challenges and understanding local norms for online safety and integrity. Transparency and education are crucial for bridging the gap between the tech industry and the rest of the world.

      As technology companies like Facebook expand their reach to new parts of the world, they take on a greater responsibility for the safety and integrity of the information that their users encounter. This is particularly true in areas where a free and open Internet has not fully developed, making it harder for alternatives to emerge. The speaker, who worked on Facebook's civic integrity team, shares concerns about the darker realities that exist in other parts of the world, which often go unnoticed by the wider public. The speaker also highlights the importance of understanding the unique experiences and norms of Internet users in different parts of the world, as well as the economies of scale that come into play when addressing safety and integrity issues in new languages and regions. Ultimately, the speaker emphasizes the need for greater transparency and education to help bridge the gap in understanding between those working in the tech industry and the rest of the world.

    • Facebook's limited investment in safety and combating misinformation in smaller languages and communities

      Facebook's focus on revenue growth and engagement hinders adequate investment in safety features, particularly in smaller languages and communities, putting public safety at risk.

      Despite its immense profits, Facebook has not invested adequately in ensuring safety and combating misinformation on its platform, particularly in smaller languages and communities. This is due to the fixed cost of supporting new languages and the smaller revenue potential in these areas. Whistleblowers have reported numerous attempts by foreign governments to abuse the platform and mislead their citizens, but Facebook's limited resources have allowed it to address only a fraction of these cases. The public's safety and lives are at stake, yet there is no oversight or opportunity for public input on the level of investment needed. Facebook's self-image as a "scrappy startup" has kept it from making the investments necessary to fully address these issues. Even when its own researchers propose solutions, Facebook often chooses not to implement them if doing so would hurt revenue growth or engagement.

    • Facebook prioritizes growth over safety measures

      Facebook's reluctance to implement safety measures, such as increasing friction in sharing or addressing extreme usage, is due to growth concerns and the fear of being perceived as dangerous.

      Facebook's unwillingness to implement certain measures, despite knowing their positive impact, is driven by a prioritization of growth and an avoidance of acknowledging potential danger. For instance, Facebook has declined to add friction to the sharing process, as Twitter has done, because of the slight cost to growth and the fear of being perceived as dangerous. Another example is the extreme usage by a small number of users, particularly in content production and invitations to groups, which disproportionately drives the spread of misinformation. However, Facebook's lack of transparency about its rate limits makes these issues hard to address effectively. Dealing with extreme usage at scale is a genuinely complex challenge, but Facebook's current approach favors growth at all costs, often framing the trade-offs as false dichotomies.
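
      The "rate limits" mentioned here are, mechanically, caps on how often a single account can act within a time window. As a minimal sketch of the idea, assuming hypothetical thresholds and names (this is not Facebook's actual implementation), a sliding-window limiter might look like this:

```python
# Minimal sliding-window rate limiter; every threshold here is hypothetical.
import time
from collections import defaultdict, deque
from typing import Optional

class ActionRateLimiter:
    """Caps how many times a user may perform an action
    (e.g. group invitations) within a sliding time window."""

    def __init__(self, max_actions: int = 50, window_seconds: float = 3600.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self._history = defaultdict(deque)  # user_id -> recent timestamps

    def allow(self, user_id: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        events = self._history[user_id]
        # Drop events that have aged out of the window.
        while events and now - events[0] > self.window:
            events.popleft()
        if len(events) >= self.max_actions:
            return False  # over the cap: deny the action or add friction
        events.append(now)
        return True

# Example: the 51st group invitation within an hour is throttled.
limiter = ActionRateLimiter(max_actions=50, window_seconds=3600.0)
results = [limiter.allow("heavy_user", now=float(i)) for i in range(51)]
assert results[:50] == [True] * 50 and results[50] is False
```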

    • Facebook's engagement-based ranking algorithm leads to unwanted spread of extreme content

      Facebook's group invitation and content injection system can spread divisive and polarizing content to large numbers of users, and the engagement-based ranking algorithm exacerbates this issue. A more human-scaled approach, such as smaller, topic-focused rooms, might be more effective in preventing the unwanted spread of content.

      The current group invitation and content injection system on Facebook can lead to the unwanted spread of extreme content to large numbers of users, even if they haven't explicitly joined the group. This can create a "perfect storm" for mass distribution of divisive and polarizing content. The speaker suggests that Facebook's engagement-based ranking algorithm, which prioritizes content that generates strong reactions, exacerbates this issue. They also mention that Facebook's shift towards prioritizing groups was driven by declining user engagement, but that a more human-scaled approach, such as smaller, topic-focused rooms, might be more effective in preventing the unwanted spread of content. The speaker emphasizes that most people are unaware of how Facebook builds its systems and that the company relies on user data and past interactions to make content recommendations. However, not all users engage with Facebook in the same way, and this can lead to inaccurate predictions and the unwanted spread of extreme content.

    • Impact of Heavy Facebook Users on Algorithm and Society

      Heavy Facebook users, who consume large amounts of content, have a disproportionate impact on the platform's algorithm and society. Their behavior can lead to a "gradient of anxiety" and the spread of misinformation, making transparency and control over the platform crucial.

      The behavior of heavy Facebook users, who consume thousands of posts per day, has a disproportionate impact on the platform's algorithm compared to average users. This is because Facebook's strategies for dealing with misinformation and harmful content, such as demoting it in feeds, become less effective for those who consume large amounts of content. Additionally, people who are socially isolated and vulnerable, such as those who have recently experienced loss or moved to a new city, are more likely to consume large amounts of Facebook content and be influenced by it. This can lead to a "gradient of anxiety" where the most anxious users pass on their anxiety to other users. The documents also reveal that external researchers and political parties have clued in on these patterns and noticed changes in Facebook's algorithm, leading to a need for more transparency and control over the platform's content. Overall, these findings highlight the importance of understanding the impact of social media algorithms on individual users and society as a whole.

    • The negative impact of engagement-based ranking on social media

      Engagement-based ranking on social media can lead to societal polarization, breakdown of meaningful conversations, and promotion of extreme political views. A shift towards chronological ranking could help mitigate these issues, but requires simultaneous adoption by all major platforms and government oversight.

      Engagement-based ranking on social media platforms, which prioritizes divisive and extreme content, can have detrimental effects on society. This includes the polarization of society, the breakdown of meaningful conversations, and the promotion of extreme political views. The speaker argues that a chronological ranking system, where content is displayed in the order it was posted, could help mitigate these issues. However, for this to be effective, it would require all major social media platforms to adopt this system simultaneously. The speaker also suggests that government oversight and regulation, such as requiring platforms to publish data on their algorithms and holding them accountable for their choices, could help encourage a shift towards chronological ranking. Ultimately, the goal is to create social media systems that foster constructive conversations and perspective synthesis, rather than perpetuating division and gridlock.
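
      To make the contrast concrete, here is a toy comparison of the two ranking regimes in Python. The post fields, engagement weights, and demotion factor are invented for illustration and are not Facebook's actual formula:

```python
# Toy comparison of engagement-based vs. chronological ranking.
# Post fields, weights, and the demotion factor are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float           # seconds since epoch
    likes: int
    comments: int
    reshares: int
    flagged_misinfo: bool = False

def engagement_score(post: Post) -> float:
    # Engagement-based ranking rewards strong reactions: comments and
    # reshares count for more than likes.
    score = post.likes + 5 * post.comments + 10 * post.reshares
    if post.flagged_misinfo:
        score *= 0.2  # demotion: pushed down the feed, not removed
    return score

def rank_by_engagement(posts):
    return sorted(posts, key=engagement_score, reverse=True)

def rank_chronologically(posts):
    # Chronological ranking ignores reactions entirely.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("a", timestamp=100.0, likes=3, comments=0, reshares=0),
    Post("b", timestamp=50.0, likes=2, comments=40, reshares=30),
]
assert rank_by_engagement(posts)[0].author == "b"    # divisive post wins
assert rank_chronologically(posts)[0].author == "a"  # most recent wins
```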

    • Retaining control over online focus

      Individuals should prioritize personal sovereignty and carefully consider online content, rather than relying on AI recommendations. Collaboration and a long-term perspective are key to addressing issues with platforms like Facebook.

      Individuals should retain control over what they focus on online, rather than relying on engagement-based AI recommendations, especially from platforms like Facebook. The speaker, Frances Haugen, emphasizes the importance of personal sovereignty and care in addressing the issues with these platforms. She believes that collaboration and a long-term perspective are more effective than anger and combativeness. Haugen's approach is driven by a desire to heal Facebook and bring more voices into the problem-solving process. She encourages a constructive conversation rather than a demonizing one, acknowledging that change is a long, slow process. The term "morally bankrupt" that Haugen uses should be understood by analogy with financial bankruptcy: platforms can get in over their heads and struggle to manage the content on their sites.

    • Collaborating to address issues with social media giants

      Acknowledge past mistakes, commit to making amends, and approach with hope and collaboration for long-term success in making social media a positive force.

      Addressing the issues with social media giants like Facebook requires a collaborative approach. Instead of being angry, we need to work together to find solutions. This involves acknowledging past mistakes and committing to making amends. Inspiration can be drawn from historical figures like Gandhi and Nelson Mandela, who successfully tackled seemingly insurmountable challenges through peaceful resistance and diligent work. The key is to approach the issue with hope, rather than anger, and to view those involved as collaborative members of society, rather than misunderstood or persecuted figures. This approach allows for a long-term commitment to social change, which is necessary given the potential impact of social media on millions of lives. It's important to remember that working on these issues at companies like Facebook is a crucial contribution to making social media a positive force in our world.

    • Facebook needs to take meaningful action and admit to shortcomings

      Facebook must invite external help, implement stricter regulations, and undergo independent oversight to address issues and regain public trust.

      For Facebook to effectively address the issues it faces and regain public trust, it needs to take meaningful action and admit to any shortcomings. This could involve declaring moral bankruptcy, inviting external help, and implementing stricter regulations. The current situation calls for continuous improvement and collaboration with researchers to develop privacy-preserving techniques. Facebook's past actions, such as lying to Congress and downplaying the negative impacts of its platforms, have hindered progress. Independent oversight and regulatory bodies are necessary to ensure accountability and drive meaningful change. Without these measures, the problems on Facebook, including misinformation and the concentration of toxic content, are likely to worsen and pose a significant risk to individuals and society.

    • Addressing the Growth Rate of Harms on Social Media

      Focusing on increasing the growth rate of solutions to combat harms on social media through transparency, access to data, and strategic interventions. A simple yet effective solution: limiting the number of reshares on social media.

      The growth rate of harms on social media platforms, such as Facebook, is currently outpacing the growth rate of solutions. This is a significant concern, as it could lead to a world where these platforms make our democracies weaker rather than stronger. To address this issue, we need to focus on increasing the growth rate of solutions faster than the tech companies that are causing the harms. This can be achieved through transparency, access to data, and strategic interventions at various leverage points within the complex system of social media platforms. One example of a small but impactful solution is limiting the number of reshares on social media. According to a Facebook data scientist, this simple platform change could be more effective in combating misinformation and toxic content than many of the more complex solutions that have been implemented so far. The idea is that as content gets reshared further down the line, it tends to get worse on average. By limiting the number of reshares, we can prevent the spread of harmful content and reduce its impact on users. This is just one example of how small changes can have a big impact on the complex system of social media platforms and our society beyond.

    • Requiring copy-and-paste for content after a certain number of hops could decrease the spread of toxic or low-quality content

      Introducing a copy-and-paste requirement for content after a certain number of hops could reduce the number of reshares, leading to less toxic or low-quality content on social media at only a modest cost to the platform.

      Reducing the ease of sharing content on social media platforms, even by a small degree, could significantly decrease the spread of toxic or low-quality content. This could be achieved by introducing a copy-and-paste requirement for content after a certain number of hops in a sharing chain. While this might seem like a small change, it would meaningfully reduce the number of reshares, which in turn would cost Facebook some profit. In exchange, it could make the world a safer place by favoring the sharing of genuinely high-quality content over the mass spread of noise. The idea echoes Steve Jobs' belief that if something is truly worth sharing, people will go the extra mile to do so by copying and pasting links rather than hitting an instant reshare button. This tiny change is not a one-click fix for the world, but it could be a step in the right direction toward reducing the noise and making social media a safer and more meaningful space.

    • Limiting Sharing Levels to Decrease Polarizing Content

      Limiting the number of sharing levels per post to two would decrease extreme, polarizing, divisive content and promote safer, more enjoyable social media use. The Center for Humane Technology's campaign at oneclicksafer.tech calls on Facebook to make this change.

      The change being campaigned for would decrease extreme, polarizing, divisive content by limiting the number of levels of sharing per post to two. This content-neutral, language-neutral solution aims to make the platform less reactive and more thoughtful, promoting safer and more enjoyable social media use. The Center for Humane Technology is advocating for this change and encourages individuals to join its campaign at oneclicksafer.tech. Frances Haugen, a social media algorithm expert and whistleblower, believes in designing social media that brings out the best in humanity; listeners can support her work through donations to Whistleblower Aid. The Center for Humane Technology, led by Tristan Harris, is dedicated to catalyzing a humane future and produces the podcast "Your Undivided Attention."
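
      As a toy sketch of the proposed two-hop limit, assuming illustrative names and the threshold of two described above, the gate might look like this:

```python
# Toy sketch of a reshare-depth limit with a copy-and-paste fallback.
# The threshold of two hops and all names here are illustrative.
from dataclasses import dataclass
from typing import Optional

MAX_INSTANT_RESHARE_DEPTH = 2  # hops of one-click resharing per post

@dataclass
class SharedPost:
    content: str
    parent: Optional["SharedPost"] = None

    @property
    def depth(self) -> int:
        # Number of reshare hops between this post and the original.
        return 0 if self.parent is None else self.parent.depth + 1

def try_instant_reshare(post: SharedPost) -> Optional[SharedPost]:
    """One-click resharing works only near the original author. Past
    the threshold the button goes away; a user who still wants to
    spread the content must copy and paste it into a new post."""
    if post.depth >= MAX_INSTANT_RESHARE_DEPTH:
        return None  # the UI would instead offer "copy link to share"
    return SharedPost(post.content, parent=post)

original = SharedPost("original post")
hop1 = try_instant_reshare(original)  # depth 0 -> allowed
hop2 = try_instant_reshare(hop1)      # depth 1 -> allowed
hop3 = try_instant_reshare(hop2)      # depth 2 -> blocked
assert hop1 and hop2 and hop3 is None
```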

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu: The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Behind the Curtain on The Social Dilemma — with Jeff Orlowski-Yang and Larissa Rhodes

    How do you make a film that impacts more than 100 million people in 190 countries in 30 languages?

    This week on Your Undivided Attention, we're going behind the curtain on The Social Dilemma — the Netflix documentary about the dark consequences of the social media business model, which featured the Center for Humane Technology. On the heels of the film's 1-year anniversary and its two Emmy Award wins, we're talking with Exposure Labs' Director Jeff Orlowski-Yang and Producer Larissa Rhodes. What moved Jeff and Larissa to shift their focus from climate change to social media? How did the film transform countless lives, including ours and possibly yours? What might we do differently if we were producing the film today?

    Join us as we explore the reverberations of The Social Dilemma — which we're still feeling the effects of over one year later. 

    Spotlight — A Whirlwind Week of Whistleblowing

    In seven years of working on the problems of runaway technology, we’ve never experienced a week like this! In this bonus episode of Your Undivided Attention, we recap this whirlwind of a week — from Facebook whistleblower Frances Haugen going public on 60 Minutes on Sunday, to the massive outage of Facebook, Instagram, and WhatsApp on Monday, to Haugen’s riveting Congressional testimony on Tuesday. We also make some exciting announcements — including our planned episode with Haugen up next, the Yale social media reform panel we’re participating in on Thursday, and a campaign we’re launching to pressure Facebook to make one immediate change.

    This week it truly feels like we’re making history — and you’re a part of it.

    Disinformation Then and Now — with Camille François

    Disinformation researchers have been fighting two battles over the last decade: one to combat and contain harmful information, and one to convince the world that these manipulations have an offline impact that requires complex, nuanced solutions. Camille François, Chief Information Officer at the cybersecurity company Graphika and an affiliate of the Harvard Berkman Klein Center for Internet & Society, believes that our common understanding of the problem has recently reached a new level. In this interview, she catalogues the key changes she observed between studying Russian interference in the 2016 U.S. election and helping convene and operate the Election Integrity Partnership watchdog group before, during and after the 2020 election. “I'm optimistic, because I think that things that have taken quite a long time to land are finally landing, and because I think that we do have a diverse set of expertise at the table,” she says. Camille and Tristan Harris dissect the challenges and talk about the path forward to a healthy information ecosystem.

    Making tech work better with Frances Haugen

    This week, Facebook whistleblower Frances Haugen joined Match Volume with Isis Leung, Grace Galante, and Julia Kim. Frances Haugen is an advocate for accountability and transparency in social media and a former Facebook employee. She disclosed thousands of Facebook's internal documents to the public to expose the company's ethical shortcomings. In this episode, Haugen shared what led to her courageous decision to blow the whistle on Facebook and her thoughts on the misinformation on social media platforms.

    Spotlight — The Facebook Files with Tristan Harris, Frank Luntz, and Daniel Schmachtenberger

    On September 13th, the Wall Street Journal released The Facebook Files, an ongoing investigation of the extent to which Facebook's problems are meticulously known inside the company — all the way up to Mark Zuckerberg. Pollster Frank Luntz invited Tristan Harris along with friend and mentor Daniel Schmachtenberger to discuss the implications in a live webinar. 

    In this bonus episode of Your Undivided Attention, Tristan and Daniel amplify the scope of the public conversation about The Facebook Files beyond the platform, and into its business model, our regulatory structure, and human nature itself.