
    Podcast Summary

    • Perception Gap Between Democrats and Republicans
      Democrats and Republicans overestimate the number of extremists on the other side, leading to toxicity and hindering consensus efforts. Misperceptions can fuel further polarization.

      There is a significant perception gap between Democrats and Republicans in the United States, with both sides overestimating the number of extremists on the other side. According to research from More in Common, a nonprofit organization, Democrats believe that about 1 in 3 Republicans support reasonable gun control, when the actual figure is closer to 65-70%. Similarly, Democrats believe that less than 50% of Republicans consider racism a problem, when it is actually closer to 75-80%. These misperceptions can inject toxicity into politics and hinder efforts to find consensus. For instance, while less than 5% of people on either side felt that physical violence would be justified, each side believed about 50% of the other side would justify violence. These perception gaps are harmful and can fuel further polarization. For technologists, measuring and addressing these perception gaps can provide an objective measure for fighting distortion in the information landscape.
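      The "perception gap" here is simply the difference between what one side believes about the other and what surveys actually measure. A minimal sketch in Python, using the approximate figures quoted above (illustrative numbers, not the underlying survey data):

```python
def perception_gap(perceived_pct: float, actual_pct: float) -> float:
    """Gap, in percentage points, between what one side believes about
    the other and what surveys actually measure."""
    return abs(perceived_pct - actual_pct)

# Approximate figures from the More in Common findings described above.
gun_control = perception_gap(perceived_pct=33, actual_pct=67)    # ~34 points
racism_concern = perception_gap(perceived_pct=50, actual_pct=77)  # ~27 points
violence = perception_gap(perceived_pct=50, actual_pct=5)         # ~45 points
```

Tracking a handful of such numbers over time is what would make the gap an objective, measurable quantity rather than an impression.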

    • Polarization: A Vicious Cycle of Misunderstanding and Hostility
      Polarization deepens as individuals spend more time in echo chambers, reinforcing beliefs and increasing hostility towards the other side, widening perception gaps and making it harder to see each other as human. Bridging the divide requires open and honest dialogue and recognizing the value of diverse perspectives.

      The polarization in American politics is fueled by a vicious cycle of misunderstanding and hostility between Democrats and Republicans. This cycle is driven by the increasing perception gap between the two sides, which leads to more negative views and extreme descriptions of each other. As individuals become more engaged in political debates, they spend more time in echo chambers that reinforce their beliefs and increase their hostility towards the other side. This, in turn, widens their perception gaps and further entrenches their identities, making it more difficult to see the other side as human or coming from a good place. The result is a deepening cycle of polarization that can ultimately lead to the death of democracy. It's important to recognize this cycle and work towards creating the space for conversations and finding common ground, as understanding and empathy are essential for healing and moving towards virtuous upward spirals. Additionally, it's crucial to remember that a significant portion of the population, including nonvoters and infrequent voters, can provide valuable insights into more realistic understandings of others. The emotions of disgust and confusion towards the other side highlight the need for more open and honest dialogue to bridge the divide.

    • Recognizing and Addressing Perception Gaps
      Misunderstandings and perception gaps can lead to confusion and conflict. Seek clarity through conversation and curiosity to understand true intentions. Bridging gaps fosters understanding and reduces conflict.

      Misunderstandings and perception gaps between individuals or groups can lead to confusion and conflict. It's essential to recognize the potential for exaggeration of beliefs and seek clarity through conversation or other means to understand the true intentions of others. The film "The Social Dilemma" highlights this phenomenon, where individuals on different sides of an issue may be exposed to different information, leading to confusion and a desire to understand the other perspective. The concept of curiosity is crucial in bridging these gaps, as it allows for discovery and the potential for unexpected connections. For instance, the perception gap between Democrats and Republicans regarding the number of LGBTQ individuals and high earners in each party is significant. Additionally, it's essential to remember that we also misunderstand our own sides, leading to skewed perceptions of the other. Therefore, recognizing and addressing these perception gaps is crucial for fostering understanding and reducing conflict in our polarized landscape.

    • Education and Media Consumption Impact Perception Gaps Between Democrats and Republicans
      For Democrats, higher education levels widen perception gaps; for Republicans, greater media consumption widens them.

      Despite common beliefs, there is more common ground between Democrats and Republicans on key issues than is generally perceived. However, education levels and media consumption can actually widen the perception gap between the two parties. Contrary to expectations, Democrats with higher education levels tend to have larger perception gaps, while Republicans' perception gaps remain relatively constant across education levels. For Democrats, the more education they have, the more their perception of Republicans' views diverges from reality. So-called "social group homogeneity" also plays a role: Democrats with higher education levels are more likely to surround themselves with like-minded individuals, reinforcing their beliefs. For Republicans, media consumption is a significant factor. The more news Republicans consume, the wider their perception gap becomes. Conservative media ecosystems, such as Fox News, One America News Network, Newsmax, the Drudge Report, and the Wall Street Journal, tend to reinforce each other's viewpoints, creating a self-reinforcing bubble. Liberal media outlets, on the other hand, tend to draw on a greater diversity of sources and are less strongly tied to perception gaps.

    • Perception of media bias among Democrats and Republicans
      Both Democrats and Republicans perceive media bias: Democrats feel conservative media is biased against them, while conservatives believe liberal media is biased against them. This perception gap can hinder efforts to bridge the political divide.

      While there are differences in intergroup relationships and media consumption between Democrats and Republicans, the perception of media bias is a significant issue for both sides. Democrats feel that conservative media has a bias against them, while conservatives strongly believe that liberal media has a bias against them. This perception gap can create pressure to conform beliefs and inhibit attempts to reach across the political divide. The media ecosystem is divided into distinct camps, with each side rewarding different behaviors: agreement and unity on the right, and factual accuracy on the left. Tech companies, with their unique access to data, could potentially help address unfair representation on social media by measuring perception gaps and promoting a more democratic representation of diverse voices.

    • Measuring and addressing perception gaps on a national level through social media
      Social media platforms acting as democratic fiduciaries, measuring and addressing perception gaps, and making it a measurable metric could lead to more productive conversations and reduced polarization.

      Measuring and addressing perception gaps on a national level through social media platforms could lead to more productive conversations and reduced polarization. This idea, as discussed, could involve platforms like Twitter or Facebook acting as democratic fiduciaries by providing a service that highlights shared objects of agreement and perception gaps. These gaps, which are often misrepresented when focusing on both sides, can prevent meaningful dialogue. Instead, by consistently measuring and addressing these gaps, we can begin to bridge the divide and build conversations on a foundation of shared understanding. Additionally, the idea of making perception gap reduction a measurable metric for social media platforms, with potential consequences for non-compliance, could incentivize progress in this area. This approach could be a significant step towards reducing affective polarization and fostering more accurate and productive conversations.

    • Perception gaps in social media
      Social media can lead to perception gaps due to distorted perspectives, causing a disconnect from accurate information and healthy discourse. Strategies to reduce these gaps and technologies to correct distortions are essential.

      The consumption of media, particularly on social platforms like Facebook and Twitter, can lead to increased perception gaps among individuals and communities. This is due in part to the distorted perspectives these platforms can foster, causing a disconnect from accurate information and healthy discourse. Understanding this dynamic and identifying effective strategies to reduce perception gaps could significantly enhance the power and utility of social media tools. The Gell-Mann amnesia effect, in which readers who notice errors in coverage of topics they know well nonetheless trust the same outlet on everything else, further highlights the importance of reliable media and the need for more accurate representation. However, even with awareness of perception gaps, individuals can still fall prey to the emotional pull of biased content, making it crucial to develop technologies that correct, rather than reinforce, these distortions.

    • Amplifying Extreme Voices and the Impact on Democracy
      Platforms can make moderate voices more visible and promote clearer understanding to bridge the divide and foster more productive online conversations.

      The current state of social media and the way it amplifies extreme voices is contributing to political division and exhaustion among less extreme individuals. This can lead to a lack of engagement and a sense of not belonging, which is detrimental to healthy democracy. To address this issue, platforms could make efforts to make moderate voices more visible and provide a sense of safety and belonging for individuals to share their more reasonable perspectives. This could involve showing the number of people who hold similar views or penalizing low-quality content that contributes to perception gaps. By making the invisible visible and promoting content that encourages clearer understanding, we can help bridge the divide and foster more productive and inclusive online conversations.

    • Designing interventions for productive online discussions
      Understanding the goals and rules of conversational spaces can help create interventions that foster productive online discussions. Small-scale tests and identity-focused conversations have shown promise, but more research is needed to identify effective content.

      Addressing perception gaps and reducing polarization in online discussions requires careful consideration of the context and the conversational space. Small-scale tests and identity-focused conversations have shown promise in bringing people together around non-political topics. Creating rules of engagement and defining the parameters of the conversation can change the nature of the discussion and make it more productive. For instance, in science, the goal is truth-seeking, and competition encourages disproving current understandings to advance knowledge. In contrast, in a courtroom, the goal is to establish facts, but the rules of engagement limit what can be said. By understanding the goals and rules of different conversational spaces, we can design interventions that foster productive and non-manipulative discussions. However, more research is needed to identify specific content that effectively encourages positive interactions without making people feel played or persuaded.

    • Focusing on reducing perception gaps through objective methods
      Instead of debating free speech vs censorship, we should use objective methods like machine learning classifiers to measure misperceptions and reduce perception gaps on social media platforms.

      The ongoing debates about free speech versus censorship on social media platforms are unproductive and have been repeated for centuries. Instead, we should focus on finding objective ways to measure misperceptions and reduce perception gaps. Current methods, such as shadow banning and content moderation, are often seen as biased and can anger certain communities. A more effective solution could be using machine learning classifiers to analyze the impact of content on perception gaps for specific populations. This approach would provide objective data and help platforms make more humane, fair, and ethical decisions. Additionally, social media platforms are currently getting it wrong in predictable ways by showing us more extreme versions of reality, leading to increased perception gaps and significant costs for both users and platforms. A shift towards objective assessment of content impact on meta-perceptions would have massive benefits for all involved.
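      The episode does not specify how such a classifier would work, but the idea can be sketched. Below, a simple keyword heuristic stands in for a trained model; the phrase list, threshold, and function names are invented for illustration, and a real system would be trained and validated against survey-measured perception gaps:

```python
# Toy stand-in for a trained classifier that predicts how strongly a post
# caricatures the other side. The phrase list is a made-up illustration.
CARICATURE_PHRASES = [
    "they all want", "every single one of them",
    "the other side always", "no one on their side",
]

def gap_impact_score(text: str) -> float:
    """Return a crude score in [0, 1]: the fraction of caricaturing
    phrases that appear in the text."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in CARICATURE_PHRASES)
    return hits / len(CARICATURE_PHRASES)

def flag_high_impact(posts: list[str], threshold: float = 0.2) -> list[str]:
    """Flag posts whose predicted perception-gap impact exceeds the
    threshold, giving platforms an objective signal rather than ad hoc
    moderation calls."""
    return [post for post in posts if gap_impact_score(post) > threshold]
```

The point of the sketch is the interface, not the heuristic: decisions keyed to a measurable score can be audited per population, which shadow banning cannot.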

    • Tech companies as global coordination infrastructure
      Social media platforms have the power to bridge perception gaps and promote healthier discourse, but their current state harms societal cohesion and warps perceptions.

      Tech companies, particularly social media platforms like Facebook and Twitter, have the potential to become global coordination infrastructure for creating common ground and reducing perception gaps on various issues, from local to international levels. This could lead to healthier discourse, less extremes, and more collaboration on topics such as climate change and COVID response. However, the current state of these platforms is harming societal cohesion by warping perceptions and creating tension points. Institutional players are hesitant to engage in important issues due to the fear of not being in line with their community's perceived views. By addressing these perception gaps, politicians and other actors could better respond to the needs of their constituents and help shift the landscape to align with a more accurate representation of reality.

    • Allowing users to flag and indicate right actions could help scale community norms
      Empowering users to flag and indicate right actions could help scale community norms and reduce online conflict, potentially through a system that levels up trusted users to have moderating powers.

      Social media platforms like Twitter, Facebook, and Instagram are fostering a culture of amplifying moments of disagreement and conflict, creating a negative cycle that is the opposite of the experience of in-person interactions. These moments of drama are made permanent and broadcast to millions, leading to a constant identification of "antagonists" in our online narratives. The scale of conversations happening online makes it seemingly impossible to effectively address this issue through traditional peacekeeping methods. However, an approach inspired by HuffPo's past community moderation model could potentially offer a solution. This model involves allowing users to flag and indicate what they believe is the right thing to do, and if statistically aligned with the actual moderators, users can be leveled up to have moderating powers. This could help to quickly and transparently scale the norms of a small group to a larger community, with the ability to revoke powers if they are abused.
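      The leveling-up mechanism described above can be sketched as a simple agreement check. The data shapes and the promote/revoke thresholds here are assumptions for illustration, not details from the episode or HuffPo's actual system:

```python
def agreement_rate(user_flags: dict[str, bool],
                   mod_decisions: dict[str, bool]) -> float:
    """Fraction of items where a user's flag matched the moderators'
    ruling, computed over the items both have judged."""
    shared = user_flags.keys() & mod_decisions.keys()
    if not shared:
        return 0.0
    matches = sum(user_flags[item] == mod_decisions[item] for item in shared)
    return matches / len(shared)

def update_privileges(rate: float, has_powers: bool,
                      promote_at: float = 0.9, revoke_at: float = 0.7) -> bool:
    """Level up users who consistently align with moderators; revoke
    powers, as the model allows, when alignment drops too far."""
    if rate >= promote_at:
        return True
    if has_powers and rate < revoke_at:
        return False
    return has_powers
```

The separate promote and revoke thresholds add hysteresis, so a trusted user is not stripped of powers over a single disagreement.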

    • Effective methods for healthy and representative conversations in digital spaces
      Training individuals to de-escalate conflicts and moderate conversations, along with small-group deliberative democracy models, can bridge perception gaps, reduce polarization, and foster shared responsibility. Investing in such interventions helps democratic societies outcompete digital authoritarianism and supports effective policy-making and a more listening-oriented society.

      There is a need for more effective methods to facilitate healthy and representative conversations, especially in digital spaces. This can be achieved through interventions such as training individuals to de-escalate conflicts and moderate conversations, as well as implementing small-group deliberative democracy models. These approaches can help bridge perception gaps, reduce polarization, and foster a sense of shared responsibility among participants. Additionally, investing in these interventions is crucial for democracies to outcompete digital authoritarian societies and create a digital democratic society with a hyper-focus on finding common ground. This could lead to more effective policy-making based on widely agreed-upon issues and a more listening-oriented society overall.

    • Reducing polarization in digital space
      To create a healthy democratic digital space, reduce exposure to content exacerbating perception gaps and increase exposure to content with lower gaps, ensuring a democratic and trustworthy process.

      To create a healthy democratic digital space, we need to move beyond the current polarizing frames and find agreement on broad parameters. This can be achieved by reducing the frequency or exposure of content that exacerbates perception gaps and increasing exposure to content associated with lower perception gaps. This approach is beneficial for both individuals and tech companies, as it leads to a more coherent and harmonious digital environment. It's important to ensure that this process is democratic and trustworthy, with credible individuals and institutions involved. This approach can help us compete with digital authoritarianism and prevent the chaos that currently dominates our politics. The tech companies can implement this objectively without the pressure of constant content moderation, leading to a more sustainable and healthy digital ecosystem.
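      Concretely, this exposure adjustment might look like the re-ranking sketch below, where each item carries an engagement score and a perception-gap score in [0, 1]. The weighting scheme and score inputs are hypothetical, not taken from any platform:

```python
def adjusted_score(engagement: float, gap_score: float,
                   weight: float = 0.5) -> float:
    """Boost content associated with low perception gaps and penalize
    content associated with high ones. gap_score is assumed to be in
    [0, 1], with 0.5 treated as neutral."""
    return engagement * (1 - weight * (gap_score - 0.5))

def rerank(items: list[tuple[str, float, float]]) -> list[str]:
    """items: (item_id, engagement, gap_score) triples. Returns the ids
    ordered by gap-adjusted score, highest first."""
    ranked = sorted(items, key=lambda it: adjusted_score(it[1], it[2]),
                    reverse=True)
    return [item_id for item_id, _, _ in ranked]
```

Because the adjustment is a continuous re-weighting rather than a removal decision, it sidesteps the case-by-case content moderation pressure the paragraph describes.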

    • Shifting focus from speaking to listening in digital communication
      In a world of decreased speech costs and increased listening costs, we need to prioritize listening to create meaningful conversations and foster a 'listening society' where we deeply understand and respect each other's perspectives.

      We need to shift the focus from speaking to listening in our digital communication platforms like Twitter. Danah Boyd, a leading researcher in the field of technology and society, emphasized the importance of this shift in a recent podcast conversation. She highlighted how the cost of speech has decreased, while the cost of listening has increased. Objective metrics and authentic questions are potential solutions to help us hear each other better and create more meaningful conversations. Boyd's work revolves around the concept of a "listening society," where we aim to deeply understand and respect one another's perspectives. This idea is especially relevant in today's polarized world, where many people feel unheard and unseen. The lack of dignity in how we see and represent each other in media and everyday life is a pervasive issue that a listening society could help address. The upcoming data from a study on dignity experiences further underscores this need. Overall, the podcast conversation left me feeling inspired about the potential of a listening society and the role technology platforms can play in fostering it.

    • Exploring the Perception Gap and Advancing Humane Technology
      The Perception Gap report reveals the disconnect between how people think technology impacts them versus reality, and the Center For Humane Technology is working to bridge this gap through conversations, resources, and partnerships.

      If you're interested in learning more about the Perception Gap and how to advance humane technology, there are several steps you can take. First, visit perceptiongap.us to download the report and engage with the material in greater depth. Second, the Center For Humane Technology is hosting conversations with podcast guests and their allies after most episodes, providing a chance to connect directly with those working to advance humane technology. To get involved, go to humanetech.com/get-involved. The Center For Humane Technology is dedicated to making these podcasts lead to real change and is supported by generous lead supporters such as the Omidyar Network, Craig Newmark Philanthropies, Vol Foundation, and the Patrick J McGovern Foundation.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet


    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei


    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Do You Want to Become a Vampire? — with L.A. Paul


    How do we decide whether to undergo a transformative experience when we don’t know how that experience will change us? This is the central question explored by Yale philosopher and cognitive scientist L.A. Paul.

    Paul uses the prospect of becoming a vampire to illustrate the conundrum: let's say Dracula offers you the chance to become a vampire. You might be confident you'll love it, but you also know you'll become a different person with different preferences. Whose preferences do you prioritize: yours now, or yours after becoming a vampire? Similarly, whose preferences do we prioritize when deciding how to engage with technology and social media: ours now, or ours after becoming users — to the point of potentially becoming attention-seeking vampires? 

    In this episode with L.A. Paul, we're raising the stakes of the social media conversation — from technology that steers our time and attention, to technology that fundamentally transforms who we are and what we want. Tune in as Paul, Tristan Harris, and Aza Raskin explore the complexity of transformative experiences, and how to approach their ethical design.

    46: Antifragility, Gut Feelings, and the Myth of Pure Evil (with Jonathan Haidt) - Rebroadcast

    (First Broadcast - 4th November 2019) Does that which doesn’t kill you make you weaker? Should we always follow our emotions? Is life a battle between good people and bad people? And critically, what might the adoption of these three popular, but unwise, ideas be doing to a rising generation of young adults? Jonathan Haidt joins Igor and Charles to discuss the three great untruths of modern life, the nature of antifragility, the 'great awokening,' rising violence on US university campuses, and the origin story of the Heterodox Academy. Igor suggests that diversity can help some projects while hindering others, Jon shares his ultimate conflict-resolving ninja skill, and Charles learns that conservative voters come in radically different shapes and sizes. Special Guest: Jonathan Haidt.

    Come Together Right Now — with Shamil Idriss


    How many technologists have traveled to Niger, or the Balkans, or Rwanda, to learn the lessons of peacebuilding? Technology and social media are creating patterns and pathways of conflict that few people anticipated or even imagined just a decade ago. And we need to act quickly to contain the effects, but we don't have to reinvent the wheel. There are people, such as this episode’s guest, Shamil Idriss, CEO of the organization Search for Common Ground, who have been training for years to understand human beings and learn how to help them connect and begin healing processes. These experts can share their insights and help us figure out how to apply them to our new digital habitats. “Peace moves at the speed of trust, and trust can’t be fast-tracked,” says Shamil. Real change is possible, but as he explains, it takes patience, care, and creativity to get there. 

    Mr. Harris Zooms to Washington


    Back in January 2020, Tristan Harris went to Washington, D.C. to testify before the U.S. Congress on the harms of social media. A few weeks ago, he returned — virtually — for another hearing, Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds. He testified alongside Dr. Joan Donovan, Research Director at the Harvard Kennedy School’s Shorenstein Center on Media Politics and Public Policy and the heads of policy from Facebook, YouTube and Twitter. The senators’ animated questioning demonstrated a deeper understanding of how these companies’ fundamental business models and design properties fuel hate and misinformation, and many of the lawmakers expressed a desire and willingness to take regulatory action. But, there’s still room for a more focused conversation. “It’s not about whether they filter out bad content,” says Tristan, “but really whether the entire business model of capturing human performance is a good way to organize society.” In this episode, a follow-up to last year’s “Mr. Harris Goes to Washington,” Tristan and Aza Raskin debrief about what was different this time, and what work lies ahead to pave the way for effective policy. 

    You Will Never Breathe the Same Again — with James Nestor


    When author and journalist James Nestor began researching a piece on free diving, he was stunned. He found that free divers could hold their breath for up to 8 minutes at a time, and dive to depths of 350 feet on a single breath. As he dug into the history of breath, he discovered that our industrialized lives have led to improper and mindless breathing, with cascading consequences from sleep apnea to reduced mobility. He also discovered an entire world of extraordinary feats achieved through proper and mindful breathing — including healing scoliosis, rejuvenating organs, halting snoring, and even enabling greater sovereignty in our use of technology. What is the transformative potential of breath? And what is the relationship between proper breathing and humane technology?