    Podcast Summary

    • Impact of Tech Companies' Business Models on Society and Democracy Discussed in Congressional Hearing
      The business model of tech companies, particularly social media platforms, selling attention through algorithms and amplification raises concerns for democracy and mental health, necessitating long-term federal oversight.

      The business models of tech companies, particularly social media platforms, have a significant impact on public discourse and users' mental health, and the potential risks to democracy call for long-term federal oversight. During a recent congressional hearing on algorithms and amplification, Tristan Harris testified alongside representatives from tech companies and experts on disinformation. While senators asked insightful questions about the business model of selling attention, Harris argued that the entire model of capturing human attention and selling it cheaply may not be a good way to organize society. The platforms responded with arguments around the margins, but the underlying issue of the business model's impact on society remains a pressing concern.

    • The argument for removing harmful content is persuasive but misleading
      Platforms focus on explicit violations but ignore subtle ways of transforming users into attention-seekers, which can have significant impacts on individuals and society.

      During a congressional hearing, tech platform representatives argued that they are making significant strides in removing harmful content by using AI and hiring more content moderators, emphasizing their community standards and quarterly reports on content removal. The speaker pointed out that this argument, while persuasive, is misleading: it focuses on explicit violations of community guidelines, while the real issue lies in the subtle ways these platforms transform users into attention-seeking individuals who are sensationalized, divided, and outraged. Even though the proportion of violative views on YouTube is less than 2%, it's crucial to recognize the impact of the design model that fuels the platforms' business, the power of framing, and the need to question the underlying assumptions of the problem statement.
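
      To make the "less than 2%" figure concrete: a violative view rate is simply the share of sampled views that land on policy-violating content. Here is a minimal sketch of that arithmetic in Python; the counts are invented for illustration and are not any platform's real data:

      ```python
      # Hypothetical violative view rate (VVR) calculation.
      # Counts are invented for illustration, not real platform data.
      violative_views = 18_000   # sampled views on content later found to violate policy
      total_views = 1_000_000    # all sampled views in the same period

      vvr = violative_views / total_views
      print(f"Violative view rate: {vvr:.2%}")  # -> 1.80%, i.e. "less than 2%"
      ```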

    • Transparency in recommendation algorithms for harmful content
      Revealing the frequency of algorithmic recommendations leading to harmful content could increase tech companies' liability and promote greater transparency.

      Transparency and accountability in tech companies' recommendation algorithms, particularly around the promotion of harmful content, are a crucial step towards holding these platforms liable for the spread of toxic material. However, defining what constitutes a recommendation and how often it occurs can be complex. For instance, a tweet that appears in your feed due to an algorithm is still a recommendation, even if you weren't actively scrolling. Revealing the number of times such recommendations have led to the promotion of harmful content could be a significant key to unlocking greater responsibility and liability for tech companies. This shift from transparency to liability would mean that the toxic content affecting society would also become a liability on the companies' balance sheets. It's important to remember, though, that the larger issue lies upstream, in the culture and narrative views of reality that shape our daily informational and conversational environments. Despite efforts to remove harmful content, the sheer volume of new content uploaded every day makes catching every instance impossible. Ultimately, this underscores the need for ongoing dialogue and collaboration between tech companies, policymakers, and society as a whole to address the complex challenges of the digital information age.
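
      One way to picture the metric being proposed here: log every item shown to users, record whether the recommender surfaced it, and count how many algorithmically surfaced items are later flagged as harmful. A minimal Python sketch under those assumptions; the event-log structure and field names are invented for illustration:

      ```python
      from dataclasses import dataclass

      # Hypothetical impression-log entry; fields are invented for illustration.
      @dataclass
      class Impression:
          item_id: str
          algorithmic: bool  # surfaced by the recommender, not by search or a follow
          flagged: bool      # later found to violate policy

      def harmful_recommendation_count(log: list[Impression]) -> int:
          """Count algorithmic recommendations that pointed at later-flagged content."""
          return sum(1 for e in log if e.algorithmic and e.flagged)

      log = [
          Impression("a", algorithmic=True, flagged=False),
          Impression("b", algorithmic=True, flagged=True),   # counted
          Impression("c", algorithmic=False, flagged=True),  # user sought it out: not counted
      ]
      print(harmful_recommendation_count(log))  # -> 1
      ```

      A disclosed count of this kind is what would turn algorithmic amplification of toxic content from an invisible externality into a measurable liability.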

    • Tech Platforms' Business Models Foster Anxious and Disinformed Society
      Senators raise concerns about the impact of tech platforms' business models on human behavior, arguing that they exploit human emotions and attention for profit, leading to a polarized, anxious, and disinformed society. The issue is not just about content policies but the fundamental design of these systems.

      The current business models of tech platforms, which prioritize human attention and engagement, inadvertently foster a polarized, anxious, and disinformed society. Senators have raised concerns about the impact of these models on human behavior, arguing that they are turning us into a new domesticated species that is incompatible with a healthy civilization. The issue is not just about content policies, but the fundamental design of these systems. For instance, staged animal rescue videos on YouTube, which are not harmful in themselves but rack up millions of views because of the incentive to create a social performance, highlight this problem. These videos are not the result of a bad apple, but of the system's design. Shoshana Zuboff, in "The Social Dilemma," makes the insightful point that we don't find it radical to ban the sale of human organs or orphans. Similarly, it's important to reconsider the ethical implications of tech platforms' business models, which exploit human emotions and attention for profit. It's crucial to acknowledge the unintended consequences of these models, such as miscategorization and polarization, and to give credit to those, like Guillaume Chaslot, who have been advocating for change for years. The conversation around tech ethics needs to shift from specific content policies to a more fundamental reevaluation of the role these platforms play in shaping our humanity.

    • Tech companies altering human behavior with detrimental effects
      Tech companies' business models threaten democracy and government decision-making, requiring action to strengthen democratic institutions.

      The business models of tech companies like TikTok, Facebook, and Google rely on subtly altering human behavior, which can have detrimental effects on society and government decision-making. This can be compared to the Cold War investment in continuity of government: the US government's ability to function and make decisions is being undermined by the divisive and polarizing nature of social media. Furthermore, the rise of digital closed societies such as China's, which use technology to strengthen autocratic rule, poses a threat to open societies that allow market forces to degrade democracy through technology. The US government and open societies as a whole need to take action to strengthen their democratic institutions and counteract the negative effects of tech companies' business models.

    • Creating stronger open societies in the post-digital age
      Focus on solutions rather than just problems, explore more precise tools for addressing digital challenges, and shift the conversation towards understanding engagement metrics instead of going in circles.

      The discussion at the hearing focused on the need for transformative changes that create stronger open societies in the post-digital age, rather than merely mitigating harms and settling for less bad digital open societies. There was a new bipartisan sense that action needs to be taken, and that the focus should be on solutions rather than just the problems. There was also a concern, however, that the focus on Section 230 of the Communications Decency Act as a solution may be misplaced, and that more precision about the problem and more useful tools for addressing it should be explored. The conversation should shift towards understanding what "optimizing for engagement" means and identifying the wrong engagement metrics, rather than getting stuck in debates about Section 230. Everyone is tired of going in circles; the goal is to find real solutions to the digital challenges we face.

    • Regulating social media's impact on society
      While removing Section 230 protections for engagement-optimized platforms is a step, it doesn't fully address the issue. Broader definitions of engagement and attention companies could help, but effective regulation requires balancing innovation, free speech, and user well-being.

      While removing Section 230 protections for companies that optimize for engagement could be a step in the right direction, it may not fully address the issue at hand. The problem goes beyond just metrics and extends to the design of these platforms, which are inherently geared towards maximizing user engagement. Defining engagement more broadly and removing protections for attention companies might help, but it may not solve the core issue of platforms prioritizing user engagement over user well-being. The discussion also touched upon the limitations of regulation and the need to consider the unintended consequences of proposed solutions. Ultimately, the challenge lies in finding a way to regulate these platforms effectively without stifling innovation or infringing upon free speech. The conversation highlighted the complexity of the issue and the need for a nuanced approach to addressing the negative impacts of social media on society.

    • Companies face liability for negative consequences of attention-based platforms
      Companies must be held accountable for the negative effects of their attention-based platforms. Transformational change and investment in digital social infrastructure are crucial steps towards a healthier attention economy.

      As we navigate the complex issues surrounding the attention economy and the harms it can cause, it's crucial to recognize that attention-based companies now face liability for the negative consequences of their platforms. This is a shift from the previous immunity these companies enjoyed. The question then becomes, what penalties would be significant enough to prevent such harm in the first place? The media environment today is toxic and requires transformational change, but there's no existing model for a healthy attention economy at this scale. Investing in digital social infrastructure on a massive scale, similar to what's being proposed for physical infrastructure, could be a step in the right direction. However, it's essential not to scrap all existing technology but instead ask when it's necessary to transition to something fundamentally better. Additionally, the impact of misinformation on less fortunate countries is a major concern, and the current rate of growth in new threats outpaces our capacity to respond.

    • Struggling to keep up with misinformation on social media
      Facebook takes an average of 28 days to address misinformation, and there's a six-day gap between English and other languages, highlighting the need for continued efforts to ensure a safe and healthy digital environment.

      The volume of unmoderated information on social media platforms like WhatsApp and Facebook is staggering, with 200 billion messages a day on WhatsApp and 15 billion on Facebook. Fact-checking organizations are struggling to keep up: Facebook takes an average of 28 days to address misinformation, and there is a six-day gap between action on English-language content and content in other languages. The digital world presents unique challenges, as progress made in civil rights and national security in the physical world can be undone in the digital space. Misinformation and hate speech spread rapidly and can undermine efforts such as vaccine campaigns. The recent Senate hearings focused on these structural issues, and it's crucial that we continue to address them to ensure a safe and healthy digital environment.
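
      To see how the cited averages fit together arithmetically, here is a minimal Python sketch; the per-post turnaround numbers are invented so that their means mirror the figures mentioned in the episode:

      ```python
      from statistics import mean

      # Hypothetical days-to-action for fact-checked posts, grouped by language.
      # Values are invented so the averages match the cited figures.
      turnaround_days = {
          "English": [22, 25, 28],  # mean: 25 days
          "other":   [28, 31, 34],  # mean: 31 days
      }

      means = {lang: mean(days) for lang, days in turnaround_days.items()}
      print(means["other"] - means["English"])  # -> 6 (the six-day language gap)

      all_days = [d for days in turnaround_days.values() for d in days]
      print(mean(all_days))  # -> 28 (the overall average cited)
      ```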

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s non-disparagement agreements.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world.

    Can We Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path.

    Rethinking Capitalism: In Conversation with Daron Acemoglu. The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives.

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children in the wake of social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing and offers a look ahead. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    You Will Never Breathe the Same Again — with James Nestor

    When author and journalist James Nestor began researching a piece on free diving, he was stunned. He found that free divers could hold their breath for up to 8 minutes at a time, and dive to depths of 350 feet on a single breath. As he dug into the history of breath, he discovered that our industrialized lives have led to improper and mindless breathing, with cascading consequences from sleep apnea to reduced mobility. He also discovered an entire world of extraordinary feats achieved through proper and mindful breathing — including healing scoliosis, rejuvenating organs, halting snoring, and even enabling greater sovereignty in our use of technology. What is the transformative potential of breath? And what is the relationship between proper breathing and humane technology?

    Mind the (Perception) Gap — with Dan Vallone

    What do you think the other side thinks? Guest Dan Vallone is the Director of More in Common U.S.A., an organization that’s been asking Democrats and Republicans that critical question. Their work has uncovered countless “perception gaps” in our understanding of each other. For example, Democrats think that about 30 percent of Republicans support "reasonable gun control," but in reality, it’s about 70 percent. Both Republicans and Democrats think that about 50 percent of the other side would feel that physical violence is justified in some situations, but the actual number for each is only about five percent. “Both sides are convinced that the majority of their political opponents are extremists,” says Dan. “And yet, that's just not true.” Social media encourages the most extreme views to speak the loudest and rise to the top—and it’s hard to start a conversation and work together when we’re all arguing with mirages. But Dan’s insights and the work of More in Common provide a hopeful guide to unraveling the distortions we’ve come to accept and correcting our foggy vision.

    Do You Want to Become a Vampire? — with L.A. Paul

    How do we decide whether to undergo a transformative experience when we don’t know how that experience will change us? This is the central question explored by Yale philosopher and cognitive scientist L.A. Paul.

    Paul uses the prospect of becoming a vampire to illustrate the conundrum: let's say Dracula offers you the chance to become a vampire. You might be confident you'll love it, but you also know you'll become a different person with different preferences. Whose preferences do you prioritize: yours now, or yours after becoming a vampire? Similarly, whose preferences do we prioritize when deciding how to engage with technology and social media: ours now, or ours after becoming users — to the point of potentially becoming attention-seeking vampires? 

    In this episode with L.A. Paul, we're raising the stakes of the social media conversation — from technology that steers our time and attention, to technology that fundamentally transforms who we are and what we want. Tune in as Paul, Tristan Harris, and Aza Raskin explore the complexity of transformative experiences, and how to approach their ethical design.

    David Price ’61 retires from Congress after more than three decades of service to North Carolina’s fourth district

    Former congressman David Price ’61 joined Catalyze with scholar co-hosts Benny Klein ’24 and Elias Guedira ’26 in December 2022, during the politician’s final month in office. Price, who retired this January, represented North Carolina’s fourth district, which includes Orange County and Chapel Hill.

    The alumnus visited the Foundation to talk about his lifelong career in public service and his more than three decades in the U.S. House of Representatives. Price also spoke about his involvement as a scholar in the civil rights movement at UNC–Chapel Hill, some of his proudest political accomplishments, and his post-retirement plans.

    Price released the fourth edition of his book, The Congressional Experience, in 2020. He revised the book to cover the Obama and Trump administrations. 

    After receiving his bachelor’s degree at Carolina, he pursued graduate studies at Yale University to earn a theology degree (1964) and a PhD in political science (1969). Price is a professor of political science at Duke University’s Sanford School of Public Policy.
