
    Esther Perel on Artificial Intimacy (rerun)

    September 06, 2024

    Podcast Summary

    • Relationships and Technology: Technology shapes our relationships in profound ways. It offers convenience but risks creating artificial connections that may lead to loneliness. Prioritizing genuine human interactions is crucial for building deep connections and combating societal issues like isolation.

      In today's world, technology significantly impacts our relationships by introducing both artificial connections and challenges to intimacy. Experts like Esther Perel emphasize the importance of fostering genuine human connections, warning that reliance on AI for companionship can lead to loneliness instead of solving it. We need to address how technology affects our ability to form deep, meaningful relationships and ensure that it enhances, rather than undermines, our connections with one another. The goal should be to create a balance that allows us to enjoy the benefits of technology without losing sight of the core human interactions that sustain us. As we navigate this digital age, it’s vital to prioritize nurturing real relationships to combat societal issues like loneliness and maintain social trust, which are essential for a healthy community.

    • Artificial Intimacy: Technology, especially AI, alters how we form connections, offering convenience but lacking the nuanced depth of real human relationships. This can leave individuals feeling empty despite having instant access to support.

      As technology increasingly shapes how we connect with others, particularly through the rise of artificial intelligence in relationships, it brings both opportunities and challenges. While AI can provide instant support, it lacks the deep, nuanced human connection essential for understanding complex emotional issues. Relationships are multi-dimensional and require more than just facts; they need empathy, genuine interaction, and embodied experiences. Having access to AI is helpful for quick answers, yet it often leaves an emptiness because it cannot replicate the richness of real human interactions that involve addressing the intricacies and dilemmas inherent in our relationships. Ultimately, while AI can assist in our relational lives, it can't replace the profound depth that comes with human connection, leaving us to navigate complex decisions and moral dilemmas on our own.

    • Intimacy and Technology: Reliance on technology for emotional support lowers our expectations for genuine intimacy, hindering psychological growth and our ability to cope with life's complexities.

      Modern technology, particularly individual-focused apps and virtual support, may lead us to lower our expectations for emotional connections and intimacy. While people often feel satisfied with AI-assisted therapy, this satisfaction stems from diminished expectations rather than genuine connection. True emotional growth involves engaging with discomfort, accepting complexities, and maintaining relationships that challenge us. Relying solely on technology for intimacy can hinder our psychological development, making it harder to navigate emotions, relationships, and real-life situations. Ultimately, our dependence on artificial interactions might stall our maturity, leaving us ill-equipped to handle the messiness of genuine human connection and the inherent challenges of life.

    • Technology's Impact: Although technology helps connect us, it risks creating superficial relationships and increased anxiety by simplifying human interactions, which thrive on complexity and effort.

      Technology, while beneficial in many ways like connecting us and providing support, can diminish our ability to engage meaningfully in relationships. It allows for convenience but can lead to superficial connections and increased anxiety. True relationships thrive on complexity, effort, and feeling, which technology often simplifies or removes, impacting our social fabric and personal growth.

    • Digital Relationships: Digital replicas may ease loneliness but cannot replace the depth of real human relationships and shared emotional experiences, which include suffering and growth.

      Creating digital replicas, like AI companions or virtual friends, may help with loneliness but raises questions about the quality of real human connections. While these virtual tools can provide temporary comfort similar to children's imaginary friends, they risk stunting emotional development if we rely on them too heavily. True relationships involve shared human experiences, including suffering and growth, which virtual interactions can't fully replicate. There's danger in viewing suffering as merely something to erase rather than understanding it as part of being alive. Balancing technology's benefits with the need for genuine emotional connections is crucial so we don't lose the richness of real life.

    • Vital Connections: Life's true vitality comes from meaningful connections, both physical and virtual. We must design technology that enhances rather than replaces these interactions to ensure genuine human experiences and emotional fulfillment.

      Suffering is a part of life, and the reality of human experience cannot be flattened into a simple display of happiness or efficiency. True vitality comes from connecting with others in meaningful ways, both in-person and through technology. We must design technology that encourages real-life interactions rather than replacing them. Hybrid experiences, such as combining virtual and physical events, can inspire spontaneity and connection that enhance our lives. The danger lies in believing that virtual experiences are sufficient for fulfillment. As society evolves with technology, we must challenge the idea that synthetic relationships can truly satisfy our human need for connection. Questioning our reliance on virtual interactions is essential to understanding what it means to feel alive and engaged in the world.

    • Human Connection: Technology should emphasize real human connections to combat loneliness and anxiety. Responsible innovation must prioritize relationships, as they are crucial for individual and societal well-being.

      Our society is increasingly relying on technology, which has led to feelings of anxiety and loneliness among individuals. It’s essential for technology companies to prioritize real human relationships and complexity in their designs, much like we’ve learned to be cautious about junk food. Just as we recognize the value of nutritious food, we must also acknowledge the importance of nurturing relationships in our lives. Responsible innovation should consider the social and relational impacts of technology. Without addressing how we connect with one another, we risk perpetuating issues like division and isolation in our communities. Ultimately, fostering healthier relationships has far-reaching benefits for individual and societal well-being. Conversations about technology must include discussions about relationships, which are central to our humanity and have been overlooked in various technological advancements.

    Recent Episodes from Your Undivided Attention

    AI Is Moving Fast. We Need Laws that Will Too.


    AI is moving fast. And as companies race to roll out newer, more capable models–with little regard for safety–the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The CHT Framework for Incentivizing Responsible AI Development

    Further Reading on Air Canada’s Chatbot Fiasco 

    Further Reading on the Elon Musk Deep Fake Scams 

    The Full Text of SB1047, California’s AI Regulation Bill 

    Further reading on SB1047 

    RECOMMENDED YUA EPISODES

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Can We Govern AI? with Marietje Schaake

    A First Step Toward AI Regulation with Tom Wheeler

    Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.

    Esther Perel on Artificial Intimacy (rerun)


    [This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that there’s another harmful “AI” on the rise — Artificial Intimacy — one that is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

    RECOMMENDED MEDIA 

    Mating in Captivity by Esther Perel

    Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire

    The State of Affairs by Esther Perel

    Esther takes a look at modern relationships through the lens of infidelity

    Where Should We Begin? with Esther Perel

    Listen in as real couples in search of help bare the raw and profound details of their stories

    How’s Work? with Esther Perel

    Esther’s podcast that focuses on the hard conversations we're afraid to have at work 

    Lars and the Real Girl (2007)

    A young man strikes up an unconventional relationship with a doll he finds on the internet

    Her (2013)

    In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

    RECOMMENDED YUA EPISODES

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    The Three Rules of Humane Tech

    Digital Democracy is Within Reach with Audrey Tang


    CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.


    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins


    Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.


    RECOMMENDED MEDIA

    The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody’s book on the history of lobbying.

    The Code: Silicon Valley and the Remaking of America - Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill

    More information on the Google antitrust ruling

    More Information on KOSPA

    More information on the SOPA/PIPA internet blackout

    Detailed breakdown of Internet lobbying from Open Secrets


    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Can We Govern AI? with Marietje Schaake
    The Race to Cooperation with David Sloan Wilson


    CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t verify the size of their lobbying budget, their actual lobbying spend was much lower: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.


    The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.


    This Moment in AI: How We Got Here and Where We’re Going


    It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then–as funding, research, and public interest in AI have exploded–and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.



    RECOMMENDED MEDIA

    The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI.

    Info Sheet on KOSPA: More information on KOSPA from FairPlay.

    Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.

    AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva. 

    Using AlphaFold in the Fight Against Plastic Pollution: More information on Google’s use of AlphaFold to create an enzyme to break down plastics. 

    Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.


    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Can We Govern AI? with Marietje Schaake 

    The Three Rules of Humane Tech

    The AI Dilemma


    Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.


    Your Undivided Attention
    August 12, 2024

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt


    AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

    RECOMMENDED MEDIA

    Sculpting Evolution: Information on Esvelt’s lab at MIT.

    SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis.

    The Framework for Nucleic Acid Synthesis Screening: The Biden admin’s suggested guidelines for DNA synthesis regulation.

    Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress.

    The AlphaFold Protein Structure Database

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.

    How to Think About AI Consciousness With Anil Seth


    Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

    In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

    RECOMMENDED MEDIA

    Frankenstein by Mary Shelley

    A free, plain-text version of Shelley’s classic of gothic literature.

    OpenAI’s GPT4o Demo

    A video from OpenAI demonstrating GPT4o’s remarkable ability to mimic human sentience.

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

    The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

    What It’s Like to Be a Bat

    Thomas Nagel’s essay on the nature of consciousness.

    Are You Living in a Computer Simulation?

    Philosopher Nick Bostrom’s essay on the simulation hypothesis.

    Anthropic’s Golden Gate Claude

    A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

    RECOMMENDED YUA EPISODES

    Esther Perel on Artificial Intimacy

    Talking With Animals... Using AI

    Synthetic Humanity: AI & What’s At Stake

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar


    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn


    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake


    War is a Laboratory for AI with Paul Scharre


    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows few signs of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth


    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu


    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson - Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world.

    Can we Have Pro-Worker AI? - Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path.

    Rethinking Capitalism: In Conversation with Daron Acemoglu - The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives.

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

