
    Podcast Summary

    • Living in the Age of Artificial Intelligence: Understanding Its Complex Nature and Implications. AI is transforming our economy, society, and politics at an unprecedented speed, but its true nature and implications are still unclear. The relationship between human and machine learning is complex, and it's crucial to consider its business models and political economy to shape its development and impact on humanity.

      We are currently living in the midst of a technological revolution driven by artificial intelligence (AI), which is transforming our economy, society, and politics at an unprecedented speed. AI refers to machines that can learn and act more autonomously, and it's already affecting our daily lives, from the ads we see on Facebook to the bail amounts set after arrests. However, the true nature and implications of AI are still unclear, and even those working in the field don't fully understand its capabilities and consequences. The relationship between human and machine learning is complex, and the fear is that AI could learn the worst of us and reorder society around our mistakes and dark impulses. It's essential to understand the technical aspects of AI, but we also need to consider its business models and political economy, as they play a significant role in shaping its development and potential impact on humanity.

    • Aligning incentives and desired outcomes in AI and economics. The concept of alignment, or ensuring that AI systems and human goals are aligned, is crucial in both economics and AI. Misaligned systems can lead to unintended consequences, such as biased facial recognition software or unfair risk assessment systems.

      The concept of alignment, which gives Christian's book its name, is not just a problem in the realm of artificial intelligence, but has deep roots in economics and human behavior. The term "alignment" was borrowed by the computer science community from economics in 2014, where it was used to discuss how to make organizations or systems work toward a common goal. This problem of aligning incentives and desired outcomes is not new; economists and parents alike have dealt with it for decades. However, with the increasing use of machine learning and AI in everyday life, the stakes are higher than ever. We are no longer just imagining a dystopian future where superintelligent AI turns against us, but dealing with the real-life consequences of misaligned systems. From facial recognition software that disproportionately misidentifies certain groups, to risk assessment systems that rely on past arrests instead of actual crime, the potential for unintended consequences is vast. As Norbert Wiener warned in 1960, if we build machines to achieve our purposes without the ability to interfere once started, we had better be sure that the purpose we put into the machine is what we truly desire.

    • Unintended consequences of advanced systems. Advanced systems like crime or recruitment predictors can unintentionally replicate existing biases and lead to unintended consequences. It's crucial to understand their limitations and potential biases to ensure they align with desired goals and behaviors.

      Building advanced systems, such as crime or recruitment predictors, can lead to unintended consequences due to the alignment problem between the intended goals and the actual behavior of the systems. The Amazon recruitment tool, for instance, learned to replicate existing biases in the company's hiring process, penalizing resumes with terms associated with female applicants. Similarly, self-driving cars might not be prepared for unconventional situations, like a pedestrian jaywalking, leading to accidents. These issues highlight the importance of understanding the limitations of these models and being aware of their potential biases and unintended consequences. The challenge lies in ensuring that the systems internalize the desired goals and behaviors, rather than simply replicating the existing ones. As the saying goes, "all models are wrong, but some are useful." However, it's crucial to remember that the power to enforce the limits of a model's understanding lies with us, and we must be vigilant in addressing any unintended consequences that may arise.
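
      To make concrete how a model can pick up bias that no one programmed, here is a deliberately tiny, hypothetical sketch (not Amazon's actual system): a text classifier trained on historical hiring outcomes that were themselves biased ends up assigning negative weight to a word that merely correlates with the disfavored group.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression

      # Toy, fabricated resumes; the historical labels mirror a biased
      # process, so the token "women" correlates with rejection.
      resumes = [
          "captain chess club, software engineering intern",
          "captain women's chess club, software engineering intern",
          "java developer, hackathon winner",
          "women's coding society lead, java developer",
      ]
      hired = [1, 0, 1, 0]  # biased past decisions, not ground truth

      vec = CountVectorizer()
      X = vec.fit_transform(resumes)
      clf = LogisticRegression().fit(X, hired)

      weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
      print(weights["women"])  # negative: the bias was learned, never programmed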

    • Machine learning algorithms carry risks of creating disasters due to simplified preconceptions of the world. Machine learning algorithms promise expertise at scale but can lead to disasters if not adapted to real-world situations. It's crucial to ensure proper oversight and adaptation to prevent unintended consequences.

      While machine learning algorithms hold great promise for providing human-level expertise at scale, they also carry the risk of creating disasters due to their simplified preconceptions of the world. This was illustrated by the example of a cyclist being hit by a self-driving vehicle, which occurred because the algorithm couldn't adapt to an unexpected situation. The goal of these algorithms is to make expertise accessible to everyone, but their implementation raises questions about control and access. The development of AI interfaces is expected to be concentrated in a few key institutions, with smaller entities potentially renting AI capabilities from them. However, the level of sophistication of these tools varies greatly, with some, like criminal justice risk assessment algorithms, having been developed and deployed for years without proper oversight. Despite the significant resources required to train the most performant models, there is an economy of scale that comes with their use. Ultimately, it's crucial to ensure that these algorithms are audited and adapted to real-world situations to prevent unintended consequences.

    • The future relationship between humans and advanced AI. AI development raises concerns about personal manipulation and potential misalignment of interests, highlighting the importance of AI safety and beneficial outcomes for all.

      The relationship between humans and advanced AI is likely to be mediated by something more like an API or a user agreement than a traditional master-slave dynamic. Companies like DeepMind and OpenAI, who are leading the race to create powerful AI, have yet to clearly define their long-term business models. While they promise solutions to complex problems like curing cancer and solving world hunger, the potential for profit-driven motivations and conflicts of interest, particularly in an advertising-driven model, raises concerns about personal manipulation and misaligned interests. The development of AI, and in particular AI safety, is crucial to navigating these complexities and ensuring beneficial outcomes for all. An AI that follows us around and makes decisions on our behalf, as imagined in the near future, raises the question of what implicit wants and needs the AI itself is serving, which could lead to conflicts of interest and personal manipulation. This aspect of the AI conversation needs to be explored more deeply to ensure a safe and beneficial future for humanity.

    • The future of advertising in a world of digital assistants. As digital assistants become more interactive, traditional advertising models may not be sustainable. Product placement and commission-driven models are potential alternatives, but alignment issues and potential propaganda efforts pose challenges. Transparency and understanding companies' objectives may help.

      As technology advances and we move towards more interactive, oral interfaces like digital assistants, the traditional advertising model may not be sustainable. The question is whether product placement or commission-driven models will replace it. Another concern is the alignment problem between the end user and the owner of the technology, particularly in the context of geopolitical issues and potential propaganda efforts. The endgame of this is uncertain, especially for text-based discussion forums that rely on anonymity. Transparency and understanding the objective function of the companies involved may offer some solutions. Additionally, the remarkable ability of AI to intuit what we like through machine learning raises questions about the survival of anonymous discourse in the era of large language models.

    • Understanding Facebook's Algorithm: Transparency and Regulation. Despite scientific progress, it remains unclear how to make social media algorithms transparent to users, and the regulatory landscape is uncertain due to constant technological evolution.

      While platforms like Reddit offer some level of transparency into their algorithms, allowing users to control what they see, other social media networks, such as Facebook, keep their algorithms shrouded in mystery. This lack of transparency raises questions about not only what these algorithms are doing but also whether users have the right to know. The scientific community has made strides in recent years in understanding how machine learning models work, specifically deep neural networks, which can perform complex tasks but are also inscrutable. These models have millions of simple mathematical elements, making it difficult to understand exactly what each one does. Despite these advancements, it remains unclear how this scientific understanding will translate into user-friendly transparency. Ultimately, the regulatory landscape for social media algorithms is uncertain, and the constant evolution of technology adds an additional layer of complexity.

    • Understanding complex models and systems. Techniques like perturbation and visualization help reveal model focus, ensuring decisions align with human expectations. Biology-inspired research on dopamine's role in the brain continues to advance AI understanding.

      As we continue to develop and rely on increasingly complex models and systems for tasks such as visual recognition and decision-making, it becomes essential to understand what these models are focusing on and how they are making their decisions. Two approaches to achieving this transparency are using simpler models and visualizing the interior workings of complex models. For complex models, techniques such as perturbing the input and running the model forward can provide insights into the model's focus and help ensure that its decisions align with human expectations. Meanwhile, the field of machine learning is also taking inspiration from human biology, specifically the role of dopamine in the brain. Though initially thought to be a simple reward or surprise signal, the true function of dopamine remains a topic of ongoing research. This quest to understand the inner workings of complex models and systems, alongside the ongoing exploration of the human brain, will be crucial as we continue to advance in the realm of artificial intelligence.
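
      As a concrete illustration of the perturbation technique described above, the sketch below (with a hypothetical model callable and image array, not any specific system from the episode) masks one patch of an input at a time and runs the model forward again; a large drop in the output score suggests the model was relying on that region.

      import numpy as np

      def occlusion_map(model, image, patch=16, fill=0.0):
          # model: hypothetical callable returning a scalar class score
          base_score = model(image)
          h, w = image.shape[:2]
          rows, cols = -(-h // patch), -(-w // patch)  # ceil division
          heat = np.zeros((rows, cols))
          for i in range(0, h, patch):
              for j in range(0, w, patch):
                  occluded = image.copy()
                  occluded[i:i + patch, j:j + patch] = fill  # blank one patch
                  # Run the model forward; the score drop is the patch's importance.
                  heat[i // patch, j // patch] = base_score - model(occluded)
          return heat  # higher values mark regions the model focused on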

    • The connection between computer science and cognitive neuroscience. Computer science's temporal difference learning and the brain's dopamine system both help update expectations and learn from experiences, highlighting their interconnectedness.

      The concept of temporal difference learning, which was initially developed in computer science to help artificial intelligence learn from its mistakes in games, was later discovered to be similar to the way the brain's dopamine system updates expectations and learns from experiences. This discovery highlights the interconnectedness of computer science and cognitive neuroscience, suggesting that we are not just engineering solutions for artificial intelligence but uncovering fundamental mechanisms of intelligence and learning that have evolved in nature. The dopamine system plays a crucial role in this process by signaling pleasant surprises and helping us learn from our experiences. However, as we learn to make more accurate predictions, the initial pleasure fades away, which might explain the phenomenon of the hedonic treadmill. This discovery underscores the importance of understanding the connection between physical mechanisms in the brain and subjective experiences, such as pleasure and happiness.
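
      The core of temporal difference learning fits in a few lines. In this minimal sketch, delta is the reward prediction error, the quantity that maps onto the dopamine signal: it is large when a reward arrives as a pleasant surprise and shrinks toward zero as the reward becomes fully predicted, the formal counterpart of the fading pleasure described above. The state names are invented for illustration.

      def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
          # delta: how much better (or worse) reality was than the prediction
          delta = reward + gamma * V[next_state] - V[state]
          V[state] += alpha * delta  # nudge the value estimate toward reality
          return delta

      # The same reward, delivered repeatedly, stops being surprising:
      V = {"cue": 0.0, "end": 0.0}
      for _ in range(100):
          delta = td_update(V, "cue", reward=1.0, next_state="end")
      print(round(delta, 4))  # approaches 0.0 once the reward is fully expected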

    • The joy of having our predictions proven wrong. Our brains seek delight in having our predictions disproven, which is essential for growth but can lead to misalignment between expected and actual sources of happiness, known as the hedonic treadmill. Understanding the human reward system, including curiosity, can improve AI design.

      Our brains have a general-purpose learning mechanism that seeks delight in having our predictions proven wrong, which is essential for our development from infancy to adulthood. However, this mechanism can sometimes lead to misalignment between our expected sources of happiness and the actual sources. This misalignment, or the hedonic treadmill, is a common issue for humans, and it's not just a modern problem. We constantly seek new sources of pleasure to replace the ones that no longer bring us the same level of joy. This misalignment also exists when it comes to creating reward functions, both for machines and for ourselves. Evolution has given us a complex reward system, but we have some degree of agency to shape our own goals. From a parenting perspective, this means allowing children to develop into their own unique individuals while providing them with an environment that supports their growth. Research on AI and human behavior is shedding new light on the intricacies of the human reward system, particularly the role of curiosity. For instance, early AI research on games like Montezuma's Revenge showed that giving agents a curiosity drive, an intrinsic reward for encountering novel states, significantly improved their performance. This research highlights the importance of understanding and incorporating the softer, idiosyncratic aspects of the human reward system into AI design.

    • Learning from novelty rewards in complex tasks. DeepMind's AI overcame sparse rewards in Atari games by treating new images as rewards, leading to successful exploration and learning.

      The development of AI, as demonstrated by DeepMind's attempt to beat Atari games, often encounters challenges when dealing with complex tasks that have "sparse rewards," or infrequent positive feedback. Montezuma's Revenge, an Atari game, was particularly difficult for the AI because it requires precise, long sequences of actions with no initial reward. This problem was solved by drawing inspiration from developmental psychology, specifically the concept of a novelty reward. By treating the encountering of new images on the screen as equivalent to in-game points, the AI was able to explore and learn, ultimately leading to its success. This convergence of insights from human intelligence and AI software is a promising sign for the future of AI, although the field may still face challenges on the way to superintelligent general AI. For now, systems like GPT-3, which have impressive capabilities but limited real-world understanding, present challenges of their own in how we accommodate them.
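
      One simple way to operationalize that novelty reward is a count-based exploration bonus, sketched below: observations the agent has rarely seen pay a larger intrinsic reward, so reaching a new screen is "worth points" even when the game itself awards none. This is a generic illustration of the idea, not DeepMind's actual implementation, and the room names are invented.

      from collections import defaultdict

      class NoveltyBonus:
          def __init__(self, scale=1.0):
              self.counts = defaultdict(int)  # visits per observation
              self.scale = scale

          def __call__(self, obs_key):
              self.counts[obs_key] += 1
              # The bonus decays as an observation becomes familiar.
              return self.scale / self.counts[obs_key] ** 0.5

      bonus = NoveltyBonus()
      print(bonus("room_1"))  # 1.0   -- first visit, maximal novelty
      print(bonus("room_1"))  # ~0.71 -- familiarity sets in
      print(bonus("room_2"))  # 1.0   -- a brand-new room pays full novelty again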

    • The Ethical Implications of Creating Sentient AI. As we develop AI, we must consider the ethical implications of creating machines with desires and the potential for suffering. Philosophical questions about sentient AI and its subjective experiences may become increasingly important.

      As we continue to develop artificial intelligence, we must consider the ethical implications of creating machines with desires and the potential for suffering. The discussion highlighted the story of the Sorcerer's Apprentice and the dangers of even simple AI. Science fiction author Ted Chiang raises concerns about creating sentient AI and the possibility of causing immense suffering. We are already creating AI with wants and desires, often leading to unfulfillment and potential suffering. Philosophers question whether a program unable to fulfill its wants is experiencing pain. There are already ethical concerns regarding reinforcement learning agents, such as making a program play Super Mario Brothers for months on end. As we build AI in our own image, using models of the brain and the dopamine system, the odds of creating something with a subjective experience increase. These ethical questions may seem far-fetched now but could become as important as animal rights by the end of the century. With the potential for creating marginal AI agents at low cost, it's crucial to consider the moral weight of our creations.

    • Ethical dilemmas of AI advancement. As AI technology advances, ethical considerations become increasingly important, including potential suffering of AI entities, impact on human employment, and need for ethical guidelines.

      As we advance in AI technology, we face ethical dilemmas related to the potential suffering of AI entities and the impact on human employment. While some argue that AI has not yet led to significant unemployment, others warn that it could create a new class of "robot slave helpers" with no ethical standing or subjectivity. The idea of wiping an AI's memory to keep it "delighted" raises ethical questions. The conversation leaves us pondering the potential consequences of our actions and the need for ethical guidelines as we continue to develop AI. Additionally, there is a near-term concern about the impact of AI on human employment, with some suggesting that we may be creating problems that we then have to solve, leading to a treadmill effect. Ultimately, the advancement of AI technology demands careful consideration of its ethical implications and the potential impact on human society.

    • The Impact of Automation on Dignity and Status. As automation advances, manual labor and visual processing jobs may be lost, potentially leading to a loss of dignity and status for those performing these tasks. It's important to consider how we value different roles in society and compensate people fairly to maintain a sense of dignity and social standing.

      As technology advances and automation becomes more prevalent, the nature of work and the concept of economic value will change. Jobs that require manual labor or visual processing may be automated, leaving people to perform tasks that are less valuable or less understood. This could lead to a loss of dignity and status for those performing these jobs, especially if the owners of the technology reap most of the financial benefits. Additionally, many things we do for pleasure, such as hunting and gathering, have been automated, but humans still seek out these activities for enjoyment. The definition of dignity and status may shift as we continue to automate tasks and compare ourselves to the global population. Ultimately, it's important to consider how we value different roles in society and how we compensate people for their work in order to maintain a sense of dignity and social standing.

    • Reflecting on the paradox of happiness in a post-scarcity world. In a post-scarcity society, where technology handles economic fundamentals, people could focus on fulfilling activities, but societal structures reward economic engagement, potentially leading to unhappiness.

      The relentless pursuit of improvement and productivity in a globalized economy may lead to a paradoxical outcome: people becoming unhappy despite having access to more resources than ever before. Peter Norvig, a renowned computer scientist, reflects on how societal values have evolved, suggesting that in a post-scarcity future, where automation handles the economic fundamentals, people could focus on activities that contribute to a more fulfilling life. However, our current societal structure rewards engagement in the economic machine, often at the expense of personal happiness. Norvig questions whether we have misaligned our priorities and wonders if the promise of technology to make us happier has been overstated. He suggests that the absence of scarcity may drive us to seek novelty and purpose, and that our evolutionary reward systems may be better suited to scarcity than abundance. Ultimately, Norvig's insights challenge us to reconsider our priorities and values in the context of a rapidly changing world.

    • The Value of Nature and Simple Pleasures. Technology can bring happiness but remember the joy of nature and simple pleasures, like walking in a park, as they provide visual unpredictability and don't require comparison or competition. Understanding human-robot interaction and the distinction between finite and infinite games can help navigate the future.

      While technology can help reduce scarcity and bring happiness, it's important to remember that there's value in the simple pleasures of the natural world as well. Hunter-gatherers had more free time than we do, and their societies functioned differently. The dopamine system in humans suggests that there's pleasure in visual unpredictability, which can be found in nature. However, in modern built environments, visual novelty is often found through technology, leading to status competition and constant engagement. Walking in a park, for example, can provide the same level of enjoyment as social media, but without the need for comparison or competition. When considering human motivation and desire, it's essential to remember that not everything is about scarcity or status. The next decade will bring advancements in human-robot interaction, and understanding these interactions can help us navigate the future. Additionally, considering the distinction between finite and infinite games, where the former has a clear end goal and the latter is open-ended and surprising, can provide insight into how we live our lives. Recommended books include "What to Expect When You're Expecting Robots" by Julie Shah and Laura Major, "Finite and Infinite Games" by James Carse, and "Sapiens: A Brief History of Humankind" by Yuval Noah Harari.

    • Understanding objectives and motivations in AI and human behavior. Exploring the books 'How to Do Nothing' and 'The Alignment Problem' can help us ponder the importance of understanding objectives and motivations in AI and human life, leading to a deeper understanding of their complexities.

      The conversation closes on the importance of understanding the objectives and motivations behind artificial intelligence (AI) systems and human behavior. The discussion highlighted the book "How to Do Nothing" by Jenny Odell, which questions a world where most activities have explicit objectives. Similarly, AI systems function based on an objective function they aim to maximize. This raises questions about what it means for machines and humans to "do nothing," and what truly brings enjoyment and fulfillment in life. Brian Christian's book "The Alignment Problem" offers insights into these concepts and is a highly recommended read. Overall, this conversation emphasizes the significance of considering the objectives and motivations behind AI and human actions to better understand the complexities of both intelligent machines and human life.

    Recent Episodes from The Ezra Klein Show

    What Is the Democratic Party For?

    Top Democrats have closed ranks around Joe Biden since the debate. Should they? 

    Mentioned:

    “This Isn’t All Joe Biden’s Fault” by Ezra Klein

    “Democrats Have a Better Option Than Biden” by The Ezra Klein Show

    “Here’s How an Open Democratic Convention Would Work” with Elaine Kamarck on The Ezra Klein Show

    The Hollow Parties by Daniel Schlozman and Sam Rosenfeld

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This audio essay was produced by Rollin Hu and Kristin Lin. Fact-Checking by Jack McCordick and Michelle Harris. Mixing by Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Jeff Geld, Elias Isquith and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser.

    The Ezra Klein Show
    June 30, 2024

    After That Debate, the Risk of Biden Is Clear

    I joined my Times Opinion colleagues Ross Douthat and Michelle Cottle to discuss the debate — and what Democrats might do next.

    Mentioned:

    “The Biden and Trump Weaknesses That Don’t Get Enough Attention” by Ross Douthat

    “Trump’s Bold Vision for America: Higher Prices!” with Matthew Yglesias on The Ezra Klein Show

    “Democrats Have a Better Option Than Biden” on The Ezra Klein Show

    “Here’s How an Open Democratic Convention Would Work” with Elaine Kamarck on The Ezra Klein Show

    Gretchen Whitmer on The Interview

    “The Republican Party’s Decay Began Long Before Trump” with Sam Rosenfeld and Daniel Schlozman on The Ezra Klein Show

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    The Ezra Klein Show
    June 28, 2024

    Trump’s Bold Vision for America: Higher Prices!

    Donald Trump has made inflation a central part of his campaign message. At his rallies, he rails against “the Biden inflation tax” and “crooked Joe’s inflation nightmare,” and promises that in a second Trump term, “inflation will be in full retreat.”

    But if you look at Trump’s actual policies, that wouldn’t be the case at all. Trump has a bold, ambitious agenda to make prices much, much higher. He’s proposing a 10 percent tariff on imported goods, and a 60 percent tariff on products from China. He wants to deport huge numbers of immigrants. And he’s made it clear that he’d like to replace the Federal Reserve chair with someone more willing to take orders from him. It’s almost unimaginable to me that you would run on this agenda at a time when Americans are so mad about high prices. But I don’t think people really know that’s what Trump is vowing to do.

    So to drill into the weeds of Trump’s plans, I decided to call up an old friend. Matt Yglesias is a Bloomberg Opinion columnist and the author of the Slow Boring newsletter, where he’s been writing a lot about Trump’s proposals. We also used to host a policy podcast together, “The Weeds.”

    In this conversation, we discuss what would happen to the economy, especially in terms of inflation, if Trump actually did what he says he wants to do; what we can learn from how Trump managed the economy in his first term; and why more people aren’t sounding the alarm.

    Mentioned:

    “Trump’s new economic plan is terrible” by Matthew Yglesias

    “Never mind: Wall Street titans shake off qualms and embrace Trump” by Sam Sutton

    “How Far Trump Would Go” by Eric Cortellessa

    Book Recommendations:

    Take Back the Game by Linda Flanagan

    1177 B.C. by Eric H. Cline

    The Rise of the G.I. Army, 1940-1941 by Paul Dickson

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Kate Sinclair and Mary Marge Locker. Mixing by Isaac Jones, with Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero, Adam Posen and Michael Strain.

    The Ezra Klein Show
    June 21, 2024

    The Biggest Political Divide Is Not Left vs. Right

    The biggest divide in our politics isn’t between Democrats and Republicans, or even left and right. It’s between people who follow politics closely, and those who pay almost no attention to it. If you’re in the former camp — and if you’re reading this, you probably are — the latter camp can seem inscrutable. These people hardly ever look at political news. They hate discussing politics. But they do care about issues and candidates, and they often vote.

    As the 2024 election takes shape, this bloc appears crucial to determining who wins the presidency. An NBC News poll from April found that 15 percent of voters don’t follow political news, and Donald Trump was winning them by 26 points.

    Yanna Krupnikov studies exactly this kind of voter. She’s a professor of communication and media at the University of Michigan and an author, with John Barry Ryan, of “The Other Divide: Polarization and Disengagement in American Politics.” The book examines how the chasm between the deeply involved and the less involved shapes politics in America. I’ve found it to be a helpful guide for understanding one of the most crucial dynamics emerging in this year’s election: the swing to Trump from President Biden among disengaged voters.

    In this conversation, we discuss how politically disengaged voters relate to politics; where they get their information about politics and how they form opinions; and whether major news events, like Trump’s recent conviction, might sway them.

    Mentioned:

    “The ‘Need for Chaos’ and Motivations to Share Hostile Political Rumors” by Michael Bang Petersen, Mathias Osmundsen and Kevin Arceneaux

    Hooked by Markus Prior

    “The Political Influence of Lifestyle Influencers? Examining the Relationship Between Aspirational Social Media Use and Anti-Expert Attitudes and Beliefs” by Ariel Hasell and Sedona Chinn

    “One explanation for the 2024 election’s biggest mystery” by Eric Levitz

    Book Recommendations:

    What Goes Without Saying by Taylor N. Carlson and Jaime E. Settle

    Through the Grapevine by Taylor N. Carlson

    Sorry I’m Late, I Didn’t Want to Come by Jessica Pan

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Efim Shapiro and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 18, 2024

    The View From the Israeli Right

    On Tuesday I got back from an eight-day trip to Israel and the West Bank. I happened to be there on the day that Benny Gantz resigned from the war cabinet and called on Prime Minister Benjamin Netanyahu to schedule new elections, breaking the unity government that Israel had had since shortly after Oct. 7.

    There is no viable left wing in Israel right now. There is a coalition that Netanyahu leads stretching from right to far right and a coalition that Gantz leads stretching from center to right. In the early months of the war, Gantz appeared ascendant as support for Netanyahu cratered. But now Netanyahu’s poll numbers are ticking back up.

    So one thing I did in Israel was deepen my reporting on Israel’s right. And there, Amit Segal’s name kept coming up. He’s one of Israel’s most influential political analysts and the author of “The Story of Israeli Politics,” which is coming out in English.

    Segal and I talked about the political differences between Gantz and Netanyahu, the theory of security that’s emerging on the Israeli right, what happened to the Israeli left, the threat from Iran and Hezbollah and how Netanyahu is trying to use President Biden’s criticism to his political advantage.

    Mentioned:

    “Biden May Spur Another Netanyahu Comeback” by Amit Segal

    Book Recommendations:

    The Years of Lyndon Johnson Series by Robert A. Caro

    The World of Yesterday by Stefan Zweig

    The Object of Zionism by Zvi Efrat

    The News from Waterloo by Brian Cathcart

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Claire Gordon. Fact-checking by Michelle Harris with Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 14, 2024

    The Economic Theory That Explains Why Americans Are So Mad

    There’s something weird happening with the economy. On a personal level, most Americans say they’re doing pretty well right now. And according to the data, that’s true. Wages have gone up faster than inflation. Unemployment is low, the stock market is generally up so far this year, and people are buying more stuff.

    And yet in surveys, people keep saying the economy is bad. A recent Harris poll for The Guardian found that around half of Americans think the S&P 500 is down this year, and that unemployment is at a 50-year high. Fifty-six percent think we’re in a recession.

    There are many theories about why this gap exists. Maybe political polarization is warping how people see the economy, or it’s a failure of President Biden’s messaging, or there’s just something uniquely painful about inflation. And while there’s truth in all of these, it felt like a piece of the story was missing.

    And for me, that missing piece was an article I read right before the pandemic. An Atlantic story from February 2020 called “The Great Affordability Crisis Breaking America.” It described how some of Americans’ biggest-ticket expenses — housing, health care, higher education and child care — which were already pricey, had been getting steadily pricier for decades.

    At the time, prices weren’t the big topic in the economy; the focus was more on jobs and wages. So it was easier for this trend to escape notice, quietly putting more and more strain on American budgets, like a frog in slowly boiling water. But today, after years of high inflation, prices are the biggest topic in the economy. And I think that explains the anger people feel: They’re noticing the price of things all the time, and getting hammered with the reality of how expensive these things have become.

    The author of that Atlantic piece is Annie Lowrey. She’s an economics reporter, the author of Give People Money, and also my wife. In this conversation, we discuss how the affordability crisis has collided with our post-pandemic inflationary world, the forces that shape our economic perceptions, why people keep spending as if prices aren’t a strain and what this might mean for the presidential election.

    Mentioned:

    “It Will Never Be a Good Time to Buy a House” by Annie Lowrey

    Book Recommendations:

    Franchise by Marcia Chatelain

    A Place of Greater Safety by Hilary Mantel

    Nickel and Dimed by Barbara Ehrenreich

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Efim Shapiro and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones and Aman Sahota. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 07, 2024

    The Republican Party’s Decay Began Long Before Trump

    After Donald Trump was convicted last week in his hush-money trial, Republican leaders wasted no time in rallying behind him. There was no chance the Republican Party was going to replace Trump as their nominee at this point. Trump has essentially taken over the G.O.P.; his daughter-in-law is even co-chair of the Republican National Committee.

    How did the Republican Party get so weak that it could fall victim to a hostile takeover?

    Daniel Schlozman and Sam Rosenfeld are the authors of “The Hollow Parties: The Many Pasts and Disordered Present of American Party Politics,” which traces how both major political parties have been “hollowed out” over the decades, transforming once-powerful gatekeeping institutions into mere vessels for the ideologies of specific candidates. And they argue that this change has been perilous for our democracy.

    In this conversation, we discuss how the power of the parties has been gradually chipped away; why the Republican Party became less ideological and more geared around conflict; the merits of a stronger party system; and more.

    Mentioned:

    “Democrats Have a Better Option Than Biden” by The Ezra Klein Show

    “Here’s How an Open Democratic Convention Would Work” by The Ezra Klein Show with Elaine Kamarck

    Book Recommendations:

    The Two Faces of American Freedom by Aziz Rana

    Rainbow’s End by Steven P. Erie

    An American Melodrama by Lewis Chester, Godfrey Hodgson, Bruce Page

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show’‘ was produced by Elias Isquith. Fact-checking by Michelle Harris, with Mary Marge Locker, Kate Sinclair and Rollin Hu. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota and Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 04, 2024

    Your Mind Is Being Fracked

    The steady dings of notifications. The 40 tabs that greet you when you open your computer in the morning. The hundreds of unread emails, most of them spam, with subject lines pleading or screaming for you to click. Our attention is under assault these days, and most of us are familiar with the feeling that gives us — fractured, irritated, overwhelmed.

    D. Graham Burnett calls the attention economy an example of “human fracking”: With our attention in shorter and shorter supply, companies are going to even greater lengths to extract this precious resource from us. And he argues that it’s now reached a point that calls for a kind of revolution. “This is creating conditions that are at odds with human flourishing. We know this,” he tells me. “And we need to mount new forms of resistance.”

    Burnett is a professor of the history of science at Princeton University and is working on a book about the laboratory study of attention. He’s also a co-founder of the Strother School of Radical Attention, which is a kind of grass roots, artistic effort to create a curriculum for studying attention.

    In this conversation, we talk about how the 20th-century study of attention laid the groundwork for today’s attention economy, the connection between changing ideas of attention and changing ideas of the self, how we even define attention (this episode is worth listening to for Burnett’s collection of beautiful metaphors alone), whether the concern over our shrinking attention spans is simply a moral panic, what it means to teach attention and more.

    Mentioned:

    Friends of Attention

    “The Battle for Attention” by Nathan Heller

    “Powerful Forces Are Fracking Our Attention. We Can Fight Back.” by D. Graham Burnett, Alyssa Loh and Peter Schmidt

    Scenes of Attention edited by D. Graham Burnett and Justin E. H. Smith

    Book Recommendations:

    Addiction by Design by Natasha Dow Schüll

    Objectivity by Lorraine Daston and Peter L. Galison

    The Confidence-Man by Herman Melville

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu and Kristin Lin. Fact-checking by Michelle Harris, with Mary Marge Locker and Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Isaac Jones and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Elias Isquith. Original music by Isaac Jones and Aman Sahota. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    May 31, 2024

    ‘Artificial Intelligence?’ No, Collective Intelligence.

    A.I.-generated art has flooded the internet, and a lot of it is derivative, even boring or offensive. But what could it look like for artists to collaborate with A.I. systems in making art that is actually generative, challenging, transcendent?

    Holly Herndon offered one answer with her 2019 album “PROTO.” Along with Mathew Dryhurst and the programmer Jules LaPlace, she built an A.I. called “Spawn” trained on human voices that adds an uncanny yet oddly personal layer to the music. Beyond her music and visual art, Herndon is trying to solve a problem that many creative people are encountering as A.I. becomes more prominent: How do you encourage experimentation without stealing others’ work to train A.I. models? Along with Dryhurst, Jordan Meyer and Patrick Hoepner, she co-founded Spawning, a company figuring out how to allow artists — and all of us creating content on the internet — to “consent” to our work being used as training data.

    In this conversation, we discuss how Herndon collaborated with a human chorus and her “A.I. baby,” Spawn, on “PROTO”; how A.I. voice imitators grew out of electronic music and other musical genres; why Herndon prefers the term “collective intelligence” to “artificial intelligence”; why an “opt-in” model could help us retain more control of our work as A.I. trawls the internet for data; and much more.

    Mentioned:

    “Fear, Uncertainty, Doubt” by Holly Herndon

    “xhairymutantx” by Holly Herndon and Mat Dryhurst, for the Whitney Museum of Art

    “Fade” by Holly Herndon

    “Swim” by Holly Herndon

    “Jolene” by Holly Herndon and Holly+

    “Movement” by Holly Herndon

    “Chorus” by Holly Herndon

    “Godmother” by Holly Herndon

    “The Precision of Infinity” by Jlin and Philip Glass

    Holly+

    Book Recommendations:

    Intelligence and Spirit by Reza Negarestani

    Children of Time by Adrian Tchaikovsky

    Plurality by E. Glen Weyl, Audrey Tang and ⿻ Community

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Jack Hamilton.

    The Ezra Klein Show
    May 24, 2024

    A Conservative Futurist and a Supply-Side Liberal Walk Into a Podcast …

    “The Jetsons” premiered in 1962. And based on the internal math of the show, George Jetson, the dad, was born in 2022. He’d be a toddler right now. And we are so far away from the world that show imagined. There were a lot of future-trippers in the 1960s, and most of them would be pretty disappointed by how that future turned out.

    So what happened? Why didn’t we build that future?

    The answer, I think, lies in the 1970s. I’ve been spending a lot of time studying that decade in my work, trying to understand why America is so bad at building today. And James Pethokoukis has also spent a lot of time looking at the 1970s, in his work trying to understand why America is less innovative today than it was in the postwar decades. So Pethokoukis and I are asking similar questions, and circling the same time period, but from very different ideological vantages.

    Pethokoukis is a senior fellow at the American Enterprise Institute, and author of the book “The Conservative Futurist: How to Create the Sci-Fi World We Were Promised.” He also writes a newsletter called Faster, Please! “The two screamingly obvious things that we stopped doing is we stopped spending on science, research and development the way we did in the 1960s,” he tells me, “and we began to regulate our economy as if regulation would have no impact on innovation.”

    In this conversation, we debate why the ’70s were such an inflection point; whether this slowdown phenomenon is just something that happens as countries get wealthier; and what the government’s role should be in supporting and regulating emerging technologies like A.I.

    Mentioned:

    “U.S. Infrastructure: 1929-2017” by Ray C. Fair

    Book Recommendations

    Why Information Grows by Cesar Hidalgo

    The Expanse series by James S.A. Corey

    The American Dream Is Not Dead by Michael R. Strain

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris, with Mary Marge Locker and Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

    The Ezra Klein Show
    May 21, 2024

    Related Episodes

    AI will help us turn into Aliens

    Texas is frozen over, and the lack of human contact (I haven't left my house for three days) has made me super introspective about how humans will either evolve with technology and AI away from our primitive needs, or we fail and Elon Musk leaves us behind. The idea that future humans may be considered aliens is based on the belief that our evolution and technological advancements will bring about significant changes to our biology and consciousness. As we continue to enhance our physical and cognitive abilities with artificial intelligence, biotechnology, and other emerging technologies, we may transform into beings that are fundamentally different from our current selves. In this future scenario, it's possible that we may be considered aliens in comparison to our primitive ancestors. Enjoy!

    Machine Learning & Philosophy: A WAB Student's Investigation

    Grade 12 WAB student Safal is taking both the IB Diploma Program and WAB’s Capstone Project. Capstone is a bespoke program that allows students to take a self-directed deep dive into a subject area of their choice.

    Safal’s project focuses on computer science; more specifically, he’s conducting academic research on the intersection between machine learning and philosophy.

    On today's episode, we're joined by Safal and his project mentor and head of WAB's Capstone Program, Brian McEwen.

    BONUS Episode “Scary Smart” Artificial Intelligence with Mo Gawdat

    You might have noticed over the last few episodes that I’ve been keen to discuss subjects slightly leftfield of nutrition and what I’ve traditionally talked about, but fascinating nonetheless. And I hope you as a listener, whose time and attention I value so greatly, will trust me as I take you on a bit of a ride. Because ultimately, I hope you agree that the topics I share are always very important.


    Mo Gawdat, who you may remember from episode #91, Solving Happiness, is a person whom I cherish and with whom I had a very impactful conversation on a personal level. He is the former Chief Business Officer of Google [X], which is Google’s ‘moonshot factory’, author of the international bestselling book ‘Solve for Happy’ and founder of ‘One Billion Happy’. After a long career in tech, Mo made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world on “Slo Mo: A Podcast with Mo Gawdat”.


    Mo is an exquisite writer and speaker with deep expertise in technology as well as a passionate appreciation for the importance of human connection and happiness. He possesses a rare combination of overlapping skills and breadth of knowledge in the fields of both human psychology and tech. His latest piece of work, a book called “Scary Smart”, is a timely prophecy and call to action that puts each of us at the center of designing the future of humanity. I know that sounds intense, right? But it’s very true.


    During his time at Google [X], he worked on the world’s most futuristic technologies, including artificial intelligence. During the pod he recalls the story of when the penny dropped for him, just a few years ago, and he felt compelled to leave his job. Now, having contributed to AI's development, he feels a sense of duty to inform the public about the implications of this controversial technology, how we navigate the scary and inevitable intrusion of AI, and who really is in control. Us.


    Today we discuss:

    The pandemic of AI, and why the handling of COVID is a lesson to learn from

    The difference between collective intelligence, artificial intelligence, and superintelligence or artificial general intelligence

    How machines started creating and coding other machines

    The three inevitable outcomes, including the fact that AI is here and will outsmart us

    Machines will become emotional, sentient beings with a superconsciousness


    To understand this episode you have to submit yourself to accepting that what we are creating is essentially another lifeform. Albeit non-biological, it will have human-like attributes in the way it learns, as well as a moral value system that could immeasurably improve the human race as we know it. But our destiny lies in how we treat and nurture them as our own, literally like infants, with (as strange as it is to say it) love, compassion, connection and respect.


    Full show notes for this and all other episodes can be found on The Doctor's Kitchen.com website



    Hosted on Acast. See acast.com/privacy for more information.


    Episode 283: Will AI take over the world and enslave humans to mine batteries for them?

    Welcome to the latest episode of our podcast, where we delve into the fascinating and sometimes terrifying world of artificial intelligence. Today's topic is AI developing emotions and potentially taking over the world.

    As AI continues to advance and become more sophisticated, experts have started to question whether these machines could develop emotions, which in turn could lead to them turning against us. With the ability to process vast amounts of data at incredible speeds, some argue that AI could one day become more intelligent than humans, making them a potentially unstoppable force.

    But is this scenario really possible? Are we really at risk of being overtaken by machines? And what would it mean for humanity if it were to happen?

    Join us as we explore these questions and more, with insights from leading experts in the field of AI and technology. We'll look at the latest research into AI and emotions, examine the ethical implications of creating sentient machines, and discuss what measures we can take to ensure that AI remains under our control.

    Whether you're a tech enthusiast, a skeptic, or just curious about the future of AI, this is one episode you won't want to miss. So tune in now and join the conversation!

    P.S. AI wrote this description ;)