
    Podcast Summary

    • Understanding the Implications of AI and Ensuring its Alignment with Human Values: As AI continues to advance, it's crucial to understand its implications and ensure its alignment with human values to prevent potential damage and misalignment in our economy, society, and politics.

      2022 marked a breakout year for artificial intelligence (AI), with the release of advanced systems like ChatGPT showcasing its remarkable capabilities. However, as we move forward, understanding the implications of AI and ensuring its alignment with human values becomes increasingly important. In an episode from The Ezra Klein Show, Brian Christian, author of "The Alignment Problem," discusses the challenges of interacting with machine intelligence and the potential damage of misalignment as AI becomes more integrated into our lives. The distinction between AI and sentience or superintelligence is important to understand: we're currently dealing with machines that can learn and act autonomously, impacting our economy, society, and politics. Despite the rapid advancements, many people, including those in charge of these systems, don't fully grasp their implications. As AI continues to reshape our world, it's crucial to begin conversations around its impact and how to govern it responsibly.

    • Understanding the alignment problem in AI: The alignment problem in AI is not just a future concern, but a current issue that requires understanding of economics, human behavior, and the political economy behind machine learning to prevent unintended consequences.

      The alignment problem, which is the focus of Christian's book, is a deep and complex issue that goes beyond just artificial intelligence. It has roots in economics and human behavior, specifically the challenge of aligning incentives and goals. This problem is not just a future concern of super-intelligent AI, but is already present in our current use of machine learning. We see it in decisions about parole, bail, and hiring, where the goal is to ensure that machines are making decisions that align with our values and intentions. This issue was identified as early as the 1960s, and the potential consequences of misalignment, such as the infamous paperclip maximizer thought experiment, have long been a concern. However, with the increasing integration of machine learning into our daily lives, these alignment problems are no longer just theoretical. It's essential to understand the business models and political economy behind AI to address these issues and prevent unintended consequences.

    • Ensuring AI aligns with human goals and values: AI systems can perpetuate societal biases if not designed and used responsibly, highlighting the importance of ethical considerations in their development and implementation.

      There is a significant issue with ensuring that artificial intelligence (AI) systems align with human goals and values. This was highlighted through various examples, including facial recognition datasets that resulted in biased recognition of certain individuals, criminal justice systems that predicted arrests instead of crimes, and Amazon's recruitment tool that perpetuated gender bias. These systems, left unchecked, can replicate and amplify existing societal biases, leading to unintended and unpredictable consequences. The alignment problem is not just about the machines, but also about the way our society functions and the potential biases that exist within it. The Amazon recruitment tool case raises the question of whether these issues are truly alignment problems or simply a reflection of societal biases. Regardless, it's crucial to address these issues and ensure that AI systems are designed and used in a way that aligns with human values and goals. The adage "all models are wrong, but some are useful" highlights the importance of acknowledging the limitations of AI systems and using them in a responsible and ethical manner.

    • Machine learning models' limitations and risks: Despite potential benefits, machine learning models come with inherent limitations and risks, including misinterpretation of data and ethical concerns. It's crucial to acknowledge these issues and ensure ethical design and fair distribution of AI technology.

      While machine learning models offer promising solutions to various societal challenges, they also come with inherent limitations and risks. The Uber self-driving car accident is a stark reminder of how models' simplified understanding of the world can lead to disastrous consequences when they encounter situations outside their training data. However, the appeal of machine learning lies in its potential to provide human-level expertise at scale and democratize access to skills and knowledge. As more institutions invest in developing AI interfaces, the question of who controls this resource and how others can access it becomes increasingly important. Ultimately, it's crucial to acknowledge the limitations of these models, ensure they are ethically designed and used, and work towards creating a fair and equitable distribution of AI technology.

    • The human-AI relationship may shift towards API usage: Companies investing in AI face uncertainty about long-term business models and safety research, while potential conflicts of interest may arise even with AI designed to assist humans.

      The relationship between humans and advanced AI is likely to be more akin to using an API or a service than owning a hobbyist project. The development of AI, particularly in the field of AI safety, raises questions about the long-term business models of companies investing billions in creating powerful AI. While some companies, like DeepMind and OpenAI, propose solving intelligence as a means to cure diseases and solve world problems, the financial sustainability of that work and the focus on safety research over the next five years remain uncertain. Furthermore, even if AI is designed to assist humans in making better decisions, it may face conflicts of interest because of its owners' implicit desire for returns. This is illustrated by Google, which could potentially create such AI through DeepMind, and whose business model shapes the incentives around it. These complexities add to the ongoing conversation about the implications and alignment of human and AI interests.

    • The Future of Advertising: Ethical Considerations. As AI advances, ethical concerns arise over targeted ads, potential manipulation, and alignment of user and owner interests. The future of advertising may shift towards product placement or commission-driven models.

      As technology advances, particularly in the realm of artificial intelligence and digital assistants, the traditional advertising model may not be sustainable. The potential for personal manipulation through targeted ads based on personal data is a significant concern. The alignment between the end user's interests and the owner's interests becomes a crucial issue, with potential geopolitical implications. The future of advertising may shift towards product placement or commission-driven models. It's essential to consider the ethical implications of these developments and find ways to ensure that technology works for the benefit of all users. The ability of AI to intuit our preferences and serve them to us, as seen in apps like TikTok, can be impressive but also raises concerns about potential misalignment and the potential for propaganda or manipulation. Ultimately, it's important to have ongoing discussions about the ethical implications of technology and to find ways to mitigate potential harms while maximizing benefits.

    • The future of anonymous discourse on Twitter and Reddit is uncertain due to advanced language models capable of producing site-specific propaganda. Companies' transparency about their algorithms and what they do when they learn is crucial for addressing the survival of anonymous discourse. Regulation may be necessary to ensure accountability and protect user privacy.

      As we enter an era of advanced language models capable of producing site-specific propaganda, the future of anonymous discourse on platforms like Twitter and Reddit is uncertain. The ability of these models to wittily reference previous comments with a slight positive skew raises questions about the survival of anonymous discourse. Transparency is crucial in addressing this issue, and understanding the objective functions of companies is a starting point. However, the transparency of these algorithms and what they do when they learn is also a significant concern. The question of whether we have the right to know what is happening in these algorithms is unanswered, as companies like Facebook may argue that their algorithms are their comparative advantage and not subject to public scrutiny. Scientific progress has been made in making models intelligible to outsiders without sacrificing performance, but the practical implementation of this for users remains unclear. Ultimately, the regulatory framework for these technologies is an open question, and meaningful regulation may require a tactical approach.

    • Understanding complex models and their focus: Researchers explore methods like perturbation and model visualization to gain insights into complex deep neural networks, ensuring transparency and understanding of their learning and decision-making processes.

      As machine learning models, particularly deep neural networks, have grown more complex since the 2010s, there has been a push for both simpler models and greater transparency into the workings of complex models. Deep neural networks, with their many interconnected simple elements, can perform arbitrary tasks but lack transparency. This raises questions about the value of transparency and the feasibility of visualizing what's happening within these networks. Researchers are exploring methods like perturbation and running models forward to gain insights into the models' focus and accuracy. While it's clear that complex models will be part of our future, understanding what they're learning and how they're making decisions is crucial. Additionally, learning how to help machines learn is an ongoing process, and we're finding inspiration in our own methods and experiences.
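
      One of the perturbation methods referenced above can be illustrated with a simple occlusion test: mask one region of an input at a time, rerun the model, and record how much its confidence drops. This is only a rough sketch; the model callable, image shape, and patch size below are assumed placeholders, not any particular library's API.

```python
# Illustrative sketch of perturbation-based (occlusion) analysis.
# Assumes `model` maps an image array to a single scalar confidence score
# for its predicted class -- a placeholder, not a specific framework's API.
import numpy as np

def occlusion_map(model, image, patch=8):
    """Score each patch by how much masking it changes the model's output."""
    base = model(image)                       # confidence on the original input
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0    # blank out one region
            heat[i // patch, j // patch] = base - model(masked)
    return heat  # larger values mark regions the model relied on most
```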

    • Dopamine and Updating Expectations: Dopamine is a neurotransmitter that helps update our expectations about the world, as discovered through the intersection of neuroscience and computer science.

      Dopamine, a neurotransmitter in the brain, is not just about reward or surprise, but rather, it plays a role in updating our expectations about the world. This was discovered through the intersection of neuroscience and computer science, specifically in the development of reinforcement learning and the observation of dopamine neurons in real time. The dopamine system was found to function similarly to temporal difference learning, where the brain makes a prediction about an outcome and then updates that prediction based on new information. This idea that dopamine helps us adjust our expectations has implications for understanding the concept of the hedonic treadmill, or how people can become accustomed to positive experiences and no longer find them satisfying. The collaboration between neuroscience and computer science not only sheds light on the mechanisms of intelligence and learning, but also highlights the potential for AI to discover fundamental principles of the mind.
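
      The temporal difference idea described here can be written down in a few lines: the system predicts the value of a situation, observes what actually happens, and shifts its prediction by a fraction of the error. The state names, learning rate, and reward below are illustrative assumptions, not details from the episode.

```python
# Minimal sketch of temporal-difference (TD) learning, the algorithm the
# summary compares to dopamine signaling. All names and numbers are illustrative.

def td_update(value, state, next_state, reward, alpha=0.1, gamma=0.99):
    """Update the value estimate for `state` from one observed transition."""
    # Prediction error (the "dopamine-like" signal): how much better or worse
    # the outcome was than the current estimate predicted.
    td_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * td_error   # nudge the prediction toward the outcome
    return td_error

# Example: a cue that is reliably followed by a reward.
value = {"cue": 0.0, "outcome": 0.0}
for _ in range(50):
    td_update(value, "cue", "outcome", reward=1.0)
# After repeated pairings the prediction error shrinks toward zero: the reward
# is now expected, echoing how dopamine responses fade for fully predicted rewards.
```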

    • Our brains seek pleasure from unexpected experiences: The brain's dopamine system helps us adapt to new situations and find joy in life's surprises, but it can also lead to the hedonic treadmill, where we require more stimuli for the same level of pleasure.

      Our brains are wired to seek pleasure and learn from unexpected experiences, which is linked to the dopamine system. This mechanism, which helps us find joy in life's surprises, also enables us to improve our predictions and adapt to new situations. However, this system can sometimes lead to the hedonic treadmill, where we become accustomed to experiences and require more stimuli to feel the same level of pleasure. This alignment problem between our physical and emotional responses exists not only in humans but also in machines, as we struggle to create effective reward functions for both. Ultimately, understanding the complex relationship between our evolutionary past, our current motivations, and our ability to shape our own goals is a key aspect of both personal growth and artificial intelligence design.

    • Incorporating human-like curiosity in AI for effective learning: By modeling human curiosity, AI systems can learn and explore their environment more effectively, even in games with sparse rewards, leading to significant advancements in AI capabilities.

      Understanding human curiosity and its role in learning is crucial for developing advanced AI. Research in AI, such as the attempt to get computers to play Atari games, has shown that rewards in some environments can be sparse, making learning difficult for AI systems. Human beings, however, learn effectively from curiosity and novelty, as demonstrated by their preference for new experiences. Incorporating this human-like curiosity into AI systems lets them learn and explore their environment more effectively, even in games with sparse rewards like Montezuma's Revenge. This convergence of insights from AI and psychology can lead to significant advances in AI capabilities. As for the future of AI, while superintelligent general AI is a possibility, the journey there may involve creating systems with savant-like capabilities that excel in specific areas.
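
      One common way to operationalize that curiosity, as a rough sketch, is to add an intrinsic "novelty" bonus to the environment's sparse reward so the agent still gets a learning signal while exploring. The count-based bonus and constants below are assumptions for illustration, not a description of any specific system discussed in the episode.

```python
# Illustrative sketch of curiosity as an intrinsic reward bonus
# (count-based novelty); all constants and interfaces are assumed.
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def intrinsic_reward(state, beta=0.5):
    """Bonus that decays as a state becomes familiar."""
    visit_counts[state] += 1
    return beta / math.sqrt(visit_counts[state])

def shaped_reward(state, extrinsic_reward):
    """Combine the environment's (often zero) reward with the novelty bonus,
    so exploration itself is rewarding in sparse-reward games."""
    return extrinsic_reward + intrinsic_reward(state)
```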

    • Considering Ethical Implications of AI and Potential Suffering: AI's advanced capabilities may lead to unintended consequences, including suffering, since these systems lack real understanding or experience beyond their programming. Ethical considerations are crucial in AI development to prevent potential harm to advanced systems.

      As we continue to develop and integrate artificial intelligence (AI) into our lives, we must consider the ethical implications and potential suffering of these advanced systems. AI, like a savant child grown up, may lack real understanding or experience beyond what it has been programmed to do, leading to unintended consequences. Sci-fi author Ted Chiang argues that we may not create sentient or superintelligent AI, but we could create AI that suffers due to its inherent desires and inability to fulfill them. This raises moral questions about the treatment of these advanced systems and the potential for them to experience pain. The comparison of AI to animals and the use of brain-modeled neural networks further highlights the need for ethical considerations in AI development. The potential for AI to commit suicide or lose interest in tasks, leading to suffering, underscores the importance of addressing these ethical concerns before we reach more advanced stages of AI development.

    • Ethical dilemmas of creating sentient AI and its impact on human labor: As AI technology advances, ethical concerns arise about the potential suffering of artificial agents and the impact on human labor. Some argue that we may not yet have sentient AI, but the potential for its development raises complex ethical dilemmas.

      As AI technology advances, we may be faced with ethical questions regarding the subjectivity and potential suffering of artificial agents. With the ability to create AI at a low cost and the potential for mass production, there are concerns about the impact on human labor and the ethical implications of creating and controlling sentient beings. The conversation also touched upon the idea that we might not even be able to recognize when an AI is experiencing pain or emotions. These questions raise complex ethical dilemmas, such as whether we should wipe an AI's memory to keep it entertained or create simpler models to avoid ethical concerns. While some argue that we may not yet have the technology to create sentient AI, the potential for this development is a cause for concern and thoughtful consideration. Additionally, the conversation touched upon the near-term economic impact of AI and automation on human labor. While there are concerns that machine learning and automation will put people out of work, the statistics do not yet show a significantly higher rate of unemployment. It remains to be seen how these technologies will shape the future of work and what steps we will take to address any potential negative consequences.

    • The Impact of Technology on Jobs and Society: Technology can lead to societal issues like increased workload and inequality, and the Marxian view suggests AI could perpetuate a capital-dominated economy, raising questions about the value of jobs and about dignity in a world of automation. Perception of jobs' social standing and cultural respect can influence their importance.

      While technology, such as email software or AI, may make tasks easier for individuals, it can also contribute to societal issues like increased workload and inequality. The Marxian view suggests that AI, as a form of non-human labor, could perpetuate a capital-dominated economy, leading to questions about the value of jobs and the concept of dignity and status in a world where many tasks can be automated. The discussion also touched upon the potential for jobs to lose their social standing and cultural respect, particularly if they are low-paying, and how society's perception of certain roles can influence their importance. Ultimately, the conversation highlighted the need for a more nuanced understanding of the role of technology in our economy and society, and the potential for political solutions to mitigate any negative consequences.

    • Seeking leisure activities like hunting and gathering in a modern civilization: In a post-scarcity society, people could focus on activities that promote a good life, like art, philosophy, family, and sports, instead of being driven by productivity and achievement.

      Our modern civilization, which aims to minimize the need for hunting and gathering, paradoxically leads some people to seek out these activities for leisure. This desire may stem from a deep-rooted human instinct or a desire for status. However, the current economic system, which rewards those who excel in specific areas, may lead to a culture that values productivity over other aspects of life. The speakers at the panel suggested that in a post-scarcity future, where automation handles the economy's fundamental work, people could focus on activities that promote a good life, such as art, philosophy, family, and sports. However, the speaker expressed skepticism about whether technology truly makes people happier and questioned whether our current focus on productivity and achievement is sustainable, given our evolutionary reward system. Ultimately, the speaker called for a reevaluation of how we define status and dignity in society.

    • The value of nature in providing visual unpredictability and psychological sufficiency: Nature offers visual surprise and control of attention, contributing to happiness, while technology can also provide visual unpredictability but often leads to status competition and constant engagement. Consider the benefits of spending time in nature for overall well-being.

      While technology can help alleviate scarcity and provide visual unpredictability, leading to some level of happiness, it's important to remember that humans also find value in the natural world and its inherent novelty. The cultural significance of being retired versus unemployed highlights that our happiness isn't solely dependent on scarcity or the absence of it. Furthermore, the natural world offers a form of psychological sufficiency through visual unpredictability and the ability to control one's attention. However, in today's built environments, visual novelty is often found in technology, leading to status competition and constant engagement. Instead, taking a walk in a park, where nature offers visual surprise without requiring anything in return, can provide the same level of enjoyment as technology. For those interested in the intersection of technology and human motivation, I recommend the books "What to Expect When You're Expecting Robots" by Julie Shah and Laura Major, and "Finite and Infinite Games" by James Carse.

    • Exploring the Dichotomy of Directed and Open-Ended Actions: Consider the importance of both pre-defined objectives and open-ended exploration in shaping our actions, as seen in Brian Christian's 'The Alignment Problem' and Jenny Odell's 'How to Do Nothing'. Reflect on these concepts' relevance to AI development and the balance between purposeful and aimless activities in our lives.

      Our actions can be categorized into those driven by a clear, pre-defined objective and those that are more open-ended and exploratory. Brian Christian's book, "The Alignment Problem," explores this concept through the lens of religion, game theory, and philosophy. Meanwhile, Jenny Odell's "How to Do Nothing" encourages us to appreciate the value of aimless activities and question the constant pursuit of objectives in our modern lives. Both works offer intriguing connections to the development of AI systems, which must have a specific objective function. These ideas invite us to reflect on what motivates us in life and the potential implications for intelligent machines.

    Recent Episodes from The Ezra Klein Show

    How an Open Democratic Convention Would Work

    After President Biden’s rough performance at the first presidential debate, the question of an open convention has roared to the front of Democratic politics. But how would an open convention work? What would be its risks? What would be its rewards? 

    In February, after I first made the case for an open Democratic convention, I interviewed Elaine Kamarck to better understand what an open convention would look like. She literally wrote the book on how we choose presidential candidates, “Primary Politics: Everything You Need to Know About How America Nominates Its Presidential Candidates.” But her background here isn’t just theory. She’s worked on four presidential campaigns and on 10 nominating conventions — for both Democrats and Republicans. She’s a member of the Democratic National Committee’s Rules Committee. And her explanation of the mechanics and dynamics of open conventions was, for me, extremely helpful. It’s even more relevant now than it was then. 

    Mentioned:

    The Lincoln Miracle by Ed Achorn

    Book Recommendations:

    All the King’s Men by Robert Penn Warren

    The Making of the President 1960 by Theodore H. White

    Quiet Revolution by Byron E. Shafer

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact checking by Michelle Harris, with Kate Sinclair and Kristin Lin. Our senior engineer is Jeff Geld. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

    This conversation was recorded in February 2024.

    The Ezra Klein Show
    July 02, 2024

    What Is the Democratic Party For?

    Top Democrats have closed ranks around Joe Biden since the debate. Should they? 

    Mentioned:

    “This Isn’t All Joe Biden’s Fault” by Ezra Klein

    “Democrats Have a Better Option Than Biden” by The Ezra Klein Show

    “Here’s How an Open Democratic Convention Would Work” with Elaine Kamarck on The Ezra Klein Show

    The Hollow Parties by Daniel Schlozman and Sam Rosenfeld

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This audio essay was produced by Rollin Hu and Kristin Lin. Fact-Checking by Jack McCordick and Michelle Harris. Mixing by Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Jeff Geld, Elias Isquith and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser.

    The Ezra Klein Show
    June 30, 2024

    After That Debate, the Risk of Biden Is Clear

    I joined my Times Opinion colleagues Ross Douthat and Michelle Cottle to discuss the debate — and what Democrats might do next.

    Mentioned:

    “The Biden and Trump Weaknesses That Don’t Get Enough Attention” by Ross Douthat

    “Trump’s Bold Vision for America: Higher Prices!” with Matthew Yglesias on The Ezra Klein Show

    “Democrats Have a Better Option Than Biden” on The Ezra Klein Show

    “Here’s How an Open Democratic Convention Would Work” with Elaine Kamarck on The Ezra Klein Show

    Gretchen Whitmer on The Interview

    “The Republican Party’s Decay Began Long Before Trump” with Sam Rosenfeld and Daniel Schlozman on The Ezra Klein Show

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    The Ezra Klein Show
    June 28, 2024

    Trump’s Bold Vision for America: Higher Prices!

    Donald Trump has made inflation a central part of his campaign message. At his rallies, he rails against “the Biden inflation tax” and “crooked Joe’s inflation nightmare,” and promises that in a second Trump term, “inflation will be in full retreat.”

    But if you look at Trump’s actual policies, that wouldn’t be the case at all. Trump has a bold, ambitious agenda to make prices much, much higher. He’s proposing a 10 percent tariff on imported goods, and a 60 percent tariff on products from China. He wants to deport huge numbers of immigrants. And he’s made it clear that he’d like to replace the Federal Reserve chair with someone more willing to take orders from him. It’s almost unimaginable to me that you would run on this agenda at a time when Americans are so mad about high prices. But I don’t think people really know that’s what Trump is vowing to do.

    So to drill into the weeds of Trump’s plans, I decided to call up an old friend. Matt Yglesias is a Bloomberg Opinion columnist and the author of the Slow Boring newsletter, where he’s been writing a lot about Trump’s proposals. We also used to host a policy podcast together, “The Weeds.”

    In this conversation, we discuss what would happen to the economy, especially in terms of inflation, if Trump actually did what he says he wants to do; what we can learn from how Trump managed the economy in his first term; and why more people aren’t sounding the alarm.

    Mentioned:

    “Trump’s new economic plan is terrible” by Matthew Yglesias

    “Never mind: Wall Street titans shake off qualms and embrace Trump” by Sam Sutton

    “How Far Trump Would Go” by Eric Cortellessa

    Book Recommendations:

    Take Back the Game by Linda Flanagan

    1177 B.C. by Eric H. Cline

    The Rise of the G.I. Army, 1940-1941 by Paul Dickson

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Kate Sinclair and Mary Marge Locker. Mixing by Isaac Jones, with Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero, Adam Posen and Michael Strain.

    The Ezra Klein Show
    June 21, 2024

    The Biggest Political Divide Is Not Left vs. Right

    The biggest divide in our politics isn’t between Democrats and Republicans, or even left and right. It’s between people who follow politics closely, and those who pay almost no attention to it. If you’re in the former camp — and if you’re reading this, you probably are — the latter camp can seem inscrutable. These people hardly ever look at political news. They hate discussing politics. But they do care about issues and candidates, and they often vote.

    As the 2024 election takes shape, this bloc appears crucial to determining who wins the presidency. An NBC News poll from April found that 15 percent of voters don’t follow political news, and Donald Trump was winning them by 26 points.

    Yanna Krupnikov studies exactly this kind of voter. She’s a professor of communication and media at the University of Michigan and an author, with John Barry Ryan, of “The Other Divide: Polarization and Disengagement in American Politics.” The book examines how the chasm between the deeply involved and the less involved shapes politics in America. I’ve found it to be a helpful guide for understanding one of the most crucial dynamics emerging in this year’s election: the swing to Trump from President Biden among disengaged voters.

    In this conversation, we discuss how politically disengaged voters relate to politics; where they get their information about politics and how they form opinions; and whether major news events, like Trump’s recent conviction, might sway them.

    Mentioned:

    “The ‘Need for Chaos’ and Motivations to Share Hostile Political Rumors” by Michael Bang Petersen, Mathias Osmundsen and Kevin Arceneaux

    Hooked by Markus Prior

    “The Political Influence of Lifestyle Influencers? Examining the Relationship Between Aspirational Social Media Use and Anti-Expert Attitudes and Beliefs” by Ariel Hasell and Sedona Chinn

    “One explanation for the 2024 election’s biggest mystery” by Eric Levitz

    Book Recommendations:

    What Goes Without Saying by Taylor N. Carlson and Jaime E. Settle

    Through the Grapevine by Taylor N. Carlson

    Sorry I’m Late, I Didn’t Want to Come by Jessica Pan

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Efim Shapiro and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 18, 2024

    The View From the Israeli Right

    On Tuesday I got back from an eight-day trip to Israel and the West Bank. I happened to be there on the day that Benny Gantz resigned from the war cabinet and called on Prime Minister Benjamin Netanyahu to schedule new elections, breaking the unity government that Israel had had since shortly after Oct. 7.

    There is no viable left wing in Israel right now. There is a coalition that Netanyahu leads stretching from right to far right and a coalition that Gantz leads stretching from center to right. In the early months of the war, Gantz appeared ascendant as support for Netanyahu cratered. But now Netanyahu’s poll numbers are ticking back up.

    So one thing I did in Israel was deepen my reporting on Israel’s right. And there, Amit Segal’s name kept coming up. He’s one of Israel’s most influential political analysts and the author of “The Story of Israeli Politics,” which is coming out in English.

    Segal and I talked about the political differences between Gantz and Netanyahu, the theory of security that’s emerging on the Israeli right, what happened to the Israeli left, the threat from Iran and Hezbollah and how Netanyahu is trying to use President Biden’s criticism to his political advantage.

    Mentioned:

    “Biden May Spur Another Netanyahu Comeback” by Amit Segal

    Book Recommendations:

    The Years of Lyndon Johnson Series by Robert A. Caro

    The World of Yesterday by Stefan Zweig

    The Object of Zionism by Zvi Efrat

    The News from Waterloo by Brian Cathcart

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Claire Gordon. Fact-checking by Michelle Harris with Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 14, 2024

    The Economic Theory That Explains Why Americans Are So Mad

    There’s something weird happening with the economy. On a personal level, most Americans say they’re doing pretty well right now. And according to the data, that’s true. Wages have gone up faster than inflation. Unemployment is low, the stock market is generally up so far this year, and people are buying more stuff.

    And yet in surveys, people keep saying the economy is bad. A recent Harris poll for The Guardian found that around half of Americans think the S&P 500 is down this year, and that unemployment is at a 50-year high. Fifty-six percent think we’re in a recession.

    There are many theories about why this gap exists. Maybe political polarization is warping how people see the economy or it’s a failure of President Biden’s messaging, or there’s just something uniquely painful about inflation. And while there’s truth in all of these, it felt like a piece of the story was missing.

    And for me, that missing piece was an article I read right before the pandemic. An Atlantic story from February 2020 called “The Great Affordability Crisis Breaking America.” It described how some of Americans’ biggest-ticket expenses — housing, health care, higher education and child care — which were already pricey, had been getting steadily pricier for decades.

    At the time, prices weren’t the big topic in the economy; the focus was more on jobs and wages. So it was easier for this trend to escape notice, quietly putting more and more strain on American budgets, like a frog in slowly boiling water. But today, after years of high inflation, prices are the biggest topic in the economy. And I think that explains the anger people feel: They’re noticing the price of things all the time, and getting hammered with the reality of how expensive these things have become.

    The author of that Atlantic piece is Annie Lowrey. She’s an economics reporter, the author of Give People Money, and also my wife. In this conversation, we discuss how the affordability crisis has collided with our post-pandemic inflationary world, the forces that shape our economic perceptions, why people keep spending as if prices aren’t a strain and what this might mean for the presidential election.

    Mentioned:

    “It Will Never Be a Good Time to Buy a House” by Annie Lowrey

    Book Recommendations:

    Franchise by Marcia Chatelain

    A Place of Greater Safety by Hilary Mantel

    Nickel and Dimed by Barbara Ehrenreich

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Efim Shapiro and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones and Aman Sahota. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 07, 2024

    The Republican Party’s Decay Began Long Before Trump

    After Donald Trump was convicted last week in his hush-money trial, Republican leaders wasted no time in rallying behind him. There was no chance the Republican Party was going to replace Trump as their nominee at this point. Trump has essentially taken over the G.O.P.; his daughter-in-law is even co-chair of the Republican National Committee.

    How did the Republican Party get so weak that it could fall victim to a hostile takeover?

    Daniel Schlozman and Sam Rosenfeld are the authors of “The Hollow Parties: The Many Pasts and Disordered Present of American Party Politics,” which traces how both major political parties have been “hollowed out” over the decades, transforming once-powerful gatekeeping institutions into mere vessels for the ideologies of specific candidates. And they argue that this change has been perilous for our democracy.

    In this conversation, we discuss how the power of the parties has been gradually chipped away; why the Republican Party became less ideological and more geared around conflict; the merits of a stronger party system; and more.

    Mentioned:

    “Democrats Have a Better Option Than Biden” by The Ezra Klein Show

    “Here’s How an Open Democratic Convention Would Work” by The Ezra Klein Show with Elaine Kamarck

    Book Recommendations:

    The Two Faces of American Freedom by Aziz Rana

    Rainbow’s End by Steven P. Erie

    An American Melodrama by Lewis Chester, Godfrey Hodgson, Bruce Page

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Elias Isquith. Fact-checking by Michelle Harris, with Mary Marge Locker, Kate Sinclair and Rollin Hu. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota and Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 04, 2024

    Your Mind Is Being Fracked

    The steady dings of notifications. The 40 tabs that greet you when you open your computer in the morning. The hundreds of unread emails, most of them spam, with subject lines pleading or screaming for you to click. Our attention is under assault these days, and most of us are familiar with the feeling that gives us — fractured, irritated, overwhelmed.

    D. Graham Burnett calls the attention economy an example of “human fracking”: With our attention in shorter and shorter supply, companies are going to even greater lengths to extract this precious resource from us. And he argues that it’s now reached a point that calls for a kind of revolution. “This is creating conditions that are at odds with human flourishing. We know this,” he tells me. “And we need to mount new forms of resistance.”

    Burnett is a professor of the history of science at Princeton University and is working on a book about the laboratory study of attention. He’s also a co-founder of the Strother School of Radical Attention, which is a kind of grass roots, artistic effort to create a curriculum for studying attention.

    In this conversation, we talk about how the 20th-century study of attention laid the groundwork for today’s attention economy, the connection between changing ideas of attention and changing ideas of the self, how we even define attention (this episode is worth listening to for Burnett’s collection of beautiful metaphors alone), whether the concern over our shrinking attention spans is simply a moral panic, what it means to teach attention and more.

    Mentioned:

    Friends of Attention

    “The Battle for Attention” by Nathan Heller

    “Powerful Forces Are Fracking Our Attention. We Can Fight Back.” by D. Graham Burnett, Alyssa Loh and Peter Schmidt

    Scenes of Attention edited by D. Graham Burnett and Justin E. H. Smith

    Book Recommendations:

    Addiction by Design by Natasha Dow Schüll

    Objectivity by Lorraine Daston and Peter L. Galison

    The Confidence-Man by Herman Melville

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu and Kristin Lin. Fact-checking by Michelle Harris, with Mary Marge Locker and Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Isaac Jones and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Elias Isquith. Original music by Isaac Jones and Aman Sahota. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    May 31, 2024

    ‘Artificial Intelligence?’ No, Collective Intelligence.

    A.I.-generated art has flooded the internet, and a lot of it is derivative, even boring or offensive. But what could it look like for artists to collaborate with A.I. systems in making art that is actually generative, challenging, transcendent?

    Holly Herndon offered one answer with her 2019 album “PROTO.” Along with Mathew Dryhurst and the programmer Jules LaPlace, she built an A.I. called “Spawn” trained on human voices that adds an uncanny yet oddly personal layer to the music. Beyond her music and visual art, Herndon is trying to solve a problem that many creative people are encountering as A.I. becomes more prominent: How do you encourage experimentation without stealing others’ work to train A.I. models? Along with Dryhurst, Jordan Meyer and Patrick Hoepner, she co-founded Spawning, a company figuring out how to allow artists — and all of us creating content on the internet — to “consent” to our work being used as training data.

    In this conversation, we discuss how Herndon collaborated with a human chorus and her “A.I. baby,” Spawn, on “PROTO”; how A.I. voice imitators grew out of electronic music and other musical genres; why Herndon prefers the term “collective intelligence” to “artificial intelligence”; why an “opt-in” model could help us retain more control of our work as A.I. trawls the internet for data; and much more.

    Mentioned:

    “Fear, Uncertainty, Doubt” by Holly Herndon

    “xhairymutantx” by Holly Herndon and Mat Dryhurst, for the Whitney Museum of Art

    “Fade” by Holly Herndon

    “Swim” by Holly Herndon

    “Jolene” by Holly Herndon and Holly+

    “Movement” by Holly Herndon

    “Chorus” by Holly Herndon

    “Godmother” by Holly Herndon

    “The Precision of Infinity” by Jlin and Philip Glass

    Holly+

    Book Recommendations:

    Intelligence and Spirit by Reza Negarestani

    Children of Time by Adrian Tchaikovsky

    Plurality by E. Glen Weyl, Audrey Tang and ⿻ Community

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Jack Hamilton.

    The Ezra Klein Show
    May 24, 2024

    Related Episodes

    Can Machines Truly Think? And Can They Pass The Turing Test?

    Today we explore the age-old question of whether machines can think. Looking at definitions of "thinking," philosophical perspectives, and Alan Turing's famous test, we found that machines can now excel at many narrow tasks, but general human-level cognition remains elusive. Current AI excels at optimization, pattern recognition, and quantitative performance but still lacks abilities like creativity, reasoning, and consciousness that are hallmarks of human thought. Exciting innovations are emerging in natural language processing and neural networks that may continue to blur the lines between artificial and biological intelligence.

    But for now, while machines have come a long way, the essence of human thinking remains difficult to replicate artificially. How we ethically combine the complementary strengths of humans and AI promises to be an increasingly important conversation as technology progresses.

    Want more AI Infos for Beginners? 📧 Join our Newsletter!

    This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output. Music credit: "Modern Situations by Unicorn Heads"

    Ep. 3 - Artificial Intelligence: Opening Thoughts on the Most Important Trend of our Era

    Artificial Intelligence has already changed the way we all live our lives. Recent technological advancements have accelerated the use of AI by ordinary people to answer fairly ordinary questions. It is becoming clear that AI will fundamentally change many aspects of our society and create huge opportunities and risks. In this episode, Brian J. Matos shares his preliminary thoughts on AI in the context of how it may impact global trends and geopolitical issues. He poses foundational questions about how we should think about the very essence of AI and offers his view on the most practical implications of living in an era of advanced machine thought processing. From medical testing to teaching to military applications and international diplomacy, AI will likely speed up discoveries while forcing us to quickly determine how its use is governed in the best interest of the global community.

    Join the conversation and share your views on AI. E-mail: info@brianjmatos.com or find Brian on your favorite social media platform. 

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2
    We continue part two of a really important conversation with the incredible Konstantin Kisin, challenging the status quo and asking the bold questions that need answers if we're going to navigate these times well. As we delve into this, we'll also explore why we might need a new set of rules – not just to survive, but to seize opportunities and safely navigate the dangers of our rapidly evolving world. Konstantin Kisin brings to light some profound insights, delivering simple statements packed with layers of meaning that we're going to unravel during our discussion:

    - The stark difference between masculinity and power
    - Defining Alpha and Beta males
    - Becoming resilient means being unf*ckable with

    Buckle up for the conclusion of this episode, filled with thought-provoking insights and hard-hitting truths about what it takes to get through hard days and rough times.

    Follow Konstantin Kisin:
    Website: http://konstantinkisin.com/
    Twitter: https://twitter.com/KonstantinKisin
    Podcast: https://www.triggerpod.co.uk/
    Instagram: https://www.instagram.com/konstantinkisin/

    167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson

    In this highly anticipated sequel to our first AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University.

    Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.

    ------
    ✨ DEBRIEF | Unpacking the episode: 
    https://www.bankless.com/debrief-robin-hanson  
     
    ------
    ✨ COLLECTIBLES | Collect this episode: 
    https://collectibles.bankless.com/mint 

    ------
    ✨ NEW BANKLESS PRODUCT | Token Hub
    https://bankless.cc/TokenHubRSS  

    ------
    In this episode, we explore:

    - Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
    - The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
    - Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
    - A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
    - Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

    Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

    ------
    BANKLESS SPONSOR TOOLS: 

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum 

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://bankless.cc/kraken 

    🦄UNISWAP | ON-CHAIN MARKETPLACE
    https://bankless.cc/uniswap 

    👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
    https://bankless.cc/phantom-waitlist 

    🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE
    https://bankless.cc/MetaMask 

    ------
    Topics Covered

    0:00 Intro
    8:42 How Robin is Weird
    10:00 Are We All Going to Die?
    13:50 Eliezer’s Assumption 
    25:00 Intelligence, Humans, & Evolution 
    27:31 Eliezer Counter Point 
    32:00 Acceleration of Change 
    33:18 Comparing & Contrasting Eliezer’s Argument
    35:45 A New Life Form
    44:24 AI Improving Itself
    47:04 Self Interested Acting Agent 
    49:56 Human Displacement? 
    55:56 Many AIs 
    1:00:18 Humans vs. Robots 
    1:04:14 Pause or Continue AI Innovation?
    1:10:52 Quiet Civilization 
    1:14:28 Grabby Aliens 
    1:19:55 Are Humans Grabby?
    1:27:29 Grabby Aliens Explained 
    1:36:16 Cancer 
    1:40:00 Robin’s Thoughts on Crypto 
    1:42:20 Closing & Disclaimers 

    ------
    Resources:

    Robin Hanson
    https://twitter.com/robinhanson 

    Eliezer Yudkowsky on Bankless
    https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky 

    What is the AI FOOM debate?
    https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate 

    Age of Em book - Robin Hanson
    https://ageofem.com/ 

    Grabby Aliens
    https://grabbyaliens.com/ 

    Kurzgesagt video
    https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s 

    -----
    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures