
    Podcast Summary

    • AI systems don't truly understand the content they generate
      Despite generating human-like text, AI lacks understanding and could lead to an overwhelming amount of 'bullshit' content.

      While AI systems like ChatGPT can generate human-like text with impressive fluency, they don't actually understand the content they're producing. They synthesize responses by predicting the next word in a sentence based on patterns learned from vast amounts of text data. This can result in plausible-sounding answers, but the systems have no true understanding of the concepts they're discussing. Gary Marcus, a leading voice in AI skepticism, argues that this lack of understanding becomes a serious problem as the cost of producing AI-generated content is driven to zero. The persuasiveness and flexibility of these systems could lead to an overwhelming amount of "bullshit": content that has no real relationship to the truth. As we continue to explore the capabilities of AI, it's important to remember that these systems don't actually understand what they're saying or doing, and we should approach their outputs with a critical eye.
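The next-word mechanism described above can be illustrated with a toy sketch (my own illustration, not from the episode): a counter that picks each continuation purely by how often it followed the previous word in its training text, with no notion of meaning.

```python
from collections import Counter, defaultdict

# Toy illustration of the mechanism described above: choosing the next word
# purely from co-occurrence statistics, with no grasp of meaning. This is a
# bigram counter, vastly simpler than GPT, but the principle is the same:
# the continuation is whatever most often followed the previous word.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The output is fluent-looking locally, yet nothing in the program represents what a cat or a mat is, which is the point the episode makes about scaled-up versions of the same idea.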

    • Humans vs. GPT-3: Understanding vs. Pastiche
      Humans build internal models of the world based on experiences and knowledge, while GPT-3 just combines phrases without true understanding.

      While GPT-3, the technology behind ChatGPT, excels at pastiche, cutting, pasting, and imitating styles, it doesn't truly understand the connections between the ideas it combines. A human discussing a complex topic, by contrast, draws on internal models of the world and accumulated knowledge rather than merely averaging text. Still, the discussion raised an interesting question: aren't humans also "kings of pastiche"? We may not fully understand everything we talk about, but we do build internal models of the world from our experiences and knowledge. GPT-3, on the other hand, simply stitches text together without grasping its meaning. Understanding involves more than combining phrases; it also involves interpreting intentions and indirect meanings. So while GPT-3 may be a master of pastiche, it's still a long way from understanding the world as humans do.

    • Neural networks aren't as complex as the human brain
      Despite improving in specific areas, neural networks don't have deep understanding or reliability, contradicting the belief that larger systems will lead to general intelligence.

      While neural networks, such as those built by OpenAI, can be pictured as energy flowing through a network, they do not function in the same complex, structured way as the human brain. The belief that simply making these systems larger, with more data, will lead to general intelligence, sometimes referred to as "Moore's Law for Everything," is a misconception. Neural networks can improve in certain areas, like language processing, but they are not making significant progress toward truthfulness and reliability, which are crucial for achieving artificial general intelligence. The current systems are neither reliable nor truthful, as Sam Altman, the CEO of OpenAI, was forced to admit after the initial excitement about ChatGPT. These systems have no deep understanding of the world; they are simply trying to auto-complete sentences, and to think otherwise is to indulge in mysticism. The goal should be systems with genuine comprehension that can accurately represent the truth.

    • AI's Indifference to Truth and the Danger of Misinformation
      AI models can generate misinformation by disregarding truth, making it difficult to distinguish fact from fiction, and posing a significant threat to society.

      While advanced AI models like ChatGPT and GPT-3 can generate impressive and often accurate responses, they lack any fundamental orientation toward truth, operating by synthesizing and pastiching existing information. This can lead to a concerning proliferation of misinformation, as these systems can generate convincing but false narratives, making it increasingly difficult to distinguish truth from bullshit. Harry Frankfurt's philosophical essay on the subject explains that bullshit is not necessarily false but phony, and that its danger lies in its indifference to the truth. The implications are significant: as the cost of producing and disseminating misinformation approaches zero, a tidal wave of it could threaten the fabric of society. This is not just a theoretical concern; we've already seen AI used to generate false answers on Stack Overflow. Developing new AI technologies to detect and counteract this misinformation is crucial to mitigating these risks.

    • The Challenge of Discernment in the Age of AI-Generated Content
      As AI systems generate increasingly convincing content, it becomes harder for individuals to discern fact from fiction, potentially leading to widespread distrust and legal issues. We need to find ways to navigate this new landscape of information.

      As we increasingly rely on AI systems like ChatGPT for information, the potential for misinformation and the inability to discern fact from fiction becomes a significant concern. The scale and plausibility of AI-generated content make it more challenging for individuals to judge the authenticity of information, potentially leading to widespread distrust and even lawsuits. The issue is not just about AI systems being held to a higher standard than society itself, but also about the difference in scale and plausibility compared to traditional misinformation. While we have some practices for evaluating the legitimacy of websites and sources, the uniformity and authority of AI-generated content may lead people to assume it's all true or false by default. The potential threat to both websites and search engines themselves is a serious concern. It's important to remember that AI systems are currently capable of generating stylistically convincing content, and we need to find ways to navigate this new landscape of information.

    • AI can create persuasive, targeted content with little truth value
      AI language models like GPT-3 can generate stylistically flexible content, posing a risk of personalized propaganda, misinformation, or spam. An ad-based revenue model built on search engine optimization worsens the problem of misleading content online.

      The advancements in AI-generated content, specifically language models like GPT-3, have the potential to create highly persuasive, targeted content with little to no truth value. This is concerning because these technologies can be used for personalized propaganda, misinformation, or even spam. The ability to create stylistically flexible content without internal understanding or morality is particularly valuable to advertising-based businesses, where the goal is to get users to take a desired action rather than to prioritize truthfulness or accuracy. Much of the web's revenue model is based on search engine optimization, leading to hoax websites that exist solely to sell ads. This trend could significantly worsen the problem of misleading or irrelevant content on the internet, making it even harder for users to discern truth from fiction.

    • Challenges of Misinformation and Large Language Models
      Despite growing in size, large language models may not become more trustworthy or capable of comprehension, contributing to a "shadow economy" of misinformation in the digital world.

      The digital world is facing a significant challenge with the proliferation of fake websites and large language models contributing to the creation and dissemination of misinformation. These entities exist primarily to sell ads and manipulate search engine algorithms, leading to a "shadow economy." However, a 2022 paper from Google revealed that making these language models larger does not necessarily increase their trustworthiness or ability to understand larger pieces of text. This is a concerning development as these models are increasingly being used to generate content and answer queries. The industry is only beginning to recognize the seriousness of this issue, and more research is needed to develop effective benchmarks for evaluating truthfulness and comprehension. Despite these challenges, there is an optimistic view that as these models continue to grow, they may also advance knowledge and create innovative content. However, it is crucial to address the issues of misinformation and comprehension limitations to ensure the accuracy and reliability of the information generated by these models.

    • AI systems struggle with complex context and consequences
      AI systems like ChatGPT can generate responses, but lack a deep understanding of complex context and consequences, leading to superficial responses and guardrails.

      While AI systems like ChatGPT have made significant strides, they still face challenges in understanding complex context and consequences. The example of a children's story about a lost wallet illustrates this, as the system struggles to understand the difference between finding different amounts of money. The system also fails to grasp the meaning of words like "unwittingly" in a four-paragraph essay. Additionally, the system's guardrails can sometimes be excessive, as shown in its inability to predict the gender or religion of hypothetical first presidents. The key issue is that these systems lack a deep understanding of the world, leading to superficial responses and guardrails. Despite these challenges, the potential benefits of creating intelligent artificial systems are significant and positive. The speaker, who has a background in AI, emphasizes the importance of improving these systems and ensuring they truly understand the world to avoid potential dangers and misunderstandings.

    • Focusing too much on pattern recognition in AI research
      We need a more holistic approach to AI research, incorporating a broader understanding of cognitive science and human intelligence, to create transformative advances in various fields and ensure alignment with human values.

      While deep learning and AI have shown great promise, particularly in areas like pattern recognition, the field is currently overly focused on this one aspect of cognitive science. The human mind is complex and multifaceted, involving not just pattern recognition but also language use, analogy making, planning, and more. By ignoring these other aspects, we risk creating AI systems that are mediocre at best and potentially harmful at worst. Instead, we should strive for a more holistic approach to AI research, one that incorporates a broader understanding of cognitive science and human intelligence. This approach could lead to transformative advances in fields like healthcare, climate change, and elder care, but it will require a shift in focus and a commitment to a more comprehensive understanding of intelligence. Additionally, it's crucial to ensure that AI aligns with human values and doesn't exacerbate societal polarization or spread misinformation.

    • Hybrid Approach to AI: Bridging Neural Networks and Symbol Manipulation
      The future of AI lies in a hybrid approach that combines the strengths of neural networks and symbol manipulation, allowing for more effective handling of complex tasks.

      The field of artificial intelligence (AI) has seen a narrow focus on neural networks and deep learning over the last decade, but there's a need for a hybrid approach that uses both neural networks and symbol manipulation. Neural networks, as in deep learning, involve turning a system loose on a large trove of data and letting it derive relationships within that data, while symbol manipulation, which underlies algebra, logic, and programming, uses abstract symbols to describe and operate on data. The world is too complex for a pure neural network approach to handle every task effectively, and symbol manipulation remains essential for building systems like GPS navigation, web browsers, and word processors. The history of AI has seen conflict between those who favor neural networks and those who prefer symbol manipulation, but a hybrid approach could bridge the two traditions and leverage the strengths of both.
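A minimal sketch of what "hybrid" can mean in practice (a toy of my own, far simpler than any real hybrid system): route each request either to a symbolic module that computes an exact answer by explicit rule, or to a pattern-matching fallback that stands in for the learned, neural side.

```python
import re

# Hypothetical routing sketch: symbol manipulation for tasks with exact
# rules (arithmetic), pattern matching for everything else. Real hybrid
# systems are far richer; this only shows why both pieces are useful.

def symbolic_add(text):
    """Symbol manipulation: parse 'a + b' and compute it exactly."""
    m = re.fullmatch(r"\s*(-?\d+)\s*\+\s*(-?\d+)\s*", text)
    return int(m.group(1)) + int(m.group(2)) if m else None

def pattern_answer(text, memory):
    """Pattern side: answer by the most similar remembered question."""
    # Crude word overlap stands in for learned similarity here.
    words = set(text.lower().split())
    best = max(memory, key=lambda q: len(words & set(q.lower().split())))
    return memory[best]

def answer(text, memory):
    # Prefer the exact symbolic module when it applies; otherwise fall back.
    exact = symbolic_add(text)
    return exact if exact is not None else pattern_answer(text, memory)

memory = {"what color is the sky": "blue", "who wrote hamlet": "Shakespeare"}
print(answer("123456789 + 987654321", memory))  # exact arithmetic: 1111111110
print(answer("what color is the sky today", memory))  # pattern match: blue
```

The arithmetic branch never approximates, no matter how large the numbers, while the pattern branch degrades gracefully on fuzzy input; that division of labor is the intuition behind the hybrid argument.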

    • Specialized AI systems on the horizon
      The future of AI development may involve specialized systems, each excelling in specific components of tasks, instead of relying on one large system for all purposes. Current AI systems have limitations in areas like causal reasoning, temporal reasoning, and explicit programming of values, which must be addressed for beneficial AI development.

      The future of artificial intelligence (AI) development will likely involve a shift towards specialized systems, each excelling in specific components of tasks, rather than relying on one large system for all purposes. This approach is expected to unfold over the next few decades, but the exact timeline and specific paradigm shifts required are uncertain. The current AI systems have limitations, particularly in areas like causal reasoning, temporal reasoning, and explicit programming of values. These challenges must be addressed to ensure beneficial AI development. The potential impact of advanced AI is significant, and the ethical implications, including the possibility of AI systems manipulating humans, are a cause for concern. The journey towards general intelligence is complex, and it may take several decades before we fully understand and overcome the current challenges.

    • Addressing the alignment problem in AI
      Ensuring AI understands and acts according to human values and intentions is a complex challenge, requiring a shift from gathering more data to understanding human intent and building a strong foundation of common ground between humans and AI.

      Despite the advancements in AI technology, there are significant concerns about the alignment problem – ensuring that AI understands and acts in accordance with human values and intentions. OpenAI, a company founded with concerns about AI misalignment, continues to develop powerful AI systems. While catastrophic scenarios like machines taking over the world may not be the primary concern, there's a risk of AI causing harm through misunderstanding human requests. The dominant approach in Silicon Valley to solve these problems is by gathering more data, but this approach falls short in understanding human intent. Building models of human intent and understanding the nuances of human language and context is a complex challenge. The risks of AI misalignment increase when systems have both power and potential lack of understanding. To mitigate these risks, it's crucial to ensure that AI systems have a strong foundation of common ground with humans and can handle new, unlabeled situations effectively. The alignment problem requires a rethinking of the current paradigm and a focus on understanding human values and intentions.

    • Understanding the limitations of current large language models
      Current large language models operate as "black boxes" and can't directly follow rules or constraints, but promising developments like Cicero and MRKL aim to integrate symbolic systems with deep learning for more sophisticated AI.

      Current large language models, such as GPT, operate as "black box" systems: we don't fully understand how they process information or make decisions. These models can't directly follow explicit rules or constraints we set for them, and they lack the ability to understand the context of true statements or to avoid potentially harmful outputs. However, there are promising developments in the field, such as Meta's Cicero diplomacy system, which uses a more structured, modular approach combining symbolic systems with deep learning. This approach is more in line with cognitive science and could lead to more sophisticated and effective AI systems. Another interesting development is AI21's MRKL system, which attempts to integrate large language models with symbolic modules. While these advancements don't yet provide a definitive path to artificial general intelligence, they represent progress toward more nuanced and capable AI systems.

    • Exploring new intellectual experiences and business models for AI
      There's a need to consider alternative business models for AI beyond data scaling and ad-based revenue, such as international collaborations or narrow solutions. Historical precedents like the Manhattan Project offer inspiration for coordinated efforts.

      The current focus on data scaling and technological advancement in AI may not be sufficient, and there is growing recognition of the need to explore new intellectual directions and consider alternative business models. The conversation touched on the challenges of creating a safe and effective business model for AI, with large companies like Google and Meta, which are primarily advertising-based businesses, leading the way. The idea of a CERN-like international collaboration for AI was suggested as a potential solution, given the massive resources required for general intelligence research, though creating such a collaboration is difficult given the individualistic nature of research. The Manhattan Project, which brought a coordinated effort to bear with remarkable results, was cited as a historical inspiration. The speaker proposed building an AI focused on reading comprehension for medicine as a potential business model, though it may not align with the interests of large companies. OpenAI's business model, which seems centered on helping people with their writing, was also discussed, and it remains to be seen whether it can generate the significant revenue that general intelligence research requires. The conversation also touched on the historical pattern of narrow solutions outperforming general ones, and the immense potential market for a truly general intelligent system.

    • Understanding the Limits of Surveillance Capitalism's Language Comprehension
      Marcus suggests that building AGI for a 7 percent improvement could cost a hundred billion dollars, so companies opt for narrow solutions instead. Recommended books: "The Language Instinct," "How the World Really Works," and "The Martian."

      While surveillance capitalism has made significant profits by predicting user interests from superficial features, it falls short of true language comprehension and reasoning. Gary Marcus believes that building an artificial general intelligence might improve things by 7 percent, but at a cost of perhaps a hundred billion dollars; instead, big companies opt for narrow solutions that tackle specific problems. Marcus recommends three books to deepen understanding of these concepts. First, "The Language Instinct" by Steven Pinker, which clarifies the distinction between predicting sequences of words and comprehending language, and emphasizes the importance of innateness. Second, "How the World Really Works" by Vaclav Smil, which offers a sober look at many aspects of everyday life and resonates with Marcus's approach to AI. Lastly, "The Martian" by Andy Weir, a novel that encourages a scientific approach to solving complex problems.

    Recent Episodes from The Ezra Klein Show

    Trump’s Bold Vision for America: Higher Prices!

    Donald Trump has made inflation a central part of his campaign message. At his rallies, he rails against “the Biden inflation tax” and “crooked Joe’s inflation nightmare,” and promises that in a second Trump term, “inflation will be in full retreat.”

    But if you look at Trump’s actual policies, that wouldn’t be the case at all. Trump has a bold, ambitious agenda to make prices much, much higher. He’s proposing a 10 percent tariff on imported goods, and a 60 percent tariff on products from China. He wants to deport huge numbers of immigrants. And he’s made it clear that he’d like to replace the Federal Reserve chair with someone more willing to take orders from him. It’s almost unimaginable to me that you would run on this agenda at a time when Americans are so mad about high prices. But I don’t think people really know that’s what Trump is vowing to do.

    So to drill into the weeds of Trump’s plans, I decided to call up an old friend. Matt Yglesias is a Bloomberg Opinion columnist and the author of the Slow Boring newsletter, where he’s been writing a lot about Trump’s proposals. We also used to host a policy podcast together, “The Weeds.”

    In this conversation, we discuss what would happen to the economy, especially in terms of inflation, if Trump actually did what he says he wants to do; what we can learn from how Trump managed the economy in his first term; and why more people aren’t sounding the alarm.

    Mentioned:

    “Trump’s new economic plan is terrible” by Matthew Yglesias

    “Never mind: Wall Street titans shake off qualms and embrace Trump” by Sam Sutton

    “How Far Trump Would Go” by Eric Cortellessa

    Book Recommendations:

    Take Back the Game by Linda Flanagan

    1177 B.C. by Eric H. Cline

    The Rise of the G.I. Army, 1940-1941 by Paul Dickson

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Kate Sinclair and Mary Marge Locker. Mixing by Isaac Jones, with Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero, Adam Posen and Michael Strain.

    The Ezra Klein Show
    June 21, 2024

    The Biggest Political Divide Is Not Left vs. Right

    The biggest divide in our politics isn’t between Democrats and Republicans, or even left and right. It’s between people who follow politics closely, and those who pay almost no attention to it. If you’re in the former camp — and if you’re reading this, you probably are — the latter camp can seem inscrutable. These people hardly ever look at political news. They hate discussing politics. But they do care about issues and candidates, and they often vote.

    As the 2024 election takes shape, this bloc appears crucial to determining who wins the presidency. An NBC News poll from April found that 15 percent of voters don’t follow political news, and Donald Trump was winning them by 26 points.

    Yanna Krupnikov studies exactly this kind of voter. She’s a professor of communication and media at the University of Michigan and an author, with John Barry Ryan, of “The Other Divide: Polarization and Disengagement in American Politics.” The book examines how the chasm between the deeply involved and the less involved shapes politics in America. I’ve found it to be a helpful guide for understanding one of the most crucial dynamics emerging in this year’s election: the swing to Trump from President Biden among disengaged voters.

    In this conversation, we discuss how politically disengaged voters relate to politics; where they get their information about politics and how they form opinions; and whether major news events, like Trump’s recent conviction, might sway them.

    Mentioned:

    “The ‘Need for Chaos’ and Motivations to Share Hostile Political Rumors” by Michael Bang Petersen, Mathias Osmundsen and Kevin Arceneaux

    Hooked by Markus Prior

    “The Political Influence of Lifestyle Influencers? Examining the Relationship Between Aspirational Social Media Use and Anti-Expert Attitudes and Beliefs” by Ariel Hasell and Sedona Chinn

    “One explanation for the 2024 election’s biggest mystery” by Eric Levitz

    Book Recommendations:

    What Goes Without Saying by Taylor N. Carlson and Jaime E. Settle

    Through the Grapevine by Taylor N. Carlson

    Sorry I’m Late, I Didn’t Want to Come by Jessica Pan


    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Efim Shapiro and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 18, 2024

    The View From the Israeli Right

    On Tuesday I got back from an eight-day trip to Israel and the West Bank. I happened to be there on the day that Benny Gantz resigned from the war cabinet and called on Prime Minister Benjamin Netanyahu to schedule new elections, breaking the unity government that Israel had had since shortly after Oct. 7.

    There is no viable left wing in Israel right now. There is a coalition that Netanyahu leads stretching from right to far right and a coalition that Gantz leads stretching from center to right. In the early months of the war, Gantz appeared ascendant as support for Netanyahu cratered. But now Netanyahu’s poll numbers are ticking back up.

    So one thing I did in Israel was deepen my reporting on Israel’s right. And there, Amit Segal’s name kept coming up. He’s one of Israel’s most influential political analysts, and his book “The Story of Israeli Politics” is coming out in English.

    Segal and I talked about the political differences between Gantz and Netanyahu, the theory of security that’s emerging on the Israeli right, what happened to the Israeli left, the threat from Iran and Hezbollah and how Netanyahu is trying to use President Biden’s criticism to his political advantage.

    Mentioned:

    “Biden May Spur Another Netanyahu Comeback” by Amit Segal

    Book Recommendations:

    The Years of Lyndon Johnson Series by Robert A. Caro

    The World of Yesterday by Stefan Zweig

    The Object of Zionism by Zvi Efrat

    The News from Waterloo by Brian Cathcart


    This episode of “The Ezra Klein Show” was produced by Claire Gordon. Fact-checking by Michelle Harris with Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 14, 2024

    The Economic Theory That Explains Why Americans Are So Mad

    There’s something weird happening with the economy. On a personal level, most Americans say they’re doing pretty well right now. And according to the data, that’s true. Wages have gone up faster than inflation. Unemployment is low, the stock market is generally up so far this year, and people are buying more stuff.

    And yet in surveys, people keep saying the economy is bad. A recent Harris poll for The Guardian found that around half of Americans think the S&P 500 is down this year, and that unemployment is at a 50-year high. Fifty-six percent think we’re in a recession.

    There are many theories about why this gap exists. Maybe political polarization is warping how people see the economy or it’s a failure of President Biden’s messaging, or there’s just something uniquely painful about inflation. And while there’s truth in all of these, it felt like a piece of the story was missing.

    And for me, that missing piece was an article I read right before the pandemic. An Atlantic story from February 2020 called “The Great Affordability Crisis Breaking America.” It described how some of Americans’ biggest-ticket expenses — housing, health care, higher education and child care — which were already pricey, had been getting steadily pricier for decades.

    At the time, prices weren’t the big topic in the economy; the focus was more on jobs and wages. So it was easier for this trend to escape notice, quietly putting more and more strain on American budgets, like a frog in slowly boiling water. But today, after years of high inflation, prices are the biggest topic in the economy. And I think that explains the anger people feel: They’re noticing the price of things all the time, and getting hammered with the reality of how expensive these things have become.

    The author of that Atlantic piece is Annie Lowrey. She’s an economics reporter, the author of “Give People Money,” and also my wife. In this conversation, we discuss how the affordability crisis has collided with our post-pandemic inflationary world, the forces that shape our economic perceptions, why people keep spending as if prices aren’t a strain and what this might mean for the presidential election.

    Mentioned:

    “It Will Never Be a Good Time to Buy a House” by Annie Lowrey

    Book Recommendations:

    Franchise by Marcia Chatelain

    A Place of Greater Safety by Hilary Mantel

    Nickel and Dimed by Barbara Ehrenreich


    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Efim Shapiro and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones and Aman Sahota. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    June 07, 2024

    The Republican Party’s Decay Began Long Before Trump

    After Donald Trump was convicted last week in his hush-money trial, Republican leaders wasted no time in rallying behind him. There was no chance the Republican Party was going to replace Trump as their nominee at this point. Trump has essentially taken over the G.O.P.; his daughter-in-law is even co-chair of the Republican National Committee.

    How did the Republican Party get so weak that it could fall victim to a hostile takeover?

    Daniel Schlozman and Sam Rosenfeld are the authors of “The Hollow Parties: The Many Pasts and Disordered Present of American Party Politics,” which traces how both major political parties have been “hollowed out” over the decades, transforming once-powerful gatekeeping institutions into mere vessels for the ideologies of specific candidates. And they argue that this change has been perilous for our democracy.

    In this conversation, we discuss how the power of the parties has been gradually chipped away; why the Republican Party became less ideological and more geared around conflict; the merits of a stronger party system; and more.

    Mentioned:

“Democrats Have a Better Option Than Biden” by The Ezra Klein Show

“Here’s How an Open Democratic Convention Would Work” by The Ezra Klein Show with Elaine Kamarck

    Book Recommendations:

    The Two Faces of American Freedom by Aziz Rana

    Rainbow’s End by Steven P. Erie

    An American Melodrama by Lewis Chester, Godfrey Hodgson, Bruce Page

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Elias Isquith. Fact-checking by Michelle Harris, with Mary Marge Locker, Kate Sinclair and Rollin Hu. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota and Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    enJune 04, 2024

    Your Mind Is Being Fracked

    Your Mind Is Being Fracked

    The steady dings of notifications. The 40 tabs that greet you when you open your computer in the morning. The hundreds of unread emails, most of them spam, with subject lines pleading or screaming for you to click. Our attention is under assault these days, and most of us are familiar with the feeling that gives us — fractured, irritated, overwhelmed.

    D. Graham Burnett calls the attention economy an example of “human fracking”: With our attention in shorter and shorter supply, companies are going to even greater lengths to extract this precious resource from us. And he argues that it’s now reached a point that calls for a kind of revolution. “This is creating conditions that are at odds with human flourishing. We know this,” he tells me. “And we need to mount new forms of resistance.”

    Burnett is a professor of the history of science at Princeton University and is working on a book about the laboratory study of attention. He’s also a co-founder of the Strother School of Radical Attention, which is a kind of grass roots, artistic effort to create a curriculum for studying attention.

    In this conversation, we talk about how the 20th-century study of attention laid the groundwork for today’s attention economy, the connection between changing ideas of attention and changing ideas of the self, how we even define attention (this episode is worth listening to for Burnett’s collection of beautiful metaphors alone), whether the concern over our shrinking attention spans is simply a moral panic, what it means to teach attention and more.

    Mentioned:

    Friends of Attention

“The Battle for Attention” by Nathan Heller

“Powerful Forces Are Fracking Our Attention. We Can Fight Back.” by D. Graham Burnett, Alyssa Loh and Peter Schmidt

    Scenes of Attention edited by D. Graham Burnett and Justin E. H. Smith

    Book Recommendations:

    Addiction by Design by Natasha Dow Schüll

    Objectivity by Lorraine Daston and Peter L. Galison

    The Confidence-Man by Herman Melville

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu and Kristin Lin. Fact-checking by Michelle Harris, with Mary Marge Locker and Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Isaac Jones and Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Elias Isquith. Original music by Isaac Jones and Aman Sahota. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    The Ezra Klein Show
    enMay 31, 2024

    ‘Artificial Intelligence?’ No, Collective Intelligence.

    ‘Artificial Intelligence?’ No, Collective Intelligence.

    A.I.-generated art has flooded the internet, and a lot of it is derivative, even boring or offensive. But what could it look like for artists to collaborate with A.I. systems in making art that is actually generative, challenging, transcendent?

    Holly Herndon offered one answer with her 2019 album “PROTO.” Along with Mathew Dryhurst and the programmer Jules LaPlace, she built an A.I. called “Spawn” trained on human voices that adds an uncanny yet oddly personal layer to the music. Beyond her music and visual art, Herndon is trying to solve a problem that many creative people are encountering as A.I. becomes more prominent: How do you encourage experimentation without stealing others’ work to train A.I. models? Along with Dryhurst, Jordan Meyer and Patrick Hoepner, she co-founded Spawning, a company figuring out how to allow artists — and all of us creating content on the internet — to “consent” to our work being used as training data.

    In this conversation, we discuss how Herndon collaborated with a human chorus and her “A.I. baby,” Spawn, on “PROTO”; how A.I. voice imitators grew out of electronic music and other musical genres; why Herndon prefers the term “collective intelligence” to “artificial intelligence”; why an “opt-in” model could help us retain more control of our work as A.I. trawls the internet for data; and much more.

    Mentioned:

“Fear, Uncertainty, Doubt” by Holly Herndon

“xhairymutantx” by Holly Herndon and Mat Dryhurst, for the Whitney Museum of Art

“Fade” by Holly Herndon

“Swim” by Holly Herndon

“Jolene” by Holly Herndon and Holly+

“Movement” by Holly Herndon

“Chorus” by Holly Herndon

“Godmother” by Holly Herndon

“The Precision of Infinity” by Jlin and Philip Glass

    Holly+

    Book Recommendations:

    Intelligence and Spirit by Reza Negarestani

    Children of Time by Adrian Tchaikovsky

    Plurality by E. Glen Weyl, Audrey Tang and ⿻ Community

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Jack Hamilton.

    The Ezra Klein Show
    enMay 24, 2024

    A Conservative Futurist and a Supply-Side Liberal Walk Into a Podcast …

    A Conservative Futurist and a Supply-Side Liberal Walk Into a Podcast …

    “The Jetsons” premiered in 1962. And based on the internal math of the show, George Jetson, the dad, was born in 2022. He’d be a toddler right now. And we are so far away from the world that show imagined. There were a lot of future-trippers in the 1960s, and most of them would be pretty disappointed by how that future turned out.

    So what happened? Why didn’t we build that future?

    The answer, I think, lies in the 1970s. I’ve been spending a lot of time studying that decade in my work, trying to understand why America is so bad at building today. And James Pethokoukis has also spent a lot of time looking at the 1970s, in his work trying to understand why America is less innovative today than it was in the postwar decades. So Pethokoukis and I are asking similar questions, and circling the same time period, but from very different ideological vantages.

    Pethokoukis is a senior fellow at the American Enterprise Institute, and author of the book “The Conservative Futurist: How to Create the Sci-Fi World We Were Promised.” He also writes a newsletter called Faster, Please! “The two screamingly obvious things that we stopped doing is we stopped spending on science, research and development the way we did in the 1960s,” he tells me, “and we began to regulate our economy as if regulation would have no impact on innovation.”

    In this conversation, we debate why the ’70s were such an inflection point; whether this slowdown phenomenon is just something that happens as countries get wealthier; and what the government’s role should be in supporting and regulating emerging technologies like A.I.

    Mentioned:

“U.S. Infrastructure: 1929-2017” by Ray C. Fair

    Book Recommendations:

    Why Information Grows by Cesar Hidalgo

    The Expanse series by James S.A. Corey

    The American Dream Is Not Dead by Michael R. Strain

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris, with Mary Marge Locker and Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

    The Ezra Klein Show
    enMay 21, 2024

    The Disastrous Relationship Between Israel, Palestinians and the U.N.

    The Disastrous Relationship Between Israel, Palestinians and the U.N.

    The international legal system was created to prevent the atrocities of World War II from happening again. The United Nations partitioned historic Palestine to create the states of Israel and Palestine, but also left Palestinians with decades of false promises. The war in Gaza — and countless other conflicts, including those in Syria, Yemen and Ethiopia — shows how little power the U.N. and international law have to protect civilians in wartime. So what is international law actually for?

    Aslı Ü. Bâli is a professor at Yale Law School who specializes in international and comparative law. “The fact that people break the law and sometimes get away with it doesn’t mean the law doesn’t exist and doesn’t have force,” she argues.

    In this conversation, Bâli traces the gap between how international law is written on paper and the realpolitik of how countries decide to follow it, the U.N.’s unique role in the Israeli-Palestinian conflict from its very beginning, how the laws of war have failed Gazans but may be starting to change the conflict’s course, and more.

    Mentioned:

“With Schools in Ruins, Education in Gaza Will Be Hobbled for Years” by Liam Stack and Bilal Shbair

    Book Recommendations:

    Imperialism, Sovereignty and the Making of International Law by Antony Anghie

    Justice for Some by Noura Erakat

    Worldmaking After Empire by Adom Getachew

    The Constitutional Bind by Aziz Rana

    The United Nations and the Question of Palestine by Ardi Imseis

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota and Isaac Jones. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Carole Sabouraud.

    The Ezra Klein Show
    enMay 17, 2024

    This Is a Very Weird Moment in the History of Drug Laws

    This Is a Very Weird Moment in the History of Drug Laws

    Drug policy feels very unsettled right now. The war on drugs was a failure. But so far, the war on the war on drugs hasn’t entirely been a success, either.

    Take Oregon. In 2020, it became the first state in the nation to decriminalize hard drugs. It was a paradigm shift — treating drug-users as patients rather than criminals — and advocates hoped it would be a model for the nation. But then there was a surge in overdoses and public backlash over open-air drug use. And last month, Oregon’s governor signed a law restoring criminal penalties for drug possession, ending that short-lived experiment.

    Other states and cities have also tipped toward backlash. And there are a lot of concerns about how cannabis legalization and commercialization is working out around the country. So what did the supporters of these measures fail to foresee? And where do we go from here?

    Keith Humphreys is a professor of psychiatry at Stanford University who specializes in addiction and its treatment. He also served as a senior policy adviser in the Obama administration. I asked him to walk me through why Oregon’s policy didn’t work out; what policymakers sometimes misunderstand about addiction; the gap between “elite” drug cultures and how drugs are actually consumed by most people; and what better drug policies might look like.

    Mentioned:

    Oregon Health Authority data

    Book Recommendations:

    Drugs and Drug Policy by Mark A.R. Kleiman, Jonathan P. Caulkins and Angela Hawken

    Dopamine Nation by Anna Lembke

    Confessions of an English Opium Eater by Thomas De Quincey

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris, with Kate Sinclair and Mary Marge Locker. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota and Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    Related Episodes

    Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection

    Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection
    Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi. Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship. Sarah and Elad also discuss Mustafa’s upcoming book, The Coming Wave (release September 12, 2023), which examines the political ramifications of AI and digital biology revolutions.

    No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

    Show Links:

    Forbes - Startup From Reid Hoffman and Mustafa Suleyman Debuts ChatBot

    Inflection.ai

    Mustafa-Suleyman.ai

    Sign up for new podcasts every week. Email feedback to show@no-priors.com

    Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mustafasuleymn

    Show Notes:

    [00:06] - From Conflict Resolution to AI Pioneering

    [10:36] - Defining Intelligence

    [15:32] - DeepMind's Journey and Breakthroughs

    [24:45] - The Future of Personal AI Companionship

    [33:22] - AI and the Future of Personalized Content

    [41:49] - The Launch of Pi

    [51:12] - Mustafa’s New Book The Coming Wave

    Inside the Battle for Chips That Will Power Artificial Intelligence

    Inside the Battle for Chips That Will Power Artificial Intelligence

    Nobody knows for sure who is going to make all the money when it comes to artificial intelligence. Will it be the incumbent tech giants? Will it be startups? What will the business models look like? It's all up in the air. One thing is clear though — AI requires a lot of computing power, and that means demand for semiconductors. Right now, Nvidia has been a huge winner in the space, with their chips powering both the training of AI models (like ChatGPT) and the inference (the results of a query). But others want in on the action as well. So how big will this market be? Can other companies gain a foothold and "chip away" at Nvidia's dominance? On this episode we speak with Bernstein semiconductor analyst Stacy Rasgon about this rapidly growing space and who has a shot to win it.

    See omnystudio.com/listener for privacy information.

    Stephen Wolfram on AI’s rapid progress & the “Post-Knowledge Work Era” | E1711

    Stephen Wolfram on AI’s rapid progress & the “Post-Knowledge Work Era” | E1711

    Stephen Wolfram of Wolfram Research joins Jason for an all-encompassing conversation about AI, from the history of neural nets (7:53) to how modern AI emulates the human brain (19:33). This leads to an in-depth discussion about the pace at which AI is evolving (43:46), the “Post-Knowledge Work” era (58:45), the unintended consequences of AI (1:03:52), and so much more.

    (0:00) Nick kicks off the show

    (1:24) Under the hood of ChatGPT

    (7:53) What is a neural net? 

    (10:05) Cast.ai - Get a free cloud cost audit with a personal consultation at https://cast.ai/twist

    (11:33) Determining values and weights in a neural net

    (18:28) Vanta - Get $1000 off your SOC 2 at https://vanta.com/twist

    (19:33) Emulating the human brain

    (23:26) Defining computational irreducibility

    (26:14) Emergent behavior and the rules of language

    (31:49) Discovering logic + creating a computational language

    (38:10) Clumio - Start a free backup, or sign up for a demo at https://clumio.com/twist

    (39:38) Wolfram’s ChatGPT plugin

    (43:46) The rapid pace of AI 

    (58:45) The “Post-Knowledge Work” era

    (1:03:52) The unintended consequences of AI 

    (1:11:45) Rewarding innovation 

    (1:16:12) The possibility of AGI 

    (1:20:07) Creating a general-purpose robotic system

    FOLLOW Stephen: https://twitter.com/stephen_wolfram

    FOLLOW Jason: https://linktr.ee/calacanis


    Subscribe to our YouTube to watch all full episodes:

    https://www.youtube.com/channel/UCkkhmBWfS7pILYIk0izkc3A?sub_confirmation=1

    FOUNDERS! Subscribe to the Founder University podcast:

    https://podcasts.apple.com/au/podcast/founder-university/id1648407190

    #133 - ChatGPT multi-document chat, CoreWeave raises $2.3B, AudioCraft, ToolLLM, Autonomous Warfare

    #133 - ChatGPT multi-document chat, CoreWeave raises $2.3B, AudioCraft, ToolLLM, Autonomous Warfare

    Our 133rd episode with a summary and discussion of last week's big AI news!

    Apologies for pod being a bit late this week!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai

    Timestamps + links:

    Neural Networks: Unleashing the Power of Artificial Intelligence

    Neural Networks: Unleashing the Power of Artificial Intelligence

    At schneppat.com, we firmly believe that understanding the potential of neural networks is crucial in harnessing the power of artificial intelligence. In this comprehensive podcast, we will delve deep into the world of neural networks, exploring their architecture, functionality, and applications.

    What are Neural Networks?

    Neural networks are computational models inspired by the human brain's structure and functionality. Composed of interconnected nodes, or "neurons", neural networks possess the ability to process and learn from vast amounts of data, enabling them to recognize complex patterns, make accurate predictions, and perform a wide range of tasks.

    Understanding the Architecture of Neural Networks

    Neural networks consist of several layers, each with its specific purpose. The primary layers include:

    1. Input Layer: This layer receives data from external sources and passes it to the subsequent layers for processing.
    2. Hidden Layers: These intermediate layers perform complex computations, transforming the input data through a series of mathematical operations.
    3. Output Layer: The final layer of the neural network produces the desired output based on the processed information.

    The connections between neurons in different layers are associated with "weights" that determine their strength and influence over the network's decision-making process.
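As an illustration of that layer-and-weight structure (this sketch is not from the article; the layer sizes and random initialization are assumptions for the example), a small network can be represented in Python with NumPy as one weight matrix per pair of consecutive layers:

```python
import numpy as np

# Illustrative architecture: 3 input neurons, 4 hidden neurons, 2 output neurons.
layer_sizes = [3, 4, 2]

rng = np.random.default_rng(0)

# One weight matrix per connection between consecutive layers; entry (i, j)
# holds the strength of the link from neuron i to neuron j of the next layer.
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

print([w.shape for w in weights])  # [(3, 4), (4, 2)]
```

The matrix shapes fall directly out of the architecture: a layer of m neurons feeding a layer of n neurons needs an m-by-n block of weights.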

    Functionality of Neural Networks

    Neural networks function through a process known as "forward propagation" wherein the input data travels through the layers, and computations are performed to generate an output. The process can be summarized as follows:

    1. Input Processing: The input data is preprocessed to ensure compatibility with the network's architecture and requirements.
    2. Weighted Sum Calculation: Each neuron in the hidden layers calculates the weighted sum of its inputs, applying the respective weights.
    3. Activation Function Application: The weighted sum is then passed through an activation function, introducing non-linearities and enabling the network to model complex relationships.
    4. Output Generation: The output layer produces the final result, which could be a classification, regression, or prediction based on the problem at hand.
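The four steps above can be sketched in a few lines of Python with NumPy (an illustration rather than code from the article; the fixed weights and the choice of a sigmoid activation are assumptions for the example):

```python
import numpy as np

def sigmoid(z):
    # Step 3: the activation function squashes each weighted sum into (0, 1),
    # introducing the non-linearity that lets the network model complex relationships.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """One forward-propagation pass through every layer."""
    a = np.asarray(x, dtype=float)   # step 1: input, assumed already preprocessed
    for W, b in zip(weights, biases):
        z = a @ W + b                # step 2: weighted sum at each neuron
        a = sigmoid(z)               # step 3: activation
    return a                         # step 4: the output layer's result

# A hypothetical 2-input, 2-hidden, 1-output network with fixed example weights.
weights = [np.array([[0.5, -0.3], [0.8, 0.2]]),
           np.array([[1.0], [-1.0]])]
biases = [np.zeros(2), np.zeros(1)]

y = forward([1.0, 0.0], weights, biases)
print(y.shape, float(y[0]))  # a single value between 0 and 1
```

For a classification problem the final value would be read as a probability; training (adjusting the weights from data) is a separate process not shown here.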

    Applications of Neural Networks

    Neural networks find applications across a wide range of domains, revolutionizing various industries. Here are a few notable examples:

    1. Image Recognition: Neural networks excel in image classification, object detection, and facial recognition tasks, enabling advancements in fields like autonomous driving, security systems, and medical imaging.
    2. Natural Language Processing (NLP): Neural networks are employed in machine translation, sentiment analysis, and chatbots, facilitating more efficient communication between humans and machines.
    3. Financial Forecasting: Neural networks can analyze complex financial data, predicting market trends, optimizing investment portfolios, and detecting fraudulent activities.
    4. Medical Diagnosis: Neural networks aid in diagnosing diseases, analyzing medical images, and predicting patient outcomes, supporting healthcare professionals in making accurate decisions.

    Conclusion

    In conclusion, neural networks represent the forefront of artificial intelligence, empowering us to tackle complex problems and unlock new possibilities. Understanding their architecture, functionality, and applications is the first step toward harnessing that power.