    #379 — Regulating Artificial Intelligence

    August 12, 2024
    Who are the key figures mentioned in the text?
    What is California Senator Scott Wiener proposing?
    What concerns does Yoshua Bengio have about AI?
    What risks are associated with advanced AI models?
    What are the criticisms of the California AI Safety Bill?

    Podcast Summary

    • AI Risks & Regulation: California Senator Scott Wiener and computer scientist Yoshua Bengio are collaborating to address AI risks through legislation and regulation, emphasizing the urgency as AI technology advances faster than anticipated.

      Two leading figures in the field of artificial intelligence, California State Senator Scott Wiener and computer scientist Yoshua Bengio, are working to address the risks associated with advanced AI models. Wiener, who represents the tech-hub city of San Francisco, has introduced legislation, SB 1047, to reduce the risks of frontier AI models. Bengio, a renowned AI expert and Turing Award winner, shares concerns about the potential dangers of AI and the need for regulation. Both emphasize the urgency of addressing these issues as AI technology advances faster than anticipated. Their conversation on Sam Harris's podcast covers the importance of AI safety, the misconceptions of those who downplay the risks, and the potential consequences for society and democracy.

    • AI Risks and Harms Mitigation: Experts agree on the need to address the potential risks and harms of AI, including misuse and the development of AGI unaligned with human values. A proactive approach is required to understand these risks better and put protections in place for the public.

      As AI technology advances, there is a growing consensus among experts that we need to address potential risks and mitigate potential harms. These range from the misuse of AI in its current form to the development of AGI that is misaligned with human values. While some believe the risks are overblown, others are deeply concerned about potentially catastrophic consequences. The rational response, according to the discussion, is to take a proactive approach: work to understand these risks better and put protections in place for the public, covering both near-term risks, such as misuse of increasingly powerful AI, and long-term risks, such as misaligned AGI. Even AI that is controllable and aligned could still fall into the wrong hands. The key takeaway is that we need to be prepared for various scenarios and take action to mitigate potential risks.

    • AI Safety and Risks: Experts debate the risks of AI, including the potential for a world dictatorship, and the need for safety protections, with opinions divided on the timeline for achieving human-level intelligence and the influence of psychological biases.

      While AI has the potential to achieve goals and respond to queries using knowledge and reasoning, the goals themselves are often determined by humans. The presence of safety protections can help filter unacceptable goals, but even superhuman AI in the wrong hands could lead to a world dictatorship. The debate among experts about the risks of AI largely revolves around the timeline for achieving human-level intelligence. Some, like Yoshua Bengio, have shifted their views due to the faster-than-expected progress in AI, while others remain optimistic. Psychological biases may also influence some experts' dismissal of potential risks. Ultimately, it's crucial to continue the conversation about the potential risks and benefits of AI and work toward solutions before these risks become pressing.

    • AI Arms Race: The absence of regulations in an AI arms race could lead to catastrophic consequences, making laws necessary to ensure safety measures are implemented.

      While some believe in unregulated access to powerful AI for the greater good, the reality is that an arms race among parties in the absence of regulations could lead to catastrophic consequences. The example given was in the context of cyberattacks and bioweapons, where the defender has to plug all the holes while the attacker only needs to find one. Senate Bill 1047 aims to address this issue by requiring safety evaluations and risk-mitigation steps for models of a certain scale and cost. However, even with commitments from the large labs, laws are necessary to ensure these safety measures are implemented. Voluntary commitments alone are not enough to mitigate potential risks.

    • AI Lab Regulation: The debate over AI lab regulation centers on potential risks, economic burdens, and safety testing. Proponents argue that the risks are real and that safety-testing costs are relatively small for large labs, while critics argue that the risks are exaggerated and the regulation could impose a significant economic burden.

      The debate surrounding the regulation of AI labs involves concerns over potential risks, economic burdens, and safety testing. Critics argue that the risks are exaggerated and that the regulation could pose a significant economic burden. However, proponents of the regulation argue that these risks are real and tangible, and that companies are already investing in safety testing. The costs of safety testing are believed to be relatively small for large labs, and the regulation primarily applies to these large entities. Despite criticisms, the proponents of the regulation remain committed to its implementation. The debate continues as stakeholders weigh the potential benefits and costs of the proposed regulation.

    • AI Safety Testing: The debate over AI safety testing continues, with some labs claiming tests are already underway while investors and critics argue they may not be sufficient or even real. The California AI Safety Bill aims to address this issue, but opponents argue it doesn't eliminate all risk and that liability already exists under tort law.

      The debate surrounding the safety and liability of large language models like ChatGPT is far from settled. While some labs claim they are already conducting safety testing, investors and critics argue that these tests may not be sufficient or even real. The potential consequences of a large language model causing harm, such as a power outage leading to billions in damages and loss of life, are significant. The California AI Safety Bill (SB 1047) aims to address this issue by requiring safety evaluations and limiting liability for companies that comply. However, opponents argue that the bill does not eliminate all risk and that liability already exists under tort law. Furthermore, the bill does not force companies to be physically located in California, making a mass exodus unlikely. The notion that model developers will face prison time for harms caused by their models is a misconception. Despite the federal government's interest in AI safety, the lack of progress on federal legislation has led some states to take action.

    • AI Regulation: The Biden administration's executive order on AI regulation is a step toward legislation, but it lacks the force of law and could be revoked. SB 1047 aims to regulate large AI systems; critics argue it's too soon to regulate AI given the unknowns surrounding the technology, and striking a balance between regulation and innovation is crucial.

      Despite the need for regulation in the technology sector, particularly regarding artificial intelligence (AI), Congress has a poor track record in passing significant legislation. The Biden administration's executive order on AI regulation is a step in the right direction, but it lacks the force of law and could be revoked. The recently proposed SB 1047 aims to regulate large AI systems, but it applies equally to open-source and closed-source models, with some amendments made in response to feedback from the open-source community. The threshold for regulation is set high, targeting only the largest, future AI models. Critics argue that it's too soon to regulate AI due to the unknowns surrounding the technology and its potential development. While there are valid concerns on both sides, it's crucial to strike a balance between regulation and innovation. The conversation around AI regulation is ongoing, and it's essential to continue the dialogue to ensure that we're prepared for the future.

    • Sam Harris Podcast: The Making Sense podcast offers valuable insights on philosophy, neuroscience, and current events through ad-free discussions. Listeners can subscribe for full access or request a free account.

      The Making Sense Podcast, hosted by Sam Harris, offers valuable insights and discussions on a variety of topics, including philosophy, neuroscience, and current events. To access the full-length episodes, listeners need to subscribe at samharris.org. The podcast is ad-free and relies entirely on listener support, so if you find value in the content, consider subscribing to help sustain its production. For those who cannot afford a subscription, a free account can be requested on the website. The podcast covers a wide range of thought-provoking topics, and the discussions often challenge conventional thinking, making it a worthwhile investment for those seeking to expand their knowledge and understanding of the world.

    Recent Episodes from Making Sense with Sam Harris

    #382 — The Eye of Nature

    Sam Harris speaks with Richard Dawkins about his new book The Genetic Book of the Dead, the genome as a palimpsest, what scientists of the future may do with genetic information, genotypes and phenotypes, embryology and epigenetics, why the Lamarckian theory of acquired characteristics couldn't be true, how environmental selection pressure works, why evolution is hard to think about, human dependence on material culture, the future of genetic enhancement of human beings, viral DNA, symbiotic bacteria, AI and the future of scholarship, resurrecting extinct species, the problem of free speech in the UK, the problem of political Islam and antisemitism in the UK, reflections on Dan Dennett, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    Making Sense with Sam Harris
    September 06, 2024

    #381 — Delusions, Right and Left

    Sam Harris speaks with “Destiny” (Steven Bonnell) about politics and public debate. They discuss how he approaches debate, “Trump derangement syndrome,” January 6th, why Trump’s norm violations don’t matter to many people, misadventures on the information landscape, social media and the problem of being too online, Islam and conflict in the Middle East, the difference between the far left and the far right, the lack of sane conservative policies to counterbalance the left, whether the pendulum is swinging back on the left, the ethics and politics of apology, private friendships and public disagreements, and other topics.


    #380 — The Roots of Attention

    Sam Harris speaks with Amishi Jha about attention and the brain. They discuss how attention is studied, the failure of brain-training games, the relationship between attention and awareness, mindfulness as an intrinsic mental capacity, the neurological implications of different types of meditation, the neural correlates of attention and distraction, the prospects of self-transcendence, the link between thought and emotion, the difference between dualistic and nondualistic mindfulness, studying nondual awareness in the lab, the influence of smartphones, the value of mind wandering, and other topics.


    #379 — Regulating Artificial Intelligence

    Sam Harris speaks with Yoshua Bengio and Scott Wiener about AI risk and the new bill introduced in California intended to mitigate it. They discuss the controversy over regulating AI and the assumptions that lead people to discount the danger of an AI arms race.


    #378 — Digital Delusions

    Sam Harris speaks with Renée DiResta about the state of our information landscape. They discuss the difference between influence and propaganda, shifts in communication technology, influencers and closed communities, the asymmetry of passion online and the illusion of consensus, the unwillingness to criticize one's own side, audience capture, what we should have learned from the Covid pandemic, what is unique about vaccines, Renée's work at the Stanford Internet Observatory, her experience of being smeared by Michael Shellenberger and Matt Taibbi, Elon Musk and the Twitter files, the false analogy of social media as a digital public square, the imagined "censorship-industrial complex," the 2024 presidential election, and other topics.


    #377 — The Future of Psychedelic Medicine 2

    Sam Harris speaks with Dr. Jennifer Mitchell and Dr. Sarah Abedi about recent developments in research on psychedelics. They discuss the history of this research and the war on drugs, recent setbacks in the FDA approval process, MDMA as a treatment for post-traumatic stress disorder (PTSD), the challenges of conducting this research, allegations of therapist misconduct, new therapeutic models for mental health treatment, psychoneuroimmunology, "non-psychedelic" psychedelics, good and bad trips, the FDA's coming decision on MDMA-assisted therapy, "right-to-try" policies for pharmaceuticals, the role of psychedelic therapists, the problem of having all this therapeutic work being done underground, and other topics.

    Petition to approve MDMA-assisted therapy for PTSD: https://www.approvemdmatherapy.com/


    #376 — How Democracies Fail

    Sam Harris and Anne Applebaum discuss the nature of modern autocracies and how democracies fail. They discuss the power of ideas, why autocracies seek to undermine democracies, cooperation among dictators, how Western financial experts and investors have enabled autocracies, how Putin came to power, the failure of engagement and investment to create political change, what’s at stake in the war in Ukraine, Trump’s charisma, the current symptoms of American democratic decline, the ideologues around Trump, the hollowing out of institutions, how things might unravel in America, anti-liberal tendencies in American politics, the role of social media, the different pathologies on the Left and Right, analogies to Vichy France, the weakness of the Democrats, the political effects of the assassination attempt on former President Trump, and other topics.

    #374 — Consciousness and the Physical World

    Sam Harris speaks with Christof Koch about the nature of consciousness. They discuss Christof’s development as a neuroscientist, his collaboration with Francis Crick, change blindness and binocular rivalry, sleep and anesthesia, the limits of physicalism, non-locality, brains as classical systems, conscious AI, idealism and panpsychism, Integrated Information Theory (IIT), what it means to say something “exists,” the illusion of the self, brain bridging, Christof’s experience with psychedelics, and other topics.


    #373 — Anti-Zionism Is Antisemitism

    Sam Harris speaks with Michal Cotler-Wunsh about the global rise of antisemitism. They discuss the bias against Israel at the United Nations, the nature of double standards, the precedent set by Israel in its conduct in the war in Gaza, the shapeshifting quality of antisemitism, anti-Zionism as the newest strain of Jew hatred, the “Zionism is racism” resolution at the U.N., the lie that Israel is an apartheid state, the notion that Israel is perpetrating a “genocide” against the Palestinians, the Marxist oppressed-oppressor narrative, the false moral equivalence between the atrocities committed by Hamas and the deaths of noncombatants in Gaza, the failure of the social justice movement to respond appropriately to events in Israel, what universities should have done after October 7th, reclaiming the meanings of words, extremism vs civilization, and other topics.
