
    existential risk

Explore "existential risk" with insightful episodes like "AI and Existential Risk - Overview and Discussion" and "86 | Martin Rees on Threats to Humanity, Prospects for Posthumanity, and Life in the Universe" from podcasts like "Last Week in AI" and "Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas", and more!

    Episodes (2)

    AI and Existential Risk - Overview and Discussion

A special non-news episode in which Andrey and Jeremie discuss AI X-Risk!

Please let us know if you'd like us to record more of this sort of thing by emailing contact@lastweekin.ai or commenting wherever you listen.

    Outline:

(00:00) Intro
(03:55) Topic overview
(10:22) Definitions of terms
(35:25) AI X-Risk scenarios
(41:00) Pathways to Extinction
(52:48) Relevant assumptions
(58:45) Our positions on AI X-Risk
(01:08:10) General Debate
(01:31:25) Positive/Negative transfer
(01:37:40) X-Risk within 5 years
(01:46:50) Can we control an AGI
(01:55:22) AI Safety Aesthetics
(02:00:53) Recap
(02:02:20) Outer vs inner alignment
(02:06:45) AI safety and policy today
(02:15:35) Outro

    Links

    Taxonomy of Pathways to Dangerous AI

    Clarifying AI X-risk

    Existential Risks and Global Governance Issues Around AI and Robotics

    Current and Near-Term AI as a Potential Existential Risk Factor

    AI x-risk, approximately ordered by embarrassment

    Classification of Global Catastrophic Risks Connected with Artificial Intelligence

    X-Risk Analysis for AI Research

    The Alignment Problem from a Deep Learning Perspective

    86 | Martin Rees on Threats to Humanity, Prospects for Posthumanity, and Life in the Universe

    Anyone who has read histories of the Cold War, including the Cuban Missile Crisis and the 1983 nuclear false alarm, must be struck by how incredibly close humanity has come to wreaking incredible destruction on itself. Nuclear war was the first technology humans created that was truly capable of causing such harm, but the list of potential threats is growing, from artificial pandemics to runaway super-powerful artificial intelligence. In response, today’s guest Martin Rees and others founded the Cambridge Centre for the Study of Existential Risk. We talk about what the major risks are, and how we can best reason about very tiny probabilities multiplied by truly awful consequences. In the second part of the episode we start talking about what humanity might become, as well as the prospect of life elsewhere in the universe, and that was so much fun that we just kept going.

    Support Mindscape on Patreon.

Lord Martin Rees, Baron Rees of Ludlow, received his Ph.D. in physics from the University of Cambridge. He is currently Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, as well as Astronomer Royal of the United Kingdom. He was formerly Master of Trinity College and President of the Royal Society. Among his many awards are the Heineman Prize for Astrophysics, the Gruber Prize in Cosmology, the Crafoord Prize, the Michael Faraday Prize, the Templeton Prize, the Isaac Newton Medal, the Dirac Medal, and the British Order of Merit. He is a co-founder of the Centre for the Study of Existential Risk.
