
    ai risk

    Explore "ai risk" with insightful episodes like "200 - Vitalik Buterin's Philosophy: d/acc", "AI and Existential Risk - Overview and Discussion", "#650 - Geoffrey Miller - How Dangerous Is The Threat Of AI To Humanity?", "Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future" and "Mike Baker On Bud Light Crashing 30% After The Endorsement of Dylan Mulvaney | Ep. 257 | Part 1" from podcasts like "Bankless", "Last Week in AI", "Modern Wisdom", "Dwarkesh Podcast" and "PBD Podcast", and more!

    Episodes (5)

    200 - Vitalik Buterin's Philosophy: d/acc

    ✨ DEBRIEF | Ryan & David unpacking the episode:
    https://www.bankless.com/debrief-vitalik-philosophy

    ------
    This past year there’s been a great society-wide debate: tech-acceleration (e/acc) vs. deceleration.

    Should we continue down the path toward AI, or are the robots going to kill us? We’ve been hoping Vitalik would weigh in on this debate ever since our episode with Eliezer Yudkowsky. Now he has, so what’s Vitalik’s probability of AI doom?

    -----
    🏹 Airdrop Hunter is HERE, join your first HUNT today
    https://bankless.cc/JoinYourFirstHUNT

    ------
    BANKLESS SPONSOR TOOLS:

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://k.xyz/bankless-pod-q2

    🦊METAMASK PORTFOLIO | MANAGE YOUR WEB3 EVERYTHING
    https://bankless.cc/MetaMask

    ⚖️ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum

    👾GMX | V2 IS NOW LIVE 
    https://bankless.cc/GMX

    🔗CELO | CEL2 COMING SOON
    https://bankless.cc/Celo 

    🦄UNISWAP | ON-CHAIN MARKETPLACE
    https://bankless.cc/uniswap

    ------

    0:00 Intro
    3:50 The Technology Conversation
    15:30 The Accelerationist View
    30:30 The AI Problem
    38:30 8 Year Old Child
    44:00 AI Doom
    51:00 d/acc
    1:08:30 Improving the World
    1:16:45 e/acc
    1:20:30 Crypto Tribes
    1:27:00 Direction and Priorities
    1:30:50 Closing Optimism

    ------
    Resources:

    Vitalik's Techno-Optimism:
    https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html

    Tweet Thread:
    https://x.com/VitalikButerin/status/1729251808936362327?s=20

    Techno-Optimist Manifesto:
    https://a16z.com/the-techno-optimist-manifesto/

    ------
    More content is waiting for you on Bankless.com 🚀 
    https://bankless.cc/YouTubeInfo

    AI and Existential Risk - Overview and Discussion

    A special non-news episode in which Andrey and Jeremie discuss AI X-Risk!

    Please let us know if you'd like us to record more of this sort of thing by emailing contact@lastweekin.ai or commenting wherever you listen.

    Outline:

    (00:00) Intro
    (03:55) Topic overview
    (10:22) Definitions of terms
    (35:25) AI X-Risk scenarios
    (41:00) Pathways to Extinction
    (52:48) Relevant assumptions
    (58:45) Our positions on AI X-Risk
    (01:08:10) General Debate
    (01:31:25) Positive/Negative transfer
    (01:37:40) X-Risk within 5 years
    (01:46:50) Can we control an AGI
    (01:55:22) AI Safety Aesthetics
    (02:00:53) Recap
    (02:02:20) Outer vs inner alignment
    (02:06:45) AI safety and policy today
    (02:15:35) Outro

    Links

    Taxonomy of Pathways to Dangerous AI

    Clarifying AI X-risk

    Existential Risks and Global Governance Issues Around AI and Robotics

    Current and Near-Term AI as a Potential Existential Risk Factor

    AI x-risk, approximately ordered by embarrassment

    Classification of Global Catastrophic Risks Connected with Artificial Intelligence

    X-Risk Analysis for AI Research

    The Alignment Problem from a Deep Learning Perspective

    #650 - Geoffrey Miller - How Dangerous Is The Threat Of AI To Humanity?
    Geoffrey Miller is a professor of evolutionary psychology at the University of New Mexico, a researcher and an author.

    Artificial Intelligence possesses the capability to process information thousands of times faster than humans. It's opened up massive possibilities. But it's also opened up a huge debate about the safety of creating a machine which is more intelligent and powerful than we are. Just how legitimate are the concerns about the future of AI?

    Expect to learn the key risks that AI poses to humanity, the 3 biggest existential risks that will determine the future of civilisation, whether Large Language Models can actually become conscious simply by being more powerful, whether making an Artificial General Intelligence will be like creating a god or a demon, the influence that AI will have on the future of our lives, and much more...

    Sponsors:
    Get 10% discount on all Gymshark’s products at https://bit.ly/sharkwisdom (use code: MW10)
    Get over 37% discount on all products site-wide from MyProtein at https://bit.ly/proteinwisdom (use code: MODERNWISDOM)
    Get 15% discount on Craftd London’s jewellery at https://craftd.com/modernwisdom (use code: MW15)

    Extra Stuff:
    Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
    To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

    Get in touch:
    Instagram: https://www.instagram.com/chriswillx
    Twitter: https://www.twitter.com/chriswillx
    YouTube: https://www.youtube.com/modernwisdompodcast
    Email: https://chriswillx.com/contact/

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future

    The second half of my 7 hour conversation with Carl Shulman is out!

    My favorite part! And the one that had the biggest impact on my worldview.

    Here, Carl lays out how an AI takeover might happen:

    * AI can threaten mutually assured destruction from bioweapons,

    * use cyber attacks to take over physical infrastructure,

    * build mechanical armies,

    * spread seed AIs we can never exterminate,

    * offer tech and other advantages to collaborating countries, etc.

    Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:

    * what is the far future best case scenario for humanity

    * what it would look like to have AI make thousands of years of intellectual progress in a month

    * how do we detect deception in superhuman models

    * does space warfare favor defense or offense

    * is a Malthusian state inevitable in the long run

    * why markets haven't priced in explosive economic growth

    * & much more

    Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Catch part 1 here

    Timestamps

    (0:00:00) - Intro

    (0:00:47) - AI takeover via cyber or bio

    (0:32:27) - Can we coordinate against AI?

    (0:53:49) - Human vs AI colonizers

    (1:04:55) - Probability of AI takeover

    (1:21:56) - Can we detect deception?

    (1:47:25) - Using AI to solve coordination problems

    (1:56:01) - Partial alignment

    (2:11:41) - AI far future

    (2:23:04) - Markets & other evidence

    (2:33:26) - Day in the life of Carl Shulman

    (2:47:05) - Space warfare, Malthusian long run, & other rapid fire



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Mike Baker On Bud Light Crashing 30% After The Endorsement of Dylan Mulvaney | Ep. 257 | Part 1

    In this episode, Patrick Bet-David and Mike Baker will discuss:

    • Bud Light Crashes 30% After Endorsement of Dylan Mulvaney
    • Why the LGBT community is growing so fast
    • Biden Using TikTok Influencers To Get Re-elected
    • How Restrict Act Could Be a Threat To Your Digital Freedom
    • ChaosGPT Tweeting Out Plans To Destroy Humanity

    FaceTime or Ask Patrick any questions on https://minnect.com/

    Want to get clear on your next 5 business moves? https://valuetainment.com/academy/

    Join the channel to get exclusive access to perks: https://bit.ly/3Q9rSQL

    Download the podcasts on all your favorite platforms: https://bit.ly/3sFAW4N

    Text: PODCAST to 310.340.1132 to get added to the distribution list

    --- Support this podcast: https://podcasters.spotify.com/pod/show/pbdpodcast/support