
    alignment challenges

    Explore "alignment challenges" with insightful episodes like "Revolutionizing AI: Tackling the Alignment Problem | Zuzalu #3" and "Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment" from podcasts like ""Bankless" and "Dwarkesh Podcast"" and more!

    Episodes (2)

    Revolutionizing AI: Tackling the Alignment Problem | Zuzalu #3

    In this episode, we delve into the frontier of AI and the challenges surrounding AI alignment. The AI / Crypto overlap at Zuzalu sparked discussions on topics like ZKML, MEV bots, and the integration of AI agents into the Ethereum landscape. 

    However, the focal point was the alignment conversation, which showcased both pessimistic and resignedly optimistic perspectives. We hear from Nate Soares of MIRI, who offers a downstream view on AI risk, and Deger Turan, who emphasizes the importance of human alignment as a prerequisite for aligning AI. Their discussions touch on epistemology, individual preferences, and the potential of AI to assist in personal and societal growth.

    ------
    🚀 Join Ryan & David at Permissionless in September. Bankless Citizens get 30% off. 🚀
    https://bankless.cc/GoToPermissionless

    ------
    BANKLESS SPONSOR TOOLS:

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://k.xyz/bankless-pod-q2

    🦊METAMASK PORTFOLIO | TRACK & MANAGE YOUR WEB3 EVERYTHING
    https://bankless.cc/MetaMask

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum

    🛞MANTLE | MODULAR LAYER 2 NETWORK
    https://bankless.cc/Mantle

    👾POLYGON | VALUE LAYER OF THE INTERNET
    https://polygon.technology/roadmap

    ------

    Timestamps

    0:00 Intro
    1:50 Guests

    5:30 NATE SOARES
    7:25 MIRI
    13:30 Human Coordination
    17:00 Dangers of Superintelligence
    21:00 AI’s Big Moment
    24:45 Chances of Doom
    35:35 A Serious Threat
    42:45 Talent is Scarce
    48:20 Solving the Alignment Problem
    59:35 Dealing with Pessimism
    1:03:45 The Sliver of Utopia

    1:14:00 DEGER TURAN
    1:17:00 Solving Human Alignment
    1:22:40 Using AI to Solve Problems
    1:26:30 AI Objectives Institute
    1:31:30 Epistemic Security
    1:36:18 Curating AI Content
    1:41:00 Scalable Coordination
    1:47:15 Building Evolving Systems
    1:54:00 Independent Flexible Systems
    1:58:30 The Problem is the Solution
    2:03:30 A Better Future

    ------
    Resources

    Nate Soares
    https://twitter.com/So8res?s=20 

    Deger Turan
    https://twitter.com/degerturann?s=20 

    MIRI
    https://intelligence.org/ 

    Less Wrong AI Alignment
    https://www.lesswrong.com/tag/ai-alignment-intro-materials

    AI Objectives Institute
    https://aiobjectives.org/ 

    ------

    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time to time I may add links in this newsletter to products I use. I may receive a commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures

    Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

    I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and co-founder of OpenAI, Ilya Sutskever, about:

    * time to AGI

    * leaks and spies

    * what's after generative models

    * post AGI futures

    * working with Microsoft and competing with Google

    * difficulty of aligning superhuman AI

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00) - Time to AGI

    (05:57) - What’s after generative models?

    (10:57) - Data, models, and research

    (15:27) - Alignment

    (20:53) - Post AGI Future

    (26:56) - New ideas are overrated

    (36:22) - Is progress inevitable?

    (41:27) - Future Breakthroughs



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe