
    existential risks

    Explore " existential risks" with insightful episodes like "545: Sam's Busy Weekend", "519: Not So OpenAI" and "#49 - AGI: Could The End Be Nigh? (With Rosie Campbell)" from podcasts like ""Coder Radio", "Coder Radio" and "Increments"" and more!

    Episodes (3)

    #49 - AGI: Could The End Be Nigh? (With Rosie Campbell)
    When big bearded men wearing fedoras begin yelling at you that the end is nigh (https://www.youtube.com/watch?v=gA1sNLL6yg4&ab_channel=BanklessShows) and superintelligence is about to kill us all, what should you do? Vaden says don't panic, and Ben is simply awestruck by the ability to grow a beard in the first place. To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell onto the podcast. Rosie is on the safety team at OpenAI and, while she's more worried about the existential risks of AI than we are, she's just as keen on some debate over a bottle of wine.

    We discuss:
    - Whether machine learning poses an existential threat
    - How concerned we should be about existing AI
    - Whether deep learning can get us to artificial general intelligence (AGI)
    - Whether AI safety is simply quality assurance
    - How we can test if an AI system is creative

    References:
    - Mathgen: Randomly generated math papers (https://thatsmathematics.com/mathgen/)

    Contact us:
    - Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
    - Follow Rosie at @RosieCampbell or https://www.rosiecampbell.xyz/
    - Check us out on YouTube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
    - Come join our Discord server! DM us on Twitter or send us an email to get a super-secret link
    - Prove you're creative by inventing the next big thing and then send it to us at incrementspodcast@gmail.com

    Special Guest: Rosie Campbell.

    © 2024 Podcastworld. All rights reserved

    For any inquiries, please email us at hello@podcastworld.io