
    regulation in ai

    Explore "regulation in AI" with insightful episodes like "Should we press pause on AI?" and "#297 - Brian Christian - The Alignment Problem: AI's Scary Challenge" from podcasts like "The Gray Area with Sean Illing" and "Modern Wisdom", and more!

    Episodes (2)

    Should we press pause on AI?

    How worried should we be about AI? Sean Illing is joined by Stuart J. Russell, a professor at the University of California, Berkeley, and director of the Center for Human-Compatible AI. Russell was among the signatories of an open letter calling for a six-month pause on AI training. They discuss the dangers of losing control of AI and what the upsides of this rapidly developing technology could be.

    Host: Sean Illing (@seanilling), host, The Gray Area
    Guest: Stuart J. Russell, professor at the University of California, Berkeley, and director of the Center for Human-Compatible AI

    References:
    - Pause Giant AI Experiments: An Open Letter
    - "AI has much to offer humanity. It could also wreak terrible harm. It must be controlled." by Stuart Russell (The Observer, April 2023)
    - Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig (Pearson Education International)
    - Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (Penguin Random House, 2020)
    - "A Conversation With Bing's Chatbot Left Me Deeply Unsettled" by Kevin Roose (New York Times, February 2023)

    Enjoyed this episode? Rate The Gray Area ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts. Subscribe for free. Be the first to hear the next episode of The Gray Area by subscribing in your favorite podcast app. Support The Gray Area by making a financial contribution to Vox! bit.ly/givepodcasts

    This episode was made by:
    - Engineer: Patrick Boyd
    - Deputy Editorial Director, Vox Talk: A.M. Hall

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    #297 - Brian Christian - The Alignment Problem: AI's Scary Challenge

    Brian Christian is a programmer, researcher, and author. You have a computer system, you want it to do X, you give it a set of examples and you say "do that" - what could go wrong? Well, lots, apparently, and the implications are pretty scary.

    Expect to learn why it's so hard to code an artificial intelligence to do what we actually want it to, how a robot cheated at the game of football, why human biases can be absorbed by AI systems, the most effective way to teach machines to learn, the danger if we don't get the alignment problem fixed, and much more...

    Sponsors:
    - Get a 20% discount on the highest quality CBD products from Pure Sport at https://puresportcbd.com/modernwisdom (use code: MW20)
    - Get perfect teeth 70% cheaper than other invisible aligners from DW Aligners at http://dwaligners.co.uk/modernwisdom

    Extra Stuff:
    - Buy The Alignment Problem - https://amzn.to/3ty6po7
    - Follow Brian on Twitter - https://twitter.com/brianchristian

    Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/
    To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

    Get in touch. Join the discussion with me and other like-minded listeners in the episode comments on the MW YouTube Channel or message me...
    - Instagram: https://www.instagram.com/chriswillx
    - Twitter: https://www.twitter.com/chriswillx
    - YouTube: https://www.youtube.com/ModernWisdomPodcast
    - Email: https://www.chriswillx.com/contact

    Learn more about your ad choices. Visit megaphone.fm/adchoices