
    supervised learning

    Explore " supervised learning" with insightful episodes like "What does a tic-tac-toe board look like to machine learning?", "Howdy, and the myth of “pouring in data”", "Introducing Tic-Tac-Toe the Hard Way", "How Organizations can Harness the Power of Artificial Intelligence (AI)" and "Prioritizing training data, model interpretability, and dodging an AI Winter" from podcasts like ""Tic-Tac-Toe the Hard Way", "Tic-Tac-Toe the Hard Way", "Tic-Tac-Toe the Hard Way", "V-Next: The Future is Now" and "Banana Data Podcast"" and more!

    Episodes (26)

    What does a tic-tac-toe board look like to machine learning?


    How should David represent the data needed to train his machine learning system? What does a tic-tac-toe board “look” like to ML? Should he train it on games or on individual boards? How does this decision affect how and how well the machine will learn to play? Plus, an intro to reinforcement learning, the approach Yannick will be taking.

    For more information about the show, check out pair.withgoogle.com/thehardway.


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri
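    To make the representation question above concrete, here is a minimal Python sketch of one common way to turn a tic-tac-toe board into the kind of numeric vector a supervised model trains on. The board contents and the 1/-1/0 mapping are illustrative assumptions, not necessarily the encoding David settles on in the episode.

        # Hypothetical encoding, for illustration only; the episode does not
        # prescribe this exact representation.

        # A board as humans see it: rows of "X", "O", or "" (empty).
        board = [
            ["X", "O", ""],
            ["",  "X", ""],
            ["O", "",  "X"],
        ]

        def encode_board(board):
            """Flatten the 3x3 board into a 9-element numeric vector:
            X -> 1, O -> -1, empty -> 0. This is what the model "sees"."""
            mapping = {"X": 1, "O": -1, "": 0}
            return [mapping[cell] for row in board for cell in row]

        print(encode_board(board))  # [1, -1, 0, 0, 1, 0, -1, 0, 1]

    Whether the training examples are whole games or individual boards like this one is exactly the kind of decision the episode digs into.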

    Howdy, and the myth of “pouring in data”


    Welcome to the podcast! We’re Yannick and David, a software engineer and a non-technical writer. Over the next 9 episodes, we’re going to use two different approaches to build machine learning systems that play two versions of tic-tac-toe. Building a machine learning app requires humans to make a lot of decisions. We start by agreeing that David will use a “supervised learning” approach while Yannick will go with “reinforcement learning.”

    For more information about the show, check out pair.withgoogle.com/thehardway


    You can reach out to the hosts on Twitter: @dweinberger and @tafsiri

    Prioritizing training data, model interpretability, and dodging an AI Winter


    In this episode, Triveni and Will tackle the value, ethics, and methods for good labeled data, while also weighing the need for model interpretability and the possibility of an impending AI winter. Triveni also takes us through a step-by-step look at the decisions made by a Random Forest algorithm.

      As always, be sure to rate and subscribe!

      Be sure to check out the articles we mentioned this week:

     The Side of Machine Learning You’re Undervaluing and How to Fix it by Matt Wilder (LabelBox)

     The Hidden Costs of Automated Thinking by Jonathan Zittrain (The New Yorker)

     Another AI Winter Could Usher in a Dark Period for Artificial Intelligence by Eleanor Cummins (PopSci)
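    In the spirit of Triveni's Random Forest walkthrough above, here is a small Python sketch, an illustrative assumption rather than material from the episode, that trains a random forest on labeled data with scikit-learn and prints the decision rules of one of its trees: the kind of inspection the interpretability discussion touches on. The dataset and parameters are placeholders.

        # Hypothetical sketch (not from the episode): train a small random
        # forest on labeled data and inspect one tree's decision rules.
        # Assumes scikit-learn is installed; dataset and settings are illustrative.

        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import export_text

        # Labeled training data: feature vectors plus human-assigned class labels.
        X, y = load_iris(return_X_y=True)

        # A small forest with shallow trees, so individual trees stay readable.
        forest = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0)
        forest.fit(X, y)

        # Print the if/else decisions of the first tree in the ensemble,
        # one concrete way to peek inside an otherwise opaque model.
        print(export_text(forest.estimators_[0],
                          feature_names=load_iris().feature_names))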


