    generative models

    Explore " generative models" with insightful episodes like "The Tragedy of the AI Commons - Ethical Dilemmas of Generative Models", "AI and Safety with Siddarth Srivastava" and "Susan Magsamen on the Intersection of Brain Sciences and the Arts" from podcasts like ""52 Weeks of Cloud", "Learning Futures" and "At a Distance"" and more!

    Episodes (3)

The Tragedy of the AI Commons - Ethical Dilemmas of Generative Models

    Hey readers 👋, if you enjoyed this content, I wanted to share some of my favorite resources to continue your learning journey in technology!

    Hands-On Courses for Rust, Data, Cloud, AI and LLMs 🚀

    📚 Must-Read Books:

    Practical MLOps: https://www.amazon.com/Practical-MLOps-Operationalizing-Machine-Learning/dp/1098103017

    Python for DevOps: https://www.amazon.com/gp/product/B082P97LDW/

    Developing on AWS with C#: https://www.amazon.com/Developing-AWS-Comprehensive-Solutions-Platform/dp/1492095877

    Pragmatic AI Labs Books: https://www.amazon.com/gp/product/B0992BN7W8

    🎥 Follow & Subscribe:

    Pragmatic AI Labs YouTube Channel: https://www.youtube.com/channel/UCNDfiL0D1LUeKWAkRE1xO5Q

    52 Weeks of AWS Podcast: https://52-weeks-of-cloud.simplecast.com

    noahgift.com: https://noahgift.com/

    Pragmatic AI Labs Website: https://paiml.com/

Your adventure in tech awaits! Dive in now and elevate your skills to new heights. 🚀

Check out a Master's degree's worth of courses on Coursera, on topics ranging from Cloud Computing to Rust to LLMs and Generative AI: https://www.coursera.org/instructor/noahgift.

AI and Safety with Siddarth Srivastava

In this episode of the Learning Futures Podcast, Dr. Siddharth Srivastava, Associate Professor in the School of Computing and Augmented Intelligence at Arizona State University, discusses the need for responsible development of AI systems that keep users informed of their capabilities and limitations. He highlights exciting research on learning generalizable knowledge to make AI more robust and data-efficient. However, dangers arise from overtrusting unproven systems, so regulation and oversight are needed even as innovation continues. By prioritizing users, the current explosion in AI research can drive responsible progress.


Key topics discussed:

- Dr. Srivastava discusses his background in AI research and the journey that led him to focus on developing safe and reliable AI systems.

    - The recent explosion of interest and adoption of generative AI like ChatGPT took many researchers by surprise, especially the accessibility and breadth of applications people found for these narrow systems.

    - It's important to distinguish narrow AI applications like generative models from general AI. Overuse of the term "AI" can lead to misconceptions.

- Considerations around safety, bias, and responsible use need to be built into AI systems from the start. Keeping users informed of capabilities and limitations is key.

    - Exciting new research directions include learning generalizable knowledge to make AI systems more robust and data-efficient.

    - Dangers arise from overtrusting unproven AI systems. Regulation and oversight will be needed, but should not stifle innovation.

    - Overall, it's an exciting time in AI research. With a thoughtful, practical approach focused on user needs, AI can be developed responsibly.


    Links: