
    Lunchtime BABLing with Dr. Shea Brown

    Presented by Babl AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.

    Episodes (33)

    NIST, ISO 42001, and BABL AI online courses

    Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI.

    What's Inside:
    1. BABL AI Joins the NIST Consortium: We kick off with the groundbreaking announcement that BABL AI has officially become a part of the prestigious NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications.
    2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive governance framework that promises to redefine AI governance. Join Shea as she explores the high-level components of this auditable framework, shedding light on its significance and the impact it's poised to have on the AI industry.
    3. Aligning Education with Innovation: We also explore how BABL AI's online courses are aligned with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance. Whether you're a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge that align with the latest standards and frameworks.

    Navigating Global AI Regulatory Compliance

    Sign up for free for our online course "Finding Your Place in AI Ethics Consulting" during the month of February 2024.

    🌍 In this new episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide on what to consider for AI regulatory compliance globally.

    🔍 Highlights of This Episode:
    - EU AI Act: Your Compliance Compass. Discover how the European Union's AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges.
    - Common Grounds in Global AI Laws. Shea explores the shared foundations across various AI regulations, highlighting the common themes across global regulatory requirements.
    - Proactive Mindset Shift. The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences.
    - NIST's Role in Measuring AI Risk. Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI.

    🚀 Takeaway: This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you're a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical.

    👉 Subscribe to our channel for more insights into AI technology and its global impact. Don't forget to hit the like button if you find this episode valuable and share it with your network to spread the knowledge. #AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence

    Exploring the socio-technical side of AI Ethics (Re-uploaded) | Lunchtime BABLing .07

    Sign up for free during the month of February for our online course "Finding Your Place in AI Ethics Consulting." Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting

    Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code "BABLING." Link here: https://babl.ai/courses/

    🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects.

    🎙️ Join our host, Shea Brown, as they welcome a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence.

    🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li's joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives. Link to paper here: https://arxiv.org/abs/2209.00692

    💡 Whether you're a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So, grab your lunch, sit back, and let's BABL about the socio-technical side of AI ethics.

    👍 Don't forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen.

    📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let's keep the conversation going!

    🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads. #LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics

    What Companies Need To Consider When Implementing AI

    📺 About This Episode: Join us on a riveting journey into the heart of AI integration in the business world in our latest episode of Lunchtime BABLing, where we talk about "What Things Should Companies Consider When Implementing AI." Host Shea Brown, CEO of BABL AI, teams up with Bryan Ilg, our VP of Sales, to unravel the complexities and opportunities presented by AI in the modern business landscape.

    In this episode, we dive deep into the nuances of AI implementation, shedding light on often-overlooked aspects such as reputational and regulatory risks, and the paramount importance of trust and effective governance. Shea and Bryan offer their expert insights into the criticality of establishing robust AI governance frameworks and enhancing existing strategies to stay ahead in this rapidly evolving domain. Whether you're a business owner, an executive, or simply intrigued by the ethical and practical dimensions of AI in business, this episode is packed with valuable insights and actionable advice.

    🔗 Stay Connected: Hit that like and subscribe button for more enlightening episodes. Tune into our podcast across various platforms for your on-the-go AI insights.

    👋 Thank you for joining us on Lunchtime BABLing as we explore the intricate dance of AI, business, and ethics. Can't wait to share more in our upcoming episodes!

    Key Takeaways of the EU AI Act | Lunchtime BABLing

    Description: 🔊 Welcome to another episode of Lunchtime BABLing, where we dive deep into the world of AI and its impact on our lives. In this episode, "Key Takeaways of the EU AI Act," join our hosts, Shea Brown, CEO of BABL AI, and Jeffrey Recker, for a comprehensive analysis of the recently agreed-upon EU AI Act.

    🌍 The EU AI Act is making waves as a global law that regulates the use of artificial intelligence. It's comparable to how GDPR reshaped privacy laws, and now the EU AI Act is set to do the same for AI. This episode breaks down the Act's implications, its potential effects on companies and individuals, and what the future of AI governance might look like under this new regulation.

    🔑 Highlights of the episode include:
    - A detailed explanation of what the EU AI Act entails and why it's a game-changer.
    - Insights into who will be affected by the Act and how it extends beyond European borders.
    - The classification of AI systems under the Act based on risk levels, including prohibited and high-risk categories.
    - A look into the conformity assessment process and the compliance requirements for organizations.
    - Practical steps organizations should take to prepare for compliance.

    🤔 Whether you're a tech enthusiast, an AI professional, or just curious about how AI laws impact our world, this episode offers valuable insights. Join us as we unravel the complexities of the EU AI Act and its far-reaching consequences.

    📣 Do you have specific questions about the EU AI Act or AI governance? Leave your comments below or reach out to us! Don't forget to like and subscribe if you're watching on YouTube, or thank you for listening if you're tuning in via podcast. Stay informed and ahead in the world of AI with Lunchtime BABLing! #EUAIAct #ArtificialIntelligence #AILaw #TechGovernance #BABLAI #Podcast

    028. International Association of Algorithmic Auditors

    Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% off all BABL AI courses. Courses: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

    Description: Welcome back to another episode of Lunchtime BABLing! In this episode, Shea Brown, CEO of BABL AI, joins forces with Jeffrey Recker, our COO, to delve into an intriguing topic: the newly formed International Association of Algorithmic Auditors (IAAA).

    Throughout the episode, Shea and Jeffrey unpack the crucial role of the IAAA in shaping the landscape of AI and algorithm auditing. They discuss the association's goals, its distinction from existing organizations, and its significance in ensuring that algorithms are audited for compliance, ethical standards, and the prevention of potential harm to individuals and society. The discussion also highlights the challenges and complexities involved in algorithmic auditing, the importance of professional conduct in the field, and emerging regulations like the EU AI Act. Moreover, they explore the different types of algorithmic audits and the vital role of transparency in the auditing process.

    As one of the key founding members of the IAAA, Shea provides insights into the formation of this organization, its mission, and the importance of fostering a professional community among AI and algorithm auditors. Whether you're a professional in the field, someone interested in the ethical aspects of AI, or simply curious about the future of technology governance, this episode offers valuable perspectives and critical discussions on the evolving world of algorithmic auditing.

    IAAA website: https://iaaa-algorithmicauditors.org

    🎙️ Listen to the full episode to understand the significance of algorithmic audits, the role of the IAAA in shaping the industry, and the future of AI governance. Don't forget to like and subscribe for more insightful discussions on Lunchtime BABLing! #AI #AlgorithmicAuditing #IAAA #TechnologyEthics #LunchtimeBABLing

    027. Understanding Fundamental Rights Impact Assessments in the EU AI Act

    Understanding the EU AI Act: Fundamental Rights Impact Assessments Explained

    Description: Join us in this eye-opening episode of the Lunchtime BABLing Podcast, where Shea Brown, our host and CEO of BABL AI, teams up with Jeffrey Recker, our COO, to delve deep into the recent developments in AI regulation, particularly focusing on the EU AI Act. This episode, "Understanding Fundamental Rights Impact Assessments in the EU AI Act," is a must-listen for anyone interested in the intersection of AI, regulation, and human rights.

    Key Discussion Points:
    - Introduction to the EU AI Act: Gain insights into the EU AI Act's passing and its significance in shaping the future of AI regulation.
    - Role of Fundamental Rights Impact Assessments: Understand what these assessments are, their importance, and how they differ from traditional impact assessments.
    - Impact on Businesses and AI Deployers: Learn about the new obligations for companies, especially those deploying high-risk AI systems.
    - Practical Steps for Compliance: Shea Brown breaks down complex regulatory requirements into actionable steps for businesses of all sizes.
    - Future of AI and Trust: Discover how compliance with these regulations can build trust and pave the way for responsible AI innovation.

    Episode Highlights:
    - Expert Insights: Jeffrey Recker shares his firsthand experience with the increasing interest in AI regulations and the challenges faced by businesses.
    - Detailed Breakdown: Shea Brown offers a comprehensive analysis of the Fundamental Rights Impact Assessments, their implications, and the overall impact of the EU AI Act on the AI landscape.
    - Interactive Discussions: Engaging conversation between Shea and Jeffrey, providing a nuanced understanding of the subject.

    026. National Conference on AI Law, Ethics, and Compliance

    🔹 New Episode: National Conference on AI Law, Ethics, and Compliance

    In this latest installment of Lunchtime BABLing, Shea unpacks the developments from a major conference in Washington D.C., focusing on AI law, ethics, and compliance. He shares valuable insights from their workshop and interactions with legal experts in the field of AI governance.

    Key Discussions:
    - Understanding AI and the risks involved.
    - Governance frameworks for AI deployment.
    - The implications of the recent U.S. Executive Order on AI.
    - Global initiatives for AI safety and governance.

    Industry Spotlight:
    - The surge of generative AI in corporate strategy.
    - The evolving landscape of AI policy, privacy concerns, and intellectual property.

    Engage with Us: Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: Coupon Code: "BABLING"

    Link to the full AI and Algorithm Auditing Certificate Program is here: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

    025. AI and Algorithm Auditing Certificate

    Lunchtime BABLing is back with a new season! In this episode, Shea briefly talks about what to expect in the upcoming weeks for Lunchtime BABLing, as well as diving into some detail about our AI and Algorithm Auditing Certification Program.

    Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: Coupon Code: "BABLING"

    Link to the full AI and Algorithm Auditing Certificate Program is here: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

    For more information about BABL AI and our services, as well as the latest news in AI Auditing and AI Governance, check out our website: https://babl.ai/

    024. Interview with Khoa Lam on AI Auditing

    On this week's Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam. They discuss a wide range of topics including:
    1: How Khoa got into the field of Responsible AI
    2: His work at the AI Incident Database
    3: His thoughts on generative AI and large language models
    4: The technical aspects of AI and Algorithmic Auditing

    Khoa Lam LinkedIn: https://www.linkedin.com/in/khoalklam/
    AI Incident Database: https://incidentdatabase.ai
    BABL AI Courses: https://courses.babl.ai/
    Website: https://babl.ai/
    LinkedIn: https://www.linkedin.com/company/babl-ai/

    022. Final Rules for NYC Local Law

    In this episode Shea reviews the new rules for NYC's Local Law No. 144, which requires bias audits of automated employment decision tools. The date for enforcement has been pushed back to July 5th, 2023 to give time for companies to seek independent auditors (which is still a requirement). Sign up for our new "AI & Algorithm Auditor Certification Program" starting May 8th! https://courses.babl.ai/p/ai-and-algorithm-auditor-certification?affcode=616760_7ts3gujl

    021. Large Language Models, Open Letter Moratorium on AI, NIST's AI Risk Management Framework, and Algorithmic Bias Lab

    This week on Lunchtime BABLing, we discuss:
    1: The power, hype, and dangers of large language models like ChatGPT.
    2: The recent open letter asking for a moratorium on AI research.
    3: In-context learning in large language models and the problems it poses for auditing.
    4: NIST's AI Risk Management Framework and its influence on public policy like California's Assembly Bill No. 331.
    5: Updates on The Algorithmic Bias Lab's new training program for AI auditors.

    https://babl.ai
    https://courses.babl.ai/?affcode=616760_7ts3gujl

    020. AI Governance Report & Auditor Training

    This week we discuss our recent report "The Current State of AI Governance", which is the culmination of a year-long research project looking into the effectiveness of AI governance controls. Full report here: https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf

    We also discuss our new training program, the "AI & Algorithm Auditor Certificate Program", which starts in May 2023. This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general:
    1: Algorithms, AI, & Machine Learning
    2: Algorithmic Risk & Impact Assessments
    3: AI Governance & Risk Management
    4: Bias, Accuracy, & the Statistics of AI Testing
    5: Algorithm Auditing & Assurance

    Early pricing can be found here: https://courses.babl.ai/?affcode=616760_7ts3gujl
    BABL AI: https://babl.ai

    019. Interrogating Large Language Models with Jiahao Chen

    On this week's Lunchtime BABLing (#19) we talk with Jiahao Chen; data scientist, researcher, and founder of Responsible Artificial Intelligence LLC. We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including:
    1: Do systems like ChatGPT reason?
    2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process?
    3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering)?
    4: Black-box vs. white-box testing of LLMs for algorithm auditing.
    5: Classical assessments of intelligence and their applicability to LLMs.
    6: Re-thinking education and assessment in the age of AI.

    Jiahao Chen Twitter: https://twitter.com/acidflask
    Responsible AI LLC: https://responsibleai.tech/

    018. The 5 Skills you NEED for AI Auditing

    You need way more than "five skills" to be an AI auditor, but there are five areas of study that auditors need basic competency in if they want to do the kinds of audits that BABL AI performs. This is part of our weekly webinar/podcast that went very long so we've cut out a lot of the Q&A, which covered a lot of questions that we'll address in future videos, like: What kind of training do I need to become an AI or algorithm auditor? Do I need technical knowledge of machine learning to do AI ethics?

    016. Breaking into AI Ethics (Part 2)

    In this Q&A session, Shea talks about strategies for applying the skills you already have to the emerging field of AI ethics, governance, and policy consulting. This is a follow-up to our first webinar on the topic. Questions include:
    1. Do I need an advanced degree to work in responsible AI?
    2. How do I know what topics to focus on?
    3. Do I need programming skills to work in responsible AI?
    4. Where can I find training in AI ethics?

    015. AI Risk Management Standards

    In this episode of Lunchtime BABLing, we discuss the emergence of new standards for AI Risk Management, as well as regulatory requirements involving AI risk management, including:
    1: NIST AI Risk Management Framework
    2: ISO/IEC 23894:2023 - Information technology — Artificial intelligence — Guidance on risk management
    3: Colorado's SB21-169 - Protecting Consumers from Unfair Discrimination in Insurance Practices
    4: Risk Management in the [EU] Artificial Intelligence Act
    5: BABL AI Cheat Sheet on AI Governance

    014. The Use and Regulation of AI in Hiring with Dr. Frida Polli

    Today on Lunchtime BABLing, Shea talks with the Chief Data Science Officer at Harver, Dr. Frida Polli. Prior to being at Harver, Frida was the founder and CEO of pymetrics; we talk about:
    ✅ How AI is being used in hiring,
    ✅ What it takes to use it responsibly,
    ✅ Her own journey through this space, and
    ✅ Reflections on upcoming regulations, including the recent New York City Local Law 144 and the EU AI Act.

    Frida: https://www.linkedin.com/in/frida-polli-phd-03a1855/
    Harver: https://harver.com/