    Podcast Summary

    • Politicians and AI: Politicians, like Congressman Don Beyer, can demonstrate a deep understanding of AI through continuous learning and determination, challenging the assumption that they lack technical backgrounds.

      Congressman Don Beyer, a current representative from Northern Virginia, shares a deep-rooted interest in science and AI policy. His background in math and problem-solving, coupled with his appointment to the Science Committee in Congress, fueled his curiosity about the field. Despite not having a formal technical background, Beyer was determined enough to take coding and machine learning courses at his local university, aiming to earn a master's degree. This inspiring story challenges the common assumption that politicians lack understanding of science, technology, and AI. Beyer's journey highlights the importance of continuous learning and the potential for individuals to adapt and excel in new domains, regardless of age or background.

    • AI learning age barrier: Regardless of age, individuals can learn and master AI and ML, and continuous learning is essential for policymakers to effectively address AI issues.

      Age is not a barrier when it comes to learning and mastering artificial intelligence and machine learning. Despite having graduated decades ago and initially feeling out of place among younger students, the speaker was surprised by the determination and technical backgrounds of the other students. The experience has been a humbling reminder of the importance of continuous learning. Another observation is the stark contrast between the hands-on development of AI at the practitioner level and the policymaking level. While the speaker is actively involved in conversations about AI on the government side, they acknowledge the need to understand the technical aspects of AI to effectively address policy issues. Among other members of Congress, the response to the speaker's deep dive into the topic has been largely one of amusement, but there is growing interest in learning more about AI. The speaker hopes that this curiosity will lead to a more comprehensive understanding of and engagement with AI issues in the future.

    • AI policy: Policymakers are focusing on AI safety, copyright, and reliability, balancing benefits and risks, and exploring legislation in a complex, slow process.

      Policymakers at various levels, including Congress, are increasingly focusing on AI and its implications. This includes education, research, and legislation. The main areas of focus include AI safety, copyright and intellectual property issues related to AI-generated content, and the reliability and safety of generative AI models. The rapid advancement of AI technology poses unique challenges for policymakers, who are working to keep up and address potential downsides. The AI caucus in the US House and Senate, for example, is growing rapidly and is focused on education and exploring legislation. The challenge for policymakers is to balance the potential benefits of AI with its risks and ensure that it is developed and used in a way that benefits society as a whole. The process is complex and slow, with many competing interests and a need for bipartisan support. However, the increasing attention on AI is a positive sign that policymakers are starting to grapple with these issues.

    • AI regulation and safety: The Biden administration's executive order on AI and the establishment of the AI Safety Institute at NIST are encouraging signs of leadership in addressing AI safety concerns and balancing innovation with safety. Internationally, ongoing conversations between policymakers aim to prevent the use of AI in conflicts and engage all major powers in collaboration and regulation.

      While there are ongoing efforts to address CSAM (Child Sexual Abuse Material) and hold accountable those who create and distribute it, the focus is also on identifying effective legislation at the state and federal levels. The Biden administration's executive order on AI and the establishment of the AI Safety Institute at NIST are encouraging signs of leadership. Internationally, there are ongoing conversations among policymakers in the EU, US, China, and elsewhere about the need for collaboration and regulation in AI. The goal is to balance innovation with safety concerns, engage all major powers, and prevent the use of AI in conflicts. Renewed arms control discussions and legislation prohibiting AI from making nuclear attack decisions are also important steps towards a safer world. Despite the challenges, there is a shared understanding that human beings, not machines, should make decisions of such magnitude.

    • AI challenges: Addressing practical concerns like job elimination and new opportunities, as well as existential risks like misinformation and bio-weapons, is crucial as we advance in AI technology.

      As we continue to advance in AI technology, it's crucial to address both the practical and existential challenges it presents. From a practical standpoint, job elimination due to automation is a concern, but also an opportunity for new industries and innovations. Misinformation and the potential for creating bio-weapons are serious concerns that require regulation and awareness. At the same time, we should also consider how AI can be used to tackle real-world issues like climate change, food insecurity, and mental health. From a day-to-day practitioner perspective, it's essential to keep these potential benefits and risks in mind as we build and apply AI systems. Additionally, we should strive to use AI for good and consider the broader societal implications of its use. The future of AI holds immense potential for improving our lives, but it also comes with significant challenges that we must address proactively.

    • AI potential in education and healthcare: AI can revolutionize the education and healthcare sectors by providing personalized learning, early intervention in mental health, and more precise treatments.

      Artificial intelligence (AI) has the potential to revolutionize various areas, including education, mental health, and climate change, while addressing concerns around safety and privacy. AI can serve as a personal assistant, aid in suicide prevention, and enhance education by allowing students to learn at their own pace. However, there are challenges such as policy restrictions and privacy concerns that need to be addressed. Despite these challenges, the benefits of AI in education, healthcare, and other areas are significant and worth pursuing. The use of AI in education can make learning more personalized and effective, while its application in mental health can lead to early detection and intervention. In healthcare, AI is already making a difference by enabling earlier diagnosis and more precise treatments. Overall, AI is a powerful tool with immense potential to improve our lives and solve pressing global issues.

    • AI and human longevity: Advancements in AI could significantly extend human longevity by unlocking new insights into biology and the human brain, but ethical dilemmas and safety concerns must be addressed.

      The future of human longevity could be significantly extended due to advancements in artificial intelligence (AI). The knowledge and data processing capabilities of AI have the potential to unlock new insights into areas such as biology and the human brain, which are currently not well understood. The first person to live to be 150 years old may already be alive today. However, it's important to note that with these advancements come safety-related concerns and ethical dilemmas. Governments are beginning to address these issues, and it's crucial for practitioners to be part of these conversations. Overall, the optimism surrounding AI's potential to revolutionize various fields, including healthcare and aging research, is encouraging. It's essential to stay informed and engaged in these discussions to ensure that AI is used responsibly and ethically. To learn more and join the conversation, visit PracticalAI.fm and sign up for their free Slack community.

    Recent Episodes from Practical AI: Machine Learning, Data Science

    Stanford's AI Index Report 2024
    We’ve had representatives from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) on the show in the past, but we were super excited to talk through their 2024 AI Index Report after such a crazy year in AI! Nestor from HAI joins us in this episode to talk about some of the main takeaways, including how AI makes workers more productive, how the US is sharply increasing AI regulation, and how industry continues to dominate frontier AI research.

    Apple Intelligence & Advanced RAG
    Daniel & Chris engage in an impromptu discussion of the state of AI in the enterprise. Then they dive into the recent Apple Intelligence announcement to explore its implications. Finally, Daniel leads a deep dive into a new topic - Advanced RAG - covering everything you need to know to be practical & productive.

    The perplexities of information retrieval
    Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity’s sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity’s approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.

    Using edge models to find sensitive data
    We’ve all heard about breaches of privacy and leaks of protected health information (PHI). For healthcare providers and those storing this data, knowing where all the sensitive data is stored is non-trivial. Ramin, from Tausight, joins us to discuss how they deploy edge AI models to help companies search through billions of records for PHI.

    Rise of the AI PC & local LLMs
    We’ve seen a rise in interest recently and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are getting into this along with models like the Phi family from Microsoft. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.

    AI in the U.S. Congress
    At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU to pursue a Master’s degree in C.S. with a concentration in Machine Learning. Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act. We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.

    Full-stack approach for effective AI agents
    There’s a lot of hype about AI agents right now, but developing robust agents isn’t yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue’s CTO, tells us more about their approach and some of what they have learned along the way.

    Private, open source chat UIs
    We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).

    Related Episodes

    When data leakage turns into a flood of trouble
    Rajiv Shah teaches Daniel and Chris about data leakage and its major impact upon machine learning models. It’s the kind of topic that we don’t often think about, but which can ruin our results. Raj discusses how to use activation maps and image embeddings to find leakage, so that information from our test set does not find its way into our training set.
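
    As a rough sketch of the embedding idea mentioned above (illustrative code, not from the episode), one way to surface near-duplicate images shared between train and test splits is to compare precomputed image embeddings with cosine similarity; the embedding arrays, the 0.98 threshold, and the find_near_duplicates helper below are assumptions for illustration:

    import numpy as np

    def find_near_duplicates(train_emb, test_emb, threshold=0.98):
        """Return (test_idx, train_idx, similarity) triples for suspiciously similar pairs."""
        # L2-normalize rows so that a plain dot product equals cosine similarity.
        train_norm = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
        test_norm = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
        sims = test_norm @ train_norm.T            # shape: (n_test, n_train)
        pairs = np.argwhere(sims >= threshold)     # indices of near-duplicate pairs
        return [(int(t), int(r), float(sims[t, r])) for t, r in pairs]

    # Toy usage: real embeddings would come from a pretrained vision model applied to each image.
    rng = np.random.default_rng(0)
    train_emb = rng.normal(size=(1000, 512))
    test_emb = np.vstack([rng.normal(size=(99, 512)), train_emb[:1]])  # plant one leaked image
    print(find_near_duplicates(train_emb, test_emb))  # flags the planted duplicate at (99, 0)

    Any test image flagged this way is a candidate leak to remove or re-split before trusting evaluation numbers.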

    Stable Diffusion (Practical AI #193)
    The new Stable Diffusion model is everywhere! Of course you can use this model to quickly and easily create amazing, dream-like images to post on Twitter, Reddit, Discord, etc., but this technology is also poised to be used in very pragmatic ways across industry. In this episode, Chris and Daniel take a deep dive into all things Stable Diffusion. They discuss the motivations for the work, the model architecture, and the differences between this model and other related releases (e.g., DALL·E 2). (Image from stability.ai)

    AlphaFold is revolutionizing biology
    AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment, and is accelerating research in nearly every field of biology. Daniel and Chris delve into protein folding, and explore the implications of this revolutionary and hugely impactful application of AI.

    Zero-shot multitask learning (Practical AI #158)
    In this Fully-Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then it’s on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically, they dive into T0, a series of natural language processing (NLP) models trained for researching zero-shot multitask learning. Daniel provides a brief tour of what’s possible with the T0 family. They finish up with a couple of new learning resources.
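
    As a rough illustration of what zero-shot prompting looks like in practice (not code from the episode), here is a minimal sketch assuming the publicly released bigscience/T0_3B checkpoint and the Hugging Face transformers library:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    # Load the 3B-parameter T0 checkpoint released by the BigScience workshop.
    tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

    # Zero-shot: the task is described entirely in the natural-language prompt;
    # no task-specific fine-tuning or labeled examples are provided.
    prompt = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=10)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    Swapping in a different prompt (summarization, question answering, natural language inference) exercises the multitask side: the same weights handle many tasks purely through prompting.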