    Podcast Summary

    • Embracing the Rapidly Changing Tech Landscape: Appreciate past progress, look forward to future innovation, hybrid conferences, latest AI/ML developments, and tools like Linode, Fastly, LaunchDarkly, and RudderStack.

      We are living in unprecedented times, with technology advancing faster than ever before. This was emphasized during the latest episode of the Practical AI podcast, where the hosts reflected on the rapid changes of the last 30 years and the accelerating pace of innovation. They encouraged listeners to appreciate the progress made and look forward to the future. On a related note, the podcast also touched on conferences returning in a hybrid format, combining in-person and virtual elements, a sign of the ongoing evolution of events in the tech industry. In terms of specific news and learning resources, the episode discussed recent developments in AI and machine learning, as well as tools and platforms like Linode, Fastly, LaunchDarkly, and RudderStack that can help developers and data scientists stay up to date and productive. Overall, the Practical AI podcast continues to provide valuable insights and connections for anyone interested in the world of AI, machine learning, and data science. So, whether you're a seasoned professional or just starting out, be sure to tune in and join the community.

    • Research vs Industry AI Conferences: Research conferences focus on sharing new findings and advancing AI science, while industry conferences highlight practical applications and real-world experiences.

      There are two main types of conferences in the field of Artificial Intelligence (AI): research-focused and industry-focused. These two types of conferences serve different purposes and cater to different audiences. Research-focused conferences, such as NeurIPS, ACL, and EMNLP, are primarily attended by academics and researchers. In these conferences, original research is presented and peer-reviewed, providing a platform for sharing new findings and advancing the field. This process ensures that the research presented is of high quality and contributes to the scientific understanding of AI. On the other hand, industry-focused conferences, like NVIDIA's GTC, provide a platform for companies and organizations to showcase their practical applications of AI and discuss their experiences in implementing AI solutions. These conferences offer valuable opportunities for networking and learning about real-world AI applications. It's important to note that attending a conference doesn't require presenting research or a talk. Both types of conferences offer valuable learning opportunities for those interested in AI, whether they're looking to expand their knowledge, network with professionals, or stay updated on the latest research and industry trends. In summary, understanding the differences between research-focused and industry-focused conferences can help those interested in AI make informed decisions about which conferences to attend based on their goals and interests.

    • Maximizing the Value of Informal Interactions at Conferences: Expand your network by meeting new people and engaging in conversations with strangers during meals and social events at conferences to gain unique insights and knowledge.

      Conferences offer invaluable opportunities for learning and networking that go beyond formal sessions. While attending sessions is essential, informal conversations with fellow attendees can lead to memorable and unique insights. Introverts, including those who enjoy conversations, can benefit from these encounters but may need to balance their energy levels with quieter moments. To maximize the value of these interactions, attendees should aim to meet new people and engage in conversations with strangers. Breaking away from familiar colleagues during meals and other social events is an excellent strategy to expand one's network and bring back valuable insights to one's organization. Ultimately, conferences provide a rare opportunity to learn from peers, expand professional networks, and gain knowledge that may not be readily available online.

    • Balancing public engagement and private time for productivity: Effective communication requires balance between personal reflection and public engagement. Rapid NLP advancements bring numerous applications in various industries, and the upcoming ML DataOps Summit will discuss successful implementation.

      Finding a balance between public engagement and private time is crucial for productivity and effective communication in both personal and professional settings. This was highlighted in a conversation between Chris and Ivan, where they discussed the importance of taking breaks for personal reflection and the unexpected benefits that can come from these moments, such as meeting new people and having valuable conversations. Furthermore, the rapid advancements in Natural Language Processing (NLP) technology, as represented by OpenAI's GPT-3, have led to numerous applications in various industries, including customer support, healthcare, finance, and research. The upcoming ML DataOps Summit, in partnership with TechCrunch, will feature experts discussing these developments and their successful implementation in organizations. Moreover, Chris shared an experience from a recent conference where the keynote speaker introduced the concept of "humology," which combines humans and technology. This idea encourages businesses to evaluate their processes and determine where they can effectively integrate AI and automation to improve efficiency and productivity.

    • Utilizing Technology for Efficiency and Environmental Sustainability: Businesses and industries should adopt advanced technology to increase efficiency, reduce harm to the environment, and improve outcomes. Ethical considerations and responsible implementation are crucial.

      Businesses and industries should aim to fully utilize the technology available to them to increase efficiency and reduce harm to the environment. The example given was the evolution of weed control in agriculture, from manual labor to chemical applications, and eventually to autonomous machines using computer vision. While automation can lead to job displacement, it also has the potential to save resources and improve outcomes. However, it's important to consider the ethical implications of technology and strive for responsible implementation. The use of outdated, brute force applications of technology can be harmful, but with advancements in AI and automation, we can find more precise and effective solutions. Ultimately, the goal should be to work in harmony with technology to create a better future for all.

    • Navigating the Disconnect Between Humans and Technology: Amidst rapid technological evolution, humans face challenges in adapting and adding value while managing information overload and mitigating bias in AI. AI offers potential solutions but requires awareness and responsible use.

      We are in the midst of a rapid evolution in technology, specifically with data and AI, which is bringing about significant changes in various aspects of life and work. This evolution is happening much faster than our biological brains have evolved to adapt, leading to a dissonance between humans and the tools we create. This dissonance is a unique challenge for individuals, especially the younger generation, as they will need to find ways to add value to the world and cooperate with technology. Moreover, the abundance of information available today presents a new problem: not being able to find relevant and trustworthy information amidst the vast amount of data. AI and machine learning techniques offer potential solutions to help us navigate this information overload. However, there are also risks, such as bias in the information presented to us. Despite the challenges, the potential benefits of these technologies are exciting. They can help us find relevant information, connect the dots, and make sense of the world in ways that were not possible before. In summary, we are living in a time of unprecedented change, and it is essential to be aware of both the opportunities and the risks. We must find ways to adapt and cooperate with technology while ensuring that it benefits society as a whole.

    • Navigating the Rapidly Evolving World of Technology: Embrace change, collaborate, experiment, and remember the progress we've made in technology and science.

      We are living in a time of unprecedented change, and it's essential to remember this as we navigate the rapidly evolving world of technology. Gerhard, host of the "Ship It" podcast, shared his experience of growing up with limited resources but using them to explore the world through a small local library and encyclopedias. He emphasized that the world has changed more in the last 30 years than in the centuries before, and we must not forget this. Great teams create great engineers, not the other way around, as discussed on the "Ship It" show. They also advocate for testing assumptions and experimenting, as demonstrated in their open-source podcasting platform. An ongoing project, the BigScience Research Workshop, is a highly distributed collaborative research effort involving 600 researchers from 50 countries and more than 250 institutions. Inspired by large-scale physics collaborations like CERN, this project focuses on large multilingual language models, which require significant infrastructure and data governance. In essence, we are in a unique period of time, and it's crucial to remember the progress we've made while continuing to learn and adapt to new developments. Whether in technology or science, collaboration and experimentation are key to pushing boundaries and making a difference.

    • T0: A New Evolution in Language Models: BigScience's new model, T0, outperforms GPT-3 in certain ways while being 16 times smaller. Its use of prompts for various NLP tasks offers a more flexible and user-friendly approach and allows for immediate understanding and response to unique prompts.

      T0, a new model from the BigScience research workshop, represents a significant evolution in the field of language models. T0 outperforms GPT-3 on certain tasks while being 16 times smaller, making it a promising development for the industry. The key difference in T0's approach lies in its use of prompts: various NLP tasks are reframed as natural-language prompts paired with the corresponding outputs. This strategy allows the model to handle a wide range of tasks more effectively and offers a more flexible and user-friendly approach compared to previous models that have relied on proxy tasks like masking. By optimizing the model for zero-shot interactions, T0 can immediately understand and respond to unique prompts, making it a valuable tool for various applications. Daniel, an NLP expert, sees this as an advantageous strategy, as it caters to the growing demand for models that can understand and respond to unique prompts without the need for extensive fine-tuning. Overall, T0's innovative approach to language modeling represents a significant step forward in the field and opens up new possibilities for NLP applications.
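      The prompt-reframing idea can be illustrated with a small sketch. The templates below are illustrative only, not the actual prompt templates used to train T0; the point is that every task instance becomes a plain (prompt, target-text) pair for one text-to-text model.

```python
# Sketch of reframing NLP tasks as (prompt, output) text pairs,
# the core idea behind T0-style training. Templates are hypothetical.

def to_prompt(task: str, **fields) -> str:
    """Render a task instance as a natural-language prompt."""
    templates = {
        "sentiment": "Review: {text}\nIs this review positive or negative?",
        "nli": ("Premise: {premise}\nHypothesis: {hypothesis}\n"
                "Does the premise entail the hypothesis? Yes or no?"),
        "summarize": "Summarize the following article:\n{text}",
    }
    return templates[task].format(**fields)

# Each training example is now just text in, text out, so a single
# model can be trained on all tasks at once and queried zero-shot.
examples = [
    (to_prompt("sentiment", text="Great podcast, very practical."), "positive"),
    (to_prompt("nli", premise="A dog runs.", hypothesis="An animal moves."), "yes"),
]

for prompt, target in examples:
    print(prompt, "->", target)
```

      Because the tasks share one textual format, a novel prompt at inference time is handled the same way as any training prompt, which is what makes the zero-shot behaviour possible.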

    • Transformer Revolution: T0 Model for Flexible Prompts: The T0 model, inspired by T5, handles zero-shot usage and custom prompts, expanding its adaptability for various tasks.

      The new transformer model, T0, is designed to handle zero-shot usage and custom, flexible prompts, making it adaptable to a wide range of tasks. This model is inspired by Google's T5 text-to-text transformer and is part of the transformer revolution in NLP. However, transformers are not the only game in town, and there are still other interesting developments in neural network architectures, such as streamlined and space-efficient speech models, multimodal approaches, and graph neural networks. A recent article in IEEE Spectrum, "How Deep Learning Works Inside the Neural Networks that Power Today's AI," provides a fresh perspective on the basics of deep learning, which is essential for newcomers to the field. Another article, "5 Deep Learning Activation Functions You Need to Know," is a valuable resource for understanding the fundamental building blocks of deep learning models. Overall, the transformer revolution has been a significant development in AI, but there is still a diverse range of approaches and techniques being explored.

    • Learning about activation functions is crucial for machine learning beginners: Understanding activation functions and their applications is essential for anyone starting in machine learning, as they determine a neural network's output and impact performance.

      Understanding activation functions is a crucial step for anyone starting out in machine learning. These functions determine the output of a neural network based on its input, and choosing the right one for a specific task can significantly impact the model's performance. The article discussed in the podcast provides a quick summary of various activation functions and their applications, making it an excellent resource for beginners. Moreover, although tooling and pre-built solutions are becoming more accessible, gaining an intuitive understanding of the underlying concepts is essential. As the field evolves, having a solid foundation in the basics will enable you to make informed decisions and adapt to new developments. In summary, taking the time to learn about activation functions and their applications is an essential investment for anyone interested in machine learning. The podcast's conversation between Chris and Daniel highlights the importance of this foundational knowledge and offers valuable insights for those just starting their journey in this field.
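      As a minimal illustration of what these functions compute, here are three common activation functions implemented from scratch (the formulas are standard; the specific examples are ours, not drawn from the article discussed in the episode):

```python
import math

# Three common activation functions, implemented from scratch.

def relu(x: float) -> float:
    """Rectified linear unit: passes positives through, zeroes out negatives."""
    return max(0.0, x)

def sigmoid(x: float) -> float:
    """Squashes any input into (0, 1); classically used for binary outputs."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x: float) -> float:
    """Squashes input into (-1, 1); zero-centred, unlike sigmoid."""
    return math.tanh(x)

# The choice matters: for a negative input, ReLU outputs exactly 0,
# while sigmoid still produces a small positive value.
for f in (relu, sigmoid, tanh):
    print(f.__name__, f(-2.0), f(0.0), f(2.0))
```

      Seeing the formulas side by side makes the trade-offs discussed for beginners concrete: each function shapes a neuron's output range differently, which in turn affects gradients and training behaviour.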

    Recent Episodes from Practical AI: Machine Learning, Data Science

    Vectoring in on Pinecone

    Daniel & Chris explore the advantages of vector databases with Roie Schwaber-Cohen of Pinecone. Roie starts with a very lucid explanation of why you need a vector database in your machine learning pipeline, and then goes on to discuss Pinecone’s vector database, designed to facilitate efficient storage, retrieval, and management of vector data.
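    The core retrieval operation such a database accelerates can be sketched in a few lines. This is a naive in-memory version for illustration, not Pinecone's API: store embedding vectors and return the ones closest to a query by cosine similarity.

```python
import math

# Naive in-memory vector search by cosine similarity. A vector database
# performs the same retrieval over billions of vectors using approximate
# nearest-neighbour indexes instead of this linear scan.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query, index, k=2):
    """Return the ids of the k stored vectors most similar to the query."""
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-d "embeddings"; real embeddings have hundreds of dimensions.
index = {
    "doc-a": (1.0, 0.0, 0.0),
    "doc-b": (0.9, 0.1, 0.0),
    "doc-c": (0.0, 1.0, 0.0),
}
print(top_k((1.0, 0.05, 0.0), index))  # doc-a and doc-b are closest
```

    The linear scan above is O(n) per query, which is exactly why dedicated vector databases exist: they trade a little accuracy for index structures that answer the same question at scale.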

    Stanford's AI Index Report 2024

    We’ve had representatives from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) on the show in the past, but we were super excited to talk through their 2024 AI Index Report after such a crazy year in AI! Nestor from HAI joins us in this episode to talk about some of the main takeaways including how AI makes workers more productive, the US is increasing regulations sharply, and industry continues to dominate frontier AI research.

    Apple Intelligence & Advanced RAG

    Daniel & Chris engage in an impromptu discussion of the state of AI in the enterprise. Then they dive into the recent Apple Intelligence announcement to explore its implications. Finally, Daniel leads a deep dive into a new topic - Advanced RAG - covering everything you need to know to be practical & productive.

    The perplexities of information retrieval

    Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity’s sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity’s approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.

    Using edge models to find sensitive data

    We’ve all heard about breaches of privacy and leaks of private health information (PHI). For healthcare providers and those storing this data, knowing where all the sensitive data is stored is non-trivial. Ramin, from Tausight, joins us to discuss how they deploy edge AI models to help companies search through billions of records for PHI.

    Rise of the AI PC & local LLMs

    We’ve seen a rise in interest recently and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are getting into this along with models like the Phi family from Microsoft. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.

    AI in the U.S. Congress

    At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU to pursue a Master’s degree in C.S. with a concentration in Machine Learning. Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act. We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.

    Full-stack approach for effective AI agents

    There’s a lot of hype about AI agents right now, but developing robust agents isn’t yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue’s CTO, tells us more about their approach and some of what they have learned along the way.

    Related Episodes

    When data leakage turns into a flood of trouble

    Rajiv Shah teaches Daniel and Chris about data leakage, and its major impact upon machine learning models. It’s the kind of topic that we don’t often think about, but which can ruin our results. Raj discusses how to use activation maps and image embedding to find leakage, so that leaking information in our test set does not find its way into our training set.

    Stable Diffusion (Practical AI #193)

    The new stable diffusion model is everywhere! Of course you can use this model to quickly and easily create amazing, dream-like images to post on twitter, reddit, discord, etc., but this technology is also poised to be used in very pragmatic ways across industry. In this episode, Chris and Daniel take a deep dive into all things stable diffusion. They discuss the motivations for the work, the model architecture, and the differences between this model and other related releases (e.g., DALL·E 2). (Image from stability.ai)

    AlphaFold is revolutionizing biology

    AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment, and is accelerating research in nearly every field of biology. Daniel and Chris delve into protein folding, and explore the implications of this revolutionary and hugely impactful application of AI.

    Zero-shot multitask learning (Practical AI #158)

    In this Fully-Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically they dive into the T0, a series of natural language processing (NLP) AI models specifically trained for researching zero-shot multitask learning. Daniel provides a brief tour of the possible with the T0 family. They finish up with a couple of new learning resources.