
    Podcast Summary

    • Understanding the Latest Advancements in AI: Generative AI
      Generative AI expands AI capabilities beyond data transformation, enabling creation of new content like text, images, and music. Stay informed and engaged with the community to expand knowledge and skills in this ever-evolving field.

      Artificial Intelligence (AI) is constantly evolving, and it's essential to understand the latest advancements, such as generative AI, to keep up with the field. Prior to generative AI, AI and machine learning were primarily focused on transforming input data into output data, acting as a software function. However, with the emergence of generative AI, the capabilities of AI have expanded significantly. Generative AI goes beyond just transforming data and can create new content, such as text, images, or even music. During a recent podcast meetup, the hosts of Practical AI and the Latent Space podcast discussed the latest AI news and the impact of generative AI on the industry. They also emphasized the importance of communities like MLOps, where professionals can learn and collaborate on deploying models and implementing AI in production. A listener named Tanya made an insightful observation during a conversation with Daniel. She pointed out that the term "AI" has evolved, and it's crucial to clarify what we mean when we use the term today. In the past, generative AI did not exist, and the capabilities of AI were more limited. In summary, the rapid advancements in AI and machine learning necessitate ongoing learning and understanding of new concepts, such as generative AI. By staying informed and engaged with the community, professionals can continue to expand their knowledge and skills in this ever-evolving field.
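
      To make the "software function" framing concrete, here is a minimal sketch; the function names and toy logic are illustrative assumptions, not from the episode. A pre-generative model maps input data to output data, while a generative model produces new content conditioned on its input.

      ```python
      # Illustrative only: toy stand-ins for the two styles of "AI as software".

      def classify_sentiment(text: str) -> str:
          """Pre-generative view: transform input data into output data."""
          negative_words = {"bad", "terrible", "awful"}
          return "negative" if any(w in text.lower() for w in negative_words) else "positive"

      def generate_tagline(prompt: str) -> str:
          """Generative view: produce new content conditioned on the input."""
          # A real system would sample from a large language model here.
          return f"Introducing {prompt}: the upgrade you didn't know you needed."

      print(classify_sentiment("The battery life is terrible"))  # -> negative
      print(generate_tagline("a solar-powered backpack"))
      ```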

    • AI models have human-designed architecture
      AI models are designed by humans with specific code and adjustable parameters, making them complex but not inherently different from traditional software functions.

      Artificial Intelligence (AI) and machine learning models function as sophisticated filters, transforming one form of data into another, and in that sense they are not entirely dissimilar to traditional software. A common misconception, however, is that AI is a magical, self-manifesting entity. In reality, these models have an underlying architecture designed and structured by human programmers. This architecture includes specific code that performs various tasks within the model, such as adding numbers or averaging data. The missing pieces of these architectures are referred to as parameters, which can be adjusted to improve the model's performance. For instance, in classifying cats or dogs from image data, a simple model architecture might base the decision on the percentage of red in the image. Despite their complexity, AI and machine learning models are not inherently different from traditional software functions; they are simply more intricate and require human expertise to design and optimize.
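
      As a minimal sketch of that toy example (the threshold value and names below are assumptions, not from the episode), the human-written architecture is ordinary code, and the "missing piece" is a single adjustable parameter:

      ```python
      import numpy as np

      def classify(image: np.ndarray, red_threshold: float) -> str:
          """Toy architecture: decide cat vs. dog from the fraction of red pixels.

          `image` is an (H, W, 3) RGB array; `red_threshold` is the parameter
          that training would tune.
          """
          red_fraction = float((image[..., 0] > 128).mean())  # share of strongly red pixels
          return "cat" if red_fraction > red_threshold else "dog"

      # The architecture (the code above) is fixed by the programmer;
      # only the parameter changes during training.
      image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
      print(classify(image, red_threshold=0.3))
      ```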

    • Training machine learning models involves finding optimal parameters through trial and error
      Machine learning models are trained by adjusting parameters to minimize error on a large labeled dataset, with supervised learning being the dominant approach in industry.

      Training machine learning models, even the most complex ones, involves finding optimal parameters through a process of trial and error. The model is fed a large dataset of labeled examples, and its parameters are adjusted to minimize the error, or "loss," in its predictions; once trained, the fixed parameters are simply applied to new inputs at inference time. This process is the foundation of supervised learning, which remains the dominant approach in industry despite the emergence of self-supervised models around 2019. Training is an iterative algorithm that compares the model's results to the target and aims to reduce the error. Thanks to optimization techniques it is not a brute-force search, but it can still be computationally intensive for large models. The ultimate goal is to find parameters that let the model perform a task accurately, such as classifying images as dogs or cats. Supervised learning, which relies on labeled examples, continues to dominate the AI scene, with an estimated 95% of deployed models using this approach.
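
      A minimal sketch of that trial-and-error loop, assuming the toy classifier above and a fabricated labeled dataset; real training uses gradient-based optimization rather than a grid search:

      ```python
      import numpy as np

      def loss(red_threshold: float, red_fractions: np.ndarray, labels: np.ndarray) -> float:
          """Fraction of labeled examples this parameter setting gets wrong."""
          predictions = red_fractions > red_threshold  # True means "cat"
          return float((predictions != labels).mean())

      # Fake labeled dataset: per-image red fraction, plus whether it is a cat.
      rng = np.random.default_rng(0)
      red_fractions = rng.uniform(0.0, 1.0, size=200)
      labels = red_fractions > 0.4  # the hidden rule training should discover

      # Trial and error: try candidate parameters, keep the one with the lowest loss.
      candidates = np.linspace(0.0, 1.0, 101)
      best = min(candidates, key=lambda t: loss(t, red_fractions, labels))
      print(f"best threshold: {best:.2f}, loss: {loss(best, red_fractions, labels):.3f}")
      ```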

    • From supervised learning to fine-tuning and generative AI
      AI has evolved from supervised learning to fine-tuning and generative models, allowing models to be adapted to new tasks with less data and time, and offering unique training processes for various applications.

      The landscape of Artificial Intelligence (AI) has evolved significantly over the years, with two major shifts worth noting. The first was from supervised learning, where models were trained from scratch on task-specific data, to fine-tuning or transfer learning on large pre-trained models, which lets users adapt existing models to new tasks with less data and time. The second is the rise of generative AI, which has captured the public's imagination and changed perceptions about AI. These large models, such as those in the GPT family or image-based models like Stable Diffusion or DALL-E, still transform input data into output, but they are trained differently. The more significant shift, however, lies in how these models are used. Previously, the base or foundation models had limited utility on their own; the real power came from fine-tuning them with specific data for particular tasks. This evolution has made AI a more accessible and transformative technology for industries and the public.
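
      As a hedged sketch of that first shift, here is what fine-tuning a pre-trained model can look like with the Hugging Face transformers and datasets libraries; the model name, dataset, and hyperparameters are assumptions, not from the episode:

      ```python
      from datasets import load_dataset
      from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                                Trainer, TrainingArguments)

      model_name = "distilbert-base-uncased"  # assumed pre-trained starting point
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

      # A small labeled set can suffice because the model is already pre-trained.
      dataset = load_dataset("imdb", split="train[:1000]")
      dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="out", num_train_epochs=1),
          train_dataset=dataset,
          tokenizer=tokenizer,  # enables padded batching
      )
      trainer.train()  # adapts the pre-trained weights to the new task
      ```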

    • Using pre-trained foundation models for efficient AI development
      Pre-trained foundation models are cost-effective and time-efficient for AI development. They allow access to nearly complete models, reducing the need for extensive training from scratch. Their generative nature makes them versatile and adaptable to various types of information, opening up new possibilities for AI applications.

      Instead of starting from scratch, it is more efficient and cost-effective to use pre-trained foundation models and fine-tune them for specific use cases. These foundation models are trained by large organizations with vast resources, allowing individuals and businesses to access nearly complete models and customize them for their unique needs, which reduces the cost and time required to develop effective AI models. Moreover, the concept of generative AI has evolved: foundation models are now seen as valuable in their base form, even without further fine-tuning. These models generate completions based on input sequences, and they can handle various types of information, including text, music, and images. This versatility makes generative AI a topic of great interest as researchers explore new ways to apply the approach to different information streams.
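
      A minimal sketch of using a base model "as is" to generate completions; the model choice is an assumption, and any causal language model from the transformers library would work the same way:

      ```python
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")

      # The model simply continues the input sequence, token by token.
      prompt = "Generative AI differs from earlier machine learning because"
      print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
      ```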

    • Generative models revolutionize content creation in marketing
      Generative models create images, text, music, and videos based on prompts, increasing efficiency and creativity in marketing industries.

      Generative models have significantly increased productivity and creativity across industries, particularly in marketing. These models can generate images, text, music, and even video content from given prompts, eliminating the need for extensive data gathering and training for each new task. For instance, a product description can be used to generate a lifestyle image, while a GPT model writes ad copy for it. Models can likewise generate music or video from a mood description, making content creation more efficient and imaginative. One recent example was generating a professional-quality PowerPoint presentation with a generative model in a matter of minutes. The current wave of AI is dominated by the exploration of such use cases; the limits are set largely by imagination, and the breadth of potential applications makes generative models a game-changer in the AI landscape.
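
      As a hedged sketch of that marketing workflow (assuming the OpenAI Python client; the model names and prompts are illustrative, not the tools used in the episode), one product description can drive both the ad copy and the lifestyle image:

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      product = "a solar-powered hiking backpack with built-in USB charging"

      # Ad copy from a text model.
      copy = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": f"Write two lines of ad copy for {product}."}],
      )
      print(copy.choices[0].message.content)

      # A lifestyle image from an image model, driven by the same description.
      image = client.images.generate(
          model="dall-e-3",
          prompt=f"Lifestyle photo of a hiker using {product} at sunrise",
      )
      print(image.data[0].url)  # link to the generated image
      ```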

    • Streamlining presentations and documentation with generative models
      Generative models can produce high-quality slides with minimal effort, potentially saving countless hours and resources across industries. However, ethical concerns around generative models' impact on human roles and survival remain.

      The hosts also mentioned Microsoft's DeviceScript, a new TypeScript programming environment for microcontrollers. Separately, Jared's weekend project showcased how generative models can produce high-quality slides with minimal effort, which could save countless human hours in the long run and, if adopted at larger scale, lead to substantial cost savings and increased efficiency in generating presentations and documentation across industries. The discussion also touched on the emerging trend of generative models that can perform a wide range of tasks, some of which are perceived as risky. While these models offer undeniable benefits, there are valid concerns about their potential impact on humanity's survival. These debates revolve around the ability of such models to automate and potentially replace human roles, as well as the ethical implications of creating autonomous systems. As these technologies continue to evolve, it is crucial to address these concerns and find a balance between harnessing their power and mitigating potential risks.

    • Misplaced debates on AI risks
      The risks of current generative models to humanity may not come from the models themselves, but from their misuse in conjunction with human intent and automation.

      The focus of debates surrounding the potential risks of current generative models to humanity may be misplaced. While some argue that these models are not conscious or intelligent enough to pose a threat, others are concerned about the possibility of humans using these models for malicious purposes. The latter perspective suggests that the danger might not come from the models themselves, but rather from their misuse in conjunction with human intent and automation. It's essential to consider both sides of the argument and acknowledge the potential risks associated with the intersection of human motivations and powerful AI tools.

    • AI misinformation risk
      AI models can generate misinformation, posing risks to complex equipment operations and decision-making, while advancements in AI capabilities may outpace our ability to address ethical implications.

      While the development of AI, particularly in areas like chatbots and language models, is a rapidly evolving field, it's important to be aware of potential external risks that don't require consciousness or AGI to cause harm. A concrete example is misinformation generated by AI models interacting with manuals and operating information for complex equipment, potentially leading to dangerous decisions. This risk is compounded by the fact that capabilities and risk profiles keep changing as the technology advances. Some believe that modern AI models are already capable of flying aircraft more effectively than human pilots, given their ability to "learn" from vast amounts of data. The challenge is that the pace of AI development can outrun our ability to work through its ethical implications, so it's crucial to stay informed and adapt our ethical frameworks as the technology advances.

    • Regulating AI: Balancing Risks and Fallibility
      The EU is focusing on regulating risky AI applications, but it's crucial to remember humans can make mistakes too, and testing both AI and human operators is necessary for safety.

      As the development of generative AI continues to outpace regulation, it's crucial to consider both the risks associated with AI and the fallibility of human operators. This week, the European Union took a step toward passing regulations on AI, focusing on risky applications such as automating processes in utilities. However, it's essential to remember that humans also make mistakes, and testing both AI and human operators is necessary to ensure safety. A time may come when AI models make fewer errors than human operators, and it isn't rational to dismiss that possibility entirely. The debate over when that point will be reached is ongoing, but a rational approach is to consider the statistics and make decisions based on safety rather than fear or emotion. Regulators and governments must continue working to keep up with the evolving state of AI and to implement guardrails that mitigate potential risks.

    • AI's Impact on Human Jobs and Identity
      AI's advancement will change jobs, self-identity, and human roles in society, with potential benefits and challenges.

      As AI technology advances, it will fundamentally change the nature of human jobs and self-identification. The fear is not just about AI automating jobs away, but about how it transforms the tasks humans perform and our sense of self. For instance, if an AI is better than a human pilot, regulators might ban human pilots from flying, leading to a loss of jobs and a shift in self-identity. Similar changes could happen in content generation, where AI might produce better content than human writers, leading to a loss of jobs and a change in how humans approach writing. The impact of AI on humanity goes beyond job displacement; it will change how we see ourselves and our roles in society. However, it's important to note that there are benefits to this technological advancement, and we should focus on developing practical applications and tooling around these models to create delightful customer experiences and solve real-world problems. As we continue to integrate AI into our lives, it's crucial to stay informed and involved to navigate the changes ahead. We've all been cyborgs for some time, carrying around cell phones, and the advancement of AI should not surprise us. Instead, we should embrace the opportunities it presents and work together to create a future where humans and AI can coexist and thrive.

    • Emphasizing collaboration and community in AI and tech
      Sharing knowledge and resources helps the show reach a larger audience; the hosts thank their partners and contributors and emphasize the value of collaboration and community in AI and technology.

      The key takeaway from this episode of Practical AI is the importance of sharing knowledge and resources with others. The hosts reminded listeners to subscribe to the show and spread the word to help reach a larger audience. They expressed gratitude to their partners, Fastly and Fly, for their support, and acknowledged the contributions of their resident DJ, Breakmaster Cylinder, who consistently provides excellent beats. Overall, the episode emphasized the value of collaboration and community in the field of AI and technology. If you've gained some insights from this podcast, consider sharing it with your network so more people can benefit from the information being shared.

    Recent Episodes from Practical AI: Machine Learning, Data Science

    Stanford's AI Index Report 2024
    We’ve had representatives from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) on the show in the past, but we were super excited to talk through their 2024 AI Index Report after such a crazy year in AI! Nestor from HAI joins us in this episode to talk about some of the main takeaways, including how AI makes workers more productive, how the US is sharply increasing regulation, and how industry continues to dominate frontier AI research.

    Apple Intelligence & Advanced RAG
    Daniel & Chris engage in an impromptu discussion of the state of AI in the enterprise. Then they dive into the recent Apple Intelligence announcement to explore its implications. Finally, Daniel leads a deep dive into a new topic - Advanced RAG - covering everything you need to know to be practical & productive.

    The perplexities of information retrieval
    Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity’s sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity’s approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.

    Using edge models to find sensitive data
    We’ve all heard about breaches of privacy and leaks of private health information (PHI). For healthcare providers and those storing this data, knowing where all the sensitive data is stored is non-trivial. Ramin, from Tausight, joins us to discuss how they deploy edge AI models to help companies search through billions of records for PHI.

    Rise of the AI PC & local LLMs
    We’ve seen a rise in interest recently and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are getting into this along with models like the Phi family from Microsoft. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.

    AI in the U.S. Congress
    At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU to pursue a Master’s degree in C.S. with a concentration in Machine Learning. Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act. We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.

    Full-stack approach for effective AI agents
    There’s a lot of hype about AI agents right now, but developing robust agents isn’t yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue’s CTO, tells us more about their approach and some of what they have learned along the way.

    Private, open source chat UIs
    We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).

    Related Episodes

    When data leakage turns into a flood of trouble
    Rajiv Shah teaches Daniel and Chris about data leakage, and its major impact upon machine learning models. It’s the kind of topic that we don’t often think about, but which can ruin our results. Raj discusses how to use activation maps and image embedding to find leakage, so that leaking information in our test set does not find its way into our training set.

    Stable Diffusion (Practical AI #193)
    The new Stable Diffusion model is everywhere! Of course you can use this model to quickly and easily create amazing, dream-like images to post on Twitter, Reddit, Discord, etc., but this technology is also poised to be used in very pragmatic ways across industry. In this episode, Chris and Daniel take a deep dive into all things Stable Diffusion. They discuss the motivations for the work, the model architecture, and the differences between this model and other related releases (e.g., DALL·E 2). (Image from stability.ai)

    AlphaFold is revolutionizing biology
    AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment, and is accelerating research in nearly every field of biology. Daniel and Chris delve into protein folding, and explore the implications of this revolutionary and hugely impactful application of AI.

    Zero-shot multitask learning (Practical AI #158)
    In this Fully-Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically they dive into the T0, a series of natural language processing (NLP) AI models specifically trained for researching zero-shot multitask learning. Daniel provides a brief tour of the possible with the T0 family. They finish up with a couple of new learning resources.