
    Podcast Summary

    • 2024: A Year of Continued Investment and Exploration in AI
      Only around 15-20% of enterprises have integrated AI systems, indicating a long way to go before it becomes mainstream technology. Continued investment and exploration are necessary to address regulatory and legal considerations and prepare for future advancements in AI.

      2024 is expected to be a year of continued investment and exploration in AI, particularly in the area of generative AI, as more organizations try to understand how to effectively implement and integrate it into their businesses. Chris and Daniel discussed how 2023 saw a significant push from big tech into AI, but there are still many challenges to overcome before it becomes a widely adopted and seamless technology. They noted that the current stage of AI adoption can be compared to an expansion period, where the focus is on developing and scaling AI applications. However, there are also regulatory and legal considerations that will come into play as the technology continues to evolve. Daniel mentioned a study that showed only around 15-20% of enterprises have some level of integration with AI systems, indicating that there is still a long way to go before it becomes a mainstream technology. Overall, the conversation highlighted the ongoing journey of AI adoption and the importance of continued investment and exploration to prepare for the future.

    • Shifting focus from R&D to scaling and application development
      In 2024, companies prioritize scaling and application development over heavy R&D investments. AI integrations are commoditizing, requiring businesses to differentiate through human creativity and innovation.

      In 2024, companies are shifting their focus from heavy research and development (R&D) investments towards scaling and developing applications, platforms, and systems based on existing technologies. The market for AI integrations is exploding, with large tech companies offering APIs for AI usage. This trend is leading to a commoditization of certain AI products, and companies will need to differentiate themselves through human creativity and innovation. Additionally, having AI in a product is no longer enough, and businesses must find new ways to stand out. Some of the highlights from 2023 include the rapid adoption of these trends and the increasing ubiquity of AI. Companies that can effectively navigate this landscape and differentiate themselves will likely succeed, while others may struggle.

    • The Year of AI Integration: ChatGPT and Beyond
      AI models like ChatGPT, Google's Gemini, Meta's Llama 2, and Anthropic's Claude 2 revolutionized daily work and life in 2023, becoming essential components in various domains, from coding to education.

      2023 was the year of integration for AI models into various aspects of daily work and life. ChatGPT, released as an upgrade to GPT-3, ignited a competition among tech companies, leading to the introduction of new models like Google's Gemini, Meta's Llama 2, and Anthropic's Claude 2. Initially, the models had limitations in programming languages other than Python, but this began to change as the year progressed. For developers and strategists, the use of AI models became a constant part of their workflow, whether they were coding or handling non-technical tasks. Even for an 11-year-old student, AI models were shown to be useful learning tools, leading to conversations about their integration into education. By the end of 2023, AI models had become an essential component in various domains, from coding to everyday life, making the year a significant milestone in the development and adoption of AI technology.

    • AI integration into daily life in 2023
      AI technologies became fully ingrained in daily life, enabling personalized experiences and seamless experimentation for individuals and their families.

      In 2023, the use of AI technologies became fully integrated into every aspect of the speaker's life, not just limited to work or specific activities. This integration allowed for seamless experimentation and the application of AI outputs across various opportunities. Furthermore, the impact of AI extended beyond the individual, affecting family members as well. An example of this was the creation of personalized Christmas gifts using AI tools like ClipDrop from Stability. This year marked a shift from setting aside dedicated time for AI-related activities to effortlessly incorporating these technologies into daily life.

    • AI technology brings value to non-tech individuals and businesses
      Simple AI applications can have significant impacts, AI tools enhance productivity, and privacy-preserving options expand accessibility.

      Even simple and seemingly trivial applications of AI technology can bring significant value to individuals and businesses that are not tech-focused. The speaker shared a personal experience of sending a thank-you text with an attached image, which was later printed out and framed in a veterinary office. This simple act, facilitated by AI image generation technology, led to a meaningful impact on the recipients. Another takeaway is the increasing productivity gains that developers are experiencing through the integration of AI tools into their workflows. The speaker shared how they made the switch from plain VS Code to VS Code with Codeium and found the combination of AI-powered features and a chat interface to be highly efficient. The emergence of privacy-preserving options, such as Continue.dev, further enhances the accessibility and seamless integration of AI tools for individual contributors. These examples illustrate how AI technology is transforming the way we work and interact, often in unexpected and meaningful ways, and underscore the importance of staying open to new technological innovations.

    • Discovering Productivity Benefits of AI in 2024
      In 2024, AI is expected to offer productivity benefits beyond entertainment, but public perception is influenced by conflicting views from industry pioneers. Stay informed through reliable sources to make informed decisions.

      The year 2024 is hoped to be the year when people discover the productivity benefits of AI instead of just the entertainment aspects. There is a growing fear of AI among those outside the industry, but significant policy and regulation initiatives have been put in place to mitigate concerns. However, the contradictory views of industry pioneers like Yann LeCun and Geoffrey Hinton make it difficult for the general public to determine whom to believe. It's essential to educate oneself on the topic through reliable sources to make informed decisions. Despite the challenges, the potential for AI to transform productivity and improve lives is immense. The industry will continue to evolve, and it's crucial for individuals and organizations to stay informed and adapt to the changes.

    • Hybridized approach of data science, machine learning, and generative AI will be a game-changer in 2024
      The combination of traditional data science, machine learning algorithms, and generative AI systems will revolutionize enterprise-level solutions, with examples like sentiment analysis and AI-generated responses informed by human intervention.

      While generative AI is expected to be a major focus in 2024, Chris proposes a spicy take that the combination of traditional data science and machine learning algorithms with generative AI systems will be a game-changer. He believes that enterprises will see a resurgence of this hybridized approach, which can be seen in simple examples like sentiment analysis and AI-generated responses informed by human intervention. This combination of technologies, according to Chris, will be a practical and powerful solution that may not receive as much attention in the news but will be significant on the enterprise level. Chris also mentioned his prediction for Prediction Guard taking off in 2024. Although this was seen as a given, his spicy take aimed to emphasize the potential impact of the hybridized approach.
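      The hybrid pattern Chris describes can be sketched in a few lines: a traditional classifier routes each message before any generative step runs, keeping a human in the loop for sensitive cases. This is only an illustrative sketch; the toy lexicon scorer stands in for a trained sentiment model, and `draft_reply`, `handle`, and the return labels are hypothetical names, not any specific product's API.

      ```python
      import string

      # Toy lexicons standing in for a trained sentiment model.
      POSITIVE = {"great", "love", "excellent", "happy", "thanks"}
      NEGATIVE = {"bad", "broken", "angry", "refund", "terrible"}

      def classify_sentiment(text: str) -> str:
          """Traditional-ML step: label a message before any generative call."""
          words = {w.strip(string.punctuation) for w in text.lower().split()}
          score = len(words & POSITIVE) - len(words & NEGATIVE)
          if score > 0:
              return "positive"
          if score < 0:
              return "negative"
          return "neutral"

      def draft_reply(text: str, sentiment: str) -> str:
          """Stub for the generative step; negative messages are routed to a
          human instead of getting an automatic AI-generated response."""
          if sentiment == "negative":
              return "ESCALATE_TO_HUMAN"
          return f"AUTO_REPLY({sentiment})"

      def handle(message: str) -> str:
          return draft_reply(message, classify_sentiment(message))
      ```

      The design point is that the cheap, well-understood model makes the routing decision, and the expensive generative model (plus human review) only handles the cases it is trusted with.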

    • Anticipation for AGI resurgence despite potential risks
      Skepticism about current AI tooling value, human touch necessary, focus shifting to neuroscience for consciousness understanding and safety measures.

      There is growing anticipation for a resurgence of research into Artificial General Intelligence (AGI) due to the remarkable productivity and output of current AI models. The fear is that we may stumble upon consciousness in these models without fully understanding it, leading to potential surprises and risks. Daniel Stenberg, the creator of curl, shares his skepticism about the current value of generative AI tooling but is optimistic about future improvements. However, he strongly believes that the human touch is necessary to ensure the best outcomes. The focus on understanding consciousness and ensuring safety measures will likely shift from the AI space to the neuroscience space.

    • AI Trends for 2024: Focus on Retrieval Augmented Generation, Open Models, Productivity Enhancement, Multimodal Models, and Small Language Models
      In 2024, AI trends will continue to focus on Retrieval Augmented Generation, open models surpassing GPT-4 capabilities, productivity enhancement, multimodal models, and small language models for economic and compute efficiency reasons.

      According to various predictions on AI for 2024, several trends are expected to continue and develop further. These include a focus on Retrieval Augmented Generation (RAG) for improvements, open models surpassing the capabilities of GPT-4, AI enhancing work rather than replacing it, a greater emphasis on multimodal models, and a shift towards small language models for economic and compute efficiency reasons. These trends were distilled from numerous Twitter, LinkedIn, and blog posts, and are considered practical and safe predictions that many industry experts would agree with. The multimodal focus is particularly anticipated, and it's likely that most, if not all, of these predictions will come true this year. Some open models already surpass certain aspects of GPT-4 in specific tasks or domains, and the logical progression of AI development suggests these trends are the next steps.
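      As a concrete picture of the RAG pattern these predictions center on: retrieve relevant context first, then ground the model's answer in it. In this minimal sketch a crude term-overlap scorer stands in for a real embedding index, and the final function only assembles the augmented prompt rather than calling any actual model; all function names are illustrative assumptions.

      ```python
      def score(query: str, doc: str) -> int:
          """Crude relevance score: count of shared lowercase terms."""
          return len(set(query.lower().split()) & set(doc.lower().split()))

      def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
          """Return the k documents with the highest overlap with the query."""
          return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

      def build_prompt(query: str, docs: list[str]) -> str:
          """Assemble the augmented prompt a generative model would receive."""
          context = "\n".join(retrieve(query, docs))
          return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
      ```

      A production system would swap the overlap scorer for vector similarity over embeddings, but the shape of the pipeline, retrieve then generate, is the same.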

    • Evolving Landscape of Large Language Models: Open Models and Cost-Efficient Options Gain Ground
      The CEO of Hugging Face predicts financial challenges for some AI companies due to high compute costs, emphasizes the benefits of cost-efficient models for enterprises and the environment, and extends a hand to struggling teams in the field.

      The landscape of large language models (LLMs) is evolving rapidly, with open models and more cost-efficient options gaining ground against the current market leader, GPT-4. Mixtral from Mistral and newer models like Gemini are showing promise in specific tasks. Clem, the CEO of Hugging Face, predicts that some hyped AI companies may face financial challenges due to the high compute costs associated with running large models at scale. He also emphasizes the benefits of cost-efficient models for enterprises and the environment. Hugging Face is even extending a hand to struggling teams in the field, offering them a chance to join its team and continue their work using its infrastructure. The focus on cost-effective models is not only beneficial for the climate but also for operational costs, making it a trend that may not favor all players in the market.

    • The intersection of software engineering and Large Language Models in 2024
      In 2024, the importance of strong software engineering fundamentals in deploying LLM applications will be emphasized, as economics shift from building to financial viability. LLMs are just another aspect of software, and all software will eventually have these capabilities.

      The intersection of software engineering and Large Language Models (LLMs) will become increasingly important in 2024. The economics of sustaining LLM development will shift from a focus on building and engineering to a practical consideration of financial viability. Jerry from LlamaIndex, who was recently on the show, emphasized the importance of strong software engineering fundamentals in deploying LLM applications. AI remains a cool new capability within the larger software space, and the two are gradually merging. While we are still in the "cool space" of AI, software skills will continue to be essential, with some tasks being human-driven and others being driven by software with models. As we move forward, it's important to remember that LLMs are just another aspect of software, and all software will eventually have these capabilities. So, stay tuned for more discussions on this topic and others in 2024. Don't forget to join our Slack community at changelog.com/community to connect with us and share your thoughts on potential guests and topics. Happy new year to all our listeners!

    • The importance of community and continuous learning in AI
      Surround yourself with resources and people to succeed in AI; join Practical AI's free Slack team and subscribe to their podcast for updates and connections.

      A key takeaway from this week's Practical AI episode is the importance of community and continuous learning in the field of artificial intelligence. The hosts, Daniel and Chris, emphasized the value of subscribing to Practical AI and joining their free Slack team to connect with like-minded individuals and stay updated on the latest AI developments. They also expressed their gratitude to their partners, Fly.io, Breakmaster Cylinder, and the listeners for their support. Overall, the message was clear: to succeed in AI, it's essential to surround yourself with the right resources and people. So, keep learning, keep growing, and don't forget to join the Practical AI community!

    Recent Episodes from Practical AI: Machine Learning, Data Science

    Apple Intelligence & Advanced RAG

    Daniel & Chris engage in an impromptu discussion of the state of AI in the enterprise. Then they dive into the recent Apple Intelligence announcement to explore its implications. Finally, Daniel leads a deep dive into a new topic - Advanced RAG - covering everything you need to know to be practical & productive.

    The perplexities of information retrieval

    Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity’s sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity’s approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.

    Using edge models to find sensitive data

    We’ve all heard about breaches of privacy and leaks of private health information (PHI). For healthcare providers and those storing this data, knowing where all the sensitive data is stored is non-trivial. Ramin, from Tausight, joins us to discuss how they deploy edge AI models to help companies search through billions of records for PHI.

    Rise of the AI PC & local LLMs

    We’ve seen a rise in interest recently and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are getting into this along with models like the Phi family from Microsoft. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.

    AI in the U.S. Congress

    At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU to pursue a Master’s degree in C.S. with a concentration in Machine Learning. Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act. We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.

    Full-stack approach for effective AI agents

    There’s a lot of hype about AI agents right now, but developing robust agents isn’t yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach: from hardware innovations through to user interface. In this episode, Josh, Imbue’s CTO, tells us more about their approach and some of what they have learned along the way.

    Private, open source chat UIs

    We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).

    Mamba & Jamba

    First there was Mamba… now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ol’ attention layers. This results in a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21’s co-founder Yoav.

    Related Episodes

    When data leakage turns into a flood of trouble

    Rajiv Shah teaches Daniel and Chris about data leakage, and its major impact upon machine learning models. It’s the kind of topic that we don’t often think about, but which can ruin our results. Raj discusses how to use activation maps and image embedding to find leakage, so that leaking information in our test set does not find its way into our training set.

    Stable Diffusion (Practical AI #193)

    The new Stable Diffusion model is everywhere! Of course you can use this model to quickly and easily create amazing, dream-like images to post on Twitter, Reddit, Discord, etc., but this technology is also poised to be used in very pragmatic ways across industry. In this episode, Chris and Daniel take a deep dive into all things Stable Diffusion. They discuss the motivations for the work, the model architecture, and the differences between this model and other related releases (e.g., DALL·E 2). (Image from stability.ai)

    AlphaFold is revolutionizing biology

    AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment, and is accelerating research in nearly every field of biology. Daniel and Chris delve into protein folding, and explore the implications of this revolutionary and hugely impactful application of AI.

    Zero-shot multitask learning (Practical AI #158)

    In this Fully-Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then it’s on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically, they dive into T0, a series of natural language processing (NLP) AI models specifically trained for researching zero-shot multitask learning. Daniel provides a brief tour of what’s possible with the T0 family. They finish up with a couple of new learning resources.