
    Podcast Summary

    • Discussing Intel Liftoff and the Advent of Gen AI hackathon
      Intel Liftoff is a free accelerator program for startups in AI and ML, offering technical support, access to technology, and marketing assistance. The Advent of Gen AI hackathon, inspired by Advent of Code, fosters innovation and collaboration in generative AI.

      The decisions we make in the field of artificial intelligence can significantly impact the future of the technology and who has access to it. Chris Dixon's book, "Read, Write, Own," encourages building a new era of the Internet that puts people in charge, with a focus on open networks and fair compensation for creators. During this episode of Practical AI, a fireside chat was held with the organizers of Intel's Liftoff program for startups. Liftoff is a free accelerator program for early-stage startups in AI and machine learning, offering world-class technical support, access to technology, and marketing assistance. The program provides engineering expertise, access to Intel software and the Intel Developer Cloud, and global marketing through Intel's channels. The idea for the Advent of Gen AI hackathon came from the team's inspiration from Advent of Code. The initial vision was to create an event focused on generative AI, with the goal of fostering innovation and collaboration in the field. The hackathon received positive feedback and will continue to improve for future events. Intel Liftoff is an excellent opportunity for startups looking to scale and accelerate their journey in AI and machine learning.

    • Intel's Gen AI Initiative: Engaging the Community in Generative AI Technology
      Intel's Gen AI initiative attracted over 2,000 participants, from beginners to experts, through engaging challenges in generative AI technology. The event was a success, and Intel plans to continue it annually, potentially exploring new technologies.

      The Advent of Gen AI initiative, launched by Intel, was designed to introduce a wide range of individuals to Generative AI technology through a set of engaging and fun challenges. The response was overwhelming, with participants ranging from beginners with prompt engineering knowledge to experts in the field, including students, startups, and even Intel employees. The challenges ranged from algorithmic questions to building multimodality chatbots, and the community support was strong, with individuals helping each other out in the process. The initiative was a success, with over 2,000 registrations before registrations had to be closed. Intel plans to continue this annual event, potentially exploring new technologies each year. If you missed out on participating in 2023, keep an eye out for future opportunities through the Liftoff program and Intel's social media channels.

    • Gen AI event exceeds expectations with active community engagement and diverse challenges
      The Gen AI event showcased the power of community collaboration and the transformative capabilities of AI technology through a range of challenges, providing opportunities for learning and growth for participants of all skill levels.

      The recent Gen AI event surpassed expectations with a large number of high-quality submissions and active community engagement. The challenges, designed to progressively build on coding and creativity skills, ranged from simple image generation to more complex Python code explanation, requiring varying levels of expertise. The event not only served as a hackathon but also became a valuable learning resource for those new to Gen AI. Despite initial uncertainty about the event's success, the community's enthusiasm and collaboration made it an incredible experience. The challenges, while not necessarily increasing in difficulty as a strict definition, provided a progression of complexity for different skill levels. The Gen AI event demonstrated the power of the community and the transformative capabilities of AI technology.

    • Learning AI skills through hackathons and Intel Developer Cloud
      Hackathons offer opportunities to learn various AI skills, from prompting to RAG systems, and the Intel Developer Cloud provides efficient tools for model optimization and deployment.

      The recent focus in the AI industry is on making neural networks more accessible through new levels of abstraction and APIs, allowing individuals to build and use AI skills in various applications without extensive coding knowledge. The five-day hackathon challenges aimed to teach different skills, from prompting and image editing to RAG (Retrieval-Augmented Generation) systems and code explanation. These skills are in high demand and form the foundation for implementing AI solutions in industries and businesses. The Intel Developer Cloud was a unique aspect of the hackathon, providing various ways to run AI models beyond just using GPUs. Participants were introduced to the platform and to some innovative ways of using it, such as Intel's Neural Compressor for model optimization and Intel's OpenVINO toolkit for model deployment on various devices. These offerings allow for more efficient and diverse AI model implementation, making the Intel Developer Cloud an essential tool for developers and engineers looking to expand their AI skill set.
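As an illustrative sketch only, the RAG pattern the challenges taught reduces to two steps: retrieve the context most relevant to a query, then generate an answer grounded in it. Here the retriever is a toy keyword-overlap ranker and the "generator" is a template standing in for an LLM; all function names and documents below are invented for the example.

```python
# Toy sketch of the Retrieval-Augmented Generation (RAG) pattern:
# 1) retrieve the documents most relevant to a query,
# 2) hand them to a generator as grounding context.

def words(text: str) -> set[str]:
    """Lowercase a string and strip basic punctuation for crude matching."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top_k hits."""
    ranked = sorted(documents, key=lambda d: len(words(query) & words(d)), reverse=True)
    return [d for d in ranked[:top_k] if words(query) & words(d)]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: stitch retrieved context into the answer."""
    return f"Q: {query} | context: {' | '.join(context)}"

docs = [
    "OpenVINO deploys models on many kinds of devices.",
    "Neural Compressor can quantize a model for faster inference.",
    "The hackathon ran for five days.",
]
query = "How do I quantize a model?"
answer = generate(query, retrieve(query, docs))
```

A production system would replace the keyword ranker with vector-embedding search and the template with a model call, but the retrieve-then-generate shape is the same.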

    • Intel Developer Cloud: Powerful Tools for AI and ML Projects
      Intel Developer Cloud offers powerful GPUs, CPUs, and file storage, along with accelerators for AI workloads, including JupyterHub instances, Data Center Max Series GPUs, and 4th Generation Xeon processors. It is known for efficient and effective performance, offers a range of machine sizes, and plans to add more features.

      Intel Developer Cloud (IDC) offers a unique solution for startups and individuals working on AI and machine learning projects, providing access to powerful GPUs, CPUs, and file storage, along with accelerators like Gaudi 2, designed specifically for high-bandwidth workloads. IDC's offerings include JupyterHub instances, Data Center Max Series GPUs, and 4th Generation Xeon processors, making it an efficient and effective choice for AI workloads. The platform also offers a range of machine sizes, from single nodes to clusters, and is planning to add more local models and elements to boost its capabilities. The Intel team has received positive feedback from users and aims to make the platform even better with upcoming features like Kubernetes service objects. Dan, one of the first customers, was impressed by the performance and support and recommends IDC for those looking for a performance-focused cloud solution for their AI projects. There are various tooling options available, from optimizing models for CPU or edge environments to using powerful accelerators like Gaudi 2. For those interested, exploring the Hugging Face Optimum library is a great starting point.
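The kind of model optimization mentioned above for CPU and edge environments can be pictured, in heavily simplified form, as post-training int8 quantization: weights are stored as 8-bit integers plus a scale factor, trading a little precision for a smaller, faster model. The code below is a toy sketch of the idea only, not the API of Neural Compressor or any other Intel tool.

```python
# Toy post-training quantization: map float weights to the int8 range
# [-127, 127] with a single scale factor, then reconstruct approximations.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Return (quantized weights, scale) using symmetric max-abs scaling."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.001, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Real toolkits automate this per-layer, calibrate scales on sample data, and emit kernels that compute directly on the integer values, but the size/precision trade-off is the same one sketched here.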

    • Optimize machine learning models with Intel's Optimum tool
      Intel's Optimum tooling simplifies model optimization for various architectures through easy class and optimizer replacement or wrapping, with collaborations with Hugging Face and commitments to open source software and community engagement.

      Intel's Optimum tool provides an easy and effective way to optimize machine learning models for various architectures, including CPUs, GPUs, HPUs, and more. This is achieved through simple replacement or wrapping of classes and optimizers, allowing users to run their models quickly and efficiently across a wide range of hardware. Intel's collaboration with Hugging Face on this tooling has significantly improved the ease of use and applicability of model optimization, especially for inference with LLMs. Furthermore, Intel's commitment to open source software and community engagement is evident in their contributions to major projects like PyTorch and TensorFlow, as well as their development of extensions and libraries to further optimize model performance. Intel's philosophy of oneAPI, heterogeneous programming, and open standards enables seamless integration of various accelerators with minimal code changes for cross-platform compatibility. Overall, Optimum and Intel's approach to machine learning optimization offer significant benefits for developers and organizations looking to maximize the potential of their models on diverse hardware architectures.
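The "simple replacement or wrapping of classes" idea can be pictured with a pure-Python toy: two classes expose the same `predict()` interface, so swapping one for the other requires no other code changes. The class names here are invented for illustration; in Optimum the swap is between a Transformers model class and its hardware-optimized counterpart.

```python
# Toy illustration of the "replace one class, keep the rest of your code"
# optimization pattern: identical interface, different backend.

class BaselineLinear:
    """Reference implementation: plain Python loop."""
    def __init__(self, weights: list[float]):
        self.weights = weights

    def predict(self, x: list[float]) -> float:
        total = 0.0
        for w, v in zip(self.weights, x):
            total += w * v
        return total

class OptimizedLinear(BaselineLinear):
    """Drop-in replacement with the same interface; here the "optimized"
    backend is just a generator expression, standing in for a tuned kernel."""
    def predict(self, x: list[float]) -> float:
        return sum(w * v for w, v in zip(self.weights, x))

x = [1.0, 2.0, 3.0]
baseline = BaselineLinear([0.5, -1.0, 2.0]).predict(x)
optimized = OptimizedLinear([0.5, -1.0, 2.0]).predict(x)
```

Because the interface is unchanged and the numerical result is identical, calling code never needs to know which backend it is using; that is what makes the one-line class swap practical.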

    • Open-source models and frameworks fueling innovation
      The recent trend towards open-source models and frameworks enables rapid innovation and adoption, as demonstrated by Intel's fine-tuned neural chat models and Vana's Python RAG framework. These tools allow for fine-tuning and quick adoption of new technologies, leading to innovative solutions in various fields.

      The recent trend towards open-source models and frameworks in the tech industry is enabling rapid innovation and adoption. Intel's release of fine-tuned neural chat models based on the Mistral model, openly accessible on Hugging Face, demonstrates this trend. Another example is Vana, a Python RAG framework for text-to-SQL generation that lets users chat with any relational database and boasts high accuracy, excellent security, and privacy. During the recent hackathon, the quality of submissions was impressive, with standout examples including the use of Retrieval-Augmented Generation (RAG) for tasks like parsing YouTube videos and generating Python explanations. The Jupyter Notebooks provided as learning activities were also well-received, with many participants using them to create amazing work. What sets these examples apart is the way they demonstrate the power of combining models and frameworks to solve complex problems. The ability to fine-tune models and quickly adopt new technologies is leading to innovative solutions in various fields. The trend towards open-source models and frameworks is enabling a new era of collaboration and innovation in tech.
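"Chatting with a relational database" boils down to translating a natural-language question into SQL and executing it. In the sketch below the translation step is a hard-coded template standing in for the LLM-plus-retrieved-schema step a real text-to-SQL framework would use; the table, data, and question are invented for the example.

```python
# Toy text-to-SQL pipeline: question -> SQL -> result, against an
# in-memory SQLite database from the standard library.
import sqlite3

def question_to_sql(question: str) -> str:
    """Stand-in for the model step: map one known question shape to SQL."""
    if "how many" in question.lower():
        return "SELECT COUNT(*) FROM participants"
    raise ValueError("question shape not supported in this sketch")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE participants (name TEXT)")
conn.executemany("INSERT INTO participants VALUES (?)",
                 [("Ada",), ("Grace",), ("Arian",)])

sql = question_to_sql("How many participants registered?")
(count,) = conn.execute(sql).fetchone()
```

The hard part a real framework solves is the first function: generating correct, safe SQL for arbitrary questions by giving a language model the schema and relevant retrieved examples. Everything after that is ordinary query execution.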

    • Combination of generative AI and democratization of tools leads to impressive turnout in AI challenge
      The challenge drew a diverse range of participants and high-quality submissions. Easy-to-use APIs and tooling, the participants' creativity and ingenuity, the hackathon's global reach, and constant collaboration in the Slack channel all contributed to its success.

      The combination of generative AI and the democratization of AI tools has led to an impressive turnout and high-quality submissions in the recent AI challenge, involving a diverse range of participants from various regions and backgrounds. The ease of use of APIs and toolings, such as Hugging Face and PredictionIO, enabled individuals and teams to optimize their solutions and push the boundaries of what was expected. The creativity and ingenuity displayed by the participants, including a middle school student named Arian, were astounding. The event, which was targeted towards startups, attracted a wide range of developers from different industries and companies, making it a truly global hackathon. The Liftoff team has already posted three blog articles about the challenge on their website, developer.intel.com/liftoff, where you can find more information about the submissions and future events. The collaboration and communication in the Slack channel were constant, making it an exciting and productive experience for everyone involved.

    • Leveraging Large Language Models: Scaling for Growth
      Collaboration and innovation led to practical, trustworthy, and privacy-preserving solutions using large language models. The trend of open models becoming more accessible and scalable is expected to continue shaping the future of AI.

      The key takeaway from this hackathon experience with Intel Liftoff is the potential for significant growth and the importance of being prepared to scale when working with large language models (LLMs). Ralph, from the Liftoff program, was impressed by the interaction and collaboration between teams, leading to the creation of practical, trustworthy, and privacy-preserving solutions. He also highlighted the encouraging trend of open models becoming more accessible and scalable, which is expected to continue shaping the future of AI. The Liftoff team appreciated the support from Intel and the entire community, with many participants expressing their intent to bring these solutions to their workplaces. The team looks forward to addressing any shortcomings and scaling up for future challenges. Overall, this hackathon demonstrated the power of collaboration and innovation in the AI ecosystem.

    • Valuing community feedback and collaboration
      The Liftoff team fosters a community-driven effort, offering benefits like scale, expertise, and access to hardware for startups, and expresses gratitude towards contributors.

      The Liftoff team values community feedback and encourages everyone, especially startups, to get involved and contribute to their community. They aim to create a more community-driven effort, rather than a top-down approach. The team is doing remarkable things and offers benefits such as scale, expertise, and access to hardware for participating startups. They expressed gratitude towards everyone who contributed to the recent event, including Eugenie for her feedback, and Kelly for her exceptional work on the website and content creation despite being sick. The entire team's collaboration made the event a success, and they look forward to doing more in the future. If you're interested in joining the community, sign up for the Practical AI Slack team at practicalai.fm/community.

    Recent Episodes from Practical AI: Machine Learning, Data Science

    Apple Intelligence & Advanced RAG

    Daniel & Chris engage in an impromptu discussion of the state of AI in the enterprise. Then they dive into the recent Apple Intelligence announcement to explore its implications. Finally, Daniel leads a deep dive into a new topic - Advanced RAG - covering everything you need to know to be practical & productive.

    The perplexities of information retrieval

    Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity’s sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity’s approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.

    Using edge models to find sensitive data

    We’ve all heard about breaches of privacy and leaks of private health information (PHI). For healthcare providers and those storing this data, knowing where all the sensitive data is stored is non-trivial. Ramin, from Tausight, joins us to discuss how they have deployed edge AI models to help companies search through billions of records for PHI.

    Rise of the AI PC & local LLMs

    We’ve seen a rise in interest recently and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are getting into this along with models like the Phi family from Microsoft. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.

    AI in the U.S. Congress

    At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU to pursue a Master’s degree in C.S. with a concentration in Machine Learning. Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act. We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.

    Full-stack approach for effective AI agents

    There’s a lot of hype about AI agents right now, but developing robust agents isn’t yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach; from hardware innovations through to user interface. In this episode, Josh, Imbue’s CTO, tells us more about their approach and some of what they have learned along the way.

    Private, open source chat UIs

    We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).

    Mamba & Jamba

    First there was Mamba… now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ‘ol attention layers. This results in a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21’s co-founder Yoav.

    Related Episodes

    When data leakage turns into a flood of trouble

    Rajiv Shah teaches Daniel and Chris about data leakage, and its major impact upon machine learning models. It’s the kind of topic that we don’t often think about, but which can ruin our results. Raj discusses how to use activation maps and image embedding to find leakage, so that leaking information in our test set does not find its way into our training set.

    Stable Diffusion (Practical AI #193)

    The new stable diffusion model is everywhere! Of course you can use this model to quickly and easily create amazing, dream-like images to post on twitter, reddit, discord, etc., but this technology is also poised to be used in very pragmatic ways across industry. In this episode, Chris and Daniel take a deep dive into all things stable diffusion. They discuss the motivations for the work, the model architecture, and the differences between this model and other related releases (e.g., DALL·E 2). (Image from stability.ai)

    AlphaFold is revolutionizing biology

    AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment, and is accelerating research in nearly every field of biology. Daniel and Chris delve into protein folding, and explore the implications of this revolutionary and hugely impactful application of AI.

    Zero-shot multitask learning (Practical AI #158)

    In this Fully-Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically they dive into T0, a series of natural language processing (NLP) AI models specifically trained for researching zero-shot multitask learning. Daniel provides a brief tour of what is possible with the T0 family. They finish up with a couple of new learning resources.