
    deep learning

    Explore " deep learning" with insightful episodes like "The new AI app stack", "Blueprint for an AI Bill of Rights", "Vector databases (beyond the hype)", "There's a new Llama in town" and "Legal consequences of generated content" from podcasts like ""Practical AI: Machine Learning, Data Science", "Practical AI: Machine Learning, Data Science", "Practical AI: Machine Learning, Data Science", "Practical AI: Machine Learning, Data Science" and "Practical AI: Machine Learning, Data Science"" and more!

    Episodes (100)

    Blueprint for an AI Bill of Rights
    In this Fully Connected episode, Daniel and Chris kick it off by noting that Stability AI released SDXL 1.0, their latest image generation model! They discuss its virtues, and then dive into a discussion of how the United States, European Union, and other entities are approaching governance of AI through new laws and legal frameworks. In particular, they review the White House’s approach, noting the potential for unexpected consequences.

    Vector databases (beyond the hype)
    There’s so much talk (and hype) these days about vector databases. We thought it would be timely and practical to have someone on the show who has been hands-on with the various options and has actually tried to build applications leveraging vector search. Prashanth Rao is a real practitioner who has spent a huge amount of time exploring the expanding set of vector database offerings. After introducing vector databases and giving us a mental model of how they fit in with other datastores, Prashanth digs into the trade-offs related to indices, hosting options, embedding vs. query optimization, and more.
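
    As a rough mental model of what these systems speed up, here is a minimal, hypothetical sketch of brute-force vector search; dedicated vector databases replace this linear scan with approximate nearest-neighbor indices, which is where the trade-offs Prashanth describes come in. The embedding dimensions and data are made up.

```python
# Minimal sketch (not from the episode): brute-force cosine-similarity search,
# the baseline that dedicated vector databases accelerate with ANN indices.
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:k]  # indices of the k most similar documents

doc_vecs = np.random.rand(10_000, 384)  # e.g., sentence-embedding vectors
query_vec = np.random.rand(384)
print(top_k(query_vec, doc_vecs))
```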

    There's a new Llama in town
    It was an amazing week in AI news. Among other things, there is a new NeRF and a new Llama in town! Zip-NeRF can create some amazing 3D scenes based on 2D images, and Llama 2 from Meta promises to change the LLM landscape. Chris and Daniel dive into these and compare some of the recently released OpenAI functionality to Anthropic’s Claude 2.

    Legal consequences of generated content
    As a technologist, coder, and lawyer, few people are better equipped to discuss the legal and practical consequences of generative AI than Damien Riehl. He demonstrated this a couple years ago by generating, writing to disk, and then releasing every possible musical melody. Damien joins us to answer our many questions about generated content, copyright, dataset licensing/usage, and the future of knowledge work.

    A developer's toolkit for SOTA AI
    Chris sat down with Varun Mohan and Anshul Ramachandran, CEO / Cofounder and Lead of Enterprise and Partnership at Codeium, respectively. They discussed how to streamline and enable modern development in generative AI and large language models (LLMs). Their new tool, Codeium, was born out of the insights they gleaned from their work in GPU software and solutions development, particularly with respect to generative AI, large language models, and supporting infrastructure. Codeium is a free AI-powered toolkit for developers, with in-house models and infrastructure - not another API wrapper.

    Cambrian explosion of generative models
    In this Fully Connected episode, Daniel and Chris explore recent highlights from the current model proliferation wave sweeping the world - including Stable Diffusion XL, OpenChat, Zeroscope XL, and Salesforce XGen. They note the rapid rise of open models, and speculate that just as in open source software, open models will dominate the future. Such rapid advancement creates its own problems though, so they finish by itemizing concerns such as cybersecurity, workflow productivity, and impact on human culture.

    From ML to AI to Generative AI
    Chris and Daniel take a step back to look at how generative AI fits into the wider landscape of ML/AI and data science. They talk through the differences in how one approaches “traditional” supervised learning and how practitioners are approaching generative AI based solutions (such as those using Midjourney or GPT family models). Finally, they talk through the risk and compliance implications of generative AI, which was in the news this week in the EU.

    AI trends: a Latent Space crossover
    Daniel had the chance to sit down with @swyx and Alessio from the Latent Space pod in SF to talk about current AI trends and to highlight some key learnings from past episodes. The discussion covers open access LLMs, smol models, model controls, prompt engineering, and LLMOps. This mashup is magical. Don’t miss it!

    Controlled and compliant AI applications
    You can’t build robust systems with inconsistent, unstructured text output from LLMs. Moreover, LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities. In this episode, Chris interviews Daniel about his new company called Prediction Guard, which addresses these issues. They discuss some practical methodologies for getting consistent, structured output from compliant AI systems. These systems, driven by open access models and various kinds of LLM wrappers, can help you delight customers AND navigate the increasing restrictions on “GPT” models.
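
    One common building block behind this kind of control is validating model output against a schema and retrying until it conforms. The sketch below is a generic illustration, not Prediction Guard’s actual implementation; `call_llm` and the `Sentiment` schema are hypothetical, and it assumes Pydantic v2.

```python
# Hypothetical sketch: retry until the LLM returns JSON matching a schema.
# `call_llm` stands in for any model client (hosted API or open access model).
from pydantic import BaseModel, ValidationError  # assumes Pydantic v2

class Sentiment(BaseModel):
    label: str        # e.g., "positive" | "negative" | "neutral"
    confidence: float

def structured_sentiment(text: str, call_llm, max_retries: int = 3) -> Sentiment:
    prompt = (
        "Classify the sentiment of the text below. "
        'Respond with JSON only: {"label": ..., "confidence": ...}\n\n' + text
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            return Sentiment.model_validate_json(raw)  # parse + validate
        except ValidationError:
            continue  # re-prompt; real systems also repair or constrain output
    raise RuntimeError("LLM never produced valid structured output")
```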

    Data augmentation with LlamaIndex
    Large Language Models (LLMs) continue to amaze us with their capabilities. However, the utilization of LLMs in production AI applications requires the integration of private data. Join us as we have a captivating conversation with Jerry Liu from LlamaIndex, where he provides valuable insights into the process of data ingestion, indexing, and querying specifically tailored for LLM applications. Delving into the topic, we uncover different query patterns and venture beyond the realm of vector databases.
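
    For orientation, the ingest/index/query loop Jerry describes looks roughly like the sketch below. It assumes the 2023-era `llama_index` imports (newer releases moved them under `llama_index.core`), an `OPENAI_API_KEY` in the environment, and a hypothetical local directory of private documents.

```python
# Minimal sketch of data ingestion, indexing, and querying with LlamaIndex.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./my_private_docs").load_data()  # ingestion
index = VectorStoreIndex.from_documents(documents)                  # indexing
query_engine = index.as_query_engine()                              # querying
print(query_engine.query("What does our Q3 report say about churn?"))
```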

    Creating instruction tuned models
    At the recent ODSC East conference, Daniel got a chance to sit down with Erin Mikail Staples to discuss the process of gathering human feedback and creating an instruction-tuned Large Language Model (LLM). They also chatted about the importance of open data and practical tooling for data annotation and fine-tuning. Do you want to create your own custom generative AI models? This is the episode for you!
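
    As a point of reference, instruction-tuning datasets are commonly stored as instruction/input/output records; the sketch below writes a couple of made-up examples to a JSONL file. The format and examples are illustrative, not from the episode.

```python
# Hypothetical example of the record format often used for instruction tuning.
# Real datasets are curated and reviewed by human annotators.
import json

examples = [
    {
        "instruction": "Summarize the customer feedback in one sentence.",
        "input": "The app is fast, but I keep getting logged out on mobile.",
        "output": "The user likes the app's speed but hits frequent mobile logouts.",
    },
    {
        "instruction": "Translate to French.",
        "input": "Where is the train station?",
        "output": "Où est la gare ?",
    },
]

with open("instructions.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```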

    The last mile of AI app development
    There are a ton of problems around building LLM apps in production, especially in the last mile. Travis Fischer, builder of open-source AI projects like @ChatGPTBot, joins us to talk through these problems (and how to overcome them). He helps us understand the hierarchy of complexity from simple prompting to augmentation, agents, and fine-tuning. Along the way we discuss the frontend developer community that is rapidly adopting AI technology via TypeScript (not Python).

    Large models on CPUs
    Model sizes are crazy these days with billions and billions of parameters. As Mark Kurtz explains in this episode, this makes inference slow and expensive despite the fact that up to 90%+ of the parameters don’t influence the outputs at all. Mark helps us understand all of the practicalities and progress that is being made in model optimization and CPU inference, including the increasing opportunities to run LLMs and other Generative AI models on commodity hardware.
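
    The episode covers pruning away the weights that barely matter as well as quantization. As a generic illustration (not Neural Magic’s sparsity pipeline), the sketch below applies post-training dynamic quantization with stock PyTorch, one common way to make CPU inference cheaper; the toy model is made up.

```python
# Illustrative only: int8 dynamic quantization of Linear layers for CPU inference.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # quantize weights of Linear layers
)

x = torch.randn(1, 768)
with torch.no_grad():
    print(quantized(x))  # smaller weights, faster CPU matmuls
```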

    Causal inference
    With all the LLM hype, it’s worth remembering that enterprise stakeholders want answers to “why” questions. Enter causal inference. Paul Hünermund has been doing research and writing on this topic for some time and joins us to introduce the topic. He also shares some relevant trends and some tips for getting started with methods including double machine learning, experimentation, difference-in-difference, and more.
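
    To make one of those methods concrete, here is a toy difference-in-differences estimate on made-up numbers (not data from the episode): compare the pre/post change for the treated group against the pre/post change for the control group.

```python
# Toy 2x2 difference-in-differences sketch; the numbers are fabricated.
import pandas as pd

df = pd.DataFrame({
    "group":  ["treated"] * 4 + ["control"] * 4,
    "period": ["pre", "pre", "post", "post"] * 2,
    "y":      [10, 12, 20, 22, 11, 13, 15, 17],
})

means = df.groupby(["group", "period"])["y"].mean()
did = (means["treated", "post"] - means["treated", "pre"]) - (
       means["control", "post"] - means["control", "pre"])
print(f"Difference-in-differences estimate of the treatment effect: {did}")
```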

    Capabilities of LLMs 🤯
    Large Language Model (LLM) capabilities have reached new heights and are nothing short of mind-blowing! However, with so many advancements happening at once, it can be overwhelming to keep up with all the latest developments. To help us navigate through this complex terrain, we’ve invited Raj - one of the most adept at explaining State-of-the-Art (SOTA) AI in practical terms - to join us on the podcast. Raj discusses several intriguing topics such as in-context learning, reasoning, LLM options, and related tooling. But that’s not all! We also hear from Raj about the rapidly growing data science and AI community on TikTok.
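
    In-context learning, one of the capabilities discussed, simply means showing the model a few labeled examples inside the prompt rather than updating any weights. Below is a hypothetical few-shot prompt; the LLM client is left abstract since any completion API works here.

```python
# Hypothetical illustration of in-context (few-shot) learning: the "training"
# happens entirely inside the prompt, with no weight updates.
few_shot_prompt = """Classify the ticket as 'billing', 'bug', or 'other'.

Ticket: I was charged twice this month.
Label: billing

Ticket: The export button crashes the app.
Label: bug

Ticket: Do you offer student discounts?
Label:"""

# response = some_llm_client.complete(few_shot_prompt)  # any LLM API works here
print(few_shot_prompt)
```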

    Computer scientists as rogue art historians
    What can art historians and computer scientists learn from one another? Actually, a lot! Amanda Wasielewski joins us to talk about how she discovered that computer scientists working on computer vision were actually acting like rogue art historians and how art historians have found machine learning to be a valuable tool for research, fraud detection, and cataloguing. We also discuss the rise of generative AI and how this technology might cause us to ask new questions like: “What makes a photograph a photograph?”

    Accelerated data science with a Kaggle grandmaster
    Daniel and Chris explore the intersection of Kaggle and real-world data science in this illuminating conversation with Christof Henkel, Senior Deep Learning Data Scientist at NVIDIA and Kaggle Grandmaster. Christof offers a very lucid explanation of how participation in Kaggle can positively impact a data scientist’s skill and career aspirations. He also shares some of his insights and his approach to maximizing AI productivity using GPU-accelerated tools like RAPIDS and DALI.

    AI search at You.com
    Neural search and chat-based search are all the rage right now. However, You.com has been innovating in these topics long before ChatGPT. In this episode, Bryan McCann from You.com shares insights related to our mental model of Large Language Model (LLM) interactions and practical tips related to integrating LLMs into production systems.

    End-to-end cloud compute for AI/ML
    We’ve all experienced pain moving from local development, to testing, and then on to production. This cycle can be long and tedious, especially as AI models and datasets are integrated. Modal is trying to make this loop of development as seamless as possible for AI practitioners, and their platform is pretty incredible! Erik from Modal joins us in this episode to help us understand how we can run or deploy machine learning models, massively parallel compute jobs, task queues, web apps, and much more, without our own infrastructure.
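
    To give a flavor of the workflow, here is a rough, hypothetical sketch assuming the 2023-era Modal Python API (names such as `modal.Stub` have shifted across releases, so check Modal’s docs); the point is that a decorator, not your own infrastructure, decides where the function runs.

```python
# Rough sketch, assuming the 2023-era Modal API; details may differ today.
import modal

stub = modal.Stub("sentiment-demo")
image = modal.Image.debian_slim().pip_install("transformers", "torch")

@stub.function(image=image)
def classify(text: str) -> dict:
    from transformers import pipeline  # imported inside the remote container
    return pipeline("sentiment-analysis")(text)[0]

@stub.local_entrypoint()
def main():
    # Runs locally via `modal run app.py`, but classify() executes in Modal's cloud.
    print(classify.remote("Practical AI is a great listen!"))
```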