
    What You Can Do with Vector Search

    January 17, 2024

    About this Episode

    TNS publisher Alex Williams spoke with Ben Kramer, co-founder and CTO of Monterey.ai, and Cole Hoffer, Senior Software Engineer at Monterey.ai, about how the company uses vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels and turn them into product development recommendations. Monterey.ai connects customer feedback to the development process, bridging customer support and leadership to align with user needs. Figma and Comcast are among the companies using this approach.

    In this interview, Kramer discussed the challenges of building products based on Large Language Models (LLMs), the importance of diverse skills at AI companies, and how Monterey employs Zilliz for vector search, leveraging Milvus, an open-source vector database.

    Kramer highlighted Zilliz's flexibility, its underlying Milvus technology, and its choice of algorithms for semantic search. The decision to adopt Zilliz was driven by its performance on the company's use case, its privacy and security features, and the ease of integrating it into their private network. A cloud-managed solution that met their needs was crucial for Monterey.ai, given its small team and preference to avoid managing infrastructure.
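As a rough illustration of the kind of semantic search described above (not Monterey.ai's actual pipeline), feedback items can be embedded as vectors and ranked by cosine similarity. The toy embeddings below are invented for the example; a production system would use a real embedding model and a vector database such as Milvus:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, corpus, top_k=2):
    """Rank feedback items by similarity to the query vector."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in corpus]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Toy 3-dimensional "embeddings"; a real system would index
# millions of model-generated vectors in a vector database.
feedback = [
    ("App crashes when exporting reports", [0.9, 0.1, 0.0]),
    ("Love the new dashboard design",      [0.1, 0.9, 0.2]),
    ("Export to PDF fails with an error",  [0.8, 0.2, 0.1]),
]

print(search([1.0, 0.0, 0.0], feedback))
```

The point of the sketch is that similar complaints cluster near each other in embedding space, which is what lets feedback from many channels be grouped into a single product recommendation.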

    Learn more from The New Stack about Zilliz and vector database search:

    Improving ChatGPT’s Ability to Understand Ambiguous Prompts

    Create a Movie Recommendation Engine with Milvus and Python

    Using a Vector Database to Search White House Speeches

     

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/

     

    Recent Episodes from The New Stack Podcast

    Is GitHub Copilot Dependable? These Demos Aren’t Promising


    This New Stack Makers podcast, co-hosted by TNS founder and publisher Alex Williams and Joan Westenberg, founder and writer of Joan’s Index, discussed Copilot. Westenberg highlighted its integration with Microsoft 365 and its role as a coding assistant, showcasing its potential to streamline various tasks.

    However, she also revealed its limitations, particularly in reliability. Despite being designed to assist with tasks across Microsoft 365, Copilot's performance fell short during Westenberg's tests, failing to retrieve necessary information from her email and Microsoft Teams meetings. While Copilot proves useful for coding, providing helpful code snippets, its effectiveness diminishes for more complex projects. Westenberg's demonstrations underscored both the strengths and weaknesses of Copilot, emphasizing the need for improvement, especially in reliability, to fulfill its promise as a versatile work companion.

     

    Learn more from The New Stack about Copilot 

    Microsoft One-ups Google with Copilot Stack for Developers 

    Copilot Enterprises Introduces Search and Customized Best Practices 

     

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

     

    The New Monitoring for Services That Feed from LLMs


    This New Stack Makers podcast, co-hosted by Adrian Cockcroft, analyst at OrionX.net, and TNS founder and publisher Alex Williams, discusses the importance of monitoring services that rely on Large Language Models (LLMs) and the emergence of tools like LangChain and LangSmith to address this need.

    Cockcroft, formerly of Netflix and now working with The New Stack, highlights the significance of monitoring AI apps that use LLMs and the challenges posed by slow and expensive LLM API calls. LangChain acts as middleware, connecting LLMs with services, much as the Java Database Connectivity (JDBC) layer connects applications with databases. LangChain's monitoring capabilities led to the development of LangSmith, a dedicated monitoring tool; another tool, LangKit by WhyLabs, offers similar functionality but is less tightly integrated. This reflects the typical evolution of open source projects into commercial products, and LangChain's recent funding round indicates growing interest in such monitoring solutions.

    Cockcroft emphasizes the importance of enterprise-level support and tooling for integrating these solutions into commercial environments. The discussion underscores the evolving landscape of monitoring services powered by LLMs and the emergence of specialized tools to address the associated challenges.
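The middleware monitoring pattern Cockcroft describes can be sketched in a few lines of plain Python. This is a generic timing-and-logging wrapper, not the actual LangSmith or LangKit API; the function names and metric fields are invented for illustration:

```python
import time
import functools

def monitor_llm_call(fn):
    """Record latency and payload sizes for each LLM call.

    A hypothetical stand-in for the kind of instrumentation a tool
    like LangSmith provides; metrics go to an in-memory list here.
    """
    metrics = []

    @functools.wraps(fn)
    def wrapper(prompt, **kwargs):
        start = time.perf_counter()
        response = fn(prompt, **kwargs)
        metrics.append({
            "latency_s": time.perf_counter() - start,
            "prompt_chars": len(prompt),
            "response_chars": len(response),
        })
        return response

    wrapper.metrics = metrics
    return wrapper

@monitor_llm_call
def fake_llm(prompt):
    """Stub model; a real service would call a remote LLM API here."""
    return f"echo: {prompt}"

fake_llm("hello")
print(fake_llm.metrics[0]["prompt_chars"])  # 5
```

Because LLM calls are slow and billed per token, capturing latency and payload size at the middleware layer, rather than inside each application, is exactly the niche these monitoring tools fill.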

     

    Learn more from The New Stack about LangChain: 

    LangChain: The Trendiest Web Framework of 2023, Thanks to AI 

    How Retool AI Differs from LangChain (Hint: It's Automation) 

     

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

     

    How Platform Engineering Supports SRE


    In this New Stack Makers podcast, Martin Parker, a solutions architect for UST, spoke with TNS editor-in-chief Heather Joslyn about the significance of internal developer platforms (IDPs), emphasizing benefits that extend beyond frontend developers to backend engineers and site reliability engineers (SREs).

    Parker highlighted the role of IDPs in automating repetitive tasks, allowing SREs to focus on optimizing application performance. Standardization is key, ensuring observability and monitoring solutions align with best practices and cater to SRE needs. By providing standardized service level indicators (SLIs) and key performance indicators (KPIs), IDPs enable SREs to maintain reliability efficiently. Parker stressed the importance of avoiding siloed solutions by establishing standardized practices and tools for effective monitoring and incident response. Overall, the deployment of IDPs aims to streamline operations, reduce incidents, and enhance organizational value by empowering SREs to concentrate on system maintenance and improvements.
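A standardized SLI of the kind Parker describes can be as simple as a ratio of good events to total events, with an error budget derived from the SLO. This is a minimal sketch, not UST's implementation; the numbers and function names are illustrative:

```python
def availability_sli(good_events, total_events):
    """Fraction of requests served successfully (an availability SLI)."""
    if total_events == 0:
        return 1.0  # no traffic: treat the objective as met
    return good_events / total_events

def error_budget_remaining(sli, slo=0.999):
    """Share of the error budget still unspent for a given SLO."""
    allowed = 1.0 - slo   # e.g. 0.1% of requests may fail
    burned = 1.0 - sli    # fraction that actually failed
    return max(0.0, 1.0 - burned / allowed)

sli = availability_sli(good_events=999_500, total_events=1_000_000)
print(f"SLI: {sli:.4f}")                                   # SLI: 0.9995
print(f"budget left: {error_budget_remaining(sli):.0%}")   # budget left: 50%
```

The value of putting such a calculation in the platform itself is that every team's "availability" means the same thing, which is the standardization point Parker makes.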

    Learn more from The New Stack about UST: 

    Cloud Cost-Unit Economics- A Modern Profitability Model 

    Cloud Native Users Struggle to Achieve Benefits, Report Says 

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

    Internal Developer Platforms: Helping Teams Limit Scope


    In this New Stack Makers podcast, recorded at KubeCon + CloudNativeCon North America, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with TNS editor-in-chief Heather Joslyn about the challenges organizations face when building internal developer platforms, particularly the issue of scope.

    He emphasized the difficulty platform engineering teams face in selecting and integrating various Kubernetes projects amid a plethora of options. Wilcock highlighted the complexity of tracking software updates, new features, and dependencies once those choices are made, and underscored the advantage of a standardized approach to software deployment, which prevents errors caused by diverse mechanisms.

    Tanzu aims to simplify the adoption of platform engineering and internal developer platforms, offering a turnkey approach with the Tanzu Application Platform. This platform is designed to be flexible, malleable, and functional out of the box. Additionally, Tanzu has introduced the Tanzu Developer Portal, providing a focal point for developers to share information and facilitating faster progress in platform engineering without the need to integrate numerous open source projects.

     

    Learn more from The New Stack about Tanzu and internal developer platforms:

    VMware Unveils a Pile of New Data Services for Its Cloud

    VMware Expands Tanzu into a Full Platform Engineering Environment 

    VMware Targets the Platform Engineer

     

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

     

    How the Kubernetes Gateway API Beats Network Ingress


    In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the issues that arise when multiple teams work on the same ingress, leading to friction and incidents. NGINX has also introduced NGINX Gateway Fabric, an implementation of the Kubernetes Gateway API that serves as an alternative to network ingress.

    The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows referencing policies with custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security.

    While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects initially. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development.

    Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API:

    Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for The Future 

    API Gateway, Ingress Controller or Service Mesh: When to Use What and Why 

    Ingress Controllers or the Kubernetes Gateway API? Which is Right for You? 

     

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

     

     

     

     


    How Ethical Hacking Tricks Can Protect Your APIs and Apps


    TNS host Heather Joslyn sits down with Ron Masas to discuss trade-offs when it comes to creating fast, secure applications and APIs. He notes a common issue of neglecting documentation and validation, leading to vulnerabilities. Weak authorization is a recurring problem, with instances where changing an invoice ID could expose another user's data.
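The invoice-ID weakness described here is a classic insecure direct object reference (IDOR): the fix is to check ownership on every lookup, not merely that the ID exists. A minimal sketch, with an invented in-memory data model standing in for a real database:

```python
# Hypothetical in-memory store; a real app would query a database.
INVOICES = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob",   "amount": 480},
}

class Forbidden(Exception):
    """Raised when a user requests a resource they do not own."""

def get_invoice(invoice_id, current_user):
    """Fetch an invoice only if the requesting user owns it.

    Without the ownership check, any authenticated user could walk
    invoice IDs and read other users' data (an IDOR vulnerability).
    """
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        raise KeyError(invoice_id)
    if invoice["owner"] != current_user:
        raise Forbidden(f"{current_user} may not read invoice {invoice_id}")
    return invoice

print(get_invoice(101, "alice")["amount"])  # 250
```

The key design point is that the authorization check lives next to the data access, so there is no code path that returns an invoice without proving ownership first.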

    Masas, an ethical hacker, highlights the risk posed by "zombie" APIs—applications that have become disused but remain potential targets. He suggests investigating frameworks, checking default configurations, and maintaining robust logging to enhance security. Collaboration between developers and security teams is crucial, with "security champions" in development teams and nuanced communication about vulnerabilities from security teams being essential elements for robust cybersecurity.

    For further details, the podcast discusses case studies involving TikTok and Digital Ocean, Masas's views on AI and development, and anticipated security challenges.

    Learn more from The New Stack about Imperva and API security:

    What Developers Need to Know about Business Logic Attacks

    Why Your APIs Aren’t Safe — and What to Do about It

    The Limits of Shift-Left: What’s Next for Developer Security

     

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

    2023 Top Episodes - What’s Platform Engineering?


    Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.

    This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users.”

    This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.

    Learn more from The New Stack about Platform Engineering and Humanitec:

    Platform Engineering Overview, News, and Trends

    The Hype Train Is Over. Platform Engineering Is Here to Stay

    9 Steps to Platform Engineering Hell

    2023 Top Episodes - The End of Programming is Nigh


    Is the end of programming nigh? That's the big question posed in this episode recorded earlier in 2023. It was very popular among listeners, and with the topic being as relevant as ever, we wanted to wrap up the year by highlighting this conversation again.

    If you ask Matt Welsh, he'd say yes, the end of programming is upon us. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.

    Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.

    Welsh is now the founder of fixie.ai, a platform his company is building to let organizations develop applications on top of large language models and extend them with different capabilities.

    For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.

    Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.

    Learn more from The New Stack about AI and the future of software development:

    Top 5 Large Language Models and How to Use Them Effectively

    30 Non-Trivial Ways for Developers to Use GPT-4

    Developer Tips in AI Prompt Engineering

    The New Age of Virtualization


    KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster. This development addresses the challenge of transitioning traditional virtualized environments into cloud-native settings, where certain applications may resist containerization or require substantial investment to adapt.

    The emerging era of virtualization simplifies the execution of virtual machines without concerning the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, thereby reducing associated costs.

    KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug, emphasizing the stability of KubeVirt. The platform's stability now allows for the implementation of features that were previously constrained.

    Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:

    The Future of VMs on Kubernetes: Building on KubeVirt

    A Platform for Kubernetes

    Scaling Open Source Community by Getting Closer to Users
