
    This startup uses a team of AI agents to write and review their pull requests

June 07, 2024

    Podcast Summary

• Future of junior engineering roles: Advancements in AI and machine learning may reduce the need for traditional junior engineering roles, while the labor market's shift toward profitability over growth presents challenges for some tech workers. Startups like Squire AI are building tools to help developers extract new meaning from their codebase.

The future of software development may involve less reliance on traditional junior engineering roles due to advancements in AI and machine learning. Meanwhile, the labor market is shifting toward profitability over growth, making it challenging for some tech workers to find new jobs. Samuel Patel, CEO and cofounder of Squire AI, shared how his background in computer science and gaming led him to a career in software development and eventually to founding his own startup. Squire AI initially focused on just-in-time documentation for developers but had to pivot when large language models (LLMs) emerged. Patel and his team are now building tools to help developers extract new meaning from their codebase, making Squire AI an interesting addition to a competitive landscape that includes Stack Overflow for Teams. The conversation also touched on the evolution of Squire AI and the challenges the team faced in the ever-changing software development landscape.

• Squire AI agents evolution: Squire AI evolved from just-in-time documentation to code review assistance using LLMs, with the aim of integrating agents throughout the SDLC to assist developers.

Squire AI is a suite of agents designed to automate smaller tasks within the software development life cycle. The evolution of Squire AI began with just-in-time documentation, but the emergence of Large Language Models (LLMs) led the team to pivot toward agents that help developers understand code ownership and responsibilities. The latest iteration aims to provide constructive feedback during code reviews by traversing the codebase, searching for symbols, meaning, and context to ensure code quality and adherence to best practices. The future of agents, according to Squire AI, lies in their atomicity and their ability to work together in a multi-agent system to tackle increasingly complex tasks. These agents employ techniques such as reflection, tool use, planning, and collaboration to provide valuable feedback and to make use of one another. Today, Squire AI focuses on code reviews, but the ultimate goal is to integrate these agents throughout the entire software development life cycle to assist developers at every stage.

• Master models with fine-grained control: Future AI development will create master models capable of leveraging multiple models for specific tasks, offering opinions and suggestions, and collaborating with humans effectively.

The future of AI development is heading toward master models with fine-grained control over individual agents' knowledge and the ability to leverage multiple models for specific tasks. These models will not only reason and make decisions but also offer opinions and suggestions, acting more like a senior employee. The HuggingGPT paper is an example of this direction: the model can find and use other models to complete tasks. Techniques such as reflection let agents provide criticism and suggestions, and approaches like tree of thoughts can help determine the best possible path to improve outcomes. The consensus is that we're moving toward a future where AI can reason, make decisions, offer opinions, and collaborate with humans more effectively and efficiently.
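The delegation pattern described above can be sketched in a few lines. Here a trivial keyword matcher stands in for the "master" LLM planner, and all of the specialist model names are hypothetical; a real HuggingGPT-style system would have the planner model itself choose the specialist.

```python
# A minimal sketch of master-model routing: a planner (here a simple
# keyword matcher standing in for an LLM) delegates each task to a
# specialized model. All model names below are made up for illustration.

SPECIALISTS = {
    "code": "code-specialist-7b",     # hypothetical code model
    "image": "vision-specialist-2b",  # hypothetical vision model
    "text": "general-chat-70b",       # hypothetical general fallback
}

def route_task(task: str) -> str:
    """Return the specialist model the master would delegate this task to."""
    lowered = task.lower()
    if any(kw in lowered for kw in ("function", "bug", "refactor")):
        return SPECIALISTS["code"]
    if any(kw in lowered for kw in ("photo", "diagram", "image")):
        return SPECIALISTS["image"]
    return SPECIALISTS["text"]

print(route_task("Refactor this function to remove duplication"))
# → code-specialist-7b
```

In practice the routing decision would itself be a model call, but the shape is the same: one coordinating component, many narrow specialists.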

• Agentic workflows for LLMs: Agentic workflows allow LLMs to focus on specific tasks, eliminating confusion and enabling efficient, accurate usage through a 'for loop' system in which LLMs can use other agents as tools while maintaining control over outcomes.

The future of Large Language Models (LLMs) lies in their ability to exhibit divergent thought and generate specialized outputs. This approach, known as agentic workflows, involves training models to focus on specific tasks and eliminating confusion. The use of smaller, specialized models, like Code Llama, can lead to similar or better outcomes than using large, heavy models for every task. However, there's a risk of models becoming overly specialized and losing proficiency in other areas. Companies like ours are using a variety of technologies, such as Python, TypeScript, graph databases, and embeddings, to build these systems. Agentic workflows involve putting LLMs in a "for loop," allowing them to think, act, and reconsider their actions. Our system enables agents to use other agents as tools, ensuring syntactical accuracy and maintaining control over the outcomes. This approach can lead to more efficient and accurate LLM usage.
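The "LLM in a for loop" idea above can be made concrete with a small sketch. The tool functions and the scripted stand-in for the model call are entirely hypothetical; a real agent would call an actual LLM and real codebase tools at each step.

```python
# A minimal sketch of an agentic for loop: the agent repeatedly asks the
# model for an action, runs the chosen tool, and feeds the observation
# back in. Both tools and the "LLM" are scripted stand-ins.

def search_symbols(query: str) -> str:
    return f"found definition of {query}"   # stand-in for codebase search

def read_file(path: str) -> str:
    return f"contents of {path}"            # stand-in for file reading

TOOLS = {"search_symbols": search_symbols, "read_file": read_file}

def fake_llm(history: list[str]) -> tuple[str, str]:
    """Scripted stand-in for a model call: returns (action, argument)."""
    if not history:
        return ("search_symbols", "parse_config")
    if len(history) == 1:
        return ("read_file", "config.py")
    return ("finish", "review complete: parse_config looks correct")

def run_agent(max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):           # the "for loop" around the LLM
        action, arg = fake_llm(history)
        if action == "finish":
            return arg                   # the agent decides it is done
        observation = TOOLS[action](arg) # act, then reconsider with the result
        history.append(observation)
    return "step budget exhausted"

print(run_agent())
# → review complete: parse_config looks correct
```

The step budget is what keeps control over outcomes: the loop, not the model, decides when the agent must stop.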

• OpenAI agent hierarchy and business model: OpenAI is developing a hierarchy of agents that work together to achieve specific outcomes, with a focus on per-seat pricing, predictable costs, and specific models for balancing cost and value. Innovations in data centers, energy, and compute resources are needed to support the future agentic workforce, with OpenAI exploring the possibility of selling excess compute power as heat.

OpenAI is developing a hierarchy of agents that work together to achieve specific outcomes, with the parent agent being the one users interact with most. This agent interfaces with various tools and other agents to understand code structures and produce the desired outcome. The focus on per-seat pricing aims to provide predictable costs and control expenses, as usage-based pricing can be unpredictable. They are also developing more specific models to balance cost and value, with smaller models used for specific tasks. The increasing demand for AI agents will require innovations in data centers, energy, and compute resources to support the future agentic workforce. OpenAI is also exploring the possibility of selling excess compute power as heat. The cost of inference remains high, but they are working on new techniques for efficiency and have recently made some new offerings free to the public. The business model revolves around providing value to businesses while managing costs.

• Energy efficiency in AI: Addressing energy constraints is crucial for maximizing efficiency and value from AI and data centers. Renewable energy solutions and specialized models can help reduce energy consumption.

      There's an opportunity to maximize efficiency and extract more value from AI and data centers by addressing the energy issue. The discussion highlighted the potential bottleneck of building new data centers due to energy constraints, as well as the need for more advanced grids to effectively transfer renewable energy. The future of AI lies in people owning their own AI and having energy-efficient computers in their homes. Additionally, being selective about the models used based on the task at hand can help save costs and reduce energy consumption. While there will still be a place for large, general models, specialized models will likely take over as tasks become more specific. Overall, it's important to consider energy efficiency and the potential for renewable energy solutions to support the growth of AI technology.

• Specialized models vs. sharing knowledge: Specialized models are important for energy and cost efficiency in completing specific tasks, while sharing knowledge within the tech community can benefit thousands through platforms like Stack Overflow.

      As technology advances, we can expect to see increasingly specialized models being used for specific tasks due to energy and cost efficiency. For instance, there are models designed specifically for generating new ideas for CRISPR proteins, which an average language model might not be able to do. Meanwhile, in the world of programming, a great example of shared knowledge comes from Bharath Haba, who asked a question on Stack Overflow about disabling source maps for React JS applications. This question helped over a thousand people and received a great answer with 40 upvotes. These examples highlight the importance of both specialized models and the sharing of knowledge within the tech community. If you're interested in contributing to this community, you can join the conversation on Stack Overflow or listen to the podcast for engaging discussions on various tech topics. And remember, leaving a rating and review is the nicest thing you can do besides sending money and free swag.

    Recent Episodes from The Stack Overflow Podcast

    How to build open source apps in a highly regulated industry

Before Medplum, Reshma founded and exited two startups in the healthcare space – MedXT (managing medical images online, acquired by Box) and Droplet (an at-home diagnostics company acquired by Ro). Reshma has a B.S. in computer science and a Master of Engineering from MIT.

You can learn more about Medplum here and check out their GitHub, which has over 1,200 stars, here.

    You can learn more about Khilnani on her website, GitHub, and on LinkedIn.

    Congrats to Stack Overflow user Kvam for earning a Lifeboat Badge with an answer to the question: 

    What is the advantage of using a Bitarray when you can store your bool values in a bool[]?

    A very special 5-year-anniversary edition of the Stack Overflow podcast!

Cassidy reflects on her time as CTO of a startup and how the shifting funding environment has created new pressures and incentives for founders, developers, and venture capitalists.

    Ben tries to get a bead on a new Moore’s law for the GenAI era: when will we start to see diminishing returns and fewer step factor jumps? 

    Ben and Cassidy remember the time they made a viral joke of a keyboard!

Ryan sees how things go in cycles. A Stack Overflow job board is back! And what do we make of the trend of AI-assisted job interviews, where cover letters and even technical interviews have a bot in the background helping out?

Congrats to Erwin Brandstetter for winning a lifeboat badge with an answer to this question: How do I convert a simple select query like select * from customers into a stored procedure / function in pg?

    Say goodbye to "junior" engineering roles

    How would all this work in practice? Of course, any metric you set out can easily become a target that developers look to game. With Snapshot Reviews, the goal is to get a high level overview of a software team’s total activity and then use AI to measure the complexity of the tasks and output.

    If a pull request attached to a Jira ticket is evaluated as simple by the system, for example, and a programmer takes weeks to finish it, then their productivity would be scored poorly. If a coder pushes code changes only once or twice a week, but the system rates them as complex and useful, then a high score would be awarded. 
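The scoring logic described above can be sketched as a toy heuristic. The scale and formula below are purely illustrative assumptions, not Snapshot Reviews' actual metric: the idea is only that complex work landed quickly scores high, while simple work that drags on scores low.

```python
# A hypothetical sketch of complexity-vs-time productivity scoring.
# complexity: an AI-assigned rating from 1 (trivial) to 10 (very complex).
# days_to_finish: how long the pull request took to land.
# The formula and 0-10 scale are illustrative, not any vendor's metric.

def productivity_score(complexity: int, days_to_finish: float) -> float:
    """Return a 0-10 score that rewards complex work landed quickly."""
    if days_to_finish <= 0:
        raise ValueError("days_to_finish must be positive")
    return round(min(10.0, 10.0 * complexity / (complexity + days_to_finish)), 1)

print(productivity_score(complexity=9, days_to_finish=2))   # complex, fast: high
print(productivity_score(complexity=2, days_to_finish=14))  # simple, slow: low
```

As the article notes, any such metric can become a target developers game, which is why the complexity rating has to come from the system rather than self-reporting.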

    You can learn more about Snapshot Reviews here.

    You can learn more about Flatiron Software here.

    Connect with Kirim on LinkedIn here.

    Congrats to Stack Overflow user Cherry who earned a great question badge for asking: Is it safe to use ALGORITHM=INPLACE for MySQL?

    Making ETL pipelines a thing of the past

    RelationalAI’s first big partner is Snowflake, meaning customers can now start using their data with GenAI without worrying about the privacy, security, and governance hassle that would come with porting their data to a new cloud provider. The company promises it can also add metadata and a knowledge graph to existing data without pushing it through an ETL pipeline.

    You can learn more about the company’s services here.

    You can catch up with Cassie on LinkedIn.

    Congrats to Stack Overflow user antimirov for earning a lifeboat badge by providing a great answer to the question: 

    How do you efficiently compare two sets in Python?

    The world’s most popular web framework is going AI native

Palmer says that a huge percentage of today's top websites, including apps like ChatGPT, Perplexity, and Claude, were built with Vercel's Next.js.

    For the second goal, you can see what Vercel is up to with its v0 project, which lets developers use text prompts and images to generate code. 

Third, the Vercel AI SDK, which aims to help developers build conversational, streaming, and chat user interfaces in JavaScript and TypeScript. You can learn more here.

If you want to catch Jared posting memes, check him out on Twitter. If you want to learn more about the AI SDK, check it out here.

A big thanks to Pierce Darragh for providing a great answer and earning a lifeboat badge by saving a question from the dustbin of history. Pierce explained: How you can split documents into training set and test set

Can software startups that need $$$ avoid venture capital?

    You can find Shestakofsky on his website or check him out on X.

    Grab a copy of his new book: Behind the Startup: How Venture Capital Shapes Work, Innovation, and Inequality. 

    As he writes on his website, the book:

    Draws on 19 months of participant-observation research to examine how investors’ demand for rapid growth created organizational problems that managers solved by combining high-tech systems with low-wage human labor. The book shows how the burdens imposed on startups by venture capital—as well as the benefits and costs of “moving fast and breaking things”—are unevenly distributed across a company’s workforce and customers. With its focus on the financialization of innovation, Behind the Startup explains how the gains generated by tech startups are funneled into the pockets of a small cadre of elite investors and entrepreneurs. To promote innovation that benefits the many rather than the few, Shestakofsky argues that we should focus less on fixing the technology and more on changing the financial infrastructure that supports it.

    A big thanks to our user of the week, Parusnik, who was awarded a Great Question badge for asking: How to run a .NET Core console application on Linux?

    An open-source development paradigm

    Temporal is an open-source implementation of durable execution, a development paradigm that preserves complete application state so that upon host or software failure it can seamlessly migrate execution to another machine. Learn how it works or dive into the docs. 

    Temporal’s SaaS offering is Temporal Cloud.

    Replay is a three-day conference focused on durable execution. Replay 2024 is September 18-20 in Seattle, Washington, USA. Get your early bird tickets or submit a talk proposal!

    Connect with Maxim on LinkedIn.

User Honda hoda earned a Famous Question badge for asking: SQLSTATE[01000]: Warning: 1265 Data truncated for column.