
    Building toward a bright post-AGI future with Eric Steinberger from Magic.dev

    August 30, 2024

    Podcast Summary

    • Early AI background of Magic's CEO: Eric Steinberger's diverse background in AI runs from an early fascination with the field to coding, reinforcement learning, and algorithm development, and now to leading Magic, which simplifies AI development through long-context windows and online optimization.

      Eric Steinberger, the CEO of Magic, took a varied path into AI. Fascinated by the field from a young age, he learned to code and delved into reinforcement learning, and later worked on developing more efficient algorithms at DeepMind and Meta. Magic, the company he co-founded, focuses on building a system that writes code and comes up with ideas, which he views as a simpler route to developing a system capable of doing everything. The team took a different architectural approach, focusing on long-context windows and building an online optimizer that brings compute to the data. This approach lets models learn from large amounts of data and adapt to fast-changing information, making them more effective.

    • Long-term projects and model learning: For long-term projects, models that can learn from and remember all data are more effective than retrieval systems. Allowing users to choose their level of compute for inference and considering output distribution are also important.

      For generating high-quality outputs, particularly on long-term projects, a model that can learn from and remember all the data is more effective than a retrieval system that selects a subset of the data for each completion. The underlying assumption is that the model can learn the relevant heuristics better than any hand-written retrieval heuristic can capture them. The discussion also touched on the importance of considering the distribution of outputs and of letting users choose how much compute to spend at inference time; the challenge of regulating server-side compute lies in finding the right algorithms to do so. As for the company's goal of eventually building AGI, the design choices may depend on whether the plan is to iterate toward a system that is very good at writing code and then use it to build the next version, or to build AGI directly. The conversation also noted that safety risks, while a concern, may not be as significant as some people think and are likely resolvable.
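
      To make the retrieval-versus-long-context contrast concrete, here is a minimal, purely illustrative Python sketch: one function scores and selects a small subset of documents for each completion, while the other simply hands the model the entire corpus. The scoring heuristic, document names, and prompt format are hypothetical stand-ins for illustration, not Magic's actual system.

          # Illustrative sketch only: retrieval (select a subset per completion)
          # versus long context (concatenate everything). The overlap score and
          # prompt format are hypothetical, not Magic's actual system.

          def retrieval_context(docs: list[str], query: str, k: int = 2) -> str:
              """Pick the k documents sharing the most words with the query (a crude heuristic)."""
              query_words = set(query.lower().split())
              ranked = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
              return "\n\n".join(ranked[:k])  # the model only ever sees this subset

          def long_context(docs: list[str]) -> str:
              """Concatenate the whole corpus; feasible only with a very large context window."""
              return "\n\n".join(docs)

          if __name__ == "__main__":
              corpus = [
                  "scheduler.py: the Scheduler is configured through retry backoff and priority settings",
                  "config.md: SCHEDULER_INTERVAL controls how often jobs are polled",
                  "README.md: deployment notes for the staging cluster",
              ]
              question = "How is the scheduler configured"
              print(retrieval_context(corpus, question) + "\n\nQ: " + question)
              print(long_context(corpus) + "\n\nQ: " + question)

      The trade-off the episode gestures at is visible even in this toy version: the retrieval heuristic can silently drop the one document that mattered, while the long-context prompt spends more compute per completion, which is where letting users choose their inference budget comes in.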

    • AGI development: Continually addressing safety and alignment issues at each stage of AGI development is crucial for progress, while automating this process is essential for prioritizing safety. Benefits of AGI include increased productivity and economic gains, but challenges include job displacement and ethical considerations.

      The development of advanced artificial general intelligence (AGI) is a complex and evolving process that requires a recursive, iterative approach with a focus on safety and alignment. The speaker emphasizes that the only way to reasonably approach this is by continually asking the model to solve alignment and safety issues at each stage, while also addressing product-level problems. He believes that automating this process is crucial for prioritizing safety and making progress towards AGI. Furthermore, the speaker highlights the potential benefits of AGI, including increased productivity and automation of work, which could lead to significant economic gains. However, he acknowledges the potential challenges and concerns, particularly regarding job displacement and ethical considerations. The speaker's company, Magic, is pursuing this goal with a large cluster of computers, recognizing the need for significant compute resources to make progress. Despite the challenges, the speaker remains optimistic about the potential benefits of AGI and the role his company can play in its development.

    • Automating coding tasks: Creating a trusted, reliable coding assistant requires a strong team, significant resources, and a market with a high potential for shifting from manual to automated tasks. The goal is to create an assistant that feels like a genius colleague, making coding more efficient and productive.

      Creating an assistant with high levels of trust and reliability to handle most, if not all, coding tasks is a significant goal in the tech industry. The speaker acknowledges the challenges in reaching this level of automation, as each iteration brings improvements but also requires additional time and resources. He emphasizes the importance of a strong team in achieving this goal, noting that recruitment was initially difficult but became easier with funding and demonstrable progress. The market potential for such a product is framed as a step-function moment in which users shift from manually writing and reviewing code to relying on an assistant for most or even all coding tasks. The speaker believes the leap to full automation is not a large one, and that adding features to verticals one by one is a feasible approach. Overall, the goal is to create an assistant that feels like a true genius colleague, making coding more efficient and productive for developers.

    • Exceptional hires for AGI development: Identifying and hiring individuals with exceptional drive, loyalty, and deep understanding of their field is crucial for building a successful and innovative organization in AGI development. The culture of the organization should be mission-focused and long-term, and a rational debate is necessary to navigate the complexities of AGI.

      Identifying and hiring individuals with exceptional drive, loyalty, and deep understanding of their field is crucial for building a successful and innovative organization, especially in the context of advanced technology development like Artificial General Intelligence (AGI). These individuals may not be immediately obvious, but their contributions can significantly impact the mission and the industry as a whole. The culture of the organization should be centered around the mission, deep productivity, and a commitment to the long-term goals, rather than short-term gains or external validation. The implications of AGI are complex and multifaceted, with potential benefits and risks. A rational and nuanced debate is necessary to navigate these complexities and ensure the best possible outcome. Ultimately, a free and competitive market, guided by appropriate guardrails, offers the best chance for optimizing the development and implementation of AGI. The future will likely involve a blend of automation and human creativity, with new forms of labor and value emerging.

    • Societal changes with AGI: The transition to an automated economy with AGI brings challenges, but with careful planning and a focus on the greater good, we can create a world where everyone has access to abundance and effectively unlimited compute.

      As we move towards a world with advanced artificial general intelligence (AGI), there will be significant societal changes. Some individuals, particularly those who find meaning and fulfillment in competitive work, may feel deeply frustrated. However, there are also opportunities for growth and new forms of contribution to society. Eric is focused on building an automation engine that can answer complex questions, but his ultimate goal is to ensure a positive future for humanity in 30 years. He believes that if we can keep the world from becoming terrible, it will be amazing. The Riemann hypothesis and other complex problems may be solved as a result of this technology, but Eric's personal north star is the long-term well-being of society. The transition to an automated economy will bring challenges, but with careful planning and a focus on the greater good, we can create a world where everyone has access to abundance and effectively unlimited compute.

    • AI interaction with tools and interfaces: The future of AI interaction will focus on mimicking human behavior, with tools adapting to serve AI systems; market demand will shape the direction, and AI may eventually surpass human capabilities.

      The future of AI interaction with other tools and interfaces will be a significant focus, with AI systems potentially becoming the main point of interaction and tools adapting to serve them better. Market demand will play a significant role in determining the direction of this integration, and it's anticipated that AI systems will eventually be able to perform tasks the way humans do, or even surpass them. The conversation also touched on the idea that companies may integrate AI into their existing systems, and that competition and acquisitions are likely to occur. Ultimately, the goal is for AI to interact with tools in a way that mimics human behavior, with the tools themselves becoming secondary. The conversation also emphasized the importance of experimentation and trying out different approaches to understand what the market wants.

    Recent Episodes from No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

    Future of LLM Markets, Consolidation, and Small Models with Sarah and Elad

    In this episode of No Priors, Sarah and Elad go deep into what's on everyone’s mind. They break down new partnerships and consolidation in the LLM market, specialization of AI models, and AMD’s strategic moves. Plus, Elad is looking for a humanoid robot.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil  Show Notes: (0:00) Introduction (0:24) LLM market consolidation  (2:18) Competition and decreasing API costs (3:58) Innovation in LLM productization  (8:20) Comparing  the LLM and social network market (11:40) Increasing competition in image generation (13:21) Trend in smaller models with higher performance (14:43) Areas of innovation (17:33) Legacy of AirBnB and Uber pushing boundaries (24:19) AMD Acquires ZT  (25:49) Elad’s looking for a Robot

    The Road to Autonomous Intelligence with Andrej Karpathy

    Andrej Karpathy joins Sarah and Elad on this week's episode of No Priors. Andrej, who was a founding team member of OpenAI and former Senior Director of AI at Tesla, needs no introduction. In this episode, Andrej discusses the evolution of self-driving cars, comparing Tesla and Waymo’s approaches, and the technical challenges ahead. They also cover Tesla’s Optimus humanoid robot, the bottlenecks of AI development today, and how AI capabilities could be further integrated with human cognition. Andrej shares more about his new company Eureka Labs and his insights into AI-driven education, peer networks, and what young people should study to prepare for the reality ahead. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Karpathy Show Notes: (0:00) Introduction (0:33) Evolution of self-driving cars (2:23) The Tesla vs. Waymo approach to self-driving (6:32) Training Optimus with automotive models (10:26) Reasoning behind the humanoid form factor (13:22) Existing challenges in robotics (16:12) Bottlenecks of AI progress (20:27) Parallels between human cognition and AI models (22:12) Merging human cognition with AI capabilities (27:10) Building high-performance small models (30:33) Andrej’s current work in AI-enabled education (36:17) How AI-driven education reshapes knowledge networks and status (41:26) Eureka Labs (42:25) What young people study to prepare for the future

    Building toward a bright post-AGI future with Eric Steinberger from Magic.dev

    Today on No Priors, Sarah Guo and Elad Gil are joined by Eric Steinberger, the co-founder and CEO of Magic.dev. His team is developing a software engineer co-pilot that will act more like a colleague than a tool. They discussed what makes Magic stand out from the crowd of AI co-pilots, the evaluation bar for a truly great AI assistant, and their predictions on what a post-AGI world could look like if the transition is managed with care.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @EricSteinb Show Notes:  (0:00) Introduction (0:45) Eric’s journey to founding Magic.dev (4:01) Long context windows for more accurate outcomes (10:53) Building a path toward AGI (15:18) Defining what is enough compute for AGI (17:34) Achieving Magic’s final UX (20:03) What makes a good AI assistant (22:09) Hiring at Magic (27:10) Impact of AGI (32:44) Eric’s north star for Magic (36:09) How Magic will interact in other tools

    Cloud Strategy in the AI Era with Matt Garman, CEO of AWS

    In this episode of No Priors, hosts Sarah and Elad are joined by Matt Garman, the CEO of Amazon Web Services. They talk about the evolution of Amazon Web Services (AWS) from its inception to its current position as a major player in cloud computing and AI infrastructure. They also touch on AI computing hardware, partnerships with AI startups, and the challenges of scaling for AI workloads. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Show Notes: (00:00) Introduction (00:23) Matt’s early days at Amazon (02:53) Early conception of AWS (06:36) Understanding the full opportunity of cloud compute (12:21) Blockers to cloud migration (14:19) AWS reaction to Gen AI (18:04) First-party models at hyperscalers (20:18) AWS point of view on open source (22:46) Grounding and knowledge bases (26:07) Semiconductors and data center capacity for AI workloads (31:15) Infrastructure investment for AI startups (33:18) Value creation in the AI ecosystem (36:22) Enterprise adoption (38:48) Near-future predictions for AWS usage (41:25) AWS’s role for startups

    The marketplace for AI compute with Jared Quincy Davis from Foundry

    In this episode of No Priors, hosts Sarah and Elad are joined by Jared Quincy Davis, former DeepMind researcher and the Founder and CEO of Foundry, a new AI cloud computing service provider. They discuss the research problems that led him to starting Foundry, the current state of GPU cloud utilization, and Foundry's approach to improving cloud economics for AI workloads. Jared also touches on his predictions for the GPU market and the thinking behind his recent paper on designing compound AI systems. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @jaredq_ Show Notes:  (00:00) Introduction  (02:42) Foundry background (03:57) GPU utilization for large models (07:29) Systems to run a large model (09:54) Historical value proposition of the cloud (14:45) Sharing cloud compute to increase efficiency  (19:17) Foundry’s new releases (23:54) The current state of GPU capacity (29:50) GPU market dynamics (36:28) Compound systems design (40:27) Improving open-ended tasks

    How AI can help build smarter systems for every team with Eric Glyman and Karim Atiyeh of Ramp

    In this episode of No Priors, hosts Sarah and Elad are joined by Ramp co-founders Eric Glyman and Karim Atiyeh. The pair, who have been working together since they were teenagers, have built one of the fastest-growing fintechs. This conversation focuses on how Ramp engineers have been building new systems to help every team, from sales and marketing to product. They’re building best-in-class SaaS solutions just for internal use to make sure their company remains competitive. They also get into how AI will augment marketing and creative fields, the challenges of selling productivity, and how they’re using LLMs to create internal podcasts from sales calls to share what customers are saying with the whole team. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @eglyman | @karimatiyeh Show Notes: (0:00) Introduction to Ramp (3:17) Working with startups (8:13) Ramp’s implementation of AI (14:10) Resourcing and staffing (17:20) Deciding when to build vs buy (21:20) Selling productivity (25:01) Risk mitigation when using AI (28:48) What the AI stack is missing (30:50) Marketing with AI (37:26) Designing a modern marketing team (40:00) Giving creative freedom to marketing teams (42:12) Augmenting bookkeeping (47:00) AI-generated podcasts

    Innovating Spend Management through AI with Pedro Franceschi from Brex

    Hunting down receipts and manually filling out invoices kills productivity. This week on No Priors, Sarah Guo and Elad Gil sit down with Pedro Franceschi, co-founder and CEO of Brex. Pedro discusses how Brex is harnessing AI to optimize spend management and automate tedious accounting and compliance tasks for teams. The conversation covers the reliability challenges in AI today, Pedro’s insights on the future of fintech in an AI-driven world, and the major transitions Brex has navigated in recent years. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Pedroh96 Show Notes:  (0:00) Introduction (0:32) Brex’s business and transitioning to solo CEO (3:04) Building AI into Brex  (7:09) Solving for risk and reliability in AI-enabled financial products (11:41) Allocating resources toward AI investment (14:00) Innovating data use in marketing  (20:00) Building durable businesses in the face of AI (25:36) AI’s impact on finance (29:15) Brex’s decision to focus on startups and enterprises

    Google DeepMind's Vision for AI, Search and Gemini with Oriol Vinyals from Google DeepMind

    In this episode of No Priors, hosts Sarah and Elad are joined by Oriol Vinyals, VP of Research, Deep Learning Team Lead, at Google DeepMind and Technical Co-lead of the Gemini project. Oriol shares insights from his career in machine learning, including leading the AlphaStar team and building competitive StarCraft agents. We talk about Google DeepMind, forming the Gemini project, and integrating AI technology throughout Google products. Oriol also discusses the advancements and challenges in long context LLMs, reasoning capabilities of models, and the future direction of AI research and applications. The episode concludes with a reflection on AGI timelines, the importance of specialized research, and advice for future generations in navigating the evolving landscape of AI. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @oriolvinyalsml Show Notes:  (00:00) Introduction to Oriol Vinyals (00:55) The Gemini Project and Its Impact (02:04) AI in Google Search and Chat Models (08:29) Infinite Context Length and Its Applications (14:42) Scaling AI and Reward Functions (31:55) The Future of General Models and Specialization (38:14) Reflections on AGI and Personal Insights (43:09) Will the Next Generation Study Computer Science? (45:37) Closing thoughts

    Low-Code in the Age of AI and Going Enterprise, with Howie Liu from Airtable

    This week on No Priors, Sarah Guo and Elad Gil are joined by Howie Liu, the co-founder and CEO of Airtable. Howie discusses their Cobuilder launch, the evolution of Airtable from a simple productivity tool to an enterprise app platform with integrated AI capabilities. They talk about why the conventional wisdom of “app not platform” can be wrong,  why there’s a future for low-code in the age of AI and code generation, and where enterprises need help adopting AI. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Howietl Show Notes:  (00:00) Introduction (00:29) The Origin and Evolution of Airtable (02:31) Challenges and Successes in Building Airtable (06:09) Airtable's Transition to Enterprise Solutions (09:44) Insights on Product Management (16:23) Integrating AI into Airtable (21:55) The Future of No Code and AI (30:30) Workshops and Training for AI Adoption (36:28) The Role of Code Generation in No Code Platforms

    How AI is opening up new markets and impacting the startup status quo with Sarah Guo and Elad Gil

    This week on No Priors, we have a host-only episode. Sarah and Elad catch up to discuss how tech history may be repeating itself. Much like in the early days of the internet, every company is clamoring to incorporate AI into their products or operations while some legacy players are skeptical that investment in AI will pay off. They also get into new opportunities and capabilities that AI is opening up, whether or not incubators are actually effective, and what companies are poised to stand the test of time in the changing tech landscape. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Show Notes:  (0:00) Introduction (0:16) Old school operators AI misunderstandings (5:10) Tech history is repeating itself with slow AI adoption (6:09) New AI Markets (8:48) AI-backed buyouts (13:03) AI incubation (17:18) Exciting incubating applications (18:26) AI and the public markets (22:20) Staffing AI companies  (25:14) Competition and shrinking head count