
    Podcast Summary

    • GPT-3 generating human-like text and performing tasks: GPT-3, a text predictor, generates human-like text and performs tasks like arithmetic, code writing, and design, showcasing potential progress toward general AI.

      GPT-3, a text predictor developed by OpenAI, is making waves in the tech world because it can generate human-like text and even perform tasks like arithmetic, code writing, and design, suggesting the model may transcend the boundaries of text alone. The model was released in late May but only recently made commercially available, and it has already produced cherry-picked examples such as forum posts, comments, press releases, poetry, screenplays, articles, strategy documents, and even code for designing websites. These examples demonstrate an impressive ability to generate human-like text and perform tasks previously thought to require human intelligence. The excitement around GPT-3 lies in its promise as a sign of progress toward general artificial intelligence: it can handle a wide range of natural language processing tasks without retraining. That could drive significant advances in areas such as customer service and content creation, making it a potential game-changer in the tech industry.

    • GPT-3: OpenAI's First Commercial Product. GPT-3, an autoregressive language model, is OpenAI's first commercial product, offering superpowers to developers and businesses via an API, with access limited to ensure responsible usage.

      GPT-3 is a pretrained machine learning model for natural language processing tasks, accessible via an API. Based on a transformer architecture, it is a large-scale neural network that can perform a wide range of NLP tasks, such as answering questions or generating text. The API serves as a gateway for developers and businesses to use the technology without training the model from scratch or possessing the required computational resources. The controversy surrounding the release led OpenAI to limit access to the API, aiming to ensure responsible usage and prevent potential misuse. While the term "GPT-3" is commonly used to refer to OpenAI's API, the product actually bundles a combination of technologies, making it OpenAI's first commercial product. This autoregressive language model is a significant advance in AI, offering superpowers to developers and businesses by lowering the barrier to entry for AI applications.
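
      To make the API idea concrete, here is a minimal sketch of what calling a hosted completion endpoint might look like, so a developer gets results without training anything locally. It assumes the early openai Python client and an engine name like "davinci"; the client, engine name, and parameter values are illustrative assumptions, not details confirmed in the episode.

      # Minimal sketch (assumed early `openai` client): send a prompt to a hosted
      # completion endpoint instead of training or serving a model yourself.
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]  # access is gated behind an API key

      response = openai.Completion.create(
          engine="davinci",                 # hypothetical engine choice
          prompt="Q: What is the capital of France?\nA:",
          max_tokens=16,                    # cap the length of the continuation
          temperature=0.0,                  # keep output close to deterministic
          stop=["\n"],                      # stop at the end of the answer line
      )

      print(response["choices"][0]["text"].strip())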

    • Transformer networks process large contexts for improved understanding: Transformer networks use vast training data to understand and generate human-like text by processing large contexts, allowing for improved disambiguation of words and better few-shot learning capabilities.

      Transformer networks represent a significant shift in natural language processing (NLP) compared to recurrent neural networks (RNNs). Rather than reading text sequentially as RNNs do, a transformer processes a large context at once, letting it weigh the meaning of each word against the surrounding passage. This is particularly useful for disambiguating words with multiple meanings, since the network can use the surrounding context to pick the correct sense. OpenAI has been applying the transformer architecture to larger and larger datasets, starting with Wikipedia and open source textbooks and most recently training on Common Crawl, a vast collection of web page text. The result is a neural network with a staggering 175 billion parameters, significantly larger than previous attempts. The term "few-shot learning" refers to the model's ability to perform an NLP task after seeing only a few examples. For instance, given an analogy like "King is to queen as X is to Y," the model can be asked to fill in the blanks based on a handful of worked examples. This contrasts with zero-shot learning, where the model is expected to handle a task without any examples at all. The transformer's impressive performance comes from its ability to process large contexts combined with the vast amount of training data it was given, which together yield a marked increase in its ability to understand and generate human-like text.
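
      As a rough illustration of few-shot priming with the same assumed client, the sketch below simply concatenates worked examples into the prompt and asks the model to continue the pattern; no weights are updated, and the analogy prompt is a made-up example rather than one from the episode.

      # Few-shot sketch: worked examples live inside the prompt itself.
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      few_shot_prompt = (
          "king is to queen as man is to woman\n"
          "France is to Paris as Japan is to Tokyo\n"
          "hot is to cold as up is to"
      )

      completion = openai.Completion.create(
          engine="davinci",    # hypothetical engine choice, as above
          prompt=few_shot_prompt,
          max_tokens=4,
          temperature=0.0,
          stop=["\n"],
      )
      print(completion["choices"][0]["text"].strip())  # a well-primed model should answer "down"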

    • Exploring new approaches to prime GPT-3 for similar tasks: While GPT-3 excels at certain tasks, it struggles with others and can lose coherence or contradict itself. New approaches involve priming the model with a few examples without adjusting weights, inspired by human learning.

      While advanced natural language models like GPT-3 exist, they still have limitations and are not yet capable of true general intelligence. A model normally learns by adjusting its weights on labeled data during training, but the discussion highlighted a different approach: the model is given a few examples at inference time, without any weight updates, which primes it to perform similar tasks. This method is inspired by how humans learn, since we can recognize patterns and solve new problems from a few examples rather than needing extensive data. The researchers noted that GPT-3 excels at certain tasks but struggles with others, much like humans. Still, it is important to remember that the model is not truly intelligent: it can lose coherence, contradict itself, and produce non-sequitur sentences or paragraphs. The speaker humorously pointed out that human writing can exhibit the same issues. Despite these limitations, GPT-3 represents a significant advance in natural language processing, performing consistently well across a variety of subtasks compared to the state of the art. The speaker emphasized the importance of understanding both the capabilities and the limitations of such models to avoid overestimating their intelligence and potential applications.
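
      To underline the "no weight updates" point, this sketch (same assumed client as above) primes one frozen model for two unrelated tasks purely by swapping the prompt; nothing is retrained between the calls, and the task formats are my own illustrative choices.

      # Sketch: one frozen model, two tasks, selected only by the examples in the prompt.
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      def complete(prompt):
          """Single completion call; parameter values are illustrative assumptions."""
          resp = openai.Completion.create(
              engine="davinci",
              prompt=prompt,
              max_tokens=8,
              temperature=0.0,
              stop=["\n"],
          )
          return resp["choices"][0]["text"].strip()

      # Task 1: sentiment labeling, primed with two examples.
      sentiment = complete(
          "Review: I loved this film. Sentiment: positive\n"
          "Review: The plot was dull. Sentiment: negative\n"
          "Review: A delightful surprise. Sentiment:"
      )

      # Task 2: English-to-French word translation, primed with two examples.
      translation = complete(
          "English: cheese French: fromage\n"
          "English: house French: maison\n"
          "English: water French:"
      )

      print(sentiment, translation)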

    • Exploring the potential of GPT-3 for startups: Despite its advanced capabilities, GPT-3's use in startups is still in the research phase due to challenges like size, computational requirements, and the gap between its capabilities and startup needs. However, its potential is exciting, and the future holds promise for easier and more accessible use through refined APIs and improved performance.

      While GPT-3 is one of the most advanced natural language models we've seen, it is still in the research phase, and it's unclear whether it can yet be used to build successful startups such as chatbots, customer support agents, or mental health apps. The hope is that as the technology develops, it will become easier and more accessible for startups to use through refined APIs and improved performance. There are challenges to overcome, however, such as the model's size and computational requirements, which make it impractical for many applications today. There is also a gap between the model's general capabilities and the specific needs of startups, requiring further development and adaptation. APIs for this technology can be a double-edged sword: they democratize access to powerful tools but also create pressure to differentiate through proprietary elements. Overall, while GPT-3's potential is exciting, it's important to remember that these are early days and much work remains before it can be fully utilized in the startup world.

    • New commercial product reduces costs and time for building machine learning models: OpenAI's GPT-3 offers potential for significant cost and time savings in machine learning model building, but careful handling of prompts and sampling hyperparameters is required.

      OpenAI's new commercial product, built on its large language model GPT-3, has the potential to significantly reduce the cost and time required to build machine learning applications, especially for startups. This could spark intense competition among tech giants and new players as they aim to offer similar text-understanding capabilities to their customers. It is not a plug-and-play solution, however: using the model requires careful handling of the prompt and of sampling hyperparameters, and priming is more of an art than a science. The future of data science may involve a shift toward mastering this new programming paradigm, which is fundamentally different from the traditional programming skills that have been widely taught and practiced so far. The exact skills required for working with such models are yet to be determined, but it's clear that this represents a new frontier in artificial intelligence.
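
      As a concrete picture of what "sampling hyperparameters" means in practice, the sketch below (still assuming the early openai client) sends one prompt at several temperature and top_p settings; lower values make output more repeatable, higher values make it more varied. The specific values are illustrative, not recommendations from the episode.

      # Sketch: the same prompt sampled under different hyperparameter settings.
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      prompt = "Write a one-sentence product description for a reusable water bottle:"

      for temperature, top_p in [(0.0, 1.0), (0.7, 0.9), (1.0, 1.0)]:
          resp = openai.Completion.create(
              engine="davinci",         # hypothetical engine choice
              prompt=prompt,
              max_tokens=40,
              temperature=temperature,  # randomness of token sampling
              top_p=top_p,              # nucleus-sampling cutoff
          )
          text = resp["choices"][0]["text"].strip()
          print(f"temperature={temperature}, top_p={top_p}: {text}")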

    • The future of programming: Priming AI systems with the right examples. The future of programming may shift toward showing AI systems the right examples for optimal performance, leading to new tasks and economic opportunities, but also requiring safety measures to ensure ethical and accurate predictions.

      The future of programming might shift from focusing on traditional tasks like memory allocation and efficient search algorithms to showing AI systems the right examples for optimal performance. This new approach, as described by Vitalik Buterin, could lead to significant changes in the job market, with new tasks like example selection and debugging emerging. These tasks, which involve human expertise, could generate economic value and make the field more inclusive for a wider range of people. However, there are concerns about potential issues with the data used to train these AI systems, which could result in biased or culturally offensive responses. To mitigate these risks, it's crucial to establish safety measures and APIs to ensure that AI systems are making accurate and ethical predictions. Overall, the future of programming could look very different, with a greater emphasis on human-AI collaboration and the ability to prime AI systems with the right examples for optimal performance.

    • Considering Ethical Implications and Implementing Filters in AI Systems: As AI technology advances, it's crucial to teach social norms and implement filters to mitigate unwanted outputs, ensuring AI systems measure and answer what they're supposed to and get smarter in a responsible manner.

      As we continue to advance AI technology, particularly with models like GPT-3, it's crucial to consider the ethical implications and the importance of teaching social norms to these systems. The Internet, as a vast repository of knowledge, reflects both the good and the bad in humanity. While changing human behavior and societal systems is hard, modifying technical systems is tractable. The discussion highlighted the potential for implementing filters and checks and balances within these systems to mitigate unwanted outputs; for instance, a text document generated by a model could be passed through a second filtering step to remove sexist or racist content, providing an additional layer of safety and control. Kevin Lacker's Turing test experiment demonstrated the value of asking questions that no normal human would ever ask in order to evaluate a system's grasp of common sense and logic. How to modernize the Turing test has been debated for years, and Turing himself would likely propose a different test in today's context. The ultimate question remains: how do we ensure that these AI systems are measuring and answering what they're supposed to, and that they are actually getting smarter? Comparing their performance against other state-of-the-art techniques on various natural language processing tasks is a start, but it's essential to keep asking philosophical questions and engaging in thoughtful discussion to address these complex issues.
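
      The "second step" filter could look something like the sketch below: generated text passes through a separate check before anyone sees it. The keyword blocklist is a deliberately crude stand-in for whatever moderation classifier or service would really be used; none of these names or terms come from the episode.

      # Sketch of a two-step pipeline: generate text, then filter it before display.
      # The blocklist is a toy placeholder for a real moderation model or service.
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      BLOCKLIST = {"offensive_term_1", "offensive_term_2"}  # placeholder entries only

      def generate(prompt):
          resp = openai.Completion.create(
              engine="davinci", prompt=prompt, max_tokens=60, temperature=0.7
          )
          return resp["choices"][0]["text"]

      def passes_filter(text):
          """Second step: reject output that contains any blocked term."""
          lowered = text.lower()
          return not any(term in lowered for term in BLOCKLIST)

      draft = generate("Write a short welcome message for new forum members:")
      print(draft.strip() if passes_filter(draft) else "[output withheld by content filter]")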

    • The Debate Over GPT-3's Understanding and Implications for AI: GPT-3's impressive language skills raise questions about its true understanding of the world. While some view it as a toy, others see it as a game-changer for natural language processing.

      While GPT-3's high BLEU scores indicate strong translation capability, they raise the question of whether it truly understands the world. The fact that it aces 2-digit arithmetic doesn't necessarily mean it comprehends what numbers are or what they signify in the real world. It's a philosophical question about understanding versus application: in education, a student's ability to solve problems and apply knowledge in real situations often matters more than their metacognitive awareness. Similarly, even if GPT-3 excels at a wide range of language tasks, insisting that it doesn't "get" the world or language is debatable. The viral spread of GPT-3 demos, the "TikTok videos of nerds," highlights both its toy-like appeal and its potential as a significant step toward general intelligence. Some argue it's a toy because it lives in a sandbox environment, while others see it as a game-changer precisely because it has no single end use case and performs well across many tasks. GPT-3 may begin as a toy, but its potential to reshape natural language processing is hard to dismiss. The debate over GPT-3's understanding and its implications for innovation is an exciting development in artificial intelligence, and as researchers continue to probe its capabilities, we can expect more fascinating discoveries and discussions about the future of AI.
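
      As a hedged illustration of how one might probe the 2-digit arithmetic claim, the sketch below scores a model's answers to randomly generated addition problems posed as few-shot prompts. The prompt format, trial count, and scoring are my own assumptions rather than the evaluation setup used in the GPT-3 paper or discussed in the episode.

      # Sketch: rough accuracy check on 2-digit addition via few-shot prompts.
      import os
      import random
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      FEW_SHOT = "Q: What is 23 plus 45? A: 68\nQ: What is 17 plus 61? A: 78\n"

      def ask_sum(a, b):
          resp = openai.Completion.create(
              engine="davinci",
              prompt=FEW_SHOT + f"Q: What is {a} plus {b}? A:",
              max_tokens=4,
              temperature=0.0,
              stop=["\n"],
          )
          return resp["choices"][0]["text"].strip()

      correct, trials = 0, 20
      for _ in range(trials):
          a, b = random.randint(10, 99), random.randint(10, 99)
          try:
              correct += int(ask_sum(a, b)) == a + b
          except ValueError:
              pass  # a non-numeric answer counts as wrong

      print(f"2-digit addition accuracy: {correct}/{trials}")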

    • Exploring the tantalizing potential of AI models and APIs: AI models and APIs are surpassing expectations in various tasks, hinting at the possibility of artificial general intelligence, but we're not there yet.

      The latest AI models and APIs are showing surprising effectiveness across a broad range of tasks, some of which their designers never intended. This has led some experts to speculate that we may be on the path to artificial general intelligence, though we are still a long way from that goal. The analogy of Tantalus from Greek mythology is fitting: the fruit dangling just out of reach captures how intriguing, and how elusive, the potential of these technologies remains. There are limits to how big these models can grow and open questions about how useful the APIs will be for everyday programmers, but the fact that they perform well in such diverse areas is both surprising and exciting. Ultimately, this is an important step forward in the ongoing quest to build AI systems that can truly understand and adapt to the complexities of the human world.

    Recent Episodes from a16z Podcast

    Founders Playbook: Lessons from Riot, Discord, & More

    Gaming is not just entertainment—it's a revolution reshaping our culture, technology, and economy. 

    a16z’s Jonathan Lai and Andrew Chen dive into the current gaming renaissance and its future impact. Joining them are Michael Chow, CEO, and Steven Snow, CPO, of The Believer Company, and Eros Resmini, Founder and Managing Partner of The Mini Fund.

    They explore the intersection of tech, art, psychology, and design in gaming, discussing how startups can navigate intense competition, distribution challenges, and high production costs. With insights from these industry leaders, this episode covers the transformative potential of AI, the importance of player feedback, and strategies to stand out in a crowded market.

    Recorded during Speedrun, a16z’s extensive games accelerator, this episode offers a glimpse into the strategies and innovations driving the gaming industry forward.

     

    Resources: 

    Find Steven on Twitter: https://twitter.com/StevenSnow

    Find Michael on LinkedIn: https://www.linkedin.com/in/believer-paladin/

    Find Eros on Twitter: https://twitter.com/erosresmini

    Find Jonathan on Twitter: https://twitter.com/Tocelot

    Find Andrew on Twitter: https://twitter.com/andrewchen

    Learn more about Speedrun: https://a16z.com/games/speedrun/

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

    Transitioning From Gymnast to Investor with Aly Raisman

    Former gymnast and current investor Aly Raisman joins general partner Julie Yoo and investment partner Daisy Wolf of a16z Bio + Health.

    In this episode, Aly Raisman shares her quest for healthier living—physically, mentally, and financially—on her journey from gymnast to a business investor. Having transitioned from an intensely structured routine, Aly emphasizes the need for more open conversations about mental health and financial literacy. She speaks passionately about the gap in women’s health solutions and hopes to inspire entrepreneurs to create impactful businesses. Aly’s experiences as a patient, survivor, and global figure add a unique dimension to her perspective as an investor. This candid conversation with Aly and Julie Yoo sheds light on Aly’s passion for more education within the investment space, offering invaluable insights for entrepreneurs, particularly in biotech and healthcare.

     

    Resources: 

    Find Aly on Twitter: https://x.com/aly_raisman

    Find Julie on Twitter: https://x.com/julesyoo

    Find Daisy on Twitter: https://x.com/daisydwolf

     


    Live at Tech Week: Delivering AI Products to Millions

    Less than two years since the breakthrough of text-based AI, we now see incredible developments in multimodal AI models and their impact on millions of users.

    As part of New York Tech Week, we brought together a live audience and three leaders from standout companies delivering AI-driven products to millions. Gaurav Misra, Cofounder and CEO of Captions, Carles Reina, Chief Revenue Officer of ElevenLabs, and Laura Burkhauser, VP of Product at Descript discuss the challenges and opportunities of designing AI-driven products, solving real customer problems, and effective marketing.

    From the critical need for preventing AI misuse to ensuring international accessibility, they cover essential insights for the future of AI technology.

     

    Resources: 

    Find Laura on Twitter: https://x.com/burkenstocks

    Find Carles on Twitter: https://twitter.com/carles_reina

    Find Gaurav on Twitter: https://twitter.com/gmharhar

     


     

    Marc Andreessen on Building Netscape & the Birth of the Browser

    "The Ben & Marc Show," featuring a16z co-founders Marc Andreessen and Ben Horowitz. 

    In this special episode, Marc and Ben dive deep into the REAL story behind the creation of Netscape—a web browser co-created by Marc that revolutionized the internet and changed the world. As Ben notes at the top, until today, this story has never been fully told either in its entirety or accurately. 

    In this one-on-one conversation, Marc and Ben discuss Marc's early life and how it shaped his journey into technology, the pivotal moments at the University of Illinois that led to the development of Mosaic (a renegade browser that Marc developed as an undergrad), and the fierce competition and legal battles that ensued as Netscape rose to prominence. 

    Ben and Marc also reflect on the broader implications of Netscape's success, the importance of an open internet, and the lessons learned that still resonate in today's tech landscape (especially with AI). That and much more. Enjoy!

    Watch the FULL Episode on YouTube: https://youtu.be/8aTjA_bGZO4

     

    Resources: 

    Marc on X: https://twitter.com/pmarca 

    Marc’s Substack: https://pmarca.substack.com/ 

    Ben on X: https://twitter.com/bhorowitz 

    Book mentioned on this episode: 

    - “Expert Political Judgment” by Philip E. Tetlock https://bit.ly/45KzP6M 

    TV Series mentioned on this episode: 

    - “The Mandalorian” (Disney+) https://bit.ly/3W0Zyoq

     


    The Art of Technology, The Technology of Art

    We know that technology has changed art, and that artists have evolved with every new technology — it’s a tale as old as humanity, moving from cave paintings to computers. Underlying these movements are endless debates around inventing versus remixing; between commercialism and art; between mainstream canon and fringe art; whether we’re living in an artistic monoculture now (the answer may surprise you); and much much more. 

    So in this new episode featuring Berlin-based contemporary artist Simon Denny -- in conversation with a16z crypto editor in chief Sonal Chokshi -- we discuss all of the above debates. We also cover how artists experimented with the emergence of new technology platforms like the web browser, the iPhone, Instagram and social media; to how generative art found its “native” medium on blockchains, why NFTs; and other art movements. 

    Denny also thinks of entrepreneurial ideas -- from Peter Thiel's to Chris Dixon's Read Write Own -- as an "aesthetic"; and thinks of technology artifacts (like NSA sketches!) as art -- reflecting all of these in his works across various mediums and contexts. How has technology changed art, and more importantly, how have artists changed with technology? How does art change our place in the world, or span beyond space? It's about optimism, and seeing things anew... all this and more in this episode.

     

    Resources: 

    Find Denny on Twitter: https://x.com/dennnnnnnnny

    Find Sonal on Twitter: https://x.com/smc90

     


    Cybersecurity's Past, Present, and AI-Driven Future

    Is it time to hand over cybersecurity to machines amidst the exponential rise in cyber threats and breaches?

    We trace the evolution of cybersecurity from minimal measures in 1995 to today's overwhelmed DevSecOps. Travis McPeak, CEO and Co-founder of Resourcely, kicks off our discussion with a look at the historical shifts in the industry. Kevin Tian, CEO and Founder of Doppel, highlights the rise of AI-driven threats and deepfake campaigns. Feross Aboukhadijeh, CEO and Founder of Socket, provides insights into sophisticated attacks like the XZ Utils incident. Andrej Safundzic, CEO and Founder of Lumos, discusses the future of autonomous security systems and their impact on startups.

    Recorded at a16z's Campfire Sessions, these top security experts share the real challenges they face and emphasize the need for a new approach. 

    Resources: 

    Find Travis McPeak on Twitter: https://x.com/travismcpeak

    Find Kevin Tian on Twitter: https://twitter.com/kevintian00

    Find Feross Aboukhadijeh on Twitter: https://x.com/feross

    Find Andrej Safundzic on Twitter: https://x.com/andrejsafundzic

     


     

    The Science and Supply of GLP-1s

    Brooke Boyarsky Pratt, founder and CEO of knownwell, joins Vineeta Agarwala, general partner at a16z Bio + Health.

    Together, they talk about the value of obesity medicine practitioners, patient-centric medical homes, and how Brooke believes the metabolic health space will evolve over time.

    This is the second episode in Raising Health’s series on the science and supply of GLP-1s. Listen to last week's episode to hear from Carolyn Jasik, Chief Medical Officer at Omada Health, on GLP-1s from a clinical perspective.

     

    Listen to more from Raising Health’s series on GLP-1s:

    The science of satiety: https://raisinghealth.simplecast.com/episodes/the-science-and-supply-of-glp-1s-with-carolyn-jasik

    Payers, providers and pricing: https://raisinghealth.simplecast.com/episodes/the-science-and-supply-of-glp-1s-with-chronis-manolis

     


    The State of AI with Marc & Ben

    In this latest episode on the State of AI, Ben and Marc discuss how small AI startups can compete with Big Tech’s massive compute and data scale advantages, reveal why data is overrated as a sellable asset, and unpack all the ways the AI boom compares to the internet boom.

     

    Subscribe to the Ben & Marc podcast: https://link.chtbl.com/benandmarc

     


    Predicting Revenue in Usage-based Pricing

    Over the past decade, usage-based pricing has soared in popularity. Why? Because it aligns cost with value, letting customers pay only for what they use. But, that flexibility is not without issues - especially when it comes to predicting revenue. Fortunately, with the right process and infrastructure, your usage-based revenue can become more predictable than the traditional seat-based SaaS model. 

    In this episode from the a16z Growth team, Fivetran’s VP of Strategy and Operations Travis Ferber and Alchemy’s Head of Sales Dan Burrill join a16z Growth’s Revenue Operations Partner Mark Regan. Together, they discuss the art of generating reliable usage-based revenue. They share tips for avoiding common pitfalls when implementing this pricing model - including how to nail sales forecasting, adopt the best tools to track usage, and deal with the initial lack of customer data.

    Resources: 

    Learn more about pricing, packaging, and monetization strategies: a16z.com/pricing-packaging

    Find Dan on Twitter: https://twitter.com/BurrillDaniel

    Find Travis on LinkedIn: https://www.linkedin.com/in/travisferber

    Find Mark on LinkedIn: https://www.linkedin.com/in/mregan178


    California's Senate Bill 1047: What You Need to Know

    On May 21, the California Senate passed bill 1047.

    This bill – which sets out to regulate AI at the model level – wasn’t garnering much attention until it slid through an overwhelming bipartisan vote of 32 to 1 and is now queued for an assembly vote in August that would cement it into law. In this episode, a16z General Partner Anjney Midha and Venture Editor Derrick Harris break down everything the tech community needs to know about SB-1047.

    This bill really is the tip of the iceberg, with over 600 new pieces of AI legislation swirling in the United States. So if you care about one of the most important technologies of our generation and America’s ability to continue leading the charge here, we encourage you to read the bill and spread the word.

    Read the bill: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

    a16z Podcast
    June 06, 2024

    Related Episodes

    Getting Waymo into autonomous driving (Practical AI #103)
    Waymo’s mission is to make it safe and easy for people and things to get where they’re going. After describing the state of the industry, Drago Anguelov - Principal Scientist and Head of Research at Waymo - takes us on a deep dive into the world of AI-powered autonomous driving. Starting with Waymo’s approach to autonomous driving, Drago then delights Daniel and Chris with a tour of the algorithmic tools in the autonomy toolbox.

    Collaboration & evaluation for LLM apps
    Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the uncertainty around proper evaluation of LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.

    Towards stability and robustness (Practical AI #141)
    9 out of 10 AI projects don’t end up creating value in production. Why? At least partly because these projects utilize unstable models and drifting data. In this episode, Roey from BeyondMinds gives us some insights on how to filter garbage input, detect risky output, and generally develop more robust AI systems.

    Stable Diffusion (Practical AI #193)
    The new stable diffusion model is everywhere! Of course you can use this model to quickly and easily create amazing, dream-like images to post on twitter, reddit, discord, etc., but this technology is also poised to be used in very pragmatic ways across industry. In this episode, Chris and Daniel take a deep dive into all things stable diffusion. They discuss the motivations for the work, the model architecture, and the differences between this model and other related releases (e.g., DALL·E 2). (Image from stability.ai)