
    When Will AI Hit the Enterprise? Ben Horowitz and Ali Ghodsi Discuss

October 06, 2023

    Podcast Summary

• Enterprises Cautiously Approach Generative AI Adoption: Enterprises prioritize data privacy, security, and accuracy, causing a slower adoption rate of generative AI compared to consumer and developer markets.

While the adoption of generative AI is ramping up in the consumer and developer markets, enterprises are moving more cautiously. Enterprises value their proprietary data deeply and are concerned about privacy, security, and potential data leaks. They also require high accuracy for many use cases, which raises questions about whether they can build better models in-house and how precise the AI truly needs to be. The recent withdrawal of OpenAI's ChatGPT enterprise offering may add another layer of uncertainty. These challenges contribute to the slower adoption rate of generative AI in enterprises compared to other sectors. Despite the hurdles, cracking the code and successfully implementing generative AI in enterprises can lead to robust, long-term business opportunities.

• Large enterprises' cautious approach to generative AI: CEOs and boards recognize the potential value of their data but are hesitant to give it away, leading to internal politics and strategic build-or-buy decisions.

      Large enterprises are cautiously approaching the implementation of generative AI technology due to internal politics and the fear of losing intellectual property. The use of generative AI is seen as a potential competitive advantage, leading to a "food fight" among different departments and teams within the enterprise over who should own and control it. The CEOs and boards recognize the potential value of their data in building a superior model, but they are hesitant to give it away to external companies. Instead, they prefer to build and own the technology themselves, despite the significant resources and expertise required. The decision to build or buy depends on the specific data set and use case, and some enterprises are turning to acquisitions or partnerships to scale their efforts. Overall, the implementation of generative AI in large enterprises is a complex and strategic process, with significant potential rewards but also significant challenges.

• Enterprises can choose between larger, more general models and smaller, enterprise-specific models: Enterprises can save costs and achieve high accuracy with smaller, enterprise-specific models for their unique tasks. Larger, more general models offer more intelligence but require significant investment and resources.

For enterprises with specific use cases, it's more beneficial to train and use smaller, enterprise-specific models rather than investing in larger, more general models. These smaller models offer lower latency and lower costs, and can achieve high levels of accuracy for their particular tasks. For enterprises with the resources to build and fine-tune them, larger models offer more intelligence and can handle a wider range of tasks, but they come with higher costs and more complex fine-tuning requirements. The decision between the two ultimately depends on the specific needs, resources, and priorities of the enterprise. Databricks, as a company, is building larger models to cater to those who can afford the higher costs and have the resources to fine-tune them. The larger models follow scaling laws and are more intelligent, but they require a significant investment in data, computational resources, and fine-tuning expertise. For most enterprises, however, smaller, enterprise-specific models offer a more cost-effective and efficient solution for their unique use cases.
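
For context on the "scaling laws" mentioned above, one widely cited formulation (the parametric loss fit from the Chinchilla work, shown here purely as an illustration, not as something quoted in the episode) expresses a model's loss as a function of parameter count N and training tokens D:

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Here E is the irreducible loss and A, B, \alpha, \beta are fitted constants. Because the gains from increasing N and D shrink as both grow, each further improvement in a general-purpose model costs disproportionately more compute, which is part of why smaller models tuned to a narrow task can be the more economical choice for many enterprises.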

• Focus on developing smaller, specialized models for specific tasks: Despite the ultimate goal of creating one large, intelligent foundation model, the current focus is on developing smaller, efficient models for specific tasks using techniques like prefix tuning and LoRA. Demand for these models is high, but limited GPU resources hinder their availability.

While the ultimate goal is to develop one large, intelligent foundation model, the current focus is on developing smaller, specialized models that can efficiently handle specific tasks. Parameter-efficient techniques such as prefix tuning and LoRA aim to specialize models with minimal modifications and computational resources, though none of these methods has proven to be a "slam dunk" solution. In the near future, it is expected that a smart, large foundation model will be developed that can then be specialized for various tasks. That is not yet achievable, and in the meantime there is high demand for specialized models that offer high accuracy and performance on specific tasks; companies like Databricks are experiencing this demand and are unable to fully meet it due to limited GPU resources. The use cases for these models are expected to fragment, with each model specializing in a specific task, such as drawing pictures or memes. These models will likely build upon common base models, rather than starting from scratch each time.
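
As a concrete illustration of the parameter-efficient techniques mentioned above, here is a minimal sketch of LoRA fine-tuning with the Hugging Face PEFT library. The base model name, target modules, and hyperparameters are placeholder assumptions for illustration, not the specific setup discussed in the episode.

# Minimal LoRA fine-tuning sketch using Hugging Face transformers + peft.
# Only small low-rank adapter matrices are trained; the base model stays frozen.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "mistralai/Mistral-7B-v0.1"  # example base model (assumption)
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
# From here, train on the enterprise-specific dataset with a standard Trainer loop.

Because only the adapter weights change, one common base model can back many task-specific specializations, which is consistent with the fragmentation of use cases described above.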

• The Future of AI: Foundational Technologies, Applications, and Open Collaboration: Open source initiatives and large language models contribute significantly to the growth and development of the AI industry, pushing boundaries and driving progress.

While the current focus on large language models (LLMs) and AI dominates the conversation, it's essential to remember that applications and use cases will continue to emerge, and the value lies in building trustworthy and effective AI solutions for various industries. Cisco's dominance in the internet era serves as a reminder that companies leading in foundational technologies don't necessarily maintain their position forever. Open source has played a significant role in advancing AI, and it's unlikely that efforts to restrict it will be successful due to the ease of accessing and replicating models. The release of large models like Llama has been a game-changer, and continued advancements will keep pushing the boundaries of AI technology. From the perspective of a company like Databricks, open source initiatives like MosaicML and open LLMs contribute to the overall growth and development of the AI industry. If Llama had not been open-sourced, we would be significantly behind in our understanding and progress of AI. The future of AI lies in the combination of foundational technologies, applications, and open collaboration.

• The open source vs. proprietary AI race and its challenges: Open source AI development faces high resource requirements and strong incentives to keep the best models proprietary, while GPU scarcity limits who can participate and drives innovation in making AI more accessible. Predictions of hitting scaling walls before AGI, and the need for human involvement in certain applications, also challenge the field.

The race between open source and proprietary AI development will continue, with open source eventually catching up but facing challenges due to the high resource requirements and the incentives for organizations and individuals to keep their best models proprietary. The scarcity of GPUs and the resulting inability of universities and smaller entities to participate in the latest AI research are leading to a crisis and driving innovation in making AI development more accessible and affordable. However, it's predicted that we may hit walls in scaling laws before achieving Artificial General Intelligence (AGI), and that human involvement will be necessary in many applications where accuracy and understanding are crucial. Additionally, the validity of current benchmarks for AI models has been questioned, as they may not accurately reflect real-world performance.

• Ethical implications of large language models: While language models excel on benchmarks, their real-world applications require careful consideration of ethical implications, including job displacement, potential misuse, and the need for human oversight.

      While large language models have shown impressive performance on various benchmarks, the correlation between benchmark scores and real-world applications, such as medical diagnosis, is uncertain. Memorizing exam questions does not equate to actual problem-solving abilities. The ethical implications of large models versus open source are complex, with concerns ranging from job displacement to potential malicious use. The responsibility lies in ensuring that technological advancements do not lead to harm and finding ways to mitigate negative impacts, rather than halting progress. The fear of a super-intelligent AI deciding to destroy humanity is a valid concern, but the idea of machines having free will is a misconception. Machines can perform vast computations, but they do not possess the ability to make decisions outside of their programming. It's essential to continue the conversation on the ethical implications of AI and find ways to harness its potential while minimizing potential risks.

• AI's limitations make a catastrophic scenario unlikely: The high cost and difficulty of training large AI models, along with the lack of progress in machine reproduction, make a catastrophic AI scenario unlikely in the near future.

      While the potential risk of advanced artificial intelligence (AI) causing harm to humans is a valid concern, current limitations in the cost, accessibility, and self-replication capabilities of AI models make a catastrophic scenario unlikely in the near future. The interviewee believes that the high cost and difficulty of training large AI models, along with the lack of progress in machine reproduction, are significant barriers to an AI surpassing human intelligence and causing harm. However, it's important to continue monitoring advancements in AI technology and addressing potential ethical concerns as the field progresses.

    Recent Episodes from a16z Podcast

The Art of Technology, The Technology of Art

    We know that technology has changed art, and that artists have evolved with every new technology — it’s a tale as old as humanity, moving from cave paintings to computers. Underlying these movements are endless debates around inventing versus remixing; between commercialism and art; between mainstream canon and fringe art; whether we’re living in an artistic monoculture now (the answer may surprise you); and much much more. 

So in this new episode featuring Berlin-based contemporary artist Simon Denny -- in conversation with a16z crypto editor in chief Sonal Chokshi -- we discuss all of the above debates. We also cover everything from how artists experimented with the emergence of new technology platforms like the web browser, the iPhone, Instagram, and social media; to how generative art found its “native” medium on blockchains and why NFTs matter; to other art movements.

    Denny also thinks of entrepreneurial ideas -- from Peter Thiel's to Chris Dixon's Read Write Own -- as an "aesthetic"; and thinks of technology artifacts (like NSA sketches!) as art -- reflecting all of these in his works across various mediums and contexts. How has technology changed art, and more importantly, how have artists changed with technology? How does art change our place in the world, or span beyond space? It's about optimism, and seeing things anew... all this and more in this episode.

     

    Resources: 

    Find Denny on Twitter: https://x.com/dennnnnnnnny

    Find Sonal on Twitter: https://x.com/smc90

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Cybersecurity's Past, Present, and AI-Driven Future

    Is it time to hand over cybersecurity to machines amidst the exponential rise in cyber threats and breaches?

We trace the evolution of cybersecurity from minimal measures in 1995 to today's overwhelmed DevSecOps. Travis McPeak, CEO and Co-founder of Resourcely, kicks off our discussion with the historical shifts in the industry. Kevin Tian, CEO and Founder of Doppel, highlights the rise of AI-driven threats and deepfake campaigns. Feross Aboukhadijeh, CEO and Founder of Socket, provides insights into sophisticated attacks like the XZ Utils incident. Andrej Safundzic, CEO and Founder of Lumos, discusses the future of autonomous security systems and their impact on startups.

    Recorded at a16z's Campfire Sessions, these top security experts share the real challenges they face and emphasize the need for a new approach. 

    Resources: 

    Find Travis McPeak on Twitter: https://x.com/travismcpeak

    Find Kevin Tian on Twitter: https://twitter.com/kevintian00

    Find Feross Aboukhadijeh on Twitter: https://x.com/feross

    Find Andrej Safundzic on Twitter: https://x.com/andrejsafundzic

     

    Stay Updated: 

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

     

The Science and Supply of GLP-1s

    Brooke Boyarsky Pratt, founder and CEO of knownwell, joins Vineeta Agarwala, general partner at a16z Bio + Health.

    Together, they talk about the value of obesity medicine practitioners, patient-centric medical homes, and how Brooke believes the metabolic health space will evolve over time.

    This is the second episode in Raising Health’s series on the science and supply of GLP-1s. Listen to last week's episode to hear from Carolyn Jasik, Chief Medical Officer at Omada Health, on GLP-1s from a clinical perspective.

     

    Listen to more from Raising Health’s series on GLP-1s:

    The science of satiety: https://raisinghealth.simplecast.com/episodes/the-science-and-supply-of-glp-1s-with-carolyn-jasik

    Payers, providers and pricing: https://raisinghealth.simplecast.com/episodes/the-science-and-supply-of-glp-1s-with-chronis-manolis

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

The State of AI with Marc & Ben

    In this latest episode on the State of AI, Ben and Marc discuss how small AI startups can compete with Big Tech’s massive compute and data scale advantages, reveal why data is overrated as a sellable asset, and unpack all the ways the AI boom compares to the internet boom.

     

    Subscribe to the Ben & Marc podcast: https://link.chtbl.com/benandmarc

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Predicting Revenue in Usage-based Pricing

Over the past decade, usage-based pricing has soared in popularity. Why? Because it aligns cost with value, letting customers pay only for what they use. But that flexibility is not without issues - especially when it comes to predicting revenue. Fortunately, with the right process and infrastructure, your usage-based revenue can become more predictable than revenue from the traditional seat-based SaaS model.

In this episode from the a16z Growth team, Fivetran’s VP of Strategy and Operations Travis Ferber and Alchemy’s Head of Sales Dan Burrill join a16z Growth’s Revenue Operations Partner Mark Regan. Together, they discuss the art of generating reliable usage-based revenue. They share tips for avoiding common pitfalls when implementing this pricing model - including how to nail sales forecasting, adopt the best tools to track usage, and deal with the initial lack of customer data.

    Resources: 

    Learn more about pricing, packaging, and monetization strategies: a16z.com/pricing-packaging

    Find Dan on Twitter: https://twitter.com/BurrillDaniel

    Find Travis on LinkedIn: https://www.linkedin.com/in/travisferber

    Find Mark on LinkedIn: https://www.linkedin.com/in/mregan178

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

California's Senate Bill 1047: What You Need to Know

On May 21, the California Senate passed Senate Bill 1047.

This bill – which sets out to regulate AI at the model level – wasn’t garnering much attention until it slid through an overwhelming bipartisan vote of 32 to 1; it is now queued for an Assembly vote in August that would cement it into law. In this episode, a16z General Partner Anjney Midha and Venture Editor Derrick Harris break down everything the tech community needs to know about SB 1047.

    This bill really is the tip of the iceberg, with over 600 new pieces of AI legislation swirling in the United States. So if you care about one of the most important technologies of our generation and America’s ability to continue leading the charge here, we encourage you to read the bill and spread the word.

    Read the bill: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

    a16z Podcast
June 06, 2024

The GenAI 100: The Apps that Stick

    Consumer AI is moving fast, so who's leading the charge? 

    a16z Consumer Partners Olivia Moore and Bryan Kim discuss our GenAI 100 list and what it takes for an AI model to stand out and dominate the market.

    They discuss how these cutting-edge apps are connecting with their users and debate whether traditional strategies like paid acquisition and network effects are still effective. We're going beyond rankings to explore pivotal benchmarks like D7 retention and introduce metrics that define today's AI market.

    Note: This episode was recorded prior to OpenAI's Spring update. Catch our latest insights in the previous episode to stay ahead!

     

    Resources:

    Link to the Gen AI 100: https://a16z.com/100-gen-ai-apps

    Find Bryan on Twitter: https://twitter.com/kirbyman

    Find Olivia on Twitter: https://x.com/omooretweets

     

    Stay Updated: 

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Finding a Single Source of AI Truth With Marty Chavez From Sixth Street

    a16z General Partner David Haber talks with Marty Chavez, vice chairman and partner at Sixth Street Partners, about the foundational role he’s had in merging technology and finance throughout his career, and the magical promises and regulatory pitfalls of AI.

    This episode is taken from “In the Vault”, a new audio podcast series by the a16z Fintech team. Each episode features the most influential figures in financial services to explore key trends impacting the industry and the pressing innovations that will shape our future. 

     

    Resources: 
    Listen to more of In the Vault: https://a16z.com/podcasts/a16z-live

    Find Marty on X: https://twitter.com/rmartinchavez

    Find David on X: https://twitter.com/dhaber

     

    Stay Updated: 

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

A Big Week in AI: GPT-4o & Gemini Find Their Voice

    This was a big week in the world of AI, with both OpenAI and Google dropping significant updates. So big that we decided to break things down in a new format with our Consumer partners Bryan Kim and Justine Moore. We discuss the multi-modal companions that have found their voice, but also why not all audio is the same, and why several nuances like speed and personality really matter.

     

    Resources:

    OpenAI’s Spring announcement: https://openai.com/index/hello-gpt-4o/

    Google I/O announcements: https://blog.google/technology/ai/google-io-2024-100-announcements/

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

     

     

Remaking the UI for AI

    Make sure to check out our new AI + a16z feed: https://link.chtbl.com/aiplusa16z
     

    a16z General Partner Anjney Midha joins the podcast to discuss what's happening with hardware for artificial intelligence. Nvidia might have cornered the market on training workloads for now, but he believes there's a big opportunity at the inference layer — especially for wearable or similar devices that can become a natural part of our everyday interactions. 

    Here's one small passage that speaks to his larger thesis on where we're heading:

    "I think why we're seeing so many developers flock to Ollama is because there is a lot of demand from consumers to interact with language models in private ways. And that means that they're going to have to figure out how to get the models to run locally without ever leaving without ever the user's context, and data leaving the user's device. And that's going to result, I think, in a renaissance of new kinds of chips that are capable of handling massive workloads of inference on device.

    "We are yet to see those unlocked, but the good news is that open source models are phenomenal at unlocking efficiency.  The open source language model ecosystem is just so ravenous."

    More from Anjney:

    The Quest for AGI: Q*, Self-Play, and Synthetic Data

    Making the Most of Open Source AI

    Safety in Numbers: Keeping AI Open

    Investing in Luma AI

    Follow everyone on X:

    Anjney Midha

    Derrick Harris

    Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

     

    Stay Updated: 

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

    a16z Podcast
May 16, 2024

    Related Episodes

Exploring the Isambard AI supercomputer

The UK’s fastest supercomputer, Isambard-AI, is due to be completed in summer 2024. According to the teams involved, it will reach up to 200 quadrillion calculations per second and will give researchers and industry leaders new possibilities in the UK: the opportunity to work with the huge potential AI has to offer in the fields of robotics, big data, climate research, and drug discovery.

Our guest this week is one of the project leaders: Professor Simon McIntosh-Smith from the University of Bristol. We’ll be looking at how Isambard-AI will be an open hub for all AI research in the UK, powered by around five-and-a-half-thousand GPUs.
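
To put the headline figure in perspective (assuming "calculations" here means floating-point operations), 200 quadrillion operations per second works out to:

200 \times 10^{15}\ \text{FLOP/s} = 200\ \text{petaFLOP/s} = 0.2\ \text{exaFLOP/s}

That is roughly a fifth of the exascale threshold referenced in the sources below.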

    This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.

    Do you have a question for the expert? Ask it here using this Google form: https://forms.gle/8vzFNnPa94awARHMA

    About the expert: https://www.linkedin.com/in/simonmcintoshsmith/?originalSubdomain=uk

    Sources and statistics cited in this episode:
    Supercomputer name first used - https://www.hp.com/us-en/shop/tech-takes/history-of-supercomputing
Exascale barrier broken for the first time - https://www.hpe.com/us/en/newsroom/news-advisory/2023/03/4-ways-supercomputing-will-change-the-world.html
    About Isambard-AI - https://www.bristol.ac.uk/news/2023/september/isambard-ai.html
    How the UK Government has invested £225 million - https://www.bristol.ac.uk/news/2023/november/supercomputer-announcement.html#:~:text=Isambard%2DAI%20will%20offer%20capacity,climate%20research%20and%20drug%20discovery.%22
    NASA’s 3D-printed engine to power space rockets - https://www.nasa.gov/centers-and-facilities/marshall/nasas-3d-printed-rotating-detonation-rocket-engine-test-a-success/
