
    Listener Q&A: 2024 Tech Market Predictions, Long Term Implications of Today’s GPU Crunch, and Will AI Agents Bring Us Happiness?

    August 10, 2023

    Podcast Summary

    • GPU market crunch in the AI industry: The GPU market is facing a crunch due to high demand from the AI industry, with NVIDIA leading in high-end processors. Pandemic disruptions and complex manufacturing processes complicate capacity expansion. Demand for GPUs outpaces current capacity, with deliveries expected in small quantities in September and larger quantities from December through April.

      The GPU market is currently experiencing a crunch due to the high demand for GPUs in the AI industry. The industry's structure is such that there are only a few major producers, NVIDIA and AMD, with NVIDIA being significantly ahead in high-end processors for large-scale training. The pandemic supply disruption has further complicated the situation, as expanding capacity is a lengthy and expensive process. The physical manufacturing process is complex and prone to yield issues. The demand for GPUs in the AI world has grown rapidly, creating a significant gap between the current capacity and what's needed. The industry is looking at deliveries in small quantities in September and larger quantities from December through April. Large cloud players, who are already the biggest consumers of GPUs, are exploring alternative suppliers to meet their near-term needs. It's unclear whether this is a long-term or short-term issue due to the lack of visibility into the price elasticity of these components. Overall, the GPU market's current state highlights the challenges of keeping up with the rapid pace of technological advancements in the AI industry.

    • GPU bottleneck driving new business models and solutions: The GPU bottleneck in the AI industry is leading to new monetization opportunities, a shift in the market towards GPU access or ownership, and a growing demand for more efficient solutions.

      The current GPU bottleneck in the AI industry is leading to new monetization opportunities and a shift in the market. The demand for GPUs to train larger models and increase training times is outpacing the ability to scale the physical manufacturing process. This has resulted in the emergence of companies that provide GPU access or ownership in innovative ways, such as CoreWeave and FoundryML. Additionally, GPUs once devoted to crypto mining may find a second life, as it may be more economical to rent them out for AI training and inference instead. Furthermore, startups that have built semiconductors specifically for AI training, like Cerebras, are experiencing strong demand due to the hardware supply crunch. These companies are signing large deals with countries and organizations desperate for solutions to overcome the bottleneck. When scaling is blocked on capacity, researchers are turning to efficiency as a solution. This area has not been a major focus in the past, but it's becoming increasingly important as the hardware supply crunch continues. Research into dynamically routing requests to efficient models is undervalued and could offer significant benefits. In summary, the GPU bottleneck is leading to new business models, a shift in the use of existing GPU capacity, and a growing demand for more efficient solutions. These trends are likely to have cascading effects on the startup ecosystem and the wider AI industry.

    • AI growth hindered by compute supply crunch: The rapid growth of AI is being hindered by a compute supply crunch, with NVIDIA's chips being the most advanced option currently available. Researchers and organizations are exploring ways to use less compute for training models to mitigate this issue.

      The current AI landscape is experiencing rapid growth, driven by advancements in technology and increasing demand. However, this growth is being hindered by a compute supply crunch, with NVIDIA's chips being the most advanced option currently available. This bottleneck could cause ongoing issues, especially as more enterprises begin to adopt AI technology in the coming years. To mitigate this, researchers and organizations are exploring ways to use less compute for training models, such as the FrugalGPT work and smarter data choices. Despite these challenges, the future of AI adoption is promising, with significant potential for innovation and growth. It's important to note that we're still in the early stages of this wave of AI adoption, and the hype cycle is likely to continue, driven by ongoing excitement and increasing adoption by both startups and incumbent enterprises.
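      The cost-saving idea discussed here can be sketched in code. Below is a minimal, illustrative model cascade in the spirit of the FrugalGPT-style routing mentioned above: try the cheapest model first and escalate to a larger one only when a confidence scorer rejects the answer. The model names, costs, and scorer are placeholder stand-ins, not real APIs.

```python
# A minimal sketch of cost-aware model routing: cheapest model first,
# escalate only when a scorer deems the answer unreliable.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Model:
    name: str
    cost_per_call: float          # hypothetical cost units
    answer: Callable[[str], str]  # stands in for a real LLM call

def route(prompt: str, cascade: List[Model],
          score: Callable[[str, str], float],
          threshold: float = 0.8) -> Tuple[str, float]:
    """Return (answer, total_cost), escalating until the score passes."""
    total = 0.0
    reply = ""
    for model in cascade:
        reply = model.answer(prompt)
        total += model.cost_per_call
        if score(prompt, reply) >= threshold:
            break  # cheap model was good enough; stop here
    return reply, total  # otherwise fall back to the largest model's reply

# Toy demo: the small model only "knows" short prompts.
small = Model("small-llm", 0.01, lambda p: "ok" if len(p) < 20 else "unsure")
large = Model("large-llm", 1.00, lambda p: "detailed answer")
confidence = lambda p, r: 0.0 if r == "unsure" else 1.0

print(route("short question", [small, large], confidence))
print(route("a much longer, harder question", [small, large], confidence))
```

      The threshold is the key tuning knob in schemes like this: a stricter scorer escalates more often, trading cost for quality.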

    • Exploring the potential of AI agents: AI agents are advanced chatbots with planning mechanisms, enabling them to take autonomous actions and complete complex tasks. They're expected to range from consumer to specialized applications, potentially revolutionizing industries and growing into broad-based platforms.

      We're still in the early stages of exploring the applications and constraints of AI, specifically in the realm of agents. These agents, which go beyond simple chatbots to include planning mechanisms that can take autonomous actions and complete more complex tasks, are expected to range from consumer applications to specialized ones that write executable code and use multiple tools. The overall paradigm shift towards agents is significant as it allows for more sophisticated tasks to be automated, and the potential for vertical applications to eventually grow into broad-based platforms. The excitement and interest in this area are driven by the potential for agents to revolutionize various industries, from analytics and enterprise automation to legal tasks, by handling multiple steps and returning results or reports to end-users. Despite the potential for vertical applications to dominate, there's also the possibility that broad-based platforms will emerge as the technology advances. Overall, the exploration and development of AI agents mark an important step forward in the collective understanding and practical application of this technology.

    • Start with a focused use case in agent development: Focusing on a specific use case early in agent development allows for a better understanding of the user base and delivering a high-quality product, rather than starting with a broad, abstract idea without a clear use case.

      When developing technology, especially in the agent world, it's more effective to start with a targeted, focused use case rather than trying to do everything at once. The speaker mentions how Facebook started as a small platform for a few colleges and gradually expanded, while Google was a broad-based tool from the start. In the agent world, it's essential to define what the assistant does before building it, as starting with a narrow focus allows for a better understanding of the user base and the ability to deliver a high-quality product. The speaker also mentions the Y Combinator philosophy of delighting a small number of people rather than having a large number indifferent to the product. Starting with a broad, abstract idea without a clear use case can lead to a lack of depth and not delivering on the product's potential. The speaker emphasizes the importance of balancing infrastructure and research-driven approaches with a product engineering mindset to ensure that the technology is delivering value to its users.

    • Focusing on specific tasks in AI research leads to advancements: Focusing on achievable tasks in AI research, such as product development, research methods, or infrastructure tooling, can lead to significant progress in the field. Infrastructure tooling can enable others to develop AI agents quickly, leading to successful businesses or approaches.

      Focusing on specific, achievable tasks in the field of AI research, whether it be through product development, research-driven methods, or infrastructure tooling, can lead to significant advancements. Happiness in this context can be defined as the absence of writing boilerplate code or the successful completion of a task. By focusing on these specific tasks, researchers and developers can make meaningful strides in the field. Moreover, there is a third approach to consider: infrastructure tooling. This approach involves building the infrastructure that enables others to develop AI agents quickly. This can lead to successful businesses or approaches, depending on the market liquidity of the product or the existence of an existing product area. Looking ahead to the future, there are expected to be four distinct markets in the tech industry in the coming years. One of these markets will be AI, which is expected to continue advancing in various ways and may look expensive at the time but will look cheap in hindsight. It is important for researchers and developers to understand the specific focus of their work and how it fits into the larger context of the tech industry.

    • Predicted turbulence in the tech market for mid- to late-stage private companies: A third of these companies will fail, a third will reach peak valuation, and a third will continue to grow. Market turbulence will lead to easier hiring, potential commercial real estate ramifications, and impact on the venture capital community. Companies should try to disassociate from inflated valuations, as the market correction may take several years.

      The tech market, specifically for mid to late-stage private companies outside of AI, is expected to experience significant turbulence in the coming years. About a third of these companies are predicted to go under, a third will reach their peak valuation, and a third will continue to grow. This carnage in the market will lead to easier hiring, potential ramifications for commercial real estate, and impact the venture capital community. Companies that raised significant valuations during this period should try to disassociate from them as the market correction may take several years to fully play out. Valuation is seen as ephemeral, and many tech companies in public markets have experienced down rounds in the last year and a half. Historically, it takes a decade for technology companies to recover from market downturns, but startups may not have that luxury. The full effects of this correction are expected to unfold in 2024 and 2025.

    • Burning cash without revenue is a concern for investors: Founders should manage costs and pivot or shut down unsuccessful businesses, focusing on the underlying business case and model.

      The burning of excessive cash without substantial revenue to show for it is a major concern for investors, even if a company is forced to reprice its stock in public markets. Founders should be mindful of their cost profile and adjust it accordingly to avoid wasting valuable time during their prime entrepreneurial years. This period, which is typically free of major complications, is the best time to take risks and start a company. However, it's crucial to remember that not all ventures will succeed. Therefore, founders must be willing to pivot or shut down a business that isn't working. In essence, the underlying business case and model are what truly matter, not just the valuation.

    Recent Episodes from No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

    State Space Models and Real-time Intelligence with Karan Goel and Albert Gu from Cartesia

    This week on No Priors, Sarah Guo and Elad Gil sit down with Karan Goel and Albert Gu from Cartesia. Karan and Albert first met as Stanford AI Lab PhDs, where their lab invented State Space Models, or SSMs, a fundamental new primitive for training large-scale foundation models. In 2023, they founded Cartesia to build real-time intelligence for every device. One year later, Cartesia released Sonic, which generates high-quality and lifelike speech with a model latency of 135ms—the fastest for a model of this class. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @krandiash | @_albertgu Show Notes:  (0:00) Introduction (0:28) Use Cases for Cartesia and Sonic  (1:32) Karan Goel & Albert Gu’s professional backgrounds (5:06) State Space Models (SSMs) versus Transformer-Based Architectures  (11:51) Domain Applications for Hybrid Approaches  (13:10) Text to Speech and Voice (17:29) Data, Size of Models and Efficiency  (20:34) Recent Launch of Text to Speech Product (25:01) Multimodality & Building Blocks (25:54) What’s Next at Cartesia?  (28:28) Latency in Text to Speech (29:30) Choosing Research Problems Based on Aesthetic  (31:23) Product Demo (32:48) Cartesia Team & Hiring

    Can AI replace the camera? with Joshua Xu from HeyGen

    AI video generation models still have a long way to go when it comes to making compelling and complex videos, but the HeyGen team are well on their way to streamlining the video creation process by using a combination of language, video, and voice models to create videos featuring personalized avatars, b-roll, and dialogue. This week on No Priors, Joshua Xu, the co-founder and CEO of HeyGen, joins Sarah and Elad to discuss how the HeyGen team broke down the elements of a video and built or found models to use for each one, the commercial applications for these AI videos, and how they’re safeguarding against deep fakes.  Links from episode: HeyGen McDonald’s commercial Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @joshua_xu_ Show Notes:  (0:00) Introduction (3:08) Applications of AI content creation (5:49) Best use cases for HeyGen (7:34) Building for quality in AI video generation (11:17) The models powering HeyGen (14:49) Research approach (16:39) Safeguarding against deep fakes (18:31) How AI video generation will change video creation (24:02) Challenges in building the model (26:29) HeyGen team and company

    How the ARC Prize is democratizing the race to AGI with Mike Knoop from Zapier

    The first step in achieving AGI is nailing down a concise definition, and Mike Knoop, the co-founder and Head of AI at Zapier, believes François Chollet got it right when he defined general intelligence as a system that can efficiently acquire new skills. This week on No Priors, Mike joins Elad to discuss the ARC Prize, a multi-million dollar non-profit public challenge looking for someone to beat the Abstraction and Reasoning Corpus (ARC) evaluation. In this episode, they also get into why Mike thinks LLMs will not get us to AGI, how Zapier is incorporating AI into their products and the power of agents, and why it’s dangerous to regulate AGI before discovering its full potential.  Show Links: About the Abstraction and Reasoning Corpus Zapier Central Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mikeknoop Show Notes:  (0:00) Introduction (1:10) Redefining AGI (2:16) Introducing ARC Prize (3:08) Definition of AGI (5:14) LLMs and AGI (8:20) Promising techniques for developing AGI (11:0) Sentience and intelligence (13:51) Prize model vs investing (16:28) Zapier AI innovations (19:08) Economic value of agents (21:48) Open source to achieve AGI (24:20) Regulating AI and AGI

    The evolution and promise of RAG architecture with Tengyu Ma from Voyage AI

    After Tengyu Ma spent years at Stanford researching AI optimization, embedding models, and transformers, he took a break from academia to start Voyage AI which allows enterprise customers to have the most accurate retrieval possible through the most useful foundational data. Tengyu joins Sarah on this week’s episode of No Priors to discuss why RAG systems are winning as the dominant architecture in enterprise and the evolution of foundational data that has allowed RAG to flourish. And while fine-tuning is still in the conversation, Tengyu argues that RAG will continue to evolve as the cheapest, quickest, and most accurate system for data retrieval.  They also discuss methods for growing context windows and managing latency budgets, how Tengyu’s research has informed his work at Voyage, and the role academia should play as AI grows as an industry.  Show Links: Tengyu Ma Key Research Papers: Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training Non-convex optimization for machine learning: design, analysis, and understanding Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss Larger language models do in-context learning differently, 2023 Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning On the Optimization Landscape of Tensor Decompositions Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tengyuma Show Notes:  (0:00) Introduction (1:59) Key points of Tengyu’s research (4:28) Academia compared to industry (6:46) Voyage AI overview (9:44) Enterprise RAG use cases (15:23) LLM long-term memory and token limitations (18:03) Agent chaining and data management (22:01) Improving enterprise RAG  (25:44) Latency budgets (27:48) Advice for building RAG systems (31:06) Learnings as an AI founder (32:55) The role of academia in AI

    How YC fosters AI Innovation with Garry Tan

    Garry Tan is a renowned founder-turned-investor who is now running one of the most prestigious accelerators in the world, Y Combinator. As the president and CEO of YC, Garry has been credited with reinvigorating the program. On this week’s episode of No Priors, Sarah, Elad, and Garry discuss the shifting demographics of YC founders and how AI is encouraging younger founders to launch companies, predicting which early stage startups will have longevity, and making YC a beacon for innovation in AI companies. They also discuss the importance of building companies in person and if San Francisco is, in fact, back.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @garrytan Show Notes:  (0:00) Introduction (0:53) Transitioning from founder to investing (5:10) Early social media startups (7:50) Trend predicting at YC (10:03) Selecting YC founders (12:06) AI trends emerging in YC batch (18:34) Motivating culture at YC (20:39) Choosing the startups with longevity (24:01) Shifting YC founder demographics (29:24) Building in San Francisco  (31:01) Making YC a beacon for creators (33:17) Garry Tan is bringing San Francisco back

    The Data Foundry for AI with Alexandr Wang from Scale

    Alexandr Wang was 19 when he realized that gathering data would be crucial as AI becomes more prevalent, so he dropped out of MIT and started Scale AI. This week on No Priors, Alexandr joins Sarah and Elad to discuss how Scale is providing infrastructure and building a robust data foundry that is crucial to the future of AI. While the company started working with autonomous vehicles, they’ve expanded by partnering with research labs and even the U.S. government.   In this episode, they get into the importance of data quality in building trust in AI systems and a possible future where we can build better self-improvement loops, AI in the enterprise, and where human and AI intelligence will work together to produce better outcomes.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @alexandr_wang (0:00) Introduction (3:01) Data infrastructure for autonomous vehicles (5:51) Data abundance and organization (12:06) Data quality and collection (15:34) The role of human expertise (20:18) Building trust in AI systems (23:28) Evaluating AI models (29:59) AI and government contracts (32:21) Multi-modality and scaling challenges

    Music consumers are becoming the creators with Suno CEO Mikey Shulman

    Mikey Shulman, the CEO and co-founder of Suno, can see a future where the Venn diagram of music creators and consumers becomes one big circle. The AI music generation tool trying to democratize music has been making waves in the AI community ever since they came out of stealth mode last year. Suno users can make a song complete with lyrics, just by entering a text prompt, for example, “koto boom bap lofi intricate beats.” You can hear it in action as Mikey, Sarah, and Elad create a song live in this episode.  In this episode, Elad, Sarah, and Mikey talk about how the Suno team took their experience making a transcription tool and applied it to music generation, how the Suno team evaluates aesthetics and taste because there is no standardized test you can give an AI model for music, and why Mikey doesn’t think AI-generated music will affect people’s consumption of human-made music.  Listen to the full songs played and created in this episode: Whispers of Sakura Stone  Statistical Paradise Statistical Paradise 2 Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @MikeyShulman Show Notes:  (0:00) Mikey’s background (3:48) Bark and music generation (5:33) Architecture for music generation AI (6:57) Assessing music quality (8:20) Mikey’s music background as an asset (10:02) Challenges in generative music AI (11:30) Business model (14:38) Surprising use cases of Suno (18:43) Creating a song on Suno live (21:44) Ratio of creators to consumers (25:00) The digitization of music (27:20) Mikey’s favorite song on Suno (29:35) Suno is hiring

    Context windows, compute constraints, and energy consumption with Sarah and Elad

    This week on No Priors, hosts Sarah and Elad are catching up on the latest AI news. They discuss the recent developments in AI music generation, and if you’re interested in generative AI music, stay tuned for next week’s interview! Sarah and Elad also get into device-resident models, AI hardware, and ask just how smart smaller models can really get. They compare these hardware constraints to the hurdles AI platforms continue to face, including compute constraints, energy consumption, context windows, and how best to integrate these products into apps that users are familiar with.  Have a question for our next host-only episode or feedback for our team? Reach out to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil  Show Notes:  (0:00) Intro (1:25) Music AI generation (4:02) Apple’s LLM (11:39) The role of AI-specific hardware (15:25) AI platform updates (18:01) Forward thinking in investing in AI (20:33) Unlimited context (23:03) Energy constraints

    Cognition’s Scott Wu on how Devin, the AI software engineer, will work for you

    Scott Wu loves code. He grew up competing in the International Olympiad in Informatics (IOI) and is a world-class coder, and now he's building an AI agent designed to create more, not fewer, human engineers. This week on No Priors, Sarah and Elad talk to Scott, the co-founder and CEO of Cognition, an AI lab focusing on reasoning. Recently, the Cognition team released a demo of Devin, an AI software engineer that can increasingly handle entire tasks end to end. In this episode, they talk about why the team built Devin with a UI that mimics looking over another engineer’s shoulder as they work and how this transparency makes for a better result. Scott discusses why he thinks Devin will make it possible for there to be more human engineers in the world, and what will be important for software engineers to focus on as these roles evolve. They also get into how Scott thinks about building the Cognition team and that they’re just getting started.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ScottWu46 Show Notes:  (0:00) Introduction (1:12) IOI training and community (6:39) Cognition’s founding team (8:20) Meet Devin (9:17) The discourse around Devin (12:14) Building Devin’s UI (14:28) Devin’s strengths and weaknesses  (18:44) The evolution of coding agents (22:43) Tips for human engineers (26:48) Hiring at Cognition

    OpenAI’s Sora team thinks we’ve only seen the "GPT-1 of video models"

    AI-generated videos are not just leveled-up image generators; rather, they could be a big step forward on the path to AGI. This week on No Priors, the team from Sora is here to discuss OpenAI’s recently announced generative video model, which can take a text prompt and create realistic, visually coherent, high-definition clips that are up to a minute long. Sora team leads, Aditya Ramesh, Tim Brooks, and Bill Peebles join Elad and Sarah to talk about developing Sora. The generative video model isn’t yet available for public use but the examples of its work are very impressive. However, they believe we’re still in the GPT-1 era of AI video models and are focused on a slow rollout to ensure the model is in the best place possible to offer value to the user and, more importantly, that all possible safety measures have been applied to avoid deep fakes and misinformation. They also discuss what they’re learning from implementing diffusion transformers, why they believe video generation is taking us one step closer to AGI, and why entertainment may not be the main use case for this tool in the future.  Show Links: Bling Zoo video Man eating a burger video Tokyo Walk video Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @_tim_brooks | @billpeeb | @model_mechanic Show Notes:  (0:00) Sora team Introduction (1:05) Simulating the world with Sora (2:25) Building the most valuable consumer product (5:50) Alternative use cases and simulation capabilities (8:41) Diffusion transformers explanation (10:15) Scaling laws for video (13:08) Applying end-to-end deep learning to video (15:30) Tuning the visual aesthetic of Sora (17:08) The road to “desktop Pixar” for everyone (20:12) Safety for visual models (22:34) Limitations of Sora (25:04) Learning from how Sora is learning (29:32) The biggest misconceptions about video models

    Related Episodes

    EP 266: Stop making these 7 Large Language Model mistakes. Best practices for ChatGPT, Gemini, Claude and others



    In today's episode, we're diving into the 7 most common mistakes people make while using large language models like ChatGPT. 

    Newsletter (and today's click to win giveaway): Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan questions on AI

    Related Episodes:
    Ep 260: A new SORA competitor, NVIDIA’s $700M acquisition – AI News That Matters
    Ep 181: New York Times vs. OpenAI – The huge AI implications no one is talking about
    Ep 258: Will AI Take Our Jobs? Our answer might surprise you.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn


    Topics Covered in This Episode:
    1. Understanding the Evolution of Large Language Models

    2. Connectivity: A Major Player in Model Accuracy

    3. The Generative Nature of Large Language Models

    4. Perfecting the Art of Prompt Engineering

    5. The Seven Roadblocks in the Effective Use of Large Language Models

    6. Authenticity Assurance in Large Language Model Usage

    7. The Future of Large Language Models


    Timestamps:
    00:00 ChatGPT.com now the focal point for OpenAI.

    04:58 Microsoft developing large in-house AI model.

    09:07 Models trained with fresh, quality data crucial.

    10:30 Daily use of large language models poses risks.

    14:59 Free ChatGPT has an outdated knowledge cutoff.

    18:20 Microsoft is the largest by market cap.

    21:52 Ensure thorough investigation; models have context limitations.

    26:01 Spread, repeat, and earn with simple actions.

    29:21 Tokenization, models use context, generative large language models.

    33:07 More input means better output, mathematically proven.

    36:13 Large language models are essential for business survival.

    38:53 Future work: leverage language models, prompt constantly.

    40:47 Please rate, share, check out youreverydayai.com.


    Keywords:
    Large language models, training data, outdated information, knowledge cutoffs, OpenAI's GPT-4, Anthropic's Claude Opus, Google's Gemini, free version of ChatGPT, Internet connectivity, generative AI, varying responses, Jordan Wilson, prompt engineering, copy and paste prompts, zero shot prompting, few shot prompting, Microsoft Copilot, Apple's AI chips, OpenAI's search engine, GPT-2 chatbot model, Microsoft's MAI 1, common mistakes with large language models, offline vs online GPT, Google Gemini's outdated information, memory management, context window, unreliable screenshots, public URL verification

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    Habemus AI Act - What does this mean for the industrial sector and manufacturing?

    Robert talked to Dr. Bernhard Nessler, Research Manager Deep Learning & Certification at SCCH, and to Thomas Doms from TÜV AUSTRIA about the AI Act. Both work for Trustifai - a joint venture by SCCH, JKU Linz and TÜV Austria. Thanks for listening. We welcome suggestions for topics, criticism and a few stars on Apple, Spotify and Co. We thank our partner Siemens **OUR EVENT IN JANUARY** https://www.hannovermesse.de/de/rahmenprogramm/special-events/ki-in-der-industrie/ Contact Dr. Bernhard Nessler: https://www.linkedin.com/in/bernhard-nessler-linz-austria/?originalSubdomain=at Contact Thomas Doms: https://www.linkedin.com/in/thomas-doms-30890b2a/?originalSubdomain=at We thank our team: Barbara, Anne and Michael! The Industrial AI Podcast reports weekly on the latest developments in AI and machine learning for the engineering, robotics, automotive, process and automation industries. The podcast features industrial users, scientists, vendors and startups in the field of Industrial AI and machine learning. The podcast is hosted by Peter Seeberg, Industrial AI consultant and Robert Weber, tech journalist. Their mission: Demystify Industrial AI and machine learning, inspire industrial users.

    Inside the Hunt for the Discord Leaker + Twitter Chaos Updates


    Aric Toler untangles the web of teens, gamers and memes at the heart of the latest intelligence scandal.

    Then, an update on Twitter — where things have gone from bad to worse.

    Plus: How A.I. is bringing us closer to “Westworld.”


    On today’s episode:

    • Aric Toler is the director of research and training at Bellingcat, the Dutch investigative site. He worked with journalists at The New York Times to identify the man who allegedly leaked top secret documents on Discord, a social media chat platform.


    The State of AI Report: Research, Industry, Politics and Safety

    On today's episode, NLW reviews Air Street's epic 6th annual State of AI report - https://www.stateof.ai/ Before that on the Brief, Character AI launches group chats; an AI startup is doing layoffs; and the RIAA wants voice cloning sites to be considered piracy. Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown  Netsuite | The leading business management software | Get no interest and no payments for 6 months https://netsuite.com/breakdown TAKE OUR SURVEY ON EDUCATIONAL AND LEARNING RESOURCE CONTENT: https://bit.ly/aibreakdownsurvey ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    ChatGPT Gets a Body with the Figure 01 Robot

    A wild demo has the world talking about AI and robotics. Plus, Google has a new AI agent called SIMA that shows how an agent trained on multiple games outperforms an agent trained on a single game, even on the single game the agent was trained on. Today's Episode Brought to You By: Plumb - Build, test, and deploy AI features with confidence - https://useplumb.com/  ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/