
    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk

    August 16, 2024
    What recent actions have Google and Microsoft taken in AI?
    What challenges are AI companies facing regarding profitability?
    What breakthrough method was introduced by UC Berkeley researchers?
    Why is Elon Musk suing OpenAI and Sam Altman?
    What is the purpose of the humanoid robot Figure 2?

    Podcast Summary

    • AI startup consolidation: Google and Microsoft's acquisitions of AI startups and reported investments indicate a trend toward consolidation in the AI industry, driven by the difficulty of achieving profitability with expensive models and subscription-based business models.

      The AI industry is experiencing a wave of consolidation as companies struggle to turn a profit. Last week, Google hired the founders of Character.AI, a chatbot startup, in a deal reportedly worth $2.5 billion; Microsoft made a similar move with Inflection AI. With expensive AI models and subscription-based business models, these companies are finding it difficult to make good margins: even with high user engagement, the cost of running the models can outweigh the revenue. The trend is expected to continue, with Amazon reportedly absorbing another AI startup, Adept, in much the same way. Together, these moves suggest that the AI startup world is struggling to reach profitability.

    • Acqui-hires in the AI sector: Companies like Microsoft and Amazon are using acqui-hires to gain access to talent and data, leaving the acquired companies to die; investors may not recoup their full investment, though in some cases they are being made whole.

      The tech industry is seeing companies acquired in a roundabout way that avoids antitrust scrutiny while still capturing talent and data. Instead of outright acquisitions, companies like Microsoft and Amazon use a strategy called "acqui-hires": they bring over the CEO and key staff but leave the hollowed-out husk of the company to die. The pattern is especially prevalent in the AI sector, where competition for talent and data is intense; Character.AI, Adept, and Cohere have all been targets. Investors in these companies may not recoup their full investment, though in some cases they are being made whole. That is unusual for companies that have raised hundreds of millions of dollars, since their value is typically tied to their product. The talent, however, is so sought after that acquirers are willing to pay a premium for it, making this an effective way to gain a competitive edge while sidestepping regulatory scrutiny.

    • AI talent acquisition trend: Powerful companies are acquiring engineering talent and expertise to gain market share in generative AI, potentially stifling competition and limiting talent mobility. Microsoft's decision to refund investors in one of these deals highlights the importance of reputation management in such transactions.

      The tech industry is witnessing powerful companies acquiring engineering talent and expertise to gain market share in the generative AI space, raising concerns about stifled competition and constrained talent mobility. Microsoft's decision to refund investors in one of these deals is an example of reputation management around such transactions; for early-stage companies, these deals generally require the founders' agreement. Groq, an AI chip startup, is a notable player in this space, aiming to scale production from 4,000 to over 100,000 units and targeting a full dollar return on investment. Its focus on inference speed sets it apart from competitors like NVIDIA, as it aims to provide a cloud offering for inference on open models. However, challenges remain in scaling production, limited memory, and distribution.

    • AI landscape changes: New players like Groq and ongoing changes at OpenAI highlight the pace of advancement in AI and the need for a strong focus on safety and ethical considerations.

      The AI landscape is experiencing significant changes, with companies like OpenAI and Anthropic making strides in language models and safety research. Groq, a newer player in the field, is gaining attention for its distinct approach and competitive capabilities. OpenAI, a pioneer in the space, has meanwhile seen notable changes among key figures: co-founder John Schulman has departed, and president Greg Brockman is taking an extended leave, raising concerns about the company's commitment to safety and alignment. Elon Musk, another influential figure, has filed a new lawsuit against OpenAI, adding to the ongoing drama in the industry. These developments underscore the importance of continuous advancements in AI and the need for a strong focus on safety and ethical considerations.

    • OpenAI lawsuits: Elon Musk is suing OpenAI and its CEO, Sam Altman, alleging betrayal of trust and racketeering as OpenAI transitioned from a nonprofit to a for-profit company and made deals with affiliates and companies owned by Altman.

      Elon Musk is suing OpenAI and its CEO, Sam Altman, for allegedly betraying him and engaging in racketeering activities. Musk had initially invested in OpenAI under the assumption it would remain a nonprofit focused on artificial general intelligence (AGI). However, OpenAI later sought to become a for-profit company, leading to a disagreement between Musk and Altman. The lawsuit, which includes emails between Musk and Altman, alleges that OpenAI and its affiliates engaged in self-dealing, seized technology and personnel, and betrayed Musk's trust. The lawsuit also accuses OpenAI of racketeering, as it has made deals with hardware and energy companies owned or indirectly owned by Altman. Additionally, OpenAI has partnered with Figure, a humanoid robot company, to develop Figure 2, a robot powered by OpenAI's GPT technology. The robot is already being tested in BMW plants, highlighting the trend of using humanoid robots in manufacturing to replace human labor. The US is considering new chip export rules, but ASML and Tokyo Electron have been able to continue selling to China due to potential exemptions.

    • US export restrictions and ASML: The US is weighing the FDPR to limit exports of advanced chipmaking technology to China, affecting companies like ASML, whose stock moved sharply on reports of potential exemptions.

      The US is weighing the Foreign Direct Product Rule (FDPR) to limit the export of advanced technology to countries like China, specifically targeting companies like ASML that provide essential hardware for producing cutting-edge chips. The rule would restrict not only the export of machines but also the ongoing maintenance and support teams that keep them running. The stakes for ASML's bottom line are significant, as suggested by the 11% move in its stock price on news of possible exemptions. The US is banking on these companies complying with the FDPR voluntarily, but the political and economic dynamics remain unclear. Separately, OpenAI led a $60 million investment round for webcam startup Opal, potentially hinting at future AI hardware products, and it made significant price reductions for GPT-4o and introduced structured outputs, making the model more accessible and cost-effective for users.

    • Structured outputs in AI: To guarantee structured outputs, companies and developers can impose hard constraints on models using techniques akin to symbolic reasoning, which may require more computing power but guarantee the desired format.

      Ensuring structured outputs, typically in JSON, is crucial for companies and developers who need a guaranteed format or structure from their AI models. To meet this requirement, OpenAI moved beyond fine-tuning and imposed hard constraints that restrict the model's outputs during decoding. This technique, which can be thought of as layering a bit of symbolic reasoning on top of the model, requires more computing power but ensures the outputs adhere to the desired format. Competition in the AI industry includes not only giants like OpenAI and Google but also inference providers, who may struggle to keep up with rising costs; Apple is rumored to be entering the market with a paid subscription for additional features. In research, a paper titled "Scaling LLM Test-Time Compute Optimally Can Be More Effective Than Scaling Model Parameters" suggests exactly what its title says. Additionally, companies like Audible and Reddit are exploring AI-powered features, such as search and summarization, to enhance user experiences. The market value of AI technology is increasingly evident in the growing number of applications and the demand for more advanced features.
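
      The constrained-decoding idea is easy to sketch: at every step, mask the model's next-token choices down to those that keep the output on track for the target format. Below is a minimal toy illustration in Python; random scores stand in for real model logits, and a hand-rolled constraint for outputs of the shape {"name": "<letters>"} stands in for a full JSON-Schema-compiled grammar. None of this reflects OpenAI's actual implementation.

```python
import random

random.seed(0)

TEMPLATE = '{"name": "'
VALUE_CHARS = set("abcdefghijklmnopqrstuvwxyz")

def fake_logits(candidates):
    # Stand-in for the model's next-token scores; a real system reads
    # these off the LLM's output layer before applying the mask.
    return {ch: random.random() for ch in candidates}

def allowed_next(partial):
    """Which characters may come next so the output stays on track to be
    valid JSON of the shape {"name": "<letters>"}."""
    if len(partial) < len(TEMPLATE):
        return {TEMPLATE[len(partial)]}      # structural prefix is forced
    body = partial[len(TEMPLATE):]
    if body.endswith('"}'):
        return set()                         # output is complete
    if '"' in body:
        return {"}"}                         # string closed: must close object
    if len(body) >= 8:
        return {'"'}                         # cap value length, force the close
    return VALUE_CHARS | {'"'}               # still inside the string value

def constrained_generate():
    out = ""
    while True:
        allowed = allowed_next(out)
        if not allowed:
            return out
        scores = fake_logits(allowed)        # the mask: score only legal chars
        out += max(scores, key=scores.get)   # greedy pick among survivors

result = constrained_generate()
```

      Real implementations compile a JSON Schema into a grammar or finite-state machine over the tokenizer's vocabulary and apply the mask to actual logits, which is where the extra compute mentioned above goes.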

    • Optimizing test-time compute for AI models: Two strategies, search-based and revision-based methods, can significantly improve AI model efficiency and accuracy, with the choice between them depending on problem complexity; an adaptive strategy can achieve a 4x reduction in compute requirements.

      Optimally scaling test time compute can significantly improve the efficiency and accuracy of AI models, leading to results that outperform larger models in certain cases. The researchers in this study explored two strategies: search-based and revision-based methods, using a fine-tuned language model to revise incorrect answers and a process-based reward model to verify the correctness of individual steps. The findings suggest that the choice between these strategies depends on the problem complexity, with revision-based methods being more effective for simpler problems and search-based methods for more complex ones. By implementing an adaptive strategy that dynamically assesses the problem type and applies the optimal solution, the researchers achieved a 4x reduction in compute requirements. This research has significant implications for the future of AI, as it focuses on optimizing test time compute for autonomous agents, reducing the need for extensive retraining and enabling more efficient problem-solving. Additionally, it's important to consider the trade-offs between training and test time compute, as the former provides long-term benefits, while the latter is a one-time expense that is lost after the problem is solved. Overall, this study provides valuable insights into the complex relationship between compute and AI performance and paves the way for more efficient and effective AI systems.
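
      The adaptive dispatch described above can be sketched in a few lines of Python. This is a toy stand-in, not the paper's setup: a noisy "proposer" plays the model, a distance-based "verifier" plays the learned reward model, and all names, noise scales, and thresholds are invented for illustration.

```python
import random

random.seed(1)

# Toy reasoning task: the "proposer" (model) guesses a numeric answer with
# noise that grows with problem difficulty; the "verifier" scores closeness.
def propose(target, difficulty, hint=None):
    anchor = hint if hint is not None else target + random.gauss(0, 4 * difficulty)
    return anchor + random.gauss(0, difficulty)

def verify(target, answer):
    return -abs(target - answer)          # higher is better

def best_of_n(target, difficulty, n=16):
    # Search-based: n independent samples; keep the verifier's favorite.
    samples = [propose(target, difficulty) for _ in range(n)]
    return max(samples, key=lambda a: verify(target, a))

def revise(target, difficulty, steps=16):
    # Revision-based: repeatedly edit the current answer, keep improvements.
    answer = propose(target, difficulty)
    for _ in range(steps):
        candidate = propose(target, difficulty, hint=answer)
        if verify(target, candidate) > verify(target, answer):
            answer = candidate
    return answer

def adaptive(target, difficulty, budget=16):
    # Compute-optimal dispatch: revisions for easy problems, broader search
    # for hard ones, with the same total sample budget either way.
    return (revise if difficulty < 1.0 else best_of_n)(target, difficulty, budget)

easy = adaptive(10.0, difficulty=0.5)     # routed to sequential revision
hard = adaptive(10.0, difficulty=3.0)     # routed to best-of-N search
```

      The dispatch mirrors the paper's finding: when the first draft is probably close (easy problems), spend compute polishing it; when drafts are scattered (hard problems), spend the same compute sampling widely and letting the verifier choose.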

    • Machine Learning Advancements: UC Berkeley and DeepMind researchers introduced a process reward model (PRM) approach for accurate and efficient step-by-step decision-making and a robot agent achieving amateur human-level performance in table tennis, alongside work on shrinking neural networks for cost-effectiveness and power-efficiency.

      Researchers from UC Berkeley and DeepMind have made significant strides in machine learning with two notable papers. The first uses a process reward model (PRM) to evaluate every possible step in a decision-making process, similar to how AlphaGo evaluates candidate moves; this could lead to more accurate and efficient decision-making in various applications. The second, from DeepMind, presents a robot agent that achieves amateur human-level performance in table tennis, using a hierarchical and modular policy architecture and zero-shot sim-to-real transfer. Although it didn't win against advanced players, this research is crucial for improving robotics and transferring simulations to the real world. Additionally, researchers are focusing on reducing the size of neural networks to make them more cost-effective and power-efficient. A new approach called self-compressing neural networks aims to remove redundant weights and reduce the number of bits needed to represent the remaining weights: the model learns the level of precision with which each weight needs to be represented, leading to a natural weight-pruning process and a decrease in training time. Overall, these papers demonstrate the importance of universities and research institutions in driving innovation in machine learning and robotics, as well as the ongoing efforts to improve the efficiency and effectiveness of these technologies.
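
      The "learn the precision of each weight" idea can be illustrated in a few lines of plain Python. In the paper the bit widths are differentiable parameters trained jointly with the task loss plus a size penalty; here they are simply hand-set to show the mechanics, including how a weight whose bit budget reaches zero rounds away entirely (the natural pruning effect described above).

```python
def quantize(w, bits):
    """Round w onto a fixed-point grid with `bits` fractional bits.
    At bits == 0 the grid collapses to whole numbers, so small weights
    round to 0 -- i.e. they are pruned for free."""
    scale = 2.0 ** bits
    return round(w * scale) / scale

# One (hypothetical, hand-set) precision per weight; in the paper these
# bit widths are learned jointly with the task loss plus a size penalty.
weights = [0.731, -0.052, 0.004, 0.318]
bits    = [6,      2,      0,     4]

quantized  = [quantize(w, b) for w, b in zip(weights, bits)]
model_bits = sum(bits)   # the size term the training loss drives down

print(quantized, model_bits)  # [0.734375, 0.0, 0.0, 0.3125] 12
```

      Note how the second and third weights vanish: their bit budgets are too small to represent them, so rounding removes them, shrinking both model size and the compute needed at inference.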

    • Quantization-aware training and output formats: Researchers are investigating quantization-aware training (QAT) and the impact of output formats on large language models. QAT lets models adjust their architecture dynamically, but care is needed to avoid "permanent forgetting." Traditional compression techniques are applied post-training, and format restrictions can significantly affect performance.

      Researchers are exploring new strategies for training and compressing machine learning models, focusing on quantization-aware training (QAT) and the impact of output formats on large language models. A 2013 paper introduced QAT, which lets a model effectively re-architect itself by rounding weights toward zero; discarding weights too eagerly, however, risks "permanent forgetting." Researchers from Imagination Technologies have recently explored a similar idea, pruning weights at runtime and achieving better performance than the 2013 work, though scalability remains an open question. The innovation lies in the model dynamically adjusting its architecture based on the importance of each weight, whereas compression techniques like quantization are traditionally applied post-training. Separately, researchers from the University of California, Berkeley studied the impact of format restrictions on large language models and found that structured output formats can significantly affect performance, suggesting that both the task and the format matter. These findings highlight the need for a more nuanced approach to model compression and quantization, one that accounts for the dynamic nature of the learning process and the influence of output formats; as the field evolves, these ideas and their scalability will be worth watching.
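
      Mechanically, QAT is usually implemented with a straight-through estimator: the forward pass uses rounded weights, while updates are applied to a full-precision "shadow" copy as if rounding were the identity. The toy Python below fits a one-weight linear model this way; it is a generic illustration of the trick, not the specific method of either paper discussed above.

```python
import random

def ste_quantize(w, bits=2):
    # Forward pass uses the rounded weight; the backward pass (the
    # straight-through estimator) pretends rounding is the identity and
    # applies the gradient to the full-precision copy instead.
    scale = 2.0 ** bits
    return round(w * scale) / scale

# Toy quantization-aware training: fit y = 1.3 * x with a 2-bit weight.
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(256)]
ys = [1.3 * x for x in xs]

w, lr = 0.0, 0.1                  # full-precision "shadow" weight
for _ in range(60):
    wq = ste_quantize(w)          # quantized weight used in the forward pass
    grad = sum((wq * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad                # STE: update the full-precision weight

# The shadow weight hovers near the true 1.3, while the deployed weight
# sits on the nearest point of the 2-bit grid (multiples of 0.25).
```

      Training against the quantized forward pass is what lets the model compensate for rounding error during learning, rather than having precision stripped away after the fact.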

    • AI regulation debate: Researchers are turning large language models into autonomous agents even as scaling shows signs of plateauing. California's SB 1047 bill, which applies to models above a $100 million training-cost threshold, has drawn arguments for and against; the debate should weigh the policy's pros and cons, including existential risk.

      Researchers are making progress in turning large language models into autonomous agents capable of completing tasks that take humans about 30 minutes. There are signs of a plateau in the scaling law, however, and the cost-effectiveness of these agents is striking, often below the median hourly wage of a US bachelor's degree holder. The debate around California's SB 1047 AI regulation bill continues, with Fei-Fei Li arguing against it, citing potential harm to academia, smaller tech companies, and open-source communities. It is nonetheless crucial to engage with the reasons for the bill's proposal, including the existential-risk argument, which Li's piece did not address. The bill applies only to models costing more than $100 million to train, and the debate should focus on the pros and cons of the policy rather than just its potential negative consequences.

    • AI Regulation and Competition: Productive dialogue between stakeholders is crucial for effective and safe AI development, and competition is essential to prevent monopolistic practices and maintain innovation.

      The ongoing debate surrounding AI regulation, as exemplified by the discussion around the AI Act, highlights the importance of having a productive and inclusive dialogue between various stakeholders. While there are valid arguments on both sides, it's crucial to engage with each other's perspectives in good faith to ensure the development of effective and safe AI technologies. The recent ruling against Google's monopolistic practices in search is another significant development. This decision, which could lead to a forced breakup of the company or the unwinding of exclusive search deals, underscores the importance of maintaining a competitive market and preventing dominant players from stifling competition. The consequences of this ruling could be far-reaching, making it a topic of great interest and importance.

    • Tech Antitrust, New Technology: Google's $20 billion payments to Apple for default search placement were found to violate antitrust law, with implications for future negotiations; Amazon is under UK investigation over exclusivity agreements with Anthropic; and OpenAI released the GPT-4o system card, covering safety evaluations and new vulnerabilities.

      There have been significant developments in the tech industry regarding antitrust investigations and new technology releases, specifically for Google, Amazon, and OpenAI. Google was found to have violated antitrust laws, in part by paying Apple $20 billion to be the default search engine, a ruling that dented Apple's stock and could reshape future negotiations. Amazon is currently under investigation by the UK's Competition and Markets Authority over exclusivity agreements with AI research firm Anthropic, raising questions about competition. OpenAI, meanwhile, released the system card for its new GPT-4o model: a comprehensive document covering testing methods, safety evaluations, and potential risks such as persuasion and unauthorized voice generation, including new vulnerabilities stemming from the model's ability to generate audio and respond to audio inputs. These developments underscore the ongoing importance of antitrust regulation and the potential risks and benefits of new technologies.

    • AI model evaluation: OpenAI's GPT-4o model has received positive feedback in third-party assessments by METR and Apollo Research, but further testing and evaluation are necessary, and questions remain around its ability to mimic human voices, as in the Scarlett Johansson controversy.

      OpenAI's new GPT-4o model has shown promising results in various applications but still needs further testing and evaluation. The model received positive feedback in third-party assessments by METR and Apollo Research, and OpenAI has been transparent about its capabilities and limitations in a detailed system card. Questions remain, however, around the model's ability to mimic human voices, echoing the earlier Scarlett Johansson controversy. The discussion also touched upon the potential of AI-generated content and music. Overall, the episode provided valuable insights into the latest developments in AI technology.

    Recent Episodes from Last Week in AI

    #181 - Google Chatbots, Cerebras vs Nvidia, AI Doom, ElevenLabs Controversy


    Our 181st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov and Jeremie Harris

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.

    - Microsoft's Inflection usage caps for Pi, and new AI inference services by Cerebras Systems competing with Nvidia.

    - Biases in AI, prompt leak attacks, transparency in models, and distributed training optimizations, including the 'DisTrO' optimizer.

    - AI regulation discussions including California's SB 1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

    Timestamps + Links:

    Last Week in AI
    September 15, 2024

    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047


    Our 180th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    • Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
    • Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results. 
    • Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
    • Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

    Timestamps + Links:

    Last Week in AI
    September 03, 2024

    #179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist


    Our 179th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    - Grok 2's beta release features new image generation using Black Forest Labs' tech.

    - Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

    - Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

    - Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

    Timestamps + Links:

    Last Week in AI
    August 20, 2024

    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk


    Our 178th episode with a summary and discussion of last week's big AI news!

    NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Notable personnel movements and product updates, such as Character.AI leaders joining Google and new AI features in Reddit and Audible.

    - OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.

    - Rapid advancements in humanoid robotics, exemplified by new models from companies like Figure in partnership with OpenAI, and a robot agent achieving amateur human-level performance in table tennis.

    - Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

    Timestamps + Links:

    Last Week in AI
    August 16, 2024

    #177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2


    Our 177th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    In this episode, hosts Andrey Kurenkov and Jon Krohn dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless cars rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.AI CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 11, 2024

    #176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2


    Our 176th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

     

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 03, 2024

    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts


    Our 175th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements including OpenAI's release of GPT-4o Mini and Mistral's open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models like Hyper 1.5, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI's Prover-Verifier Games initiative.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 25, 2024

    #174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues


    Our 174th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for XAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 17, 2024

    #173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court


    Our 173rd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.  

    Timestamps + links:

    Last Week in AI
    July 07, 2024

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic


    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024