
    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts

    July 25, 2024
    What is the price of the GPT-4o Mini model?
    How does GPT-4o Mini compare to GPT-4?
    What factors contribute to effective datasets for language models?
    What are the energy requirements for xAI's Memphis project?
    What challenges do companies face in building data centers domestically?

    Podcast Summary

    • Affordable AI models: OpenAI released GPT-4o Mini, a smaller, significantly cheaper version of its GPT-4o model that outperforms previous models, demonstrating a trend toward more affordable AI models in the industry.

      OpenAI released GPT-4o Mini, a smaller and cheaper version of its GPT-4o model, priced at 15 cents per million input tokens, significantly cheaper than previous models. This trend of smaller, more affordable models demonstrates the ongoing race to reduce pricing in the AI industry. GPT-4o Mini also showcases impressive capabilities, outperforming GPT-4 on chat preferences on the LMSYS leaderboard. The shift toward smaller, more efficient models doesn't indicate that scaling is broken; rather, it lets companies learn from each model and apply those lessons to the next training run, leading to exponential increases in capabilities. OpenAI's goal is to create intelligence that's "too cheap to meter," meaning the most valuable use cases will involve inference-time reasoning, moving toward agents rather than simple chat responses.
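
      To make that pricing concrete, here is a minimal cost calculator in Python. The 15-cents-per-million-input-tokens rate is the one quoted above; the output-token rate is an assumption for illustration, and both are subject to change.

      INPUT_USD_PER_M = 0.15   # quoted above: $0.15 per 1M input tokens
      OUTPUT_USD_PER_M = 0.60  # assumed output rate, for illustration only

      def request_cost(input_tokens: int, output_tokens: int) -> float:
          """USD cost of a single API call at the rates above."""
          return (input_tokens * INPUT_USD_PER_M
                  + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

      # A very long prompt (~100 pages is roughly 75,000 tokens) plus a sizable
      # answer still costs around a cent, which is what "too cheap to meter"
      # is pointing at:
      print(f"${request_cost(75_000, 2_000):.4f}")  # ~$0.0124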

    • AI text-to-video generation: OpenAI's GPT-4o Mini and Haiper's Haiper 1.5 are advancing AI text and text-to-video generation with faster processing, human-like text, longer clips, and higher resolution. Companies are focusing on safety, enterprise solutions, physics engines, and commercialization.

      The latest advancements in AI, specifically OpenAI's GPT-4o Mini and Haiper's Haiper 1.5, are pushing the boundaries of what's possible in text and text-to-video generation. OpenAI's GPT-4o Mini is significantly faster and cheaper than its predecessors, able to generate human-like text at a price point that raises questions about its intended user base. Haiper 1.5, on the other hand, offers longer text-to-video clips and higher resolution, attracting over 1.5 million users since its launch just four months ago. Both companies are making strides toward more advanced AI models, with OpenAI focusing on safety and enterprise solutions, and Haiper positioning its technology as a physics engine for world understanding and AGI development. The text-to-video space is becoming increasingly competitive, and it remains to be seen which companies will thrive and how value will accrue in the industry. Anthropic's recent release of a Claude app for Android, offering free access to its Claude chatbot and additional features for subscribers, is another move toward commercializing AI technology and providing a more seamless user experience. Overall, these advancements demonstrate the rapid pace of innovation in AI and the potential for significant impacts across industries and applications.

    • AI technologies and applications: Anthropic's Claude, Google's Gemini AI, and OpenAI's Strawberry are new AI technologies and applications, each offering unique features, with Anthropic lagging in downloads compared to ChatGPT.

      There are several new AI-driven technologies and applications emerging in the market, each trying to make its mark with unique features. Anthropic's Claude is making strides with its advanced Claude 3.5 Sonnet model, but it lags behind competitors like ChatGPT in downloads. Google's Workspace Labs has introduced Gemini AI for creating video presentations, while YouTube Music is testing AI conversational radio for customized playlists. OpenAI is reportedly working on a new project, Strawberry (previously known as Q*), aimed at delivering advanced reasoning capabilities that move beyond the language model paradigm. These developments underscore the growing importance of AI integration across industries and applications, with companies continually pushing the boundaries to offer more intelligent and human-like systems.

    • AI advancements, data wall: OpenAI's Strawberry model demonstrates progress in language models that may avoid logical traps and aid in self-play dynamics and long-horizon tasks, potentially overcoming the data wall in AI training. However, building large data centers for AI research faces challenges due to local community resistance, as seen in Elon Musk's Memphis project.

      The latest developments in AI research, specifically the Strawberry model from OpenAI, show promising advancements in general language models that may avoid logical traps and pave the way for self-play dynamics and long-horizon tasks. This could help overcome the "data wall" in AI training. OpenAI's goal is to reach human-level problem solving (Level 2) and eventually have AI aid in invention (Level 4). However, the internal workings of OpenAI might become less transparent as the company grows and silos form. Meanwhile, Elon Musk and xAI's plans to build a massive supercomputer in Memphis for $10 billion highlight the challenges companies face when building large data centers domestically, as they encounter resistance from local communities over energy and water demands. This red-tape paradox is a common issue, with some companies opting to build overseas to avoid it. The Memphis project will initially require up to 50 megawatts of electricity, equivalent to the energy usage of about 30,000 homes, and potentially up to 150 megawatts, which could power 100,000 homes. Despite the potential economic benefits, there are concerns about the impact on the local community and the lack of consultation in the deal.
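
      Those power figures are easy to sanity-check with a back-of-the-envelope sketch in Python. The per-home draw here is simply the value implied by the quoted numbers, not an official utility figure:

      # Average household draw implied by the quoted "50 MW ~ 30,000 homes":
      avg_home_kw = 50 * 1_000 / 30_000            # about 1.67 kW per home
      print(f"implied average draw: {avg_home_kw:.2f} kW per home")

      # At that same rate, 150 MW covers 90,000 homes, so the quoted
      # 100,000-home figure assumes a somewhat lower per-home draw:
      print(f"150 MW covers {150 * 1_000 / avg_home_kw:,.0f} homes at that rate")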

    • Musk's xAI project, AI ethics and education: Elon Musk's xAI project in Memphis sparks debate, while AI companies use YouTube transcripts without permission, raising ethical concerns. Andrej Karpathy launches Eureka Labs, an AI-driven education platform, and Anthropic and Menlo Ventures collaborate on the Anthology Fund to invest in early-stage AI companies.

      Elon Musk's xAI project in Memphis is generating excitement and skepticism in equal measure. Musk has promised significant job creation and infrastructure improvements, but some are wary given past promises from Musk's companies that have not materialized. Meanwhile, in the world of AI, Apple, NVIDIA, and Anthropic have reportedly used YouTube transcripts without permission to train their models, raising questions about data ownership and ethics. In education, Andrej Karpathy, a well-known figure in the AI world, has announced the launch of Eureka Labs, an education platform built with AI at its core. Lastly, Menlo Ventures and Anthropic have teamed up to launch the Anthology Fund, a $100 million initiative to invest in early-stage AI companies, providing both parties with unique benefits. These stories highlight the rapid advancements and complexities in technology and business.

    • AI Investments: Anthropic and Mistral AI continue to attract and deploy significant resources, with Anthropic receiving a substantial investment from Menlo Ventures and Mistral AI pushing the boundaries of code generation. Both companies are releasing new models under open-source licenses and showing promising results.

      Several companies, including Anthropic and Mistral AI, are making significant investments in the field of machine learning and artificial intelligence. Anthropic recently received a substantial investment from Menlo Ventures, a venture firm known for an impressive portfolio that includes early investments in Gilead Sciences, Uber, and Roku. Mistral AI, on the other hand, is pushing the boundaries of code generation with its Mamba-based architecture, releasing new models like Codestral Mamba and Mistral NeMo 12B. These models, which can handle large context windows and are optimized for specific use cases, are being released under open-source licenses and are showing promising results. Additionally, smaller models like Hugging Face's SmolLM are being developed to meet the needs of edge devices and are outperforming existing models in their size categories. Overall, these developments highlight the importance of data curation and the ongoing trend toward making AI technology more accessible and portable.

    • LLM optimization: Researchers and companies optimize LLMs through large synthetic datasets, open-source licensing, and hardware optimizations, enabling smaller models with more knowledge, free use for research and non-commercial purposes, and significant cost savings.

      Researchers and companies are continuously pushing the boundaries of large language models (LLMs) by optimizing their implementation and improving their efficiency. One such effort is the "Cosmopedia v2" dataset, the largest synthetic dataset for pretraining, containing over 30 million textbooks, blog posts, and stories generated by a language model. This dataset is used to cram as much knowledge as possible into smaller models, demonstrating that size constraints don't limit the amount of data and compute that can be poured into a model. Another development is the revamped license for Stability AI's Stable Diffusion 3 model, which faced significant backlash over its restrictive conditions. The new terms explicitly grant free use for research, non-commercial, and limited commercial purposes, allowing individuals and businesses with annual revenues under $1,000,000 to use the model without charge. Additionally, researchers from various institutions have introduced FlashAttention-3, a new attention optimization for NVIDIA Hopper GPUs that achieves up to 75% of the GPU's maximum capacity, resulting in a 1.5-2x speedup over previous versions. This optimization can lead to significant cost savings for data centers and companies using these GPUs; an illustrative calculation follows below. In summary, researchers and companies are optimizing LLMs through large synthetic datasets, open-source licensing, and hardware optimizations to improve efficiency and reduce costs.
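
      To give a rough sense of what a 1.5-2x kernel speedup is worth at cluster scale, here is a sketch in Python. The GPU rate, cluster size, and run length are assumptions for illustration, and the sketch optimistically treats the attention speedup as if it applied end to end:

      GPU_HOURLY_USD = 2.50      # assumed H100 rental rate; varies widely by provider
      NUM_GPUS = 1_024           # assumed cluster size
      BASELINE_HOURS = 30 * 24   # a hypothetical month-long training run

      for speedup in (1.5, 2.0):
          hours_saved = BASELINE_HOURS - BASELINE_HOURS / speedup
          savings = hours_saved * NUM_GPUS * GPU_HOURLY_USD
          print(f"{speedup}x speedup saves ${savings:,.0f} on this run")
      # 1.5x -> $614,400 saved; 2.0x -> $921,600 saved, before any extra
      # gains from the higher GPU utilization itself.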

    • Memory optimization in deep learning: Memory optimization techniques like FlashAttention-3 and mixture-of-experts architectures can lead to faster training times, reduced costs, and potentially larger context window sizes in deep learning models.

      Researchers are constantly seeking ways to optimize deep learning models for better performance and efficiency. For large language models, reducing the amount of data exchanged between the different types of memory on the chip can yield significant improvements. FlashAttention-3, for instance, achieved 75% of the maximum capacity of the current state-of-the-art GPU, resulting in faster training times, reduced costs, and potentially larger context window sizes. Another approach is the "mixture of experts," where a large model is decomposed into a multitude of smaller experts, allowing for better scaling and more efficient use of compute resources. This can also help prevent catastrophic forgetting, since older experts can be retained and new ones added as needed. However, with a large number of experts to choose from, efficient retrieval becomes a challenge. The paper "Mixture of A Million Experts" addresses this with a technique called product key retrieval, inspired by search engine algorithms, to quickly identify the relevant experts for a given input; a simplified sketch of the idea follows below. Overall, these papers demonstrate the importance of continuous research and innovation in the field of deep learning, as researchers strive to create more powerful and efficient AI systems.
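
      Here is a minimal numpy sketch of product-key retrieval, under simplifying assumptions (the PEER layer in the paper also uses learned keys, multiple heads, and query networks). The trick is to split the query in two, score each half against a small table of sub-keys, and combine only the top candidates, so selecting among a million experts never requires scoring a million keys:

      import numpy as np

      def product_key_topk(query, sub_keys_1, sub_keys_2, k):
          """Pick the top-k of n*n experts while scoring only 2n sub-keys.

          Expert (i, j) has the implicit key concat(sub_keys_1[i], sub_keys_2[j]),
          so its score decomposes as q1 . sub_keys_1[i] + q2 . sub_keys_2[j].
          """
          half = query.shape[0] // 2
          q1, q2 = query[:half], query[half:]

          s1 = sub_keys_1 @ q1            # n scores for the first half
          s2 = sub_keys_2 @ q2            # n scores for the second half
          top1 = np.argsort(s1)[-k:]      # top-k indices in each half
          top2 = np.argsort(s2)[-k:]

          # The global top-k over all n^2 sums is guaranteed to lie inside
          # this k*k candidate grid, so the other experts are never touched.
          candidates = [(s1[i] + s2[j], i, j) for i in top1 for j in top2]
          candidates.sort(reverse=True)
          return candidates[:k]

      rng = np.random.default_rng(0)
      n, dim, k = 1_000, 64, 4            # 1,000 sub-keys per half -> 1M experts
      query = rng.standard_normal(dim)
      keys_1 = rng.standard_normal((n, dim // 2))
      keys_2 = rng.standard_normal((n, dim // 2))
      for score, i, j in product_key_topk(query, keys_1, keys_2, k):
          print(f"expert ({i}, {j}) score = {score:.2f}")

      Retrieval cost thus grows with n, the square root of the expert count, which is what makes a million experts tractable.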

    • Creating effective datasets for language models: Optimizing for salience, difficulty, and novelty when creating datasets for language models leads to more challenging and effective benchmarks, resulting in better model evaluation and advancements in the field.

      Creating effective datasets for language models involves optimizing for salience, difficulty, and novelty. Salience refers to the importance and relevance of the capabilities a benchmark tests. Difficulty is measured by the lowest error rate achievable by existing models, ensuring the benchmark is challenging enough to distinguish between them. Novelty is measured by how unpredictable model performances on the benchmark are, identifying benchmarks where rankings differ significantly from other benchmarks. Researchers are using adaptive search techniques to generate benchmarks that optimize these metrics, leading to more effective and challenging datasets; a toy scoring sketch follows below. For example, one study discovered datasets for math, multilingual, and knowledge-intensive question answering that are 27% more novel and 22% more difficult than existing benchmarks. Another study focused on encoding spreadsheets for large language models, introducing SheetCompressor, which compresses spreadsheets by a factor of 25 and leads to better in-context learning and usability. These advancements in preprocessing and early processing can have significant impacts on the field.
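
      As a rough illustration of how difficulty and novelty can be scored, here is a toy sketch in Python (not the paper's actual adaptive-search procedure; aggregating existing-benchmark results is simplified to a mean-and-rank-correlation check):

      import numpy as np
      from scipy.stats import spearmanr

      def difficulty(candidate_scores):
          """Lowest error rate any existing model achieves on the candidate
          benchmark; higher means even the best model struggles."""
          return float(np.min(1.0 - candidate_scores))

      def novelty(candidate_scores, existing_scores):
          """One minus the rank correlation between model scores on the
          candidate and their mean scores on existing benchmarks; high
          novelty means the candidate scrambles the usual leaderboard."""
          baseline = existing_scores.mean(axis=1)   # mean score per model
          rho, _ = spearmanr(candidate_scores, baseline)
          return 1.0 - rho

      # Rows are 5 models, columns are 3 existing benchmarks (accuracies).
      existing = np.array([[0.81, 0.77, 0.90],
                           [0.74, 0.70, 0.85],
                           [0.66, 0.69, 0.80],
                           [0.58, 0.52, 0.71],
                           [0.40, 0.45, 0.60]])
      candidate = np.array([0.55, 0.60, 0.48, 0.62, 0.35])  # upsets the usual order
      print(f"difficulty = {difficulty(candidate):.2f}")         # -> 0.38
      print(f"novelty    = {novelty(candidate, existing):.2f}")  # -> 0.70

      A search procedure can then generate many candidate datasets and keep the ones that jointly maximize these metrics alongside salience.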

    • AI safety and regulation: Balancing powerful prover models and weaker verifier models in AI systems can lead to safer, more interpretable solutions, but may come at a cost. Ongoing research aims to perfectly sync prover and verifier for safe AI. Trump allies propose an executive order to secure systems from foreign threats and establish industry-led AI agencies, but funding and reversibility are concerns.

      Ensuring the safety and interpretability of superintelligent AI systems is a complex challenge that requires a balance between powerful prover models and weaker verifier models. This dynamic, reminiscent of generative adversarial networks, can lead to more legible explanations from the prover but may come at the cost of some raw capabilities. The ongoing research in this area suggests that it's possible to reach a point where the prover and verifier are perfectly in sync, ensuring safe and trustworthy AI solutions. Meanwhile, former President Trump's allies are reportedly drafting an executive order to establish industry-led agencies for evaluating AI models and securing systems from foreign adversaries. This initiative, known as "Manhattan Projects for Defense," would likely differ significantly from the AI executive order issued by President Biden's team last year. While the specifics of the proposed order are not yet clear, it's expected to focus on making America a leader in AI technology and securing systems from foreign threats. Executive orders are a powerful tool for presidents to direct the government to take certain actions, but they don't come with funding, and they can be reversed by the next administration. The ongoing debate around AI regulation and safety continues to evolve, with various stakeholders advocating for different approaches to ensure the benefits of AI while mitigating potential risks.

    • Manhattan Project for AI: A proposal suggests a large-scale initiative called the Manhattan Project for AI to ensure safety and alignment, involving massive funding and expert oversight, but it faces debates on benefits, risks, and challenges of funding and expertise, as well as geopolitical implications.

      There are calls for a large-scale initiative similar to the Manhattan Project during the 1940s to address the development and safety of artificial intelligence (AI). This proposal comes from the America First Policy Institute, a nonprofit led by former Trump advisors. However, it's important to note that this is not an official policy proposal from the Trump campaign. The Manhattan Project for AI would involve massive funding for advanced compute clusters and expert oversight to ensure safety and alignment. The proposal has sparked debate about the potential benefits and risks, as well as the challenges of funding and expertise. Additionally, there's ongoing discussion about geopolitical implications, such as US companies offering cloud services to Chinese companies, and potential US sanctions against China's semiconductor industry. These issues highlight the complexities and challenges of regulating and managing the development and use of AI.

    • AI Ethics, Whistleblower Protections: Significant concerns have arisen over ASML's potential export of advanced lithography machines to China and OpenAI's alleged restrictive policies on disclosing potential risks to regulators and long-term NDAs for employees.

      There are significant concerns surrounding the practices of ASML and OpenAI. ASML's advanced lithography machines, if exported to China, could significantly strengthen China's domestic semiconductor supply chain. Meanwhile, OpenAI is under scrutiny for allegedly barring staff from disclosing potential risks to regulators and for requiring strict NDAs that prevent employees from ever speaking negatively about the company. These practices have raised concerns about whistleblower protections and compliance with the White House executive order on AI safety. Senator Chuck Grassley has called for OpenAI to change its policies and practices to encourage transparency and protect whistleblowers. The SEC is now involved, and OpenAI has been asked to produce all employment, severance, and investor agreements containing nondisclosure clauses. These developments highlight the importance of transparency and ethical business practices in the rapidly advancing field of AI.

    Recent Episodes from Last Week in AI

    #181 - Google Chatbots, Cerebras vs Nvidia, AI Doom, ElevenLabs Controversy

    Our 181st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov and Jeremie Harris

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.
    - Microsoft's Inflection usage caps for Pi, and new AI inference services from Cerebras Systems competing with Nvidia.
    - Biases in AI, prompt leak attacks, and transparency in models, plus distributed training optimizations, including the 'DisTrO' optimizer.
    - AI regulation discussions including California's SB 1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

    Timestamps + Links:

    Last Week in AI
    September 15, 2024

    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047

    Our 180th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    • Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
    • Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results. 
    • Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
    • Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

    Timestamps + Links:

    Last Week in AI
    September 03, 2024

    #179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist

    Our 179th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    - Grok 2's beta release features new image generation using Black Forest Labs' tech.

    - Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

    - Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

    - Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

    Timestamps + Links:

    Last Week in AI
    August 20, 2024

    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk

    Our 178th episode with a summary and discussion of last week's big AI news!

    NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.
    - OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.
    - Rapid advancements in humanoid robotics exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.
    - Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

    Timestamps + Links:

    Last Week in AI
    August 16, 2024

    #177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2

    Our 177th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    In this episode, hosts Andrey Kurenkov and Jeremie Harris dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless car rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.ai CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 11, 2024

    #176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2

    Our 176th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

     

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 03, 2024

    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts

    Our 175th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements, including OpenAI's release of GPT-4o Mini and Mistral's open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models like Haiper 1.5, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI's Prover-Verifier Games initiative.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 25, 2024

    #174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues

    Our 174th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for xAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 17, 2024

    #173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court

    Our 173rd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.  

    Timestamps + links:

    Last Week in AI
    July 07, 2024

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024