    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047

    September 03, 2024

    Podcast Summary

    • Text-to-image generation advancements: Recent developments in text-to-image generation include Ideogram AI's new features and Google's free Imagen, which offer improved results, customization, and accessibility.

      The field of AI is rapidly advancing, particularly in the areas of text-to-image generation. Last week saw several notable developments, including Ideogram AI's expansion of features with its version 2 model and the release of Google's powerful AI image generator, Imagen, for free use. Ideogram's new features include a meta prompt that rewrites user prompts to improve results, manual color palette control, and a search functionality for pre-generated images. Google's Imagen, a significant player in the field, now allows users to experiment with the model through their AI Test Kitchen service, providing a free and relatively low-latency option for generating images. These advancements demonstrate the maturation of text-to-image generation as a product, with a focus on making the user experience as seamless and customizable as possible.

    • AI image and video generation tools: The widespread availability of free AI image and video generation tools raises concerns over data uncertainty and potential copyright infringement, while advancements in text-to-video generation continue to improve user experiences.

      The race to make AI technology more accessible and affordable is leading to the widespread availability of free tools like image and video generators. However, the uncertainty surrounding the data used to train these models and the potential for generating copyrighted material raises concerns. For instance, a new image generation tool, Grok, is now free but comes with restrictions and potential workarounds. Similarly, Perplexity's image generation tool allows users to choose between different models, including Flux, which was previously only available via Grok-2. The trend towards faster and more realistic text-to-video generation is also continuing, with Luma's Dream Machine 1.5 now able to generate legible text and five seconds of video in about two minutes. These advancements could lead to significant changes in user experiences, particularly as generation times continue to decrease. However, the potential for inconsistencies and continuity errors in AI-generated content remains a challenge that will need to be addressed as we move towards higher fidelity video generation.

    • AI video generation speed and cost: New tools and technologies are making video generation faster and more accessible, with Runway's Gen-3 Alpha Turbo offering 7x speed at half the cost. Hardware developments, like AMD's acquisition of ZT Systems, are also driving progress in AI technology.

      The world of AI is rapidly advancing, with new tools and technologies emerging to make video generation faster and more accessible. Runway's Gen-3 Alpha Turbo is a prime example, offering seven times the speed at half the cost of its previous version. This trend of faster and cheaper video generation is expected to continue, leading to a significant shift in the market. Another notable development is Perplexity's latest update, which improves its code interpreter, enabling it to run queries and generate charts in real time. This feature sets Perplexity apart from competitors like Google, as it positions itself to compete in the search engine market while offering more engaging, interactive features. Hardware developments are also shaping the AI landscape, with AMD's acquisition of server maker ZT Systems for $4.9 billion marking a significant move in the direction of setting up servers for AI solutions. This acquisition is a big deal for AMD, representing a significant investment in the future of AI technology. Overall, these developments underscore the rapid pace of innovation in the AI space and the increasing importance of hardware infrastructure in driving progress. The race to make AI more accessible, faster, and more affordable is on, and it's an exciting time to be a part of it.

    • AMD-ZT Systems acquisition, OpenAI partnership: AMD's acquisition of ZT Systems expands beyond GPU capabilities into the broader AI infrastructure and software market. OpenAI's partnership with Condé Nast raises regulatory concerns and creates a significant barrier to entry for smaller competitors. A GitHub Copilot rival raises substantial funding to compete with Microsoft's AI coding assistant.

      AMD's acquisition of ZT Systems is not just about adding GPU capabilities to their server offerings, but also about expanding into the broader AI infrastructure and software market. This move includes the offloading of ZT Systems' manufacturing arm, allowing AMD to focus on higher-margin businesses and potentially mitigating antitrust concerns. The deal structure, which includes both stock and cash, also signals a partnership between the two companies. Meanwhile, OpenAI's partnership with Condé Nast raises questions about regulatory capture and the potential for a moat built on cash. In the tech world, OpenAI is paying large sums to access content for training its models, creating a significant barrier to entry for smaller competitors. The lack of similar arrangements for video data, particularly from YouTube, adds intrigue to the situation. Lastly, a GitHub Copilot rival has raised a substantial Series A round, indicating strong investor interest in competing with Microsoft's profitable AI coding assistant. These developments underscore the ongoing competition and innovation in the AI sector.

    • Monetization in generative AI: OpenAI leads monetization in the generative AI space; competitors must improve existing models with unique twists as the industry shifts toward profitability, strategic partnerships, and continued advancements.

      OpenAI and its affiliated suite of products are currently leading the way in monetization within the generative AI space. Companies looking to compete need to focus on improving existing models with unique twists. OpenAI's unique perspective and early investment in promising startups, similar to Stripe, position them well for future success stories. The industry is seeing a shift towards profitability and fewer large funding rounds, but there's still room for VC excitement in profitable sectors. Stability AI, which has raised a significant amount of capital but faced leadership challenges, recently appointed a new CTO with extensive experience in infrastructure and business leadership to help turn the company around. The partnership between Cruise and Uber, allowing Cruise robotaxis to be hailed through the Uber app in 2025, is a strategic move that could help Cruise compete with Waymo and Tesla in the robotaxi market. Open-source projects like AI21's Jamba model family continue to introduce new models, contributing to the ongoing advancements in the generative AI space.

    • Jamba models: Jamba models, released by AI21 Labs, are hybrid models that combine the strengths of transformer and Mamba architectures, achieving better performance and scalability than similar-sized transformer models. They use a special quantization method and can recall facts reliably from large context windows.

      Jamba, a new class of models, combines the strengths of transformer and Mamba architectures to achieve better performance and scalability. Jamba models, released by AI21 Labs, are hybrid models that use a mixture-of-experts architecture and perform better than similar-sized transformer models like Llama 3 8B and 70B. Mamba, an alternative to transformer architectures, can handle arbitrarily long inputs because its cost grows only linearly with input length, but its challenge has been proving its effectiveness at large scales. By combining the two, Jamba models can account for complex relationships within a maximum context window size and keep updating a summary of previously read information for arbitrarily long inputs. This allows Jamba models to recall facts reliably from large context windows, unlike other models with long context windows that often forget information. Additionally, Jamba models use a special quantization method called experts int8, which only compresses the experts in the MoE model, accounting for 85% of the model weights. This makes Jamba models more democratized and accessible, as they can fit on a single 8-GPU node. The Jamba Open Model License Agreement allows for both research and commercial use, following an acceptable use policy. Exciting developments in the field of language models continue to emerge, with Jamba being a significant step forward in achieving better performance and scalability.
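
      The experts-int8 scheme is simple in outline: only the MoE expert weights (the bulk of the parameters) are stored in int8 and dequantized just before use, while attention and other shared weights stay in full precision. Below is a minimal numpy sketch of that general idea under our own simplified assumptions; it illustrates the technique, not AI21's actual implementation.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-column int8 quantization: int8 weights plus fp32 scales."""
    scale = np.abs(w).max(axis=0) / 127.0            # one scale per output column
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
attn_w = rng.normal(size=(64, 64)).astype(np.float32)   # shared weights: kept fp32
experts = [rng.normal(size=(64, 256)).astype(np.float32) for _ in range(8)]

quantized = [quantize_int8(w) for w in experts]          # ~4x smaller expert storage

# At inference time, dequantize only the expert the router selected.
x = rng.normal(size=(1, 64)).astype(np.float32)
q, s = quantized[3]
y = x @ dequantize(q, s)
print("max abs error vs fp32:", np.abs(y - x @ experts[3]).max())
```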

    • Microsoft Phi models data quality: Microsoft's Phi series of models have exceptional data quality and are intentionally limited in size to ensure high performance. The series offers a 3.8 billion parameter model with an unusually large context window, suitable for edge devices, while smaller models are being developed through pruning and distillation.

      Microsoft's Phi series of models, particularly the smaller ones, stand out due to their exceptional data quality. Microsoft is intentionally limiting model sizes to ensure high performance, while larger models could potentially achieve even better performance with more compute and data. Additionally, the Phi series offers a 3.8 billion parameter model with a 128,000-token context window, which is significantly larger than other models in its class, making it suitable for edge device deployment. Another trend is the development of smaller models through pruning and distillation, as demonstrated by NVIDIA's Llama-3.1-Minitron 4B and the open-source Dracarys models. These advancements contribute to the growing momentum towards smaller, more efficient models. The research report "Can AI Scaling Continue Through 2030?" by Epoch AI highlights the exponential growth in AI model scaling and compute usage, posing questions about the sustainability and future of AI research.
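
      The pruning half of that recipe can be sketched quickly: score each hidden unit's importance, drop the weakest, then (in the distillation step, omitted here) retrain the smaller network against the original's outputs. The toy example below uses an L2-norm importance heuristic we chose for simplicity; Minitron itself uses activation-based importance estimates, so treat this as a generic illustration.

```python
import numpy as np

def prune_neurons(w_in, w_out, keep_frac=0.5):
    """Drop the lowest-importance hidden neurons of a 2-layer MLP block.

    Importance here is the L2 norm of each neuron's input weights, a common
    structured-pruning heuristic (a simplified stand-in for Minitron's
    activation-based scores).
    """
    importance = np.linalg.norm(w_in, axis=0)      # one score per hidden unit
    k = int(len(importance) * keep_frac)
    keep = np.argsort(importance)[-k:]             # indices of the strongest units
    return w_in[:, keep], w_out[keep, :]

rng = np.random.default_rng(0)
w1 = rng.normal(size=(128, 512))    # input -> hidden
w2 = rng.normal(size=(512, 128))    # hidden -> output
w1_small, w2_small = prune_neurons(w1, w2, keep_frac=0.25)
print(w1_small.shape, w2_small.shape)   # (128, 128) (128, 128): 4x fewer hidden units
```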

    • AI scaling constraints: Despite the projected 4x year-over-year growth rate in AI compute budgets, potential constraints like power requirements, chip production, data availability, and latency could limit the ability to scale AI systems beyond current expectations.

      The current trajectory of AI scaling, specifically compute budgets, is expected to continue at a 4x year-over-year growth rate until around 2030. This would result in systems trained with roughly 10,000 times the compute of GPT-4. However, there are potential constraints that could limit this growth: power requirements, chip production, data availability, and latency. Power consumption for the largest training runs is projected to reach one to five gigawatts, a significant increase from current levels. Chip production could be a bottleneck due to the limiting factor of packaging technology. Data availability is uncertain, with potential solutions like synthetic data and multimodal data. Latency, the time it takes for data to propagate through a model, places a hard boundary on the amount of compute that can be poured into a model based on its size. These constraints could impact the ability to scale AI systems beyond current expectations.
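
      The headline figure is simple compounding: at 4x per year, six years of growth gives 4^6 ≈ 4,096x, and roughly six to seven years clears the ~10,000x mark, assuming a 2023-2024 GPT-4-era baseline. A quick check:

```python
growth_per_year = 4
for years in range(1, 8):
    print(f"{years} years: {growth_per_year ** years:,}x GPT-4-era compute")
# 6 years -> 4,096x and 7 years -> 16,384x, so ~10,000x lands around 2030.
```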

    • LLMs and AGI: Advancements in LLMs and the development of agentic LLMs, through techniques like guided Monte Carlo tree search, self-critique, and iterative fine-tuning, are significant steps towards AGI that effectively utilize large-scale computation and drive the ongoing race forward.

      The ongoing advancements in large language models (LLMs) and the development of agentic LLMs are significant steps towards achieving artificial general intelligence (AGI). The scaling laws in machine learning have shown a reliable and predictable increase in next-word prediction accuracy as more compute and data are fed to models, leading to substantial investments from tech giants like Microsoft. The latest research from AGI company MultiOn, in collaboration with Stanford, proposes a framework combining guided Monte Carlo tree search, self-critique, and iterative fine-tuning to transform an LLM into an agent that can perform tasks independently. This framework, which improves agent performance significantly, is a step towards agent-first architectures and a substantial capability leap. The success of these techniques lies in their ability to effectively utilize large-scale computation, as argued in Richard Sutton's essay "The Bitter Lesson." These advancements, while expensive, hold the potential for significant returns and are driving the ongoing race towards AGI.
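
      To make the shape of that framework concrete, here is a heavily simplified, self-contained sketch: a stubbed "LLM" proposes actions, UCB-style tree search decides which branch to expand, a self-critique score is backpropagated through the tree, and the best and worst branches form a preference pair that fine-tuning would consume. Every function and name below is our own stand-in for illustration, not MultiOn's Agent Q implementation.

```python
import math, random

def propose_actions(state):
    """Stand-in for sampling candidate actions from an LLM (hypothetical)."""
    return [state + a for a in ("A", "B", "C")]

def self_critique(state):
    """Stand-in for an LLM scoring its own trajectory (hypothetical)."""
    return random.random()

class Node:
    def __init__(self, state):
        self.state, self.children = state, {}
        self.visits, self.value = 0, 0.0

def ucb(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")      # always try unvisited branches first
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def search(root, n_rollouts=50, depth=3):
    for _ in range(n_rollouts):
        node, path = root, [root]
        for _ in range(depth):                         # selection + expansion
            if not node.children:
                node.children = {s: Node(s) for s in propose_actions(node.state)}
            parent = node
            node = max(node.children.values(), key=lambda ch: ucb(parent, ch))
            path.append(node)
        reward = self_critique(node.state)             # evaluate the leaf trajectory
        for n in path:                                 # backpropagate the critique score
            n.visits += 1
            n.value += reward
    # Best and worst first moves become a preference pair for fine-tuning.
    ranked = sorted(root.children.values(), key=lambda ch: ch.value / max(ch.visits, 1))
    return ranked[-1].state, ranked[0].state

random.seed(0)
chosen, rejected = search(Node(""))
print("preferred:", chosen, "| rejected:", rejected)
```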

    • Deep learning distillation: Deep learning models, specifically transformers, are being converted to smaller, more trainable state-space models through a process called distillation, making them more accessible and effective while retaining their capabilities.

      The use of deep learning models, specifically those based on Monte Carlo tree search and agentification, is a current trend in AI research due to their ability to learn and adapt through online experiences. These models require large amounts of compute, and as compute continues to get cheaper, they become more accessible and effective. The transformer to SSM (State-Space Model) paper discussed the conversion of transformer models to SSMs, resulting in smaller, more trainable models. This process, called distillation, allows for the training of large transformer models and the transfer of their capabilities to smaller, more efficient SSMs. The paper's success demonstrates the potential of this approach and its strategic importance in the field of AI research.
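
      The distillation objective at the heart of that transfer is simple: train the student (here, an SSM) to match the teacher transformer's output distribution. A minimal numpy illustration of that loss, using random stand-in logits rather than real model outputs:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)    # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits / temperature)
    q = softmax(student_logits / temperature)
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 32000))        # e.g. transformer logits over a vocab
student = teacher + rng.normal(scale=0.1, size=teacher.shape)   # SSM mid-training
print("loss, near-match:", distillation_loss(teacher, student))
print("loss, random    :", distillation_loss(teacher, rng.normal(size=teacher.shape)))
```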

    • AI Learning Dynamics: AI learning can decline as models tackle new tasks sequentially, termed 'loss of plasticity.' Researchers suggest refreshing certain model neurons to enhance learning efficiency. This is vital amid ongoing discussions on AI regulations, as illustrated by recent changes to California's SB 1047 bill.

      Recent research into AI models highlights significant findings, particularly the concept of 'loss of plasticity.' This phenomenon occurs when models trained sequentially on different tasks become less capable of learning new tasks as they progress. Unlike catastrophic forgetting, which involves losing previous knowledge, loss of plasticity indicates a decline in the ability to learn new tasks effectively. Researchers propose an innovative solution: periodically re-initializing certain neurons in the model, which helps maintain learning efficiency. Notably, this work was co-authored by Richard Sutton, a prominent figure in reinforcement learning. Additionally, the discussion touches on a California bill regulating AI, indicating ongoing policy considerations in the field. Together, these elements emphasize the importance of understanding and improving AI learning processes amidst evolving regulatory landscapes.
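
      A minimal sketch of that re-initialization idea follows. The paper's continual backprop method maintains a running utility estimate per unit; the utility measure and thresholds below are simplified stand-ins we chose for illustration, not the authors' exact formulation.

```python
import numpy as np

def reinit_low_utility(w_in, w_out, activations, reinit_frac=0.01, rng=None):
    """Re-initialize the least-useful hidden units of a layer.

    Utility here is mean |activation| times outgoing weight magnitude; the
    lowest-utility units get fresh random input weights and zeroed output
    weights, restoring their ability to learn without disturbing the rest
    of the network.
    """
    rng = rng or np.random.default_rng()
    utility = np.abs(activations).mean(axis=0) * np.abs(w_out).sum(axis=1)
    k = max(1, int(len(utility) * reinit_frac))
    dead = np.argsort(utility)[:k]                    # lowest-utility units
    w_in[:, dead] = rng.normal(scale=0.1, size=(w_in.shape[0], k))
    w_out[dead, :] = 0.0                              # no downstream effect at first
    return dead

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(32, 256)), rng.normal(size=(256, 10))
acts = np.maximum(0, rng.normal(size=(100, 256)))     # ReLU activations on a batch
print("reinitialized units:", reinit_low_utility(w1, w2, acts, 0.02, rng))
```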

    • AI regulation in California: California's SB 1047 bill faces opposition from major AI companies, with critics arguing it may not pose a significant regulatory burden for them due to its limited scope and lenient requirements.

      The recently discussed bill, SB 1047 in California, aims to regulate large AI labs, but it has faced significant pushback from companies like Meta, OpenAI, and Anthropic. The bill will create a new government agency and board, but it also reduces requirements for companies, such as no longer needing to submit certifications of safety test results under penalty of perjury. Meta will be particularly happy about the bill's protection of open-source fine-tuned models. However, critics argue that the bill only applies to a select few players with high-cost models and may not pose a significant regulatory burden for them. Companies like OpenAI have raised concerns about innovation and regulatory overreach. Anthropic, on the other hand, has given a qualified endorsement, emphasizing the need for a regulatory framework adaptable to rapid change in the field. The debate continues as the bill heads towards a final vote on California's Assembly floor.

    • Personhood credentials, sparse autoencoders: The proposal of personhood credentials to distinguish human and AI interactions online raises concerns about relying on a single centralized issuer. Sparse autoencoder features can be split further by training a second SAE on top, yielding a meta sparse autoencoder, with implications for interpretability and deceptive-alignment detection.

      As we advance in artificial intelligence (AI), distinguishing between human and AI interactions online will become increasingly challenging. To address this issue, the concept of personhood credentials has been proposed. These digital credentials, issued by trusted institutions, prove that users are real people without revealing their identity to online services. However, the paper raises concerns about relying on a single centralized issuer of these credentials and suggests exploring techniques for multiple issuers to reduce risk and maintain privacy. Another technical work discusses the nature of sparse autoencoders (SAEs) and their features. Contrary to the assumption that these features are atomic and cannot be further divided, the paper argues that you can train another SAE on top of an existing one, resulting in a meta sparse autoencoder that splits up the features more. This finding has implications for interpretability techniques used to understand large models and detect deceptive alignment. Both papers contribute valuable insights to ongoing discussions about AI, privacy, and the importance of maintaining a free market and democratic approach to these technologies.
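
      To make the meta-SAE construction concrete: the decoder directions of a trained SAE are themselves treated as a dataset, and a second SAE is trained to decompose them into finer-grained features. The toy forward pass below uses random weights purely to show the shapes and wiring; real SAEs are trained on model activations, and all names here are our own.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sae_encode(x, w_enc, b_enc):
    """One SAE encoder: sparse features = ReLU(x @ W_enc + b)."""
    return relu(x @ w_enc + b_enc)

rng = np.random.default_rng(0)
d_model, n_feat, n_meta = 64, 512, 2048

# Base SAE, nominally trained on model activations (weights random here).
w_enc1, b1 = rng.normal(size=(d_model, n_feat)) * 0.1, -0.1 * np.ones(n_feat)
w_dec1 = rng.normal(size=(n_feat, d_model)) * 0.1    # one direction per feature

# Meta-SAE: its training data is the base SAE's decoder directions, splitting
# each supposedly "atomic" feature into finer-grained meta-features.
w_enc2, b2 = rng.normal(size=(d_model, n_meta)) * 0.1, -0.1 * np.ones(n_meta)

feature_directions = w_dec1                           # (n_feat, d_model)
meta_features = sae_encode(feature_directions, w_enc2, b2)
print("active meta-features per base feature:",
      (meta_features > 0).sum(axis=1).mean())
```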

    • AI interpretability and control: Researchers are compressing and abstracting neural activations to improve interpretability and to control how atomic extracted concepts are, while legal developments around AI and copyright infringement emphasize the need for clear guidelines and regulations.

      Researchers are finding ways to compress and abstract neural activations in deep learning models to improve interpretability and to control how atomic the extracted concepts are. This process involves using smaller lists of numbers to force abstractions to be more atomic, giving researchers more control over the degree of interpretability. On the legal front, there have been more developments in lawsuits against AI companies for copyright infringement. The definition of induced infringement, which applies when a company provides instructions for using their product in a way that violates a copyright, could pose a problem for AI models that generate copyrighted material when prompted. The case against Midjourney, which used a list of 4,700 artists' names to generate works in their style without their knowledge or approval, is moving forward. These developments highlight the need for clear guidelines and regulations around AI and intellectual property.

    Recent Episodes from Last Week in AI

    #181 - Google Chatbots, Cerebras vs Nvidia, AI Doom, ElevenLabs Controversy

    Our 181st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov and Jeremie Harris

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.

    - Microsoft's Inflection usage caps for Pi, and new AI inference services by Cerebras Systems competing with Nvidia.

    - Biases in AI, prompt leak attacks, and transparency in models, plus distributed training optimizations, including the 'DisTrO' optimizer.

    - AI regulation discussions including California's SB 1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

    Timestamps + Links:

    Last Week in AI
    September 15, 2024

    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047

    Our 180th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    • Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
    • Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results. 
    • Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
    • Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

    Timestamps + Links:

    Last Week in AI
    September 03, 2024

    #179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist

    Our 179th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    - Grok 2's beta release features new image generation using Black Forest Labs' tech.

    - Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

    - Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

    - Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

    Timestamps + Links:

    Last Week in AI
    August 20, 2024

    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk

    Our 178th episode with a summary and discussion of last week's big AI news!

    NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.

    - OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.

    - Rapid advancements in humanoid robotics, exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.

    - Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

    Timestamps + Links:

    Last Week in AI
    August 16, 2024

    #177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2

    Our 177th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    In this episode, hosts Andrey Kurenkov and Jon Krohn dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless cars rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.ai CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 11, 2024

    #176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2

    Our 176th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

     

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 03, 2024

    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts

    Our 175th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements including OpenAI's release of GPT-4o Mini and Mistral's open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models like Haiper 1.5, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI's Prover-Verifier Games initiative.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 25, 2024

    #174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues

    Our 174th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for xAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 17, 2024

    #173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court

    Our 173rd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.  

    Timestamps + links:

    Last Week in AI
    July 07, 2024

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024