
    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    May 05, 2024

    Podcast Summary

    • AI-powered development environment: GitHub's new AI-powered development workspace assists with code completion, sets up projects, runs and tests code, and covers the entire development process, reflecting the need for more comprehensive AI-assisted tools in software development.

      AI technology continues to evolve and integrate into various aspects of our lives, particularly software development. This past week, GitHub released a technical preview of Copilot Workspace, a new AI-powered development environment that expands on its existing AI code-completion tool. The new workspace not only assists with code completion but also sets up projects, runs and tests code, and covers the entire development process. This shift toward more comprehensive AI-powered workspaces reflects the fact that software development involves far more than writing lines of code. Microsoft and GitHub's focus here is part of a wider trajectory toward agent-like designs that support developers by doing more than completing code based on what's already been written. The move toward AI-assisted development environments is a significant step in automating software development workloads, and a sign that AI is far from plateauing: it continues to find new applications across industries.

    • AI moving beyond chat interfaces to agentization and problem-solving: GitHub's new AI-supported tools automate the entire development process, and China's Vidu produces HD videos from prompts, demonstrating AI's potential to solve complex problems and automate workflows.

      The latest advancements in AI are moving beyond chat interfaces toward agentization and problem-solving capabilities. This is particularly evident in the full-stack automation of the development lifecycle demonstrated by GitHub's new AI-supported tools, which provide suggestions and assistance throughout the development process, from project planning to implementation and testing. This is a significant step toward automating the entire development process and shows AI's potential to fulfill specific tasks within a structured workflow. Another interesting development is the unveiling of Vidu, an AI tool from Shengshu Technology and Tsinghua University in China, which can produce HD-resolution videos up to 16 seconds long from prompts. While it may not be as capable as Sora, which can produce videos up to 60 seconds long, it demonstrates China's impressive domestic capacity in AI technology, though the usual caveats apply when evaluating claims from Chinese companies. Overall, these developments show AI making significant strides beyond chat interfaces, toward solving complex problems and automating entire workflows, and it will be interesting to see how they evolve over the year.

    • China's New AI Model Vidu: High-Resolution Videos and Physics Simulation. Vidu generates high-resolution videos and simulates physics, challenging advanced AI models like Sora; however, its significant hardware requirements make it costly to run.

      A new AI model named Vidu, developed in China, can generate high-resolution 1080p videos and simulate physics, positioning it as a potential competitor to advanced models like OpenAI's Sora. The hardware requirements for running Vidu are significant, demanding multiple advanced GPUs and making it a costly endeavor. The company behind Vidu, though relatively new, boasts a strong pedigree, with founders from top universities and investors from major tech companies. While China may currently be able to keep up with the latest AI developments, export controls could widen the gap. Meanwhile, ChatGPT has introduced a new memory feature for paying subscribers, allowing them to modify and store the model's memory during conversations. The feature, which gives users more control over the model's responses, is now available to ChatGPT Plus subscribers outside of Europe and Korea. The reason for excluding Europe and Korea remains unclear, adding to the intrigue surrounding the regulatory environment for AI.

    • New AI hardware device, The Rabbit R1, receives mixed reviews: the Rabbit R1 faces criticism for its finicky interface, lack of clear benefits, and unreliable features, while Amazon's Q, a new AI-powered assistant for businesses, is now generally available, promising accelerated software development and data analysis.

      The Rabbit R1, a new AI hardware device, is receiving mixed reviews due to its finicky interface, lack of clear benefits, and unreliable features. The device, which is much cheaper than similar AI hardware like the Humane AI Pin, is being compared to an alpha release. Some reviewers suggest that these types of products may become more useful as they mature, but for now it's not worth being an early adopter. There's also a trend emerging in the tech industry where startups ship hardware on the strength of promises about future software updates, essentially offloading risk down the chain to consumers. This resembles the VC process, except that instead of investors taking on the risk, consumers are being asked to bet on the product's potential future value. On a more positive note, Amazon's Q, a generative AI-powered assistant for businesses and developers, is now generally available. The chatbot is designed to accelerate software development by generating, testing, debugging, and implementing code, and to help businesses work with their data by connecting to enterprise data repositories and answering business-related questions.

    • Amazon and Yelp's AI advancements and a new Chinese robot: Amazon focuses on AI for employees, Yelp launches a chatbot for users, and Chinese company Astribot introduces the humanoid-ish robot S1 with impressive capabilities.

      Both Amazon and Yelp are making significant strides in AI, with Amazon focusing on AI-powered apps for its employees and Yelp launching an AI chatbot that helps users find local service professionals. Amazon, although a latecomer, benefits from its vast distribution network and existing enterprise presence. Yelp, on the other hand, is diversifying, potentially feeling the squeeze as generative AI reshapes service-discovery platforms. In the robotics sector, Astribot, a new Chinese company, has developed a humanoid-ish robot, the S1, capable of impressive feats such as handling heavy payloads and fast movements. The robot, expected to be commercially available later this year, adds to the growing wave of humanoid robots. Comparing humanoid robots' performance to that of an adult male is becoming a notable convention, shedding light on the potential capabilities of these machines. That said, the lower half of the S1, which appears stationary in the demo video, raises questions about the robot's mobility and the implications for the humanoid-robot thesis.

    • Tesla's Autopilot System Faces Regulatory Scrutiny: Tesla's Autopilot system is under investigation by the NHTSA over concerns about driver monitoring, potential misuse, and permissive operational design, which could affect the industry's progress toward autonomous vehicles.

      The robotics industry is moving quickly, with companies like SoftBank Robotics, Boston Dynamics, and Tesla making strides toward production-ready systems. However, regulatory scrutiny is intensifying, particularly around Tesla's Autopilot system. The National Highway Traffic Safety Administration (NHTSA) is investigating Tesla's recall of over 2 million cars over concerns about driver monitoring and potential misuse. The NHTSA has previously investigated Autopilot, citing concerns that its permissive operational design and marketing give drivers a false sense of autonomy and lead to complacency. The ongoing investigation and recall suggest that Tesla may face headwinds in demonstrating the safety and reliability of Autopilot. It's an exciting time for robotics, but the industry must navigate regulatory challenges to ensure the safety and effectiveness of these advanced technologies.

    • Tesla's Stock Surges on Baidu Deal, OpenAI Partners with Financial Times: Tesla's stock rose after Elon Musk secured deals with Baidu for FSD technology in China, and OpenAI announced a strategic partnership with the Financial Times covering AI training content and product development.

      Tesla's stock soared after Elon Musk returned from China with reports of deals with tech giant Baidu, potentially allowing Tesla to push forward with Full Self-Driving (FSD) technology in China and leverage the vast data from the 1.7 million Tesla cars deployed there. This comes at a time when Tesla investors have been concerned about Elon Musk's distractions with Twitter and other projects. Meanwhile, OpenAI, a leading artificial intelligence research lab, announced a strategic partnership and licensing agreement with the Financial Times. This deal allows OpenAI to use Financial Times content for training and also aims to deepen the Financial Times' understanding of generative AI for content discovery and developing new AI products and features for readers. These deals highlight the growing importance of data and strategic partnerships in the development and implementation of advanced AI technologies.

    • OpenAI's Strategic Partnerships and Fundraising Efforts: OpenAI is forming strategic partnerships and raising funds to invest in startups, demonstrating its growing influence and value to companies, while Huawei is bypassing US sanctions by forming a consortium to develop high-bandwidth memory in China, highlighting the importance of memory technology in AI development.

      OpenAI is forming strategic partnerships with various organizations, including the Financial Times, and raising money through special purpose vehicles, all while facing copyright lawsuits. These partnerships could provide value to partner companies in the short term, but the long-term implications are uncertain; OpenAI is reportedly in discussions with around a dozen organizations about similar deals. The OpenAI Startup Fund has quietly raised an additional $15 million from unnamed investors, bringing its total to $25 million, which it uses to invest in early-stage startups. Huawei, meanwhile, is forming a consortium with the Chinese government and semiconductor companies to develop high-bandwidth memory (HBM) in China, bypassing US sanctions restricting AI development. HBM is essential for AI and high-performance computing because it increases the speed at which data can travel between processors and memory. OpenAI's partnerships and fundraising demonstrate its growing influence and the value it offers to organizations, while Huawei's HBM project underscores the importance of memory technology in AI development and the efforts under way to bypass US sanctions.

    • Huawei improving chip processing speed, DeepMind's Med-Gemini outperforms on medical benchmarks: Huawei aims to boost processor speed by reducing the distance data must travel between processors and memory, while DeepMind's Med-Gemini excels in medical applications, surpassing human performance on certain tasks and even handling video assessments.

      Huawei is working on improving the speed of its processors by reducing the distance data needs to travel between processors and memory, using the third dimension of the chip. This is a crucial step for Huawei, which badly needs high-bandwidth memory for its Ascend processors, currently a bottleneck. On the research side, DeepMind and Google have released a technical report on Med-Gemini, a version of Gemini fine-tuned for medical applications. Med-Gemini outperforms other models on various medical benchmarks, including gene name extraction, gene location, protein-coding genes, human genome DNA alignment, and question answering in the medical domain. DeepMind emphasizes the importance of long context in medicine, where small details can make a big difference, and Med-Gemini is starting to show promising results, outperforming humans on certain tasks. Another interesting development is Med-Gemini's ability to handle video: it can assess surgery videos and output a judgment of whether the goals of the surgery were achieved. These advances in AI applications, particularly in the medical field, are significant steps toward automating mundane tasks and improving the efficiency and accuracy of medical professionals.

    • AI model's reasoning text may not be crucial to performance: a recent study found that replacing human-readable reasoning text with filler tokens in transformer language models did not hurt performance on certain tasks, suggesting that a model's ability to perform complex tasks may depend more on inference-time computation than on the reasoning text it generates.

      The text generated by AI models during their problem-solving process may not be as crucial to their performance as previously believed. A recent study explored the use of filler tokens, or dots, instead of human-readable reasoning steps in transformer language models. The researchers found that for certain tasks, the performance remained the same when they replaced the reasoning text with filler tokens. This suggests that the model's ability to perform complex tasks might be more reliant on the amount of inference and computing power it can apply, rather than the human-readable reasoning it generates. This discovery challenges the assumption that the apparent reasoning process of AI models is directly tied to their performance. The researchers were careful to select tasks where this approach would be effective and found that the model's performance was comparable whether it generated human-readable reasoning or not. This finding has important implications for understanding the true computations being performed by AI models and the validity of their apparent reasoning processes. It also raises questions about the role of human-readable reasoning in AI systems and whether it is necessary for optimal performance. The study adds to the ongoing conversation about the nature of AI and its capabilities, emphasizing the importance of continued research in this area.
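      To make the comparison concrete, here is a minimal sketch of the two prompt conditions, assuming a hypothetical model object with a generate method (the study itself trained transformers on synthetic tasks rather than prompting an off-the-shelf model):

      # Hedged sketch: chain-of-thought tokens vs. meaningless filler tokens.
      # `model.generate` is an assumed interface, not a specific library call.

      def chain_of_thought_prompt(question: str) -> str:
          # Condition 1: the model emits human-readable intermediate reasoning.
          return f"{question}\nLet's think step by step:"

      def filler_token_prompt(question: str, n_fillers: int = 64) -> str:
          # Condition 2: the intermediate tokens are just dots, so any benefit
          # must come from the extra forward passes, not from readable text.
          return f"{question}\n{'.' * n_fillers}\nAnswer:"

      def compare(model, question: str) -> tuple[str, str]:
          with_reasoning = model.generate(chain_of_thought_prompt(question))
          with_fillers = model.generate(filler_token_prompt(question))
          return with_reasoning, with_fillers

      If the two conditions score the same on a task, the readable reasoning text was not doing the work; the extra token positions, and the computation they buy, were.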

    • Recent research reveals a disconnect between a language model's apparent reasoning and its actual reasoning: the findings challenge assumptions about the reliability of reasoning traces and raise concerns for AI interpretability, safety, and alignment.

      Recent research highlights the disconnection between a language model's apparent reasoning and its actual reasoning. This was demonstrated through experiments with a small language model, which was able to perform additional computations without producing human-interpretable outputs, leading to more accurate answers in specific mathematical question answering tasks. This finding challenges assumptions about the reliability of reasoning traces and raises concerns for AI interpretability, safety, and alignment. Further research in this area could reveal the specific circumstances where this disconnection occurs and how it might impact various applications. Another interesting development is the use of self-training to bootstrap a synthetic training set for teaching large language models to reason about code. This approach allows the models to learn from program execution traces and improve their ability to debug and repair code, leading to significant improvements on certain benchmarks. These findings underscore the importance of rethinking our assumptions about AI reasoning and the need for more sophisticated approaches to understanding and interpreting AI behavior. The potential implications for AI development and application are vast, and ongoing research in this area is essential for ensuring that AI systems are safe, reliable, and aligned with human values.
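      As a rough illustration of the execution-trace idea mentioned above, the sketch below pairs a candidate program with its real execution trace so a model can learn to propose repairs; the helper name and data format are assumptions, not the paper's actual pipeline:

      import traceback

      def make_training_example(candidate_code: str) -> dict:
          # Run the candidate program and capture what actually happened.
          # A real pipeline would sandbox this execution.
          try:
              exec(compile(candidate_code, "<candidate>", "exec"), {})
              trace = "program completed without error"
          except Exception:
              trace = traceback.format_exc()
          # The training target (a repaired program) comes from self-training:
          # the model proposes fixes, and only fixes that actually run are kept.
          return {"prompt": f"Code:\n{candidate_code}\n"
                            f"Execution trace:\n{trace}\nFixed code:"}

      example = make_training_example("def f(x):\n    return x + undefined\nf(1)")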

    • Latest advancements in language models: Microsoft's Copilot, SenseTime's SenseNova 5.0, and Octopus v4 showcase improvements in language-model performance and the ability to handle complex software tasks, while national security concerns around companies like SenseTime add complexity to the discussion.

      The latest language models are pushing the boundaries of what these systems can do, with significant improvements in performance and in the ability to handle complex software-engineering tasks. Microsoft's Copilot announcement exemplifies this trend: these models are being developed to solve whole software problems, not just write individual scripts. SenseTime's new language model, SenseNova 5.0, is another example of this progress, with claims of surpassing GPT-4 on some benchmarks. Recurrent transformers and cloud/on-device collaboration are also emerging trends in this space. However, it's worth noting that companies like SenseTime, which have been the subject of national security concerns, are among those making these advancements, adding an extra layer of complexity to the discussion. Octopus v4, from Nexa AI, is another development worth mentioning: it focuses on cloud/on-device collaboration and uses a graph of language models to coordinate work among specialist sub-workers (a toy sketch of the idea follows below). The open-source universe is also seeing a lot of activity here, with models released on Hugging Face and code on GitHub. Overall, these advancements are expanding the capabilities of language models and raising important questions about their applications and implications.
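      The toy sketch: the registry and keyword router here are invented stand-ins, since Octopus v4's actual coordinator is itself a learned model that selects and calls sub-workers:

      # Invented specialist registry; a real system would hold model handles.
      SPECIALISTS = {
          "math": "hypothetical-math-model",
          "code": "hypothetical-code-model",
          "general": "hypothetical-generalist-model",
      }

      def classify(query: str) -> str:
          # Stand-in for the learned router that picks a node in the model graph.
          q = query.lower()
          if any(w in q for w in ("equation", "integral", "solve")):
              return "math"
          if any(w in q for w in ("python", "function", "bug")):
              return "code"
          return "general"

      def route(query: str) -> str:
          specialist = SPECIALISTS[classify(query)]
          return f"dispatching {query!r} to {specialist}"

      print(route("Solve the equation x^2 = 4"))  # routed to the math specialist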

    • Advancements in language models: specialization, routing, and multi-token prediction. Recent research explores specialization, routing to specialist models, and multi-token prediction as ways to improve open-source language models; multi-token prediction enables parallel prediction and more logically consistent output, potentially addressing planning limitations.

      Open-source language models, while improving, may not match the generality and performance of closed-source models, so specialization and routing to specialist models are likely to persist in the open-source community. Another intriguing development is the idea of training language models to predict multiple tokens at once, which could lead to better performance and inference efficiency. This approach, known as multi-token prediction, was proposed in a recent paper from Meta AI. The researchers found that the method enables parallel prediction and encourages more logical consistency in the model's output. It also raises the possibility of addressing one of the known limitations of language models: their inability to plan ahead and account for what they are about to say. While more exploration is needed, this simple yet effective idea could help improve the logical consistency and planning abilities of language models (see the sketch after this paragraph). On the policy and safety front, a recent study showed that it's possible to manipulate the behavior of large language models by altering their activations. While this is not entirely new, it provides insight into how transformers operate and underscores the importance of continued research in this area. Overall, these ongoing advancements highlight the need for research into specialization, routing, and safety to close the gap between open-source and closed-source models.
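      Returning to multi-token prediction: a compact PyTorch sketch of the idea, with illustrative sizes and a trivial stand-in trunk (the Meta AI paper attaches the extra heads to a full transformer):

      import torch
      import torch.nn as nn

      class MultiTokenPredictor(nn.Module):
          """Shared trunk plus n output heads; head i predicts the token
          i + 1 steps ahead, so several future tokens come out in parallel."""

          def __init__(self, vocab: int = 32000, d_model: int = 512, n_future: int = 4):
              super().__init__()
              self.trunk = nn.Embedding(vocab, d_model)  # stand-in for a transformer
              self.heads = nn.ModuleList(
                  nn.Linear(d_model, vocab) for _ in range(n_future)
              )

          def forward(self, tokens: torch.Tensor) -> list[torch.Tensor]:
              h = self.trunk(tokens)  # (batch, seq, d_model)
              # One trunk pass feeds every head, so the n next-token losses
              # can be computed together during training.
              return [head(h) for head in self.heads]

      model = MultiTokenPredictor()
      logits = model(torch.randint(0, 32000, (2, 16)))
      print([tuple(l.shape) for l in logits])  # four tensors of (2, 16, 32000)

      At inference time the extra heads can also drive self-speculative decoding, which is where much of the reported efficiency gain comes from.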

    • Exploiting the residual stream to jailbreak transformer models: researchers found that refusal behavior in LLMs is mediated by a single direction in the residual stream, which can be removed to bypass refusals without fine-tuning, while ethical concerns remain as Big Tech companies are not fully cooperating on ensuring the safe use of advanced AI.

      A recent research paper demonstrated a way to "jailbreak" transformer models by exploiting the residual stream, the pathway that carries each layer's input forward through the network. The researchers found that refusal behavior is mediated by a single direction in the residual stream: ablating that direction causes a model to comply with requests it would normally reject, with no fine-tuning required, while adding the direction back induces refusals even on harmless requests. They reported high success rates across a range of open models, including Qwen, Gemma, and Llama variants. Meanwhile, an analysis piece from Politico highlights that despite promises from figures like Rishi Sunak to make AI safe, Big Tech companies are not fully cooperating, making it a challenge to ensure the ethical use of these advanced technologies. The paper's findings add to the ongoing conversation about transparency and accountability in AI development, and they are a reminder that once a model's weights are open-sourced, it cannot be fully protected from this kind of manipulation.
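      The core operation, as described, is simple enough to sketch: project the refusal direction out of the residual-stream activations. The direction itself would be estimated separately, by contrasting mean activations on harmful versus harmless prompts; the sizes below are illustrative:

      import torch

      def ablate_direction(resid: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
          # Remove the component of the activations along `direction`:
          # x <- x - (x . d) d, with d a unit vector.
          d = direction / direction.norm()
          return resid - (resid @ d).unsqueeze(-1) * d

      resid = torch.randn(1, 16, 512)   # (batch, seq, d_model), illustrative
      refusal_dir = torch.randn(512)    # would be estimated from contrastive prompts
      cleaned = ablate_direction(resid, refusal_dir)
      # The component along the refusal direction is now numerically zero:
      print((cleaned @ (refusal_dir / refusal_dir.norm())).abs().max())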

    • UK tech companies' resistance to AI safety regulations: the UK's push for AI safety measures faces resistance from tech companies unwilling to comply with multiple overlapping regulations and initiatives, calling the seriousness of their initial commitments into question.

      The negotiations between the UK government and tech companies regarding AI safety are facing resistance from the industry. Despite the UK government's push for safety measures and voluntary commitments from AI companies, some firms are reluctant to comply with different regulations and initiatives, such as pre-release testing. The US, being the base of many AI companies, is a significant factor in this issue. Companies argue that they cannot jump through hoops in every jurisdiction and question the initial commitment's seriousness. The US AI Safety Institute and the UK AI Safety Institute are now discussing a memorandum of understanding to collaborate on testing, but it remains unclear how this will address the concerns of companies having to report and share information with multiple entities. The proliferation of AI safety institutes could potentially lead to an excessive burden on companies if each one comes with unique requirements. The lack of testing mandates in some institutes, like the US AI Safety Institute, also raises questions about the effectiveness of these voluntary commitments.

    • Advancements in AI by Google and DOE: Google DeepMind reportedly grants pre-deployment access to its advanced AI models, the DOE releases reports and launches initiatives to expand AI use in clean energy, and the CHIPS Act incentivizes semiconductor manufacturing for AI advancements.

      There are significant developments in the field of AI, with various institutions and organizations, including Google DeepMind and the Department of Energy (DOE), making strides. Google DeepMind has reportedly granted pre-deployment access to its advanced AI models for safety evaluation, while the DOE has announced new initiatives to strengthen America's leadership in AI and clean energy. The DOE's actions include reports on AI's potential in the clean-energy sector, a new website showcasing AI tools, and the launch of the VoltAIc Initiative to use AI to streamline permitting processes. The CHIPS Act, which aims to incentivize semiconductor manufacturing in the US, has also driven significant investment in the sector. While there are technical considerations and potential risks associated with deploying advanced AI models, these developments underscore the growing importance of AI across sectors, particularly energy and technology.

    • CHIPS Act's Impact vs. AI Safety Summit's Evolution: the CHIPS Act's $39 billion investment has led to over $300 billion in semiconductor-industry investments, while the second AI Safety Summit introduced new concerns and drew fewer attendees than the first.

      The CHIPS Act, with its $39 billion investment, has shown significant leverage in the high-capital-expenditure semiconductor industry, leading to over $300 billion in investments from chip companies and their supply chain partners over the next decade. However, the second Global AI Safety Summit, while important, has seen less impact compared to the first one due to fewer attendees and a shift towards addressing more detailed and complex issues. Despite this, new concerns such as market concentration, energy consumption, and environmental impact are being introduced, making the discussions even more challenging. The CHIPS Act's success in attracting investments demonstrates the potential of government intervention in high-risk, high-reward industries, while the AI Safety Summit's evolving focus highlights the need for more specific and nuanced discussions on AI safety.

    • Intersection of AI and National Security: Collaboration and Clarification. The formation of the AI Safety and Security Board addresses national security risks from AI-driven attacks, but diverse perspectives and ongoing dialogue remain crucial for assessing potential risks and developing policies.

      The intersection of artificial intelligence (AI) and national security is a complex issue requiring clarity, focus, and agreement. The formation of the AI Safety and Security Board, composed of high-profile tech executives, is a response to the executive order on the safe, secure, and trustworthy development and use of artificial intelligence. The board will work with the Department of Homeland Security to develop strategies for protecting critical infrastructure from AI-driven attacks. However, it's essential to consider diverse perspectives when assessing potential risks and developing policies. In the realm of AI-generated content, creating compelling and cohesive stories with AI remains a challenging task. "Air Head", the short film made with OpenAI's Sora and showcasing a person with a balloon for a head, required hundreds of generations and significant post-processing work, including color grading, retiming, and VFX. This highlights that human involvement and talent are still crucial in the production of AI-generated content. The complexity of these issues underscores the importance of ongoing dialogue and collaboration among stakeholders, including tech companies, government agencies, and experts in the field.

    • Navigating complexities in the AI industry: OpenAI's copyright lawsuit and Sora's video involvement. OpenAI faces a copyright lawsuit from newspaper publishers that could affect its ability to surface their content, while the Sora-made short film "Air Head" required a large amount of generated source material, leaving the efficiency gains unclear.

      While using Sora to create the short film "Air Head" generated a significant amount of source material, the efficiency gain over producing the video without it is not clearly defined. Patrick Cederberg, the post-production lead on Air Head, mentioned a 300-to-1 ratio of footage generated to footage used, and it's unclear whether that represents a definitive improvement over conventional alternatives. Additionally, eight newspaper publishers have sued OpenAI for copyright infringement, alleging unauthorized use of their articles in training its language models. The outcome of this lawsuit could affect OpenAI's ability to surface results from these publishers' websites and potentially undermine its product vision. The media industry is bifurcating into those who want to protect their intellectual property and those who want to partner, making the long-term consequences for OpenAI hard to predict. Overall, these developments add to the complexities and challenges facing the AI industry, particularly around data usage and copyright; as a key player, OpenAI will need to navigate these issues carefully to set precedents that support its vision and its competitive position.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late, got delayed in editing it -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024