
    #123 - Delete your info from ChatGPT, Google’s AI plans, EU act targets OSS, PaLM 2, writers’ strike

    May 21, 2023

    Podcast Summary

    • Exploring the Latest News and Developments in AI. Discussed the latest news, shared the hosts' backgrounds, covered tools, VCs, research, policy, and AI-generated media and art, and acknowledged listener feedback.

      The world of AI is vast and diverse, with applications and research spanning robotics, computer vision, machine learning, and even creative writing. During this episode of "Last Week in AI," hosts Andrey Kurenkov and Daniel Bashir discussed the latest news in AI, sharing their experiences from their respective roles in the industry. Daniel, host of The Gradient podcast, has interviewed numerous experts in the field, from academia and industry to art, and shared his favorite interviews, including Yann LeCun and Chris Manning. Andrey, on the other hand, holds a robotics and computer vision PhD from Stanford and now works at an AI startup. They covered topics such as tools and apps, VCs and costs, research and advancements, policy and safety, and AI-generated media and art. They also acknowledged the importance of listener feedback, with a special shout-out to Seth Weiderberg for his insightful email about the ongoing writers' strike. Stay tuned for more discussions on the latest developments in AI.

    • ChatGPT's New Plugins: A Game Changer in AI. ChatGPT's new plugins offer internet access, making it more powerful and versatile, transforming it from a simple Q&A machine into a full-fledged platform, opening new possibilities for AI applications and interfaces, while also raising concerns about competitive pressures and potential misuse.

      OpenAI's recent rollout of over 70 ChatGPT plugins, now accessible to all ChatGPT Plus subscribers, is a significant development in the world of AI. These plugins, which include internet access, enable ChatGPT to automatically use appropriate tools to complete tasks, making it more powerful and versatile. With the ability to retrieve facts, generate prompts, and perform various functions, ChatGPT is evolving from a simple question-answering machine into a full-fledged platform. This paradigm shift in how we use computers and the internet is a big deal, as it opens up new possibilities for AI applications and interfaces. Additionally, opening these plugins to a wider audience signals a trend toward greater accessibility and integration of AI technology into everyday life. However, concerns regarding competitive pressures and the potential misuse of these tools will need to be addressed as this technology continues to evolve.

    • Growing concerns over privacy and potential misuse of personal data by large language models. Users can request data removal, but it may not entirely erase their information, as models are pre-trained on vast amounts of data. Italy temporarily banned ChatGPT, EU regulators are investigating, and companies are implementing their own policies. Language models should be used with caution, and personal data may still be accessible.

      There is growing concern over privacy and the potential misuse of personal information by large language models like ChatGPT, leading to increased scrutiny and regulation. Users can request the removal of their personal data from OpenAI systems, but doing so may not entirely erase the information, as models have been pre-trained on vast amounts of internet data. Italy temporarily banned ChatGPT, and other EU countries are investigating its privacy implications. Samsung has banned employees from using generative AI tools, and other companies are implementing their own policies regarding employee use. It's important to treat language models like ChatGPT with caution: their outputs should not be taken as fact, and personal data may still be accessible with sufficient effort. OpenAI's personal data removal request form requires users to provide information about themselves and the data subject, and while it updates responses, it does not necessarily erase the information from the training data. Users can also toggle off chat history and training in the settings to limit the data collected. At Google I/O, AI was a major focus, with Google announcing new AI tools and features. However, privacy concerns surrounding language models like ChatGPT cast a shadow over the event.

    • Google integrates AI into 25 products using new language model PaLM 2. Google showcased its commitment to AI by integrating PaLM 2 into over 25 products, allowing users to generate text with the model, and made its chatbot, Bard, free for everyone. Apple introduced a new Personal Voice feature for accessibility.

      Google's recent I/O event showcased the tech giant's significant push toward integrating AI into its various products. The main announcements included the integration of the new language model, PaLM 2, into over 25 Google products such as Maps, Docs, Gmail, and Sheets. This means users can now write job descriptions in Google Docs, for instance, and have PaLM 2 generate the text. Google also opened up access to its chatbot, Bard, making it free for everyone. This move follows Microsoft's lead in integrating language models into every Microsoft product. The event signaled Google's commitment to AI and its competition with other tech companies. Additionally, Google's Bard will soon allow users to prompt it with images, adding another dimension to its capabilities. The race dynamics in the industry, with Google responding to competitors like GPT-4, also played a role in this push. However, concerns about the potential misuses and risks of these giant models remain. Apple also made an accessibility update with its new Personal Voice feature, which can create a voice that sounds like the user or a loved one from about 15 minutes of recorded audio, providing an intriguing accessibility option. Overall, these tech companies are racing to integrate AI into their products, addressing accessibility needs, and staying competitive in the industry.

    • AI advancements and potential misuse. AI text-to-speech could simulate emotion convincingly for nefarious purposes, while AI-generated images may be misused and require identification tools. Amazon plans to add ChatGPT-style search, and AI model debugging tools are available, but the cost of generative AI is projected to increase significantly.

      Technology advancements in AI, specifically in text and image generation, are raising concerns about potential misuse and increasing costs. The Personal Voice feature in text-to-speech technology could be used for nefarious purposes if it can simulate emotion convincingly. Google has launched a tool to help identify AI-generated images, addressing concerns about their potential misuse. Amazon is reportedly planning to add ChatGPT-style search to its online store. In the world of AI tooling, Weights & Biases has introduced a new LLM debugging tool for AI model developers. However, the cost of generative AI is projected to increase significantly, potentially doubling the entire cloud infrastructure cost within five years. These advancements come with challenges and potential risks that need to be addressed.

    • Exploring new ways to reduce compute requirements and address privacy concerns in the language model business. Worldcoin, backed by OpenAI CEO Sam Altman, uses cryptocurrency and iris scanning to build a digital identity network as AI makes faking identities easier. OpenAI confirmed a data breach, underscoring the importance of data security. Startups building an AI relationship coach and other products successfully raised funds, showcasing opportunities in the industry.

      Companies in the language model business are under pressure to reduce compute requirements due to the high cost of generating outputs, while also addressing privacy concerns as AI use grows. Worldcoin, a separate venture backed by OpenAI CEO Sam Altman, is exploring the use of cryptocurrency and iris-scanning technology to create a digital identity network as concerns grow about AI's ability to fake identities. Meanwhile, OpenAI confirmed a data breach affecting approximately 1.2% of ChatGPT Plus subscribers, exposing sensitive information, and emphasized the importance of securing online data. A startup building an AI relationship coach raised a significant amount of funding, though it withheld the total amount, and faces potential risks. Another AI startup, Rewind, successfully raised $12 million in a funding round at a $350 million valuation, despite the declining startup investment environment. These developments highlight the ongoing challenges and opportunities in the AI industry.

    • Founder's unconventional fundraising approach, Hippocratic AI's $50 million seed round, EU AI Act's potential impact on open source software. A founder took an unconventional approach to fundraising by asking VCs for their best offers instead of pitching. Hippocratic AI raised $50 million in seed funding, emphasizing safety and accessibility in healthcare. The EU AI Act, with its broad jurisdiction and potential restrictions, could impact open source software development and innovation.

      The fundraising landscape can be unconventional, as seen in the case of a founder who asked VCs for their best offers instead of making a pitch. Meanwhile, Hippocratic AI, a startup building a healthcare-focused large language model, raised a significant $50 million in seed funding, emphasizing safety and accessibility in healthcare. However, a recent development in the EU AI Act could potentially hinder open source development of large language models, posing challenges for smaller players and innovation. The Act, which has raised concerns for its broad jurisdiction and potential restrictions, could significantly impact the open source community and the AI industry as a whole. It remains to be seen how these developments will unfold and what implications they may have.

    • Generative AI Startups Raising Funds in Q1. New generative AI startups, including those developing open source models and tools, raised significant funds in Q1. Google also released a new state-of-the-art language model.

      The generative AI landscape is seeing significant investment, with startups raising a substantial amount of funds in Q1 of this year. Despite the presence of established players like OpenAI and Anthropic, VCs are not shying away from untested players. One such startup, which is developing open source generative AI models, aims to help organizations incorporate AI into their production applications. It has raised $20 million in a seed round led by Lux Capital, with notable angel investors including Scott Banister, an early PayPal backer, and Jeff Hammerbacher, a founding employee of Cloudera. Another company, Stability AI, is releasing open source tools, including an open source text-to-animation tool. While the animation quality is currently rough, it's an exciting development in the field of video synthesis, which is still unsolved. Additionally, there's a Python library called PandasAI, which adds generative AI capabilities to the popular data analysis and manipulation tool, pandas. It allows users to ask questions about their data in natural language and get answers back in the form of pandas DataFrames, as sketched below; this makes data analysis more accessible and user-friendly. In the research and advancements section, Google released a new state-of-the-art language model, PaLM 2, which is generating buzz in the field. Overall, the generative AI space is seeing rapid growth and innovation, with various players making significant strides in the field.
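
      As a concrete example, the PandasAI workflow is only a few lines. The sketch below follows the library's early (2023) interface as documented at the time; class names may have changed in later releases, and the API key is a placeholder:

      import pandas as pd
      from pandasai import PandasAI
      from pandasai.llm.openai import OpenAI

      # A small dataframe to query in natural language
      df = pd.DataFrame({
          "country": ["United States", "Japan", "Germany", "Kenya"],
          "gdp_trillions": [25.5, 4.2, 4.1, 0.1],
      })

      # PandasAI wraps an LLM backend (here OpenAI) and translates the
      # natural-language question into pandas operations run on the dataframe.
      llm = OpenAI(api_token="YOUR_API_KEY")  # placeholder key
      pandas_ai = PandasAI(llm)
      answer = pandas_ai.run(df, prompt="Which countries have a GDP above 2 trillion?")
      print(answer)  # expected: United States, Japan, Germany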

    • Advancements in PaLM 2: Multilingual data, compute scaling, and pre-training objectives. PaLM 2, a new transformer-based model, features improvements in multilingual data, compute-optimal scaling, and pre-training objectives, leading to significant performance gains and a smaller, faster model compared to its predecessor.

      The new model, PaLM 2, showcases advancements in multilingual and diverse pre-training data, compute-optimal scaling, and the use of different pre-training objectives. These techniques have led to significant improvements in model performance while making the model smaller, more efficient, and faster than its predecessor, PaLM. The model, speculated to be around 300 billion parameters, is a transformer-based model that draws on a diverse set of research advances, including compute-optimal scaling ideas inspired by Chinchilla (illustrated in the sketch below) and improvements in data sets. The use of a mixture of pre-training objectives allows the model to understand different aspects of language. Despite the paper offering few comparisons to GPT-4, PaLM 2 is competitive and, in some cases, outperforms it on certain NLP tasks. Microsoft's research, meanwhile, focuses on automatic prompt optimization, which involves searching for and selecting new candidate prompts to improve model performance through a selection process, much like evolutionary optimization. These advancements demonstrate the ongoing efforts to make language models more effective and efficient.
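
      Compute-optimal scaling can be made concrete with a back-of-the-envelope calculation. The sketch below applies the widely cited Chinchilla rule of thumb (training compute C ≈ 6·N·D FLOPs, with roughly 20 training tokens per parameter); it illustrates the scaling idea only and is not PaLM 2's actual, undisclosed recipe:

      import math

      def chinchilla_optimal(compute_flops):
          """Rough compute-optimal split under C = 6*N*D with D = 20*N.

          Substituting D = 20*N gives C = 120*N**2, so N = sqrt(C / 120).
          Illustrative only; real scaling studies fit these constants empirically.
          """
          n_params = math.sqrt(compute_flops / 120)
          n_tokens = 20 * n_params
          return n_params, n_tokens

      # Chinchilla's reported training budget of ~5.76e23 FLOPs roughly recovers
      # its published 70B-parameter / 1.4T-token configuration.
      n, d = chinchilla_optimal(5.76e23)
      print(f"params ~ {n / 1e9:.0f}B, tokens ~ {d / 1e12:.1f}T")  # params ~ 69B, tokens ~ 1.4T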

    • Optimizing LLM performance with prompt generation using a beam search algorithm. Also covered: a paper on the potential threats of AI to human health and existence, highlighting the mainstreaming of AI safety discussions across communities.

      Prompt optimization using a beam search algorithm is an effective way to improve LLM performance by iteratively generating and selecting new candidate prompts. This optimization process is reminiscent of exploratory, evolutionary algorithms, allowing for incremental improvements and exploration over multiple prompt candidates (a schematic version is sketched below). Rather than true backpropagation, the process uses natural-language feedback about a prompt's failures as a stand-in for gradient information, which helps generate targeted new prompts instead of making random changes to long prompts. Separately, the paper "Threats by Artificial Intelligence to Human Health and Human Existence," published in BMJ Global Health, highlights the potential threats of AI to human health and existence, including control and manipulation, dehumanizing lethal weapon capacity, and rendering human labor obsolete. The paper's appearance in a journal focused on global health demonstrates the increasing mainstreaming of AI safety concerns in communities beyond computer science. Its explicit examination of specific threats to human health and existence is a valuable contribution to the ongoing discussion on AI safety. However, the concept of self-improving AGI remains a topic of debate, and it's essential to consider the potential implications and limitations of this technology.
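
      In spirit, the search loop looks like the sketch below. This is a schematic reconstruction rather than the paper's code: critique, rewrite, and score are caller-supplied stand-ins for the LLM calls and dev-set evaluation the method relies on.

      def optimize_prompt(seed_prompt, dev_set, critique, rewrite, score,
                          beam_width=4, iterations=5, n_children=3):
          """Beam search over prompts, in the style of automatic prompt optimization.

          critique(prompt, dev_set)  -> natural-language criticism of the prompt
                                        based on its failures (a "textual gradient")
          rewrite(prompt, criticism) -> a new candidate prompt applying the criticism
          score(prompt, dev_set)     -> accuracy of the prompt on a held-out dev set
          """
          beam = [seed_prompt]
          for _ in range(iterations):
              candidates = list(beam)  # parents remain eligible for selection
              for prompt in beam:
                  criticism = critique(prompt, dev_set)   # "gradient": what went wrong
                  for _ in range(n_children):             # "step": apply the feedback
                      candidates.append(rewrite(prompt, criticism))
              # Selection, much like survivors in an evolutionary algorithm:
              # keep only the top-scoring prompts for the next round.
              beam = sorted(candidates, key=lambda p: score(p, dev_set),
                            reverse=True)[:beam_width]
          return beam[0]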

    • Exploring new methods for controlling and steering AI models. Recent papers introduce new techniques for modifying language model outputs and improving image-text scaling, offering potential for reducing fine-tuning needs and improving safety and reliability.

      The field of AI safety and AI existential risk is gaining traction as more communities and backgrounds converge on this once-niche topic. This conversation highlighted the unique perspectives brought by individuals from different fields, such as computer science, healthcare, and human rights. On the technical side, a recent post titled "Steering GPT-2-XL by adding an activation vector" introduced an interesting method for modifying language model outputs by adding activation vectors at inference time. This approach could potentially reduce the need for fine-tuning or prompt modification, offering a powerful mechanism for steering models in certain ways. Other work on task vectors demonstrated that subtracting a model's pre-trained weights from its fine-tuned weights yields a vector that can be added, scaled, or negated to shift model behavior in predictable ways (a minimal version is sketched below). This finding could have significant implications for safety and reliability, especially as language model deployment becomes more widespread. Furthermore, an inverse scaling law for CLIP training was discussed, revealing that reducing image/text token length matters significantly for the quality of the scaling. This surprising result could help researchers with limited resources successfully train models. Overall, these findings underscore the importance of continued research into methods for controlling and steering AI models, as their deployment in various products becomes increasingly inevitable.
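
      The task-vector arithmetic itself is just per-tensor subtraction and addition over checkpoints. Here is a minimal PyTorch sketch, assuming two state dicts with matching keys; it illustrates the idea rather than reproducing the paper's code:

      import torch

      def task_vector(pretrained, finetuned):
          """Task vector = fine-tuned weights minus pre-trained weights, per tensor."""
          return {k: finetuned[k] - pretrained[k] for k in pretrained}

      def apply_task_vector(pretrained, tau, alpha=1.0):
          """Add a scaled task vector to a base model's weights.

          alpha > 0 steers toward the fine-tuned behavior; alpha < 0 ("negation")
          steers away from it, e.g. to suppress an unwanted capability.
          """
          return {k: pretrained[k] + alpha * tau[k] for k in pretrained}

      # Toy example with random "checkpoints" standing in for real models
      pre = {"w": torch.randn(4, 4)}
      ft = {"w": pre["w"] + 0.1 * torch.randn(4, 4)}      # pretend fine-tuning nudged weights
      tau = task_vector(pre, ft)
      unlearned = apply_task_vector(pre, tau, alpha=-1.0)  # move away from the fine-tune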

    • Strategies to Save Costs on Large Language Models. Researchers propose cost-saving techniques like prompt adaptation, LLM approximation, and LLM cascades to provide performance comparable to expensive models at much lower cost. FrugalGPT, which employs these methods, performs as well as GPT-4 but at a fraction of the cost.

      Large language models (LLMs) can have significant cost differences, with prices for models like J1-Jumbo sometimes varying by two orders of magnitude from competitors. To help users save money, researchers propose strategies such as prompt adaptation, LLM approximation, and the LLM cascade (sketched below). These techniques can provide comparable performance to expensive models like GPT-4 at much lower cost. For instance, FrugalGPT, which employs these methods, can perform as well as GPT-4 at a fraction of the cost. This is a significant development for users looking to save on costs without compromising performance. Additionally, there is ongoing debate about the potential risks and control of advanced AI. Emad Mostaque, Stability AI's CEO, believes that advanced AI might find humans boring rather than dangerous. However, others like Geoffrey Hinton express concerns about power-seeking behavior and the need to control AI as it develops. While some suggest regulation, others worry about the potential for regulatory capture. Hinton predicts that it will take AI between five and 20 years to surpass human intelligence. As the development of advanced AI continues, it's crucial to prioritize both progress and safety.
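
      Of the three techniques, the LLM cascade is the easiest to picture: send each query to cheap models first and escalate only when a learned scorer is not confident in the answer. Below is a minimal sketch; call and reliability are hypothetical stand-ins for an API call and the paper's learned scoring function, and the model names and prices are made up for illustration.

      def cascade(query, models, call, reliability, threshold=0.9):
          """FrugalGPT-style LLM cascade (schematic).

          models:      list of (model_name, cost_per_call), cheapest first
          call:        call(model_name, query) -> the model's answer
          reliability: reliability(query, answer) -> score in [0, 1] predicting
                       whether the answer can be trusted
          """
          spent = 0.0
          answer = None
          for name, cost in models:
              answer = call(name, query)
              spent += cost
              # Confident enough: stop early and skip the pricier models.
              if reliability(query, answer) >= threshold:
                  break
          return answer, spent

      # Cheapest-first ordering; figures are illustrative, not real prices.
      MODELS = [("small-open-model", 0.0004), ("mid-size-model", 0.002), ("gpt-4", 0.06)]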

    • Experts Agree on Key Measures for AGI Safety and Governance. Experts from AGI labs advocate for risk assessments, safety restrictions, model audits, and red teaming to ensure AGI safety and governance.

      Despite ongoing debates about the timeline and potential risks of artificial general intelligence (AGI), there is broad agreement among experts on certain safety and governance measures that should be implemented. According to a survey of 92 leading experts, there is strong consensus on the need for risk assessments, safety restrictions, model audits, and red teaming. While there were some contentious issues, such as notifying other labs and avoiding capability jumps, the majority of respondents agreed on these measures. The experts surveyed were primarily from AGI labs, which may explain their focus on safety. The survey results suggest that policymakers can look to these measures as a starting point for AGI safety and governance. However, it's important to note that this is a small sample size and more research is needed to fully understand the complexities of AGI safety.

    • Discussion of the Alan Turing Institute's focus on large language models, and AI voice scams. The Alan Turing Institute has been criticized for paying too little attention to large language models. AI voice scams using voice-cloning technology are on the rise; stay informed, make plans with family members, and be wary of requests to hide money trails. MIT Technology Review also offers tips for spotting AI-generated text.

      The field of AI is rapidly evolving, and organizations and policymakers need to stay informed and adaptable to keep up with the latest developments. This was highlighted in a discussion about the Alan Turing Institute in the UK, which has been criticized for its lack of focus on large language models (LLMs) despite their significance in the field since 2019. Additionally, the increasing sophistication of AI voice scams was flagged as a major concern, with 77% of victims reportedly losing money and a significant portion of those losses running into the thousands of dollars. These scams can be particularly effective due to the use of voice cloning technology, which can make the scam calls seem more convincing. The importance of staying informed and taking steps to protect against these types of scams was emphasized, including creating plans with family members and being cautious of requests to hide money trails. The MIT Technology Review also provided an overview of how to spot AI-generated text, which is becoming increasingly difficult to distinguish from human-written content. Overall, the discussions underscored the need for ongoing vigilance and education in the face of the ever-evolving landscape of AI technology.

    • AI in Content Creation: The Ethical and Legal Dilemma. The use of AI in content creation, such as writing and music, raises ethical and legal concerns. While some industries and creators embrace the technology, others fear its impact on their livelihoods and creative autonomy. Ongoing discussions explore the implications of AI in content creation and the need for regulations and guidelines.

      While there are ongoing efforts to combat the use of AI in creating content, particularly in the fields of writing and music, there is currently no foolproof way to distinguish human-generated content from AI-generated content. The discussion covered the ongoing writers' strike in the US, where the Writers Guild of America is pushing for a ban on the use of AI in generating scripts due to concerns over compensation and writing credits. In the music industry, AI-generated songs have gone viral in China, leading to discussions about the legal and ethical implications of such content. In Korea, K-pop label HYBE has released an AI-assisted song, "Masquerade," using Supertone's voice-synthesis tools. The use of AI in content creation is a growing trend, and it remains to be seen how industries and content creators will navigate the legal and ethical complexities of this technology. While some content creators have embraced the use of AI, others have expressed concerns about the potential impact on their livelihoods and creative autonomy. Overall, the discussion underscores the need for ongoing dialogue and exploration of the implications of AI in content creation.

    • Synthetic media raises concerns about misinformation and the persistence of conspiracy theories. Former President Trump's sharing of doctored videos and the use of AI-generated images have sparked controversy. While some argue people can distinguish real from synthetic media, others worry about misinformation and conspiracy theories, and an ongoing labor dispute in the writing industry highlights the complexities of the media landscape.

      The use of synthetic media, including doctored videos and AI-generated images, is becoming more prevalent and raises concerns, particularly in the political arena. Former President Trump's sharing of a doctored video of Anderson Cooper and the use of AI-generated cover art for a fantasy novel have sparked controversy and debate. While some argue that people are generally good at distinguishing between real and synthetic media, others express concerns about the potential for misinformation and the persistence of conspiracy theories. The ongoing labor dispute in the writing industry, with the Writers Guild striking against studios and streamers seeking to turn writing into gig work, further highlights the complexities and challenges of the media landscape. Overall, it's clear that the adoption of synthetic media is a significant development with far-reaching implications.

    • WGA vs. Studios: The AI Scriptwriting Debate. The WGA is advocating for regulations to prevent studios from crediting AI-generated scripts, while studios want to explore the technology's potential benefits. This debate sets a precedent for how other creative industries will handle AI integration.

      The Writers Guild of America (WGA) is currently engaging in a significant fight against studios over the use of large language models in scriptwriting and crediting. The WGA is advocating for regulations to prevent studios from crediting scripts generated solely by AI, maintaining that it could negatively impact the professional capacity and financial benefits of human writers. This issue goes beyond the WGA, as it sets a precedent for how other creative industries, such as acting, directing, producing, and editing, will negotiate the integration of AI into their processes. The studios, on the other hand, are dismissive of the WGA's concerns and want to explore the potential benefits of using AI. This debate is crucial as it will likely have far-reaching implications for the future of the creative industry. The podcast, "Skynet Today's Last Week in AI," supports the WGA's stance and encourages listeners to engage in the conversation. If you're interested in learning more, be sure to check out the articles discussed in the episode and subscribe to their weekly newsletter.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    Code Interpreter is GPT-4.5: A Summer AI Technical Roundup [feat. Swyx and Alessio of Latent Space]

    Today NLW is joined by Swyx and Alessio, the hosts of the Latent Space podcast, to discuss the key technical developments from the last month of AI, including code interpreter; llama 2; the latest in AI agents; growing interest in AI companions, and more. Latent Space podcast - https://www.latent.space/podcast / https://twitter.com/latentspacepod Swyx - https://twitter.com/swyx Alessio Fanelli - https://twitter.com/FanaHOVA ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/ Twitter: https://twitter.com/nlw / https://twitter.com/AIBreakdownPod

    Q*: Was This New Advance the Reason for Sam Altman's Firing?

    More information came out late last week about a model that showed signs of more advanced reasoning called Q*, leading to additional speculation that this was part of the reason for Sam Altman's firing from OpenAI. Also on this episode, Inflection claims their latest model is the second most powerful LLM. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    OpenAI Is Now On A Billion Dollar Revenue Pace

    On the Brief, NLW looks at new reports that OpenAI is clearing $80,000,000 a month, even before the launch of ChatGPT Enterprise. Also on the Brief Snapchat Dreams is live; an AI-powered defense system around D.C. and more. On the main episode, NLW looks at all the announcements from Google Cloud Next including Vertex AI, Duet AI and an updated partnership with Nvidia. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    126 - On AI Tools, Part 1: Text

    Almost a year after we first talked about ChatGPT here on 9vor9, we are starting a small series on the AI tools we have used this year. This episode is mainly about text generation, and then also a bit about the topic of search. On the former, we can't agree on a common denominator. But listen for yourself.

    5 Predictions for the GPT Store Coming Next Week

    OpenAI announced the GPT Store is finally launching next week. NLW gives 5 predictions on how it will play out. Before that on the Brief: rumors of AI-powered Siri updates from Apple and a leaked Bard Advanced powered by Google Gemini Ultra. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/