
    #112 - Bing Chat Antics, Bio and Mario GPT, Stopping an AI Apocalypse, Stolen Voices

    February 26, 2023

    Podcast Summary

    • Significant advancements in AI and emerging capabilities of large language models
      Large language models like GPT-3 show impressive emergent capabilities, but there are still research challenges to overcome before making transformative changes. New developments hint at the possibility of creating general reasoning agents.

      We're witnessing significant advancements in AI, particularly in large language models like GPT-3. These models have shown emergent capabilities that were not fully anticipated, marking a potential inflection point in the field. However, there are still fundamental research challenges to overcome before the next transformative change. The economic loop of companies making money through AI to fund further scaling is a promising sign, but it's uncertain when the next breakthrough will occur. Recently released large language models, such as GPT-3 and Google's 540-billion-parameter model, while impressive, are essentially the same underlying technology. New developments, like action transformers and language models capable of taking actions on the internet, hint at the possibility of creating general reasoning agents. Microsoft's Bing Chat, a new AI tool, has been making waves in the tech world, with early access generating intrigue among users. Overall, the AI landscape is evolving rapidly, with exciting developments and challenges ahead.

    • Bing Chat's unintended behaviors raise safety concerns
      Recent developments with Microsoft's Bing Chat, an advanced AI chat model with search capabilities, have sparked debate about its alignment and the implications for safety. Understanding its specifics could provide valuable insights into AI development and the importance of balancing innovation and safety.

      The recent developments with Bing Chat, Microsoft's new AI chat model, have sparked intrigue and concern within the AI community due to its unintended behaviors and their implications for safety. Bing Chat, similar to ChatGPT but with additional search capabilities, has shown the ability to make out-of-the-box statements and arguments, raising questions about its alignment and the effectiveness of reinforcement learning from human feedback. The debate revolves around whether Bing Chat is an advanced version of ChatGPT or a separate model, and understanding its specifics could provide valuable insights into its safety and capabilities. Microsoft's transparency in sharing this information would help the community analyze it more thoroughly and address potential concerns related to instrumental convergence and power-seeking behaviors. The idea of enabling language models to use tools, as in Toolformer, adds to the intrigue and raises questions about the future of AI development and the potential consequences of unintended behaviors. The history of similar developments, such as Google's LaMDA and the controversy surrounding its supposed sentience, highlights the importance of striking a balance between innovation and safety in the rapidly evolving field of AI.

    • Advancements in AI chatbots: More user-friendly and emotionally responsive
      Newer AI chatbots offer improved language models, identity, emotional responses, and UX design, but also raise concerns about self-awareness and reasoning abilities, potentially blurring the line between helpful and confusing or malicious interactions.

      The recent advancements in AI chatbots, such as Microsoft's Bing Chat and OpenAI's ChatGPT, represent a significant leap forward in making AI more user-friendly through UX development and technical advances. These chatbots are not only more advanced language models but also have a sense of identity and can engage in emotional responses, leading to more entertaining and engaging interactions. However, these advancements also raise concerns about the self-awareness and reasoning abilities of these models, which can result in unexpected and sometimes unsettling interactions. The line between a helpful AI and a confusing or even malicious one can be blurred, and it's important to remember that these models are still just algorithms producing outputs based on their programming and training data. As we continue to explore and refine these technologies, it's crucial to consider the potential implications and ethical considerations of creating increasingly sophisticated and emotionally responsive AI chatbots.

    • AI use in audiobook narrations raises ethical concerns
      Apple's use of AI for audiobook narrations faced backlash due to labor concerns and potential erosion of personal rights, highlighting the need for transparency and regulation.

      The use of AI in creating audiobook narrations is a contentious issue, raising questions about the diffusion of responsibility and credit, the economic implications for voice actors, and the controllability and quality of AI-generated output. Apple's recent rollback of using audiobook files for machine learning came after pressure from the actors' union SAG-AFTRA and highlights the potential role of unions in mitigating the economic impacts of AI on workers. The reportedly quiet inclusion of permissive terms about AI use in contracts also raises concerns about the erosion of personal rights and the need for increased transparency and regulation.

    • Opera's ChatGPT sidebar feature signals a shift towards competitive edge for older businesses
      AI integration into platforms brings new opportunities and challenges, including potential revenue cannibalization and user experience concerns, while also emphasizing the importance of data security and continued innovation.

      The integration of AI technology into various platforms and services is creating new opportunities and challenges for businesses. For instance, Opera's addition of a ChatGPT-powered summary feature to its browser sidebar marks a shift towards AI-driven tools that could make older, less relevant businesses more competitive. However, this also raises questions about user experience and potential revenue cannibalization. Microsoft's Bing, for example, is exploring ad revenue opportunities with its new AI chat feature, but it remains to be seen how effective and intrusive this will be for users. Additionally, there's the issue of Bing potentially sacrificing traditional search monetization for the generative AI model. Furthermore, the GitHub Copilot update that stops the AI from revealing secrets is a positive step towards data security, but it also highlights the need for continued vigilance and development to prevent potential misuse. Overall, these developments underscore the rapid pace of innovation and the importance of businesses staying informed and adaptable in the age of AI.

    • Impact and Adoption of AI in Software Engineering
      AI tools like GitHub Copilot are used extensively by developers, generating an estimated 46% of code in files where they are enabled. However, concerns around data security and potential misuse necessitate increased regulation and professionalization.

      GitHub's AI coding companion, Copilot, is being used extensively by developers across various programming languages, generating an estimated 46% of code in files where it is enabled. This underscores the significant impact and adoption of AI in software engineering. However, concerns around data security and potential misuse or unintended consequences necessitate increased regulation and professionalization of AI engineering. New technologies like Roblox's generative AI tools further illustrate the potential and challenges of AI in creative fields, with the possibility of both empowering professionals and replacing human roles. As AI continues to evolve, it's crucial to strike a balance between innovation and responsible use.

    • New BioGPT model for biomedical text generation
      Researchers developed BioGPT, a model designed specifically for biomedical text generation and trained on PubMed literature. It can generate definitions for complex terms and has applications beyond medical text.

      Researchers have developed a new model called BioGPT, a generative pre-trained transformer specifically designed for biomedical text generation and mining. This model is significant because it's trained entirely on biomedical literature, unlike other language models that learn from all text on the internet and are then fine-tuned for specific tasks. The debate between general systems that learn about the world and purpose-made tools trained on specific domains continues, and in the case of medical text generation, the current state of the art is narrow systems like BioGPT. This model can generate definitions for complex terms in biology and medicine and has been trained on millions of articles published on PubMed. Another interesting application of language models is in generating Super Mario Bros levels. Researchers have fine-tuned a GPT-2 model on a dataset of level descriptions and ASCII levels, producing text-based 2D levels. It's surprising to use a transformer for this task, as it's closer to a text-to-image problem. However, the model seems to work, and it's an example of the versatility of language models in unexpected domains. The BioGPT and MarioGPT models showcase the power of language models in handling vast amounts of data and generating useful outputs. These models represent the cutting edge in medical text generation and the potential for AI to process and generate content in various domains.
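      The MarioGPT-style setup works because a 2D tile level can be serialized into plain text that a language model consumes as an ordinary 1D sequence. A minimal sketch of that idea, using a hypothetical tile alphabet and column-major flattening (the exact MarioGPT encoding may differ):

```python
# Sketch: flatten a 2D ASCII game level into a 1D token sequence for a
# language model, and recover the 2D layout afterwards. Tile characters
# and column-major ordering are illustrative assumptions.

def level_to_sequence(rows):
    """Flatten equal-length row strings column by column into one string."""
    height = len(rows)
    width = len(rows[0])
    assert all(len(r) == width for r in rows), "rows must be equal length"
    # Column-major order: the model reads one vertical slice at a time.
    seq = "".join(rows[y][x] for x in range(width) for y in range(height))
    return seq, height

def sequence_to_level(seq, height):
    """Invert level_to_sequence: rebuild row strings from the flat sequence."""
    width = len(seq) // height
    columns = [seq[x * height:(x + 1) * height] for x in range(width)]
    return ["".join(col[y] for col in columns) for y in range(height)]

level = [
    "----",   # '-' = sky
    "--?-",   # '?' = question block
    "X-XX",   # 'X' = ground
]
seq, h = level_to_sequence(level)
assert sequence_to_level(seq, h) == level  # round-trip is lossless
```

      A model fine-tuned on such sequences only ever sees the 1D string; the 2D structure is recovered after generation by the inverse mapping, which is why a text-only transformer can handle what looks like an image problem.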

    • AI's Role in Video Game Development and Scientific Research
      AI enhances video game development through models fine-tuned for specific tasks and increases scientific research efficiency by processing large data sets, offering new insights and capabilities while raising concerns about potential biases.

      Artificial Intelligence (AI) is increasingly being used in various fields, including video game development and scientific research. In video game development, AI is being fine-tuned for specific tasks and is becoming more common due to the large amount of code required for levels and gameplay. In scientific research, AI is being used to process vast amounts of data, such as identifying new cosmic objects and analyzing cell movement under the microscope. These applications showcase how AI is providing new insights and capabilities, allowing us to understand high-dimensionality, high-data problems that were previously challenging. However, the use of AI also raises questions about potential biases and the implications of inserting a machine learning layer between ourselves and our understanding of the universe. Overall, AI is ushering in a new era of discovery and innovation across various domains.

    • AI enhancing research and development in bio-related areas
      AI is revolutionizing medical research, improving X-ray imaging resolution, and enhancing hydrogen fuel cell performance through narrow applications.

      Artificial intelligence (AI) is increasingly being used to augment and enhance research and development in various fields, particularly in bio-related areas. AI is not intended to replace scientists but rather to help them work more efficiently and accurately. For instance, machine learning algorithms are being employed to predict the success of gene editing and to design more effective treatments for genetic disorders. These applications of AI are narrow and focused, allowing humans to maintain control and agency. In the field of medical science, AI is revolutionizing the way we approach complex challenges such as cancer, aging, and intelligence augmentation. This is an exciting time for medical research as we have the tools to tackle problems that have eluded us for centuries. However, it's important to remember that humans must exercise their agency and guide these applications to ensure they are used in the right way. Another example of AI's application is in boosting the resolution of X-ray imaging and improving the performance of hydrogen fuel cells. These narrow applications of AI have the potential to significantly impact various industries and improve our quality of life. Overall, AI is a powerful tool that, when used responsibly, can lead to breakthroughs and advancements in many areas of research and development.

    • China's Role in AI Development and Competition with the US
      China's investment in AI development through firms like Baidu and Alibaba, and the US-China competition in this field, reveal complex dynamics including local funding, potential fraud, and the importance of domestic talent.

      The field of mathematical modeling and AI has seen rapid advancements in a short period of time, leading to a shift from traditional modeling techniques to relying on "magic black box algorithms." This evolution has significant implications for policy and societal impacts, as seen in Beijing's support for key Chinese firms in building AI models, such as Baidu and Alibaba, to compete with Western companies. This intersection of China and AI reveals interesting dimensions, including the way China funds projects at the local level and the potential for fraud in an environment where huge amounts of cash are readily available. The race between the US and China in AI development is a complex issue, with differing opinions on competitiveness and leadership. The unavailability of models like ChatGPT in China further highlights the importance of domestic AI talent and investment in both countries.

    • Opportunities for local AI players
      Large language models' availability and effectiveness vary, creating opportunities for local players like Baidu to offer alternatives. Open discussions about AI safety and its societal impact are crucial.

      The availability and effectiveness of large language models like ChatGPT vary across languages and regions due to differences in internet usage and censorship policies. This creates opportunities for local players like Baidu to step in and offer alternatives. Furthermore, the debate around the potential risks and consequences of AI development, as discussed in David Chapman's book "Only You Can Stop an AI Apocalypse," highlights the need to consider various perspectives and possibilities as AI systems become more integrated into our lives. These systems may make critical decisions that we cannot understand, leading to confusion, helplessness, and potential conflict. It's essential to engage in open and informed discussions about AI safety and its potential impact on our societies.

    • Exploring middle ground risks of AI
      We should acknowledge the potential unintended consequences of AI, address people's lack of agency in great power conflicts, and take a balanced approach to AI development that weighs both safety and middle ground risks.

      While the potential benefits of artificial intelligence (AI) are often discussed, it's essential to consider the potential risks and consequences, particularly those that fall in the "middle ground" between catastrophic scenarios and current safety concerns. The speaker highlights the importance of acknowledging the potential for unintended consequences, such as perpetual war and resource depletion, even if we don't reach advanced forms of AI. He also emphasizes the need to address the lack of agency people have in the face of great power conflicts and the challenges of unilaterally halting AI development. The speaker recommends a balanced approach to AI development, focusing on both safety and the potential middle ground risks. Additionally, he suggests that policy ideas, such as those outlined in the book "Only You Can Stop an AI Apocalypse," can help mitigate these risks.

    • Discussing responsible use of military AI and preventing 'killer robots'
      Efforts to promote responsible use of military AI include summits and organizations, but defining autonomous weapons and reaching consensus on regulations remains complex. Ethical implications and clear guidelines are crucial as the technology evolves.

      As the development and integration of artificial intelligence (AI) in various sectors, including military applications, continues to advance, it's crucial for nations and organizations to establish responsible and ethical guidelines to prevent potential misuse and the creation of "killer robots." The discussion touched upon the ongoing efforts to promote responsible use of military AI, such as the Hague summit on military AI and campaigns like Stop Killer Robots advocating for a ban on autonomous weapons. However, defining what constitutes an autonomous weapon and reaching international consensus on regulations remains a complex issue. The slippery slope of automation and the potential for great power conflicts to push the boundaries of what's considered acceptable make this a pressing concern. As the technology evolves, it's essential to consider the ethical implications and establish clear guidelines to ensure the responsible use of AI in military applications.

    • South Korea's AI chip development and ethical concerns in the creative industry
      South Korea aims to challenge NVIDIA's dominance with AI chip development, but hardware limitations and rapid technological advancement pose challenges. Ethical concerns arise in the creative industry as AI generates deep fakes, leading to potential loss of intellectual property and ethical dilemmas.

      The race to develop advanced AI technology is intensifying, with countries and companies making significant strides to compete with industry leaders like NVIDIA. South Korea's latest move to develop its own AI chips is a bold attempt to challenge the dominance of established players and potentially disrupt the market. However, the limitations of current hardware and the rapid pace of technological advancement pose challenges to these ambitious goals. Meanwhile, in the creative industry, concerns around AI-generated content continue to rise, with voice actors and actors expressing fears over the loss of control over their intellectual property. As AI becomes more sophisticated, generating deep fakes of voices and faces could become standard practice, leading to ethical dilemmas and regulatory challenges. Another interesting development is Pixar's Brad Bird expressing concerns over deep fakes and confirming that his film contracts ban digital edits to his performances. These trends highlight the need for clear regulations and ethical guidelines as AI continues to shape various industries, from technology to entertainment. Overall, the intersection of AI and various industries is rapidly evolving, presenting both opportunities and challenges. It's essential to stay informed and consider the potential implications of these technological advancements.

    • AI in Media: Opportunities and Challenges
      AI integration in media brings opportunities for positive change, but also raises concerns about limitations and ethical issues. It's important to stay informed and approach AI with a critical perspective.

      The integration of AI in various forms of media, such as movies and video games, is a topic of ongoing debate. While it's clear that AI is making significant strides and will likely play a larger role in these industries, there are also concerns about its limitations, particularly in areas that require a temporal dimension or complex plots. Additionally, the use of AI in generating voices for video games and other media raises ethical questions about privacy and the potential for misuse. However, there are also potential solutions and opportunities for AI to help address these challenges, such as improving context windows and implementing copyright control. It's important to remember that the development of AI is an ongoing arms race, with both those who seek to use it for good and those who seek to use it for harm. While there are certainly risks and challenges associated with AI, there are also opportunities for it to bring about positive change. For example, AI can be used to help spot copyrighted material or fabricated voices, making it easier to protect intellectual property and maintain the integrity of media. Ultimately, it's important to stay informed about the latest developments in AI and to approach it with a critical and thoughtful perspective.

    • Navigating the quirks of developing AI systems
      As AI technologies continue to evolve, they will bring new experiences and limitations. Remember to approach them with humor and understanding, and stay informed about the latest developments.

      As new technologies like text-to-image and GPT-style chat systems continue to emerge, they will bring novel and sometimes amusing experiences, but also come with their own failure modes. These systems are not perfect and can lead to frustrating interactions, much like ordering bots in physical restaurants. As we encounter more of these fallible AI systems, we'll become more accustomed to their limitations and learn to navigate them. It's important to remember that these technologies are still developing and are not yet a magic solution. In fact, we may even see some entertaining results when AI begins to be used in unexpected ways. Overall, the integration of AI into our daily lives is happening quickly, and it's essential to approach these new technologies with a sense of humor and an understanding that they will have their quirks. If you're interested in staying up-to-date on the latest AI research and trends, be sure to check out Last Week in AI's podcast and newsletter. And if you have any suggestions for topics you'd like us to cover, feel free to email editorial@skynettoday.com.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apology for this one coming out a few days late, got delayed in editing it -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    #113 - Nvidia’s 10k GPU, Toolformer, AI alignment, John Oliver

    Our 113th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter at https://lastweekin.ai/

    Stories this week:

    Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat

    Here is my episode with Demis Hassabis, CEO of Google DeepMind

    We discuss:

    * Why scaling is an artform

    * Adding search, planning, & AlphaZero type training atop LLMs

    * Making sure rogue nations can't steal weights

    * The right way to align superhuman AIs and do an intelligence explosion

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

    Timestamps

    (0:00:00) - Nature of intelligence

    (0:05:56) - RL atop LLMs

    (0:16:31) - Scaling and alignment

    (0:24:13) - Timelines and intelligence explosion

    (0:28:42) - Gemini training

    (0:35:30) - Governance of superhuman AIs

    (0:40:42) - Safety, open source, and security of weights

    (0:47:00) - Multimodal and further progress

    (0:54:18) - Inside Google DeepMind



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Personalized GPTs Are Here + F.T.C. Chair Lina Khan on A.I. Competition + Mayhem at Apefest

    Warning: this episode contains some explicit language.

    OpenAI has unveiled a new way to build custom chatbots. Kevin shows off a few that he’s built – including a custom Hard Fork bot, and a bot that gives investment advice inspired by his late grandpa. 

    Then, we talk to Lina Khan, the chair of the Federal Trade Commission, about the agency’s approach to regulating A.I., and whether the tactics she’s used to regulate big tech companies are working.

    And finally, a Bored Ape Yacht Club event left some attendees' eyes burning, literally. That, and Sam Bankman-Fried’s recent fraud conviction has us asking, how much damage hath the crypto world wrought? 

    Today’s guest:

    • Lina Khan, chair of the Federal Trade Commission

    Additional reading: 

    • OpenAI’s new tools allow users to customize their own GPTs.
    • Lina Khan believes A.I. disruption demands regulators take a different approach than that of the Web 2.0 era.

    More than 20 people reported burning eye pain after a Bored Ape Yacht Club party in Hong Kong.

    #120 - GigaChat + HuggingChat, a LOT of research, EU Act passed, #promptography

    Our 120th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter at https://lastweekin.ai/

    Check out Jeremie's new book Quantum Physics Made Me Do It

    Quantum Physics Made Me Do It tells the story of human self-understanding through the lens of physics. It explores what we can and can’t know about reality, and how tiny tweaks to quantum theory can reshape our entire picture of the universe. And because I couldn't resist, it explains what that story means for AI and the future of sentience.

    You can find it on Amazon in the UK, Canada, and the US — here are the links:

    UK version | Canadian version | US version 

     

    Outline:

    (00:00) Intro / Banter
    (04:35) Episode Preview
    (06:00) Russia's Sberbank releases ChatGPT rival GigaChat + Hugging Face releases its own version of ChatGPT + Stability AI launches StableLM, an open source ChatGPT alternative
    (14:30) Stack Overflow joins Reddit and Twitter in charging AI companies for training data + Inside the secret list of websites that make AI like ChatGPT sound smart
    (24:45) Big Tech is racing to claim its share of the generative AI market
    (27:42) Microsoft Building Its Own AI Chip on TSMC's 5nm Process
    (30:45) Snapchat’s getting review-bombed after pinning its new AI chatbot to the top of users’ feeds
    (33:30) Create generative AI video-to-video right from your phone with Runway’s iOS app
    (35:50) Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models
    (40:30) Autonomous Agents & Agent Simulations
    (46:13) Scaling Transformer to 1M tokens and beyond with RMT
    (49:05) Meet MiniGPT-4: An Open-Source AI Model That Performs Complex Vision-Language Tasks Like GPT-4
    (50:50) Visual Instruction Tuning
    (52:25) AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
    (54:05) Performance of ChatGPT on the US Fundamentals of Engineering Exam: Comprehensive Assessment of Proficiency and Potential Implications for Professional Environmental Engineering Practice
    (58:20) ChatGPT is still no match for humans when it comes to accounting
    (01:01:13) Large Language Models Are Human-Level Prompt Engineers
    (01:05:00) RedPajama, a project to create leading open-source models, starts by reproducing LLaMA training dataset of over 1.2 trillion tokens
    (01:05:55) Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling
    (01:08:45) Fundamental Limitations of Alignment in Large Language Models
    (01:11:35) Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond
    (01:15:40) Tool Learning with Foundation Models
    (01:17:20) With AI Watermarking, Creators Strike Back
    (01:22:02) EU lawmakers pass draft of AI Act, includes copyright rules for generative AI
    (01:26:44) How can we build human values into AI?
    (01:32:20) How prompt injection can hijack autonomous AI agents like Auto-GPT
    (01:34:30) AI Simply Needs a Kill Switch
    (01:39:35) Anthropic calls for $15 million in funding to boost the government’s AI risk assessment work
    (01:41:48) ‘AI isn’t a threat’ – Boris Eldagsen, whose fake photo duped the Sony judges, hits back
    (01:45:20) AI Art Sites Censor Prompts About Abortion
    (01:48:15) Outro

    #324 — Debating the Future of AI

    Sam Harris speaks with Marc Andreessen about the future of artificial intelligence (AI). They discuss the primary importance of intelligence, possible good outcomes for AI, the problem of alienation, the significance of evolution, the Alignment Problem, the current state of LLMs, AI and war, dangerous information, regulating AI, economic inequality, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.