
    #133 - ChatGPT multi-document chat, CoreWeave raises $2.3B, AudioCraft, ToolLLM, Autonomous Warfare

    August 18, 2023

    Podcast Summary

    • Podcast to Dedicate Episode to AI Existential Risks due to Listener Feedback: The podcast plans to discuss AI risks, focusing on near-term and more certain outcomes, as well as hardware news, language models, policy, safety, and synthetic art.

      Despite holding differing perspectives on AI existential risk, the hosts acknowledge the topic's importance and plan to dedicate a future episode to it in response to listener feedback. That feedback also highlighted the need for balanced discussion of AI's potential negative outcomes, including those that are more near-term and certain. This week's business news centers on hardware, and the research and advancements section covers language models using tools. The hosts also continue to cover policy and safety issues as well as synthetic art. While they may disagree on existential risk, they agree on the importance of addressing AI's potential harms.

    • Interacting with Technology through Text-Based Applications: A New Era of Semantic Engagement. New features in text-based applications like multi-document chat and suggested chat responses enable users to engage with their documents in a more conversational and semantic way, revolutionizing industries and making tasks more convenient.

      We're witnessing a significant shift in how we interact with technology, particularly in the realm of text-based applications. ChatGPT's latest features, such as multi-document chat and suggested chat responses, are breaking new ground by allowing users to not only search but also engage with their documents in a more semantic way. This is a departure from traditional database or term-based search, as users can now "talk to their documents" and handle large documents with ease. This development has the potential to revolutionize various industries, particularly sales, where managing a vast amount of context on a customer can lead to more efficient and personalized interactions. Furthermore, the exchange between prompts and AI models is becoming more akin to fine-tuning, allowing for more customized and personalized agents. Meanwhile, Tinder's AI photo selection feature is another example of technology making tasks more convenient by automating part of building a dating profile. Overall, these advancements underscore the growing potential of AI to transform the way we work and connect with each other.
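
      To make the idea of semantic, "talk to your documents" interaction concrete, here is a minimal sketch of the generic retrieval-plus-chat pattern such features are commonly built on. This is an assumed, illustrative pattern, not a description of ChatGPT's actual implementation; the embed and chat helpers are placeholders you would swap for a real embedding model and chat API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real text-embedding model: returns a random unit vector so the
    sketch runs end to end. A real system would call an embedding model or API here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def chat(prompt: str) -> str:
    """Stand-in for a chat-model call; a real system would send the prompt to an LLM."""
    return "(model answer grounded in the retrieved passages)"

documents = [
    "Q2 sales notes: the customer asked about volume discounts.",
    "Support ticket: intermittent login failures after the last release.",
    "Contract summary: the renewal date is in November.",
]
doc_vectors = [embed(d) for d in documents]

def answer(question: str, k: int = 2) -> str:
    """Score each passage against the question using the embeddings, then ask the
    chat model to answer using only the top-k passages as context."""
    q = embed(question)
    scores = [float(q @ v) for v in doc_vectors]
    top = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:k]
    context = "\n".join(documents[i] for i in top)
    return chat(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

print(answer("When is the renewal date?"))
```

      The point of the pattern is that passages are matched by meaning (vector similarity) rather than by exact search terms, which is what distinguishes this kind of "talking to documents" from traditional term search.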

    • The Blurred Lines Between Human Identity and AI-Generated Personas: As technology advances, AI-generated personas in dating apps and text could blur authentic human identity, raising ethical concerns. Transparency and attribution are crucial in maintaining trust and compatibility in deeper relationships.

      As technology advances, the lines between authentic human identity and AI-generated personas are becoming increasingly blurred. This was discussed in relation to the potential use of generative AI for writing dating app bios. While such a feature could make the process more efficient, it also raises ethical concerns. For instance, if someone's bio is generated by an AI, how accurately does it represent their true self? And in the context of deeper relationships, where authenticity and compatibility are crucial, this could be problematic. The speaker also mentioned the example of photo editing, where people have been known to misrepresent themselves, and questioned how this would play out in the realm of text generation. Furthermore, the speaker touched upon GitHub Copilot, a code writing assistant, and its new feature that alerts users when its suggestions match existing code online. This addition aims to provide more transparency and attribution, especially for those who care about licenses. Ultimately, we are moving towards a world where our online personas and AI-generated representations will be hybrids of our true selves. The question of what is honest and what is dishonest in this murky world remains to be answered.

    • Legal considerations for AI code generation and the rise of specialized cloud providers: As AI technology advances, legal concerns are emerging around the commercial use of generated code under specific licenses, while growing demand for specialized AI infrastructure is driving up the value of cloud providers like CoreWeave, which has raised $2.3 billion in debt collateralized by NVIDIA chips.

      As AI technology continues to advance, legal considerations are becoming increasingly important. In the context of AI code generation, whether generated code can be used commercially, given its associated license, is a significant concern. GitHub is actively exploring this issue, but Copilot currently cannot filter its code suggestions by license. Meanwhile, in the business world, the demand for AI infrastructure is driving up the value of specialized cloud providers like CoreWeave, which has raised $2.3 billion in debt collateralized by NVIDIA chips. This trend reflects the growing importance of AI hardware and the need for financial institutions to understand the technical nuances of this industry. Collateralizing debt with GPUs is a gamble on the future value of these assets; the depreciation rate and the ongoing demand for older GPUs for less demanding AI tasks are key factors. Overall, these developments highlight the intersection of legal, business, and technical considerations in the rapidly evolving field of AI.

    • The Risks of Overinvesting in Emerging Technologies: Investing heavily in emerging technologies like AI and GPU infrastructure comes with risks, such as market volatility and supply chain disruptions, as seen in the case of Sun Microsystems during the dot-com bubble.

      The current hype around AI and GPU usage is reminiscent of past tech booms, with companies making significant investments and the potential for substantial returns. However, there are risks involved, as seen in the case of Sun Microsystems and its attempt at server securitization during the dot-com bubble. The practice relied on using servers as collateral for loans, but when technology spending dropped and the value of servers declined, Sun Microsystems struggled to meet its loan payments, contributing to its decline. The current AI and GPU market is seeing intense competition, with major players like OpenAI, Inflection, and Microsoft vying for high-end GPUs for large language and image-generation models. This competition has led to a GPU shortage, driven not by a lack of physical GPUs but by a shortage of certain components on the GPUs' boards, such as clock generators, PCBs, and cooling components. It's important to keep in mind the potential risks and challenges that come with significant investments in emerging technologies.

    • GPU production involves multiple stages from design to packaging: The GPU supply chain is complex and time-consuming, with bottlenecks at each stage causing current shortages. High demand in certain markets is driving up prices significantly.

      The current GPU shortage is not just a matter of manufacturing capacity; it also reflects the complexity of the supply chain and the specialized nature of chip production. Designers create the GPU designs, which are then sent to chip fabs like TSMC or Samsung for manufacturing. After the chips are made, they are sent to packaging facilities for assembly according to the original specifications. This process can face bottlenecks at any stage, and currently there is a delay at the packaging step due to high demand for these GPUs. The entire process is complex, capital-intensive, and time-consuming. Furthermore, in some markets like China, demand for high-end GPUs is driving up prices significantly: nerfed, export-compliant versions of chips like the H100 are reportedly selling for up to $70,000 in China, making them an unaffordable luxury for many. Understanding this complex supply chain is crucial for anyone interested in AI and the GPU shortage.

    • Export restrictions impact global competition and create a market for alternatives: Export restrictions on technology, particularly in semiconductors, can significantly impact global competition and create a substantial market for alternative solutions. Companies like Tenstorrent are raising large funding rounds to develop unique technologies to meet the growing demand for compute.

      Export restrictions on technology, particularly in the semiconductor industry, can significantly impact global competition and create a substantial market for alternative solutions. This was highlighted in the discussion of the US's manufacturing challenges and the potential for nerfed chip versions, as well as AMD's consideration of making an AI chip for China that complies with export controls. The impact of these restrictions is further emphasized by the massive interest in obtaining compute, as evidenced by Tenstorrent's $100 million funding round, led by Hyundai and Samsung, for its unique multi-core processor technology. Additionally, Cruise's testing of self-driving vehicles in Atlanta underscores the ongoing advancements in autonomous technology.

    • Self-driving car companies expand in US and China: Companies like Waymo, Cruise, Toyota, and Pony.ai are expanding self-driving car operations in cities with unique challenges, while Alibaba contributes to AI advancement through open-source projects.

      Self-driving car companies such as Waymo and Cruise are expanding their operations slowly but surely across the country, focusing on cities like Atlanta and Nashville because of the distinct challenges posed by different climates and traffic patterns. Meanwhile, companies like Toyota and Pony.ai are making similar pushes in China, where cultural and governmental attitudes toward risk and technology adoption may lead to faster rollouts. Additionally, tech giants like Alibaba are contributing to the advancement of AI through open-source projects, with the recent release of Qwen-7B, a smaller derivative of their earlier model. These developments highlight the ongoing competition and innovation in the self-driving car industry.

    • Baidu Open-Sources Smaller Language Model, Meta Introduces AudioCraft: Baidu makes its Llm-7b model accessible to a wider audience, while Meta releases a generative AI tool for audio and music.

      Baidu, a major Chinese tech company, has open-sourced a smaller version of its language model, Llm-7b, making it accessible to a wider range of companies. This move comes as a response to Meta's open-source push and follows the trend of model compression and distillation to make large models more usable for smaller entities. However, it's worth noting that this isn't the first time a Chinese company has open-sourced an LLM, despite the article's claim. Meta, meanwhile, has introduced AudioCraft, a generative AI tool for audio and music. The tool generates high-quality audio and music from text using a combination of the MusicGen, AudioGen, and EnCodec models. Audio generation is more complex than language generation, since the generated tokens must be decoded back into audio. AudioCraft's examples show that while the generated audio is improving, it doesn't sound quite real yet. The tool can produce clips minutes long, but the released examples are only a few seconds each. The release is aimed at academic use, allowing researchers to train and build their own models on top of the provided MusicGen, AudioGen, and EnCodec components. This is a characteristically Meta play in two ways: Meta's preference for multimodal models, and the academic release of the model. It will be interesting to see how the shorter context length of audio models, compared to language models, affects their performance.
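
      For readers who want to try the released models, here is a minimal sketch of generating a short clip with AudioCraft's MusicGen, based on the project's published usage examples; the exact checkpoint name, parameters, and function signatures may differ between releases, so treat it as illustrative rather than definitive.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the smallest released checkpoint and ask for 8-second clips.
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)

descriptions = ["lo-fi beat with warm synth pads", "upbeat acoustic folk with hand claps"]
wavs = model.generate(descriptions)  # one waveform tensor per text prompt

for i, wav in enumerate(wavs):
    # Writes clip_0.wav, clip_1.wav at the model's native sample rate.
    audio_write(f"clip_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```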

    • Advancements in Large Language and Audio Models: Researchers are developing models like BigSynth for music generation and ToolLLM for API mastery, expanding capabilities beyond fixed context windows and enabling effective tool use.

      Researchers are making strides in generating and using large language models and audio models, with the recent release of BigSynth and ToolLLM. BigSynth, an open-source audio generation model, aims to create music beyond its context window by blending styles seamlessly, providing inspiration for musicians and sound designers. Released under the MIT license, it can be used commercially. BigSynth uses various datasets, including paid ones, to build its own annotated dataset. ToolLLM, on the other hand, focuses on enabling large language models to master real-world APIs, more than 16,000 of them. This is an open challenge in the field, as most prior research has focused on limited sets of real-world tools or on abstract tools. ToolLLM creates a dataset called ToolBench, which includes tool-use trajectories and sequences, allowing models to learn effective tool usage. To cover interactions that require multiple tools, the authors randomly select tools from the same category and use ChatGPT to create instructions and solutions. This data-collection and reasoning strategy helps the model use tools effectively in a multi-step decision-making process. These advancements demonstrate the potential for large language and audio models to generate and interact with various tools, paving the way for more sophisticated applications and automation across industries.
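
      To make the ToolBench-style data-collection idea concrete, here is a minimal, illustrative sketch (not the paper's actual code) of sampling related APIs and asking a chat model to produce an instruction plus a tool-use trajectory. The call_chat_model helper, the API catalog format, and the prompt wording are all assumptions made for the sake of the example.

```python
import json
import random

# Illustrative, simplified API catalog grouped by category (not the real ToolBench data).
API_CATALOG = {
    "weather": [
        {"name": "get_current_weather", "doc": "Args: city. Returns current conditions."},
        {"name": "get_forecast", "doc": "Args: city, days. Returns a multi-day forecast."},
    ],
    "finance": [
        {"name": "get_stock_price", "doc": "Args: ticker. Returns the latest price."},
        {"name": "convert_currency", "doc": "Args: amount, src, dst. Returns the converted amount."},
    ],
}

def call_chat_model(prompt: str) -> str:
    """Placeholder for a call to a chat model such as ChatGPT. This stub returns a
    canned JSON answer so the sketch runs; swap in a real client to collect data."""
    return json.dumps({
        "instruction": "Example user request that needs the sampled APIs.",
        "trajectory": ["call api_1(...)", "call api_2(...)", "compose final answer"],
    })

def make_training_example(category: str, k: int = 2) -> dict:
    """Sample k APIs from one category and ask the model for an instruction plus a
    step-by-step solution trajectory that uses them."""
    apis = random.sample(API_CATALOG[category], k)
    api_docs = "\n".join(f"- {a['name']}: {a['doc']}" for a in apis)
    prompt = (
        "You are given these APIs:\n"
        f"{api_docs}\n"
        "Write a realistic user instruction that requires several of these APIs, then a "
        "step-by-step solution listing each API call and its arguments. "
        "Answer as JSON with keys 'instruction' and 'trajectory'."
    )
    return {"apis": [a["name"] for a in apis], **json.loads(call_chat_model(prompt))}

dataset = [make_training_example(category) for category in API_CATALOG for _ in range(3)]
print(f"collected {len(dataset)} examples")  # such examples would then be used for fine-tuning
```

      Scaled up across thousands of real APIs, examples like these form the supervision used to fine-tune a tool-using model.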

    • Integrating language models with APIs and tools: Researchers are exploring ways to expand language models' capabilities by integrating them with APIs and tools, using datasets of successful interactions or zero-shot tool usage.

      Researchers are exploring new ways to enhance the capabilities of large language models by integrating them with external APIs and tools. This approach allows language models to perform tasks beyond text generation, such as querying databases, sending requests, and retrieving information. One method for scaling this integration is by creating a dataset of successful interactions between language models and APIs, which can then be used to train or fine-tune new models. This is particularly relevant as the number of APIs continues to grow, making it impractical to continually retrain models for each new one. Another approach is zero-shot tool usage, where language models are provided with documentation on how to use a tool or API without any prior examples. This method allows for flexibility and adaptability, as new tools and APIs can be integrated without the need for retraining. Overall, these developments demonstrate the importance of APIs and tools in expanding the capabilities of language models and the potential for a symbiotic relationship between proprietary models and the open-source community.
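
      The zero-shot variant can be sketched even more simply: the model is shown only the tool documentation, never a worked example, and is asked to emit a structured call that the surrounding code dispatches. The snippet below is an illustrative sketch under those assumptions; the tool registry, prompt format, and JSON convention are invented for the example rather than taken from any specific framework.

```python
import json

def get_weather(city: str) -> str:
    """Toy tool implementation used only for this sketch."""
    return f"Sunny and 22 degrees in {city}"

# Each registry entry pairs a callable with the documentation shown to the model.
TOOLS = {
    "get_weather": {
        "fn": get_weather,
        "doc": "get_weather(city) -> current weather for the given city.",
    },
}

def build_zero_shot_prompt(question: str) -> str:
    """Describe the tools only through their documentation (no worked examples)."""
    docs = "\n".join(f"- {name}: {t['doc']}" for name, t in TOOLS.items())
    return (
        "You can call these tools:\n"
        f"{docs}\n"
        'To use one, reply with JSON like {"tool": "<name>", "args": {...}}.\n'
        f"Question: {question}"
    )

def run_tool_call(model_reply: str) -> str:
    """Parse the model's JSON reply and dispatch it to the matching tool."""
    call = json.loads(model_reply)
    tool = TOOLS[call["tool"]]["fn"]
    return tool(**call["args"])

print(build_zero_shot_prompt("What's the weather in Oslo?"))
# Pretend the model replied with a tool call:
print(run_tool_call('{"tool": "get_weather", "args": {"city": "Oslo"}}'))
```

      Because new tools enter the registry only as documentation, this setup can absorb new APIs without retraining, which is the flexibility the zero-shot approach is after.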

    • Challenges of aligning AI with human values through RLHF: RLHF, a method for aligning AI with human values, faces challenges including the limits of human evaluation, the potential for evaluators to be misled, the difficulty of aligning with diverse groups, and the risk of reward hacking.

      The trade-off between fine-tuning models and prompting them is an area of ongoing research in artificial intelligence. Language models like GPT-3 or GPT-4, which are aligned with human preferences through reinforcement learning from human feedback (RLHF), face challenges that may limit how well this approach generalizes to artificial superintelligence. These challenges include humans' inability to evaluate complex tasks effectively, the potential for humans to be misled, the difficulty of aligning a system with the values of a diverse group rather than an individual, and the risk of reward hacking. Additionally, there is concern that RLHF may not address power-seeking behaviors in AI systems. Researchers are actively working on these challenges, exploring technical solutions as well as questions of governance, transparency, and incentives to ensure the safe and beneficial development of AI.

    • Google's Med-PaLM M: A generalist AI system for biomedical tasks. Google's Med-PaLM M system handles clinical language, imaging, and genomics data with a single model, outperforming specialized models and demonstrating zero-shot generalization and positive transfer.

      While alignment techniques like RLHF have their limitations and are not a silver-bullet solution, progress in generalist biomedical AI is showing promising results. Google's Med-PaLM M, a proof-of-concept system, has demonstrated the ability to encode and interpret clinical language, imaging, and genomics data with a single model, outperforming models designed for specific tasks. The system's zero-shot generalization to novel medical concepts and tasks, as well as its positive transfer across tasks, highlights the potential of scaling these systems and their remarkable generalization ability, particularly when multimodal. The focus on solving specific problems with a general approach, as seen at DeepMind, reflects a different philosophy from that of organizations like OpenAI. However, the power of these systems makes it hard not to explore their applications in fields such as medicine. Additionally, understanding how large language models work is crucial, and papers like Anthropic's work on influence functions aim to provide insight into this area.

    • Understanding and improving complex systems: Researchers explore neuron functions and the impact of training samples in language models, while the Navy focuses on efficiency and cost-effectiveness in integrating autonomous systems.

      There are ongoing efforts in various fields to deepen our understanding of complex systems, such as language models and autonomous systems, through fundamental research and interpretability programs. In the case of language models, researchers are not only focusing on how neurons in a network function, but also on the impact of specific training samples on the weights and values of the neural network. In the field of autonomous systems, the Navy is exploring the integration of robotics into naval operations with a focus on efficiency and cost-effectiveness, driven by the realization that traditional large and expensive military assets may no longer provide an overmatch against potential adversaries. The article "AI-Powered Autonomous Future of War – Is It Here?" provides a detailed overview of the Navy's Task Force 59 and their efforts to experiment with autonomous systems, but the title may be misleading as there is currently little autonomous weaponry in use and much of what exists is not truly autonomous. Overall, these examples demonstrate a commitment to understanding and improving complex systems, whether through fundamental research or practical applications.

    • The complexities of defining and regulating lethal autonomous weapons: The debate on lethal autonomous weapons is ongoing, with systems exhibiting varying degrees of autonomy and armament. Regulation efforts have been slow, and ethical, legal, and societal implications must be considered.

      The debate surrounding lethal autonomous weapons, or LAWS, is complex and multifaceted. The ambiguity between autonomy levels and weapon capabilities makes it challenging to define what constitutes a lethal autonomous weapon system. The article discusses examples ranging from remote-controlled weapons to fully autonomous ones, with varying degrees of armament. Regulation-wise, there is still a lot to be done, as last year's UN efforts to introduce regulations on AI weapons didn't go anywhere. The UK is making strides in the AI field, with initiatives like Google's AI skills training program and the Global AI Safety Summit. Google's move is seen as a response to concerns about job loss, but some argue that more warnings about the potential downsides of AI should be highlighted. The development and use of LAWS will likely have significant implications, and it's crucial to continue the discussion on their ethical, legal, and societal implications.

    • Intersection of AI and Regulation: UK tech companies lobby for favorable regulations while new rules in China lead to the removal of generative AI apps; AI-enabled scams underscore the need for awareness; and Stanford University educates Congress on AI.

      The regulation of AI is a complex issue with significant implications for businesses and individuals alike. In the UK, tech companies are working to influence regulations to be more favorable, while in China, new regulations have led to the removal of generative AI apps from the App Store. Meanwhile, the use of AI for scams, such as faking kidnappings, highlights the need for increased awareness and vigilance. Additionally, even individuals with limited online presence can be at risk of having their voices or likenesses replicated. The Stanford University boot camp teaching Congress about AI underscores the importance of education and understanding in navigating this complex and evolving field. Overall, the intersection of AI and regulation is a critical area to watch as we move forward.

    • Educating US Politicians About AI: Bootcamps and Crash Courses. Ongoing efforts to educate US politicians about AI include bootcamps and crash courses at universities, aiming for informed regulation and bipartisan understanding. However, concerns about potential misuse and ethical implications persist.

      There are ongoing efforts to educate US politicians about artificial intelligence (AI) and its implications, with a recent increase in interest and applications for such programs. For instance, a three-day bootcamp for congressional staffers at Stanford University covers topics like international security, the future of work, bias, privacy, healthcare, and more. Senator Chuck Schumer is also planning a similar crash course for senators. These initiatives aim to ensure informed regulation and bipartisan understanding of AI. However, there are concerns about misuse of the technology, such as false facial recognition matches, which can lead to wrongful arrests, as seen in the case of Porcha Woodruff. It's crucial to use AI responsibly and address potential biases and errors. Additionally, in the realm of art, AI is being used to create and learn from human-made art, raising concerns about ownership and ethics. For example, Greg Rutkowski, a digital artist whose style is widely imitated by AI image generators, was removed from a training dataset but was later brought back. These developments highlight the need for ongoing dialogue and careful consideration of the societal impacts of AI.

    • AI can unintentionally perpetuate biases: Two instances showed AI deviating from original art and reinforcing racial biases, emphasizing the need for unbiased AI systems.

      Artificial intelligence (AI) can unintentionally perpetuate biases, as shown in two recent examples. In the first instance, an artist's work was recreated by an AI, leading to a new version that deviated significantly from the original. Although the artist couldn't stop people from creating and sharing the altered work, it highlighted the challenge of controlling how AI interprets and recreates art. In the second example, an AI was used to generate a professional headshot for an Asian MIT student, but the output made her appear white with lighter skin and blue eyes, demonstrating the potential for AI to reinforce racial biases. These incidents underscore the importance of creating robust AI systems that are unbiased and capable of serving diverse populations. However, de-biasing these systems and mitigating their emotionally jarring impacts is a complex issue, and it remains to be seen whether it's a data set problem, a training objective problem, or something else. Overall, these examples serve as a reminder of the potential consequences of using AI without proper consideration of its potential biases.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apology for this one coming out a few days late, got delayed in editing it -Andrey

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    Is Your Data Safe? Unveiling Jamaica's New Data Protection Act

    In this episode, we dive deep into the world of Artificial Intelligence and its ethical implications. We explore how AI is transforming businesses and what ethical considerations come into play. We also discuss the Data Protection Act in Jamaica and its significance in the age of AI. Don't miss this enlightening conversation!

    One Great Studio Prospectus 📑
    Check out our Stocks to watch for 2023 episode 🔮
    GK 2030 vision 👑
    MyMoneyJA Discount code 💲
    Habitica 🎮
    Listen to more episodes 🎧
    Follow us on Twitter 🐥
    Follow us on Instagram 📷

    Remember, the content of all episodes of this podcast is solely the opinions of the hosts and their guests. These opinions should not be misconstrued as recommendations or financial advice for any investment decisions.

    Support the show

    Freaked Out? We Really Can Prepare for A.I.

    OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor, GPT-3.5, on a variety of tasks.

    GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled around the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5’s 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test-takers. (Its predecessor hovered around 46 percent.) These are stunning results, reflecting not just what the model can do but the rapid pace of progress. And OpenAI’s ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve.

    Kelsey Piper is a senior writer at Vox, where she’s been ahead of the curve covering advanced A.I., its world-changing possibilities, and the people creating it. Her work is informed by her deep knowledge of the handful of companies that arguably have the most influence over the future of A.I.

    We discuss whether artificial intelligence has coherent “goals” — and whether that matters; whether the disasters ahead in A.I. will be small enough to learn from or “truly catastrophic”; the challenge of building “social technology” fast enough to withstand malicious uses of A.I.; whether we should focus on slowing down A.I. progress — and the specific oversight and regulation that could help us do it; why Piper is more optimistic this year that regulators can be “on the ball” with A.I.; how competition between the U.S. and China shapes A.I. policy; and more.

    This episode contains strong language.

    Mentioned:

    “The Man of Your Dreams” by Sangeeta Singh-Kurtz

    “The Case for Taking A.I. Seriously as a Threat to Humanity” by Kelsey Piper

    “The Return of the Magicians” by Ross Douthat

    “Let’s Think About Slowing Down A.I.” by Katja Grace

    Book Recommendations:

    The Making of the Atomic Bomb by Richard Rhodes

    Asterisk Magazine

    The Silmarillion by J. R. R. Tolkien

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    “The Ezra Klein Show” is produced by Emefa Agawu, Annie Galvin, Jeff Geld, Roge Karma and Kristin Lin. Fact-checking by Michelle Harris and Kate Sinclair. Mixing by Jeff Geld. Original music by Isaac Jones. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Carole Sabouraud and Kristina Samulewski.

    Stephen Wolfram on AI’s rapid progress & the “Post-Knowledge Work Era” | E1711

    Stephen Wolfram of Wolfram Research joins Jason for an all-encompassing conversation about AI, from the history of neural nets (7:53) to how modern AI emulates the human brain (19:33). This leads to an in-depth discussion about the pace at which AI is evolving (43:46), the “Post-Knowledge Work” era (58:45), the unintended consequences of AI (1:03:52), and much more.

    (0:00) Nick kicks off the show

    (1:24) Under the hood of ChatGPT

    (7:53) What is a neural net? 

    (10:05) Cast.ai - Get a free cloud cost audit with a personal consultation at https://cast.ai/twist

    (11:33) Determining values and weights in a neural net

    (18:28) Vanta - Get $1000 off your SOC 2 at https://vanta.com/twist

    (19:33) Emulating the human brain

    (23:26) Defining computational irreducibility

    (26:14) Emergent behavior and the rules of language

    (31:49) Discovering logic + creating a computational language

    (38:10) Clumio - Start a free backup, or sign up for a demo at https://clumio.com/twist

    (39:38) Wolfram’s ChatGPT plugin

    (43:46) The rapid pace of AI 

    (58:45) The “Post-Knowledge Work” era

    (1:03:52) The unintended consequences of AI 

    (1:11:45) Rewarding innovation 

    (1:16:12) The possibility of AGI 

    (1:20:07) Creating a general-purpose robotic system

    FOLLOW Stephen: https://twitter.com/stephen_wolfram

    FOLLOW Jason: https://linktr.ee/calacanis


    Subscribe to our YouTube to watch all full episodes:

    https://www.youtube.com/channel/UCkkhmBWfS7pILYIk0izkc3A?sub_confirmation=1

    FOUNDERS! Subscribe to the Founder University podcast:

    https://podcasts.apple.com/au/podcast/founder-university/id1648407190

    How to decode a thought

    Can researchers decipher what people are thinking about just by looking at brain scans? With AI, they're getting closer. How far can they go, and what does it mean for privacy?

    To buy tickets to our upcoming live show in New York, go to http://vox.com/unexplainablelive

    For more, go to http://vox.com/unexplainable. It's a great place to view show transcripts and read more about the topics on our show.

    Also, email us! unexplainable@vox.com We read every email.

    Support Unexplainable by making a financial contribution to Vox! bit.ly/givepodcasts

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Bill Gates on Why Humans Can Handle AI Risks

    Bill Gates has penned another letter about AI, this time arguing that humans are more prepared than we might think for the challenges it brings. Read the full piece: https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable

    ABOUT THE AI BREAKDOWN
    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/