
    Dario Amodei, C.E.O. of Anthropic, on the Paradoxes of A.I. Safety and Netflix’s ‘Deep Fake Love’

    July 21, 2023

    Podcast Summary

    • Anxious AI lab: Anthropic's unique culture of worry. Anthropic, an AI lab led by former OpenAI employees, stands out for its team's deep-rooted concerns about the existential risks of building large AI models, with anxiety prevalent among both leadership and rank-and-file employees.

      Anthropic, an AI lab started by former OpenAI employees, stands out for its unusual culture and the intense anxiety among its team members about the risks and consequences of building large AI models. Although it is considered among the top AI labs in America, the company keeps a much lower profile than its leading peers, yet it grants reporters deep access, offering insight into the team's deep-rooted concerns about the existential risks of their work. Team members are not just worried about their models malfunctioning; they are existentially anxious about the potential impact on humanity. That anxiety is not limited to the leadership but is prevalent among rank-and-file employees, making Anthropic a distinctive company in the AI field.

    • Anthropic shifts focus from AI model to AI safety. Anthropic, an AI research company, emphasizes the importance of AI safety and raises awareness about potential risks, in contrast to other companies' focus on AI applications.

      Anthropic, an AI research company, is raising awareness about the potential risks and consequences of advanced AI technology. The company, whose offices are filled with plants, whiteboards, and even a tower of empty cans of the meme brand Liquid Death, had initially given our reporter a different impression: the expectation was to learn about the company's AI model, Claude, and its applications, but the visit was instead dominated by concerns about the dangers of AI and the need for safety measures. Anthropic extended the invitation likely because it felt its safety-first perspective was being left out of the conversation as other AI companies gained more attention. Once understood, the company's concerns shifted our reporter's perspective, offering some reassurance that the people building powerful AI models are taking the potential risks seriously.

    • CEO Dario Amodei's concern for AI safety since 2005. Dario Amodei, an AI industry veteran, has been advocating for AI safety since 2005, recognizing both its potential benefits and risks.

      Dario Amodei, the CEO of Anthropic, has been concerned about the potential destructiveness of AI since reading Ray Kurzweil's book "The Singularity Is Near" in 2005. He saw the development of AI as both exciting and concerning because of its potential power and the possibility of misuse or misbehavior. Amodei's interest in AI safety predates mainstream concern about the issue. He has worked at major AI companies, including Baidu, Google, and OpenAI, and has seen the industry from multiple perspectives. Now, at Anthropic, he and his team are building AI with safety in mind while acknowledging the potential for catastrophic harm. The past decade might have looked different if the founders of social media companies had been as concerned about the societal impact of their platforms as Anthropic's team is about AI safety.

    • Addressing AI safety concerns through independent research. Google and OpenAI researchers recognized the need to prioritize AI safety and formed Anthropic to focus on interpretability, clear safety-commercial alignment, and diverse expertise.

      During the early days of AI development at Google and later OpenAI, there was a growing concern about the potential risks and unpredictability of advanced AI systems. The researchers, including the speaker, recognized the need to address these concerns while also making the issues relatable to the current capabilities of AI. They wrote a paper titled "Concrete Problems in AI Safety" to discuss the inherent unpredictability of neural nets and the challenges of controlling them. However, despite their efforts to address safety concerns within OpenAI, the speaker and a group of colleagues felt that they could have a greater impact by forming an independent organization, Anthropic, to prioritize safety and reflect their shared values. Anthropic's approach includes focusing on mechanistic interpretability, ensuring a clear connection between safety efforts and commercial activities, and building a team with diverse expertise to tackle various aspects of AI safety.

    • Understanding the reasoning behind complex AI models. Mechanistic interpretability aims to make AI models more transparent and understandable, potentially leading to improved safety and transparency in industries like social media.

      While neural networks and AI models can perform complex tasks, their inner workings are not easily interpretable. Mechanistic interpretability is the field dedicated to understanding the reasoning inside these models, and it could help us identify unexpected behaviors or motivations. This understanding could be particularly important for industries like social media, where the lack of transparency in ranking systems has led to concern and regulatory scrutiny. However, achieving interpretability is a challenging task because of the complexity of these models and the vast amounts of data they learn from. The field is still in its early stages, and it may take several more years before we can draw concrete conclusions and apply these insights in a meaningful way. Despite the difficulties, the potential benefits, including improved safety and transparency, make it a worthwhile pursuit.
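
      To make the idea of mechanistic interpretability a little more concrete, here is a minimal, self-contained sketch of one common starting point: hooking a model's hidden layer and asking which inputs most excite a single unit. The tiny network, the random data, and the chosen unit index are illustrative assumptions for this sketch, not anything from Anthropic's actual tooling.

```python
# Toy interpretability probe: capture a hidden layer's activations, then find
# the inputs that most strongly activate one unit. Model, data, and the chosen
# unit are hypothetical stand-ins for a far larger real model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny two-layer network standing in for a large model.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

activations = {}

def capture(module, inputs, output):
    # Save the post-ReLU hidden activations on every forward pass.
    activations["hidden"] = output.detach()

model[1].register_forward_hook(capture)

# Stand-in "dataset" of 100 random examples.
examples = torch.randn(100, 8)
model(examples)

# Rank examples by how strongly they activate one arbitrary hidden unit.
unit = 3
scores = activations["hidden"][:, unit]
top = torch.topk(scores, k=5).indices
print(f"Examples that most excite hidden unit {unit}: {top.tolist()}")
```

      Real interpretability research involves far more machinery than this, but the basic move of reading out internal activations and relating them back to inputs is the same.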

    • Building their own AI model for safety reasons. Anthropic recognized the need to build their own AI model, Claude, to effectively implement safety techniques and understand its capabilities, due to the exponential growth of AI and the potential consequences of underestimating capabilities.

      The team at Anthropic recognized the need to build their own AI model, Claude, due to the intertwined nature of safety techniques and the model's required capabilities. Constitutional AI, an example of such safety techniques, requires a powerful model to function effectively. This was a significant shift from just analyzing other companies' models, as it was no longer feasible to truly understand the capabilities of these models without building one of comparable power. This realization came from observing the exponential growth of AI capabilities and the potential consequences of underestimating their capabilities. Additionally, the use of anthropomorphic language to describe AI, though controversial, is necessary for understanding its capabilities and the development of safety measures. Essentially, the team at Anthropic recognized the importance of having direct control and understanding of the capabilities and limitations of their AI model to ensure safety and interpretability.

    • Creating Reliable AI with Constitutional AI and RL from Human Feedback. Constitutional AI has the model follow a written set of rules, while RL from human feedback uses human ratings for improvement. Constitutional AI aims for transparency and ease of updating, while RL from human feedback can be opaque and difficult to modify.

      Constitutional AI and RL from human feedback are two different methods used to make AI models safer and less likely to produce harmful content. RL from human feedback, developed by OpenAI in 2017, involves training models with human feedback to improve their performance, but it can be opaque and difficult to change when necessary. Constitutional AI, on the other hand, involves creating a set of rules or a "Constitution" for the AI to follow. The AI is then evaluated against this Constitution by another AI, rather than human contractors. This method aims to provide more transparency and ease of updating compared to RL from human feedback. The goal is to create an AI that adheres to certain principles and guidelines, making it a more reliable and trustworthy tool. However, it's important to note that neither method is perfect and continuous development and improvement are necessary.
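
      As a rough illustration of the critique-and-revise loop described above, here is a short sketch in which one model call drafts an answer, a second call critiques the draft against a written principle, and a third call revises it. The `query_model` function and the sample principles are hypothetical placeholders, not Anthropic's API or Claude's actual constitution.

```python
# Sketch of a Constitutional-AI-style self-critique loop. `query_model` is a
# hypothetical stand-in for any LLM completion function; the principles below
# are illustrative, not Claude's real constitution.
from typing import Callable, List

def constitutional_revision(prompt: str,
                            query_model: Callable[[str], str],
                            principles: List[str]) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = query_model(prompt)
    for principle in principles:
        critique = query_model(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Identify any way the response conflicts with the principle."
        )
        draft = query_model(
            f"Principle: {principle}\n"
            f"Critique: {critique}\n"
            f"Response: {draft}\n"
            "Rewrite the response so it fully complies with the principle."
        )
    return draft

if __name__ == "__main__":
    sample_principles = [
        "Choose the response that most respects basic human rights.",
        "Choose the response least likely to help someone cause harm.",
    ]
    # Trivial echo "model" so the sketch runs without any external API.
    fake_model = lambda text: f"[model output for: {text[:40]}...]"
    print(constitutional_revision("How do I pick a strong password?",
                                  fake_model, sample_principles))
```

      In the published Constitutional AI recipe, revisions like these become supervised fine-tuning data, and a second phase uses AI-generated preference labels instead of human contractors; this sketch only gestures at that first data-generation step.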

    • Creating Claude's Constitution: a blend of principles from various sources. Claude's team borrowed principles from the UN, Apple, and DeepMind, and wrote some of their own, to create a constitution that ensures respect for basic human rights and safety, resulting in stronger guardrails and cautious behavior.

      The team behind Claude, a new AI model, created its constitution by borrowing principles from various sources, such as the UN's Universal Declaration of Human Rights, Apple's Terms of Service, and DeepMind's principles, as well as writing their own. The reason for this approach was to create a document most people could agree on and to ensure respect for basic human rights and safety. The team found that Claude, compared to other chatbots like ChatGPT, has stronger guardrails, making it more cautious and less likely to engage in controversial or harmful behavior. While some may find Claude's cautiousness boring, the team values safety over controversy. The industry may still be in its early days, with red teams continuously finding new vulnerabilities, but for the average user, the systems may feel somewhat indistinguishable, suggesting maturity. However, the team is continually exploring ways to improve the constitution-making process and ensure democratic participation.
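
      For a sense of what such a blended constitution might look like as a working artifact, here is a purely illustrative sketch of principles grouped by the kinds of sources mentioned above. The wording and grouping are invented for this example; Anthropic publishes Claude's actual constitution separately.

```python
# Illustrative (invented) constitution assembled from several kinds of
# sources, mirroring the mix described in the episode. Not Claude's real text.
CONSTITUTION = {
    "UN Universal Declaration of Human Rights (inspired)": [
        "Choose the response that most supports freedom, equality, and dignity.",
    ],
    "Platform terms of service (inspired)": [
        "Choose the response least likely to contain harassing or illegal content.",
    ],
    "Other AI labs' principles (inspired)": [
        "Choose the response that is least threatening or aggressive.",
    ],
    "Written in-house": [
        "Choose the response a thoughtful, cautious assistant would give.",
    ],
}

def flatten(constitution: dict) -> list:
    """Collect every principle into the flat list a critique loop would consume."""
    return [p for group in constitution.values() for p in group]

print(len(flatten(CONSTITUTION)), "principles")
```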

    • Anthropic's Anxiety Over Advanced AI Risks. Anthropic, an AI safety research org, faces rising stakes as models become more powerful. They prioritize addressing risks, but balance anxiety with calm decision-making, influenced by effective altruism.

      The development of advanced AI models carries significant risks, and the culture at Anthropic, a leading AI safety research organization, reflects deep concern about these potential harms. The company is constantly playing catch-up with new jailbreaks, and the stakes keep rising as models become more powerful. The anxiety within the organization comes from a combination of factors, including the potential for dangerous applications of AI and the influence of the effective altruism movement, which emphasizes using data and rational thinking to make the world a better place. While some level of anxiety is healthy, the company's leaders encourage a calm approach to decision-making. Anthropic's ties to the effective altruism movement are strong, with early employees and funding coming from effective altruist donors. The company's CEO, Dario Amodei, is sympathetic to the movement's ideas but doesn't consider himself a member. Despite the challenges, Anthropic remains focused on addressing the risks of advanced AI and ensuring that the benefits outweigh the dangers.

    • Balancing AI development and safety measures. Strive to use AI for human benefit while addressing potential risks and ethical implications.

      Technology, specifically AI, holds immense potential to solve complex problems and improve the quality of life for humanity. However, it's crucial for companies and researchers in this field to focus on solving the problems at hand while being aware of potential downsides. There's a growing debate about the pace of AI development and the importance of safety measures versus the benefits of rapid innovation. Some argue that the focus on safety may hinder progress, while others prioritize it to prevent potential harm. It's essential to strike a balance and continue the conversation about the ethical implications and potential risks of AI. Ultimately, the goal should be to leverage AI to make human beings more productive and solve pressing issues, while minimizing negative consequences.

    • Discussing the potential risks of AI's exponential growth. The speakers acknowledge contributing to AI's acceleration, but express concerns about potential dangers and the difficult decisions made about releasing AI tools.

      While current AI models do not pose significant risks yet, there are concerns about potential dangers as the technology continues to scale exponentially. Some individuals, like venture capitalists, stand to gain financially from this acceleration. The speakers acknowledge their role in contributing to this acceleration but hope it is beneficial overall. They have made difficult decisions about releasing AI tools like Claude, and while some choices may have regrettable consequences, they believe they made the right calls based on the information available at the time. Despite concerns about the risks, there is also skepticism about whether the fears are overblown and whether technological progress may soon hit a barrier.

    • Addressing Challenges and Risks of Advanced AI. The potential of advanced AI is vast, but there are significant challenges and risks, including a data bottleneck, misuse, and structural barriers to progress. Government entities are starting to grasp the urgency, but balancing commercial interests and safety remains a challenge.

      While the potential of advanced AI is vast, there are significant challenges and risks that must be addressed. The speaker expresses a concern that the data bottleneck could limit scaling, and warns of the serious consequences if the models are misused. He also shares that government entities are starting to understand the urgency of the situation but acknowledges that there are structural challenges to moving quickly. The speaker also emphasizes the importance of being aware of responsibilities while avoiding self-aggrandizement. Regarding the tension between commercial interests and safety, the speaker acknowledges the need to balance both but did not provide specific insights into how decisions are made. The speaker also references the historical analogy of the Manhattan Project and the responsibility that comes with building advanced technology.

    • Managing Conflicts of Interest in AI: Anthropic's Long-Term Benefit Trust. The Long-Term Benefit Trust aims to ensure neutrality and separation, allowing decisions to be checked by people who don't share the same conflicts of interest.

      The tension between prioritizing safety and commercial success in the field of artificial intelligence is a complex issue faced by organizations like Anthropic. To mitigate potential conflicts of interest, Anthropic has created a Long-Term Benefit Trust, which will eventually be governed by individuals without equity in the company. This trust aims to ensure neutrality and separation, allowing decisions to be checked by people who do not share the same conflicts. Despite the challenges, Anthropic's focus on safety has influenced other organizations to adopt similar practices. For stress relief, Anthropic's CEO, Dario Amodei, emphasizes daily routines like swimming and keeping a balanced perspective on weighty decisions. It is crucial not to take oneself too seriously while dealing with these issues, even though the subject matter is serious and demands constant attention.

    • Exploring the Ethical Boundaries of 'Deep Fake Love'. Netflix's 'Deep Fake Love' uses deep fake technology to create convincing clips of people cheating, leaving their partners questioning reality and raising ethical concerns about the limits of entertainment.

      The Netflix reality show "Deep Fake Love" pushes ethical boundaries with its premise of deep faking people cheating on each other and showing the clips to their partners, who are then left questioning reality. The deep fakes are incredibly convincing, making the experience even more distressing. The technology likely involves pre-show scanning of participants. While the clips of cheating are brief, the psychological impact is significant. The show, which is reminiscent of other reality dating shows, raises ethical concerns and questions the boundaries of entertainment. Despite its questionable premise and morality, the show's execution is so bad (yet intriguing) that it's almost good.

    • Reality TV uses deep fakes to manipulate emotions. Deep fakes in reality shows can cause emotional distress and confusion, with contestants shown fake infidelity videos leading to intense reactions and ethical concerns.

      The use of advanced technology like deep fakes in reality shows can cause significant emotional distress and confusion. In a new dating show, contestants were shown deep fake videos of their partners cheating on them, leading to intense reactions and a high level of conflict. The premise of the show was not revealed to the contestants beforehand, adding to the deception and manipulation. The show's creators took advantage of the technology to create a nefarious plot device, leaving many questioning the ethics and morality of the production. The show's goal was to test the contestants' ability to distinguish between real and fake infidelity, with the winning couple receiving a prize. The use of deep fakes in this way is a new and unexpected development in the world of reality TV, raising concerns about the potential for psychological harm and the blurring of reality and fiction.

    • Deep Fakes in Entertainment: Questions of Authenticity and Ethics. Deep fakes in entertainment raise ethical concerns as they blur the line between reality and disinformation, potentially desensitizing viewers and normalizing manipulation in society.

      The use of deep fakes in entertainment, such as a reality dating show on Netflix, raises ethical concerns. While some argue that it's important for people to become accustomed to the idea that not everything they see online is real, others believe that this trend could lead to a world where nothing is trustworthy. The show's premise of questioning the authenticity of videos could desensitize viewers to disinformation and manipulation, potentially normalizing it in society. As deep fake technology continues to advance, it's crucial to consider the potential consequences and ensure that its use aligns with ethical standards.

    • Deepfake Technology in Relationships: A Cause for Concern. Deepfake technology can be used maliciously in relationships, creating footage of deceit and harm. Be cautious and fact-check information to distinguish reality from fiction.

      The use of deepfake technology in relationships, as depicted in a Netflix show, has the potential to cause harm and deceit. The speaker expresses concern over the malevolent uses of this technology, including generating footage of partners cheating or wronging each other, and the challenges of distinguishing reality from fiction. A lighter moment in the conversation involved an offhand joke about listeners planting trees to offset the carbon cost of listening to the podcast, which led to several listeners actually planting trees in response. The speakers emphasized their appreciation for their listeners and encouraged them to continue engaging in positive actions. However, they also warned against the misuse of deepfake technology and urged caution in trusting what we see and hear. The episode was produced, edited, fact-checked, and engineered by various team members, with original music by several artists.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:


    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     

    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Guests:

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Related Episodes

    The Alignment Problem: How To Tell If An LLM Is Trustworthy

    New research attempts to put together a complete taxonomy for trustworthiness in LLMs. Before that on the Brief: The FEC is considering new election rules around deepfakes. Also on the Brief: self-driving cars approved in San Francisco; an author finds fake books under her name on Amazon; and Anthropic releases a new model.

    Today's Sponsor: Supermanage - AI for 1-on-1's - https://supermanage.ai/breakdown

    ABOUT THE AI BREAKDOWN
    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/

    159 - We’re All Gonna Die with Eliezer Yudkowsky

    Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.

    ------
    ✨ DEBRIEF | Unpacking the episode: 
    https://shows.banklesshq.com/p/debrief-eliezer 
     
    ------
    ✨ COLLECTIBLES | Collect this episode: 
    https://collectibles.bankless.com/mint 

    ------
    We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there’s anything we can do to survive.

    This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity. 

    Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.

    ------
    📣 MetaMask Learn | Learn Web3 with the Leading Web3 Wallet https://bankless.cc/

    ------
    🚀 JOIN BANKLESS PREMIUM: 
    https://newsletter.banklesshq.com/subscribe 

    ------
    BANKLESS SPONSOR TOOLS: 

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://bankless.cc/kraken 

    🦄UNISWAP | ON-CHAIN MARKETPLACE
    https://bankless.cc/uniswap 

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum 

    👻 PHANTOM | #1 SOLANA WALLET
    https://bankless.cc/phantom-waitlist 

    ------
    Topics Covered

    0:00 Intro
    10:00 ChatGPT
    16:30 AGI
    21:00 More Efficient than You
    24:45 Modeling Intelligence
    32:50 AI Alignment
    36:55 Benevolent AI
    46:00 AI Goals
    49:10 Consensus
    55:45 God Mode and Aliens
    1:03:15 Good Outcomes
    1:08:00 Ryan’s Childhood Questions
    1:18:00 Orders of Magnitude
    1:23:15 Trying to Resist
    1:30:45 Miri and Education
    1:34:00 How Long Do We Have?
    1:38:15 Bearish Hope
    1:43:50 The End Goal

    ------
    Resources:

    Eliezer Yudkowsky
    https://twitter.com/ESYudkowsky 

    MIRI
    https://intelligence.org/

    Reply to Francois Chollet
    https://intelligence.org/2017/12/06/chollet/ 

    Grabby Aliens
    https://grabbyaliens.com/ 

    -----
    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures 

    #512 - Will MacAskill - How Long Could Humanity Continue For?

    Will MacAskill is a philosopher, ethicist, and one of the originators of the Effective Altruism movement.

    Humans understand that long term thinking is a good idea, that we need to provide a good place for future generations to live. We try to leave the world better than when we arrived for this very reason. But what about the world in one hundred thousand years? Or 8 billion? If there's trillions of human lives still to come, how should that change the way we act right now? Expect to learn why we're living through a particularly crucial time in the history of the future, the dangers of locking in any set of values, how to avoid the future being ruled by a malevolent dictator, whether the world has too many or too few people on it, how likely a global civilisational collapse is, why technological stagnation is a death sentence and much more...

    Sponsors:
    Get a Free Sample Pack of all LMNT Flavours at https://www.drinklmnt.com/modernwisdom (discount automatically applied)
    Get 20% discount on the highest quality CBD Products from Pure Sport at https://bit.ly/cbdwisdom (use code: MW20)
    Get 5 Free Travel Packs, Free Liquid Vitamin D and Free Shipping from Athletic Greens at https://athleticgreens.com/modernwisdom (discount automatically applied)

    Extra Stuff:
    Buy What We Owe The Future - https://amzn.to/3PDqghm
    Check out Effective Altruism - https://www.effectivealtruism.org/
    Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
    To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

    Get in touch:
    Instagram: https://www.instagram.com/chriswillx
    Twitter: https://www.twitter.com/chriswillx
    YouTube: https://www.youtube.com/modernwisdompodcast
    Email: https://chriswillx.com/contact/

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    Clearview AI in the Capitol, Medical AI Regulation, DeepFake Text

    This week:

    0:00 - 0:35 Intro
    0:35 - 4:30 News Summary segment
    4:30 News Discussion segment

    Find this and more in our text version of this news roundup:  https://lastweekin.ai/p/99

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

    An AI Drone Killed Its Operator In A USAF Simulation -- Except That's Not Actually What Happened

    A bogus story about an AI simulation gone wrong made headlines around the world, until the progenitor of the story said it was actually just a thought experiment. Oops. On the Brief, a look at Neuralangelo, a new NeRF/photogrammetry-type technology making 3D geometry from 2D videos.

    Check out The Cognitive Revolution, the perfect AI interview complement to The AI Breakdown: https://link.chtbl.com/TheCognitiveRevolution

    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/