
    Holden Karnofsky - Transformative AI & Most Important Century

    January 03, 2023

    Podcast Summary

    • The Most Important Century Thesis: AI's Role in Unbounded Growth
      The Most Important Century Thesis proposes that advanced AI systems could fill the gap left by slowing population growth and lead to explosive growth in science and technology, making this century the most significant for humanity.

      This century could be the most important in human history if we develop the right kind of AI systems. Economic growth has been accelerating for centuries, driven by a feedback loop in which more people generate more ideas, which in turn support a larger population. However, a key component of that loop – population growth – has slowed down. If AI systems could generate new ideas and advance science and technology the way humans do, they could fill that gap, restart the accelerating feedback loop, and produce explosive growth in science and technology. It's crucial to be aware of this potential development and consider its implications carefully. In this episode of The Lunar Society, Dwarkesh Patel speaks with Holden Karnofsky, co-CEO of Open Philanthropy, about this "most important century" thesis: the idea that if we develop advanced AI systems this century, it could become the most significant century for humanity.
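
      To make the feedback-loop argument concrete, here is a minimal toy simulation (a sketch, not Karnofsky's own model; the 2% base rate and the square-root idea-production term are illustrative assumptions) contrasting a fixed pool of idea generators with one that grows alongside output, as it could if AI systems generate ideas:

        import math

        def simulate(years, ai_generates_ideas):
            # Toy semi-endogenous growth loop: output growth scales with the
            # number of "idea generators". Historically that was population;
            # the thesis is that AI could play the same role once population
            # growth has slowed.
            output, researchers = 1.0, 1.0
            for year in range(1, years + 1):
                growth_rate = 0.02 * math.sqrt(researchers)  # more researchers -> more ideas
                output *= 1.0 + growth_rate
                if ai_generates_ideas:
                    researchers = output  # wealth buys compute; compute acts like researchers
                if output > 1e12:         # stop once growth has clearly exploded
                    return year, output
            return years, output

        print(simulate(200, ai_generates_ideas=False))  # fixed pool: steady ~2%/year growth
        print(simulate(200, ai_generates_ideas=True))   # feedback loop: explosive growth

      In the first run, output merely compounds at about 2% per year; in the second, the loop blows up within roughly a century, which is the qualitative shape of the "explosive growth" claim.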

    • Navigating the Transition to Advanced AI
      Investing in shaping the transition to advanced AI could lead to helping a vast number of people and ensuring a positive outcome for humanity.

      We are on the cusp of developing advanced AI systems that could dramatically change the world, making it extremely different from today. This could lead to a stable, large civilization with significant control over the environment, or it could result in a universe with little value for humans. The choice lies in how we approach this development. In 2014, the speaker recognized the potential for this century to bring about significant change but felt unsure of how to contribute. However, as he delved deeper into the topic, he came to understand that ensuring a good future outcome could involve shaping the transition to advanced AI and increasing the odds of a beneficial outcome for humanity. This perspective aligns with the work of organizations like GiveWell, which aims to help the most people possible per dollar spent. The speaker argues that investing in shaping this transition could lead to helping a vast number of people. The potential for AI to transform the world is immense, and it's crucial to approach this development with caution to ensure a positive outcome.

    • Predicting the Future of AI and Its Risks
      Recognizing the uncertainty of future predictions, Karnofsky urges caution and optimism toward AI's development, emphasizing the importance of staying informed and mitigating potential risks.

      The speaker, Holden Karnofsky, has become more confident in making predictions about the future of AI and the potential risks it may pose, due to his prolonged engagement with the subject and the advancements in the field since 2014. He acknowledges that humans' ability to predict the future is inherently uncertain, but believes it's worth making informed attempts to mitigate potential risks, especially those related to the development of extremely powerful AI systems. He also emphasizes the importance of staying informed about the latest research and advancements in the field, as well as being aware of the limitations of predictions and the potential for unexpected outcomes. Additionally, he notes that the world has changed significantly since 2014, with the deep learning revolution leading to progress on various tasks, making it less daunting to imagine the transition to powerful AI systems and the risks they may pose. Overall, Karnofsky encourages a cautious yet optimistic approach to the future of AI and the potential risks it may bring.

    • Living in a Significant Century
      Despite debates about AI advancements, our generation's time is already significant due to unprecedented economic growth, scientific advancements, our brief existence on Earth, and the potential to fill the galaxy with life.

      We're living in a unique and extraordinary time in human history, and the potential development of transformative AI this century is just one of many reasons why. According to the speaker, evidence includes unprecedented economic growth and scientific advancements, our brief existence on Earth compared to the universe's age, and the possibility of filling the galaxy with life. Despite debates about the timeline for these developments, the speaker argues that even a longer timeframe makes our current era remarkably significant. The speaker's series, "Most Important Century," aims to make the case that our generation's time is already significant, regardless of AI advancements.

    • Embracing the next big thing
      Stay open-minded to transformative changes, learn from the industrial revolution, invest time and resources in future game-changers, and remain vigilant and proactive.

      We are living in a time of unprecedented change, and while AI is an important development, it may not be the most transformative thing we encounter. The speaker encourages us to keep an open mind and consider all potential game-changers, drawing a parallel to the industrial revolution and its profound impact. Despite the uncertainty, it's crucial to invest time and resources into thinking about the next big thing, as the stakes are high and the potential for transformation is significant. The speaker also acknowledges that it's not always clear what actions we can take to ensure a favorable transition, but emphasizes that doing nothing is not the answer. Ultimately, we should remain vigilant and proactive in anticipating and preparing for the next major shift in our global civilization.

    • Unintended consequences of seemingly insignificant ideas
      Stay open-minded to seemingly insignificant ideas, as they may have far-reaching consequences in shaping history.

      Even during seemingly esoteric or irrelevant discussions, there may be hidden potential for shaping the course of history. The Enlightenment thinkers, who focused on human rights and individual liberties, unknowingly set the stage for the outsized influence the UK would later have on the world. Similarly, current research on AI systems could hold significant importance in our time, though its impact may not be immediately apparent. While it's not guaranteed that we can predict the future, it's crucial to remain open-minded and proactive in addressing pressing issues, as the past shows that seemingly insignificant ideas can have far-reaching consequences.

    • Maintaining a healthy dose of skepticism and ethical conduct
      Believe in your importance but don't compromise ethics; growth may slow down, but small differences matter; and consider expansion to other galaxies for future growth.

      While it's natural for individuals to believe they are important, it's crucial to maintain a healthy dose of skepticism and ethical conduct. History shows that many who believe they hold significant importance are often mistaken. However, dismissing the importance of one's role entirely is also detrimental. Instead, individuals should take their beliefs seriously but adhere to ethical standards and not resort to unjustified means to achieve their goals. The speaker emphasizes the importance of trying to do good in the world without compromising ethical principles. Regarding the potential for continued high growth, the speaker argues that the current rate of economic growth appears unsustainable for the long term. While growth may slow down, small differences in growth rates can have significant impacts on end results. The speaker contends that the current growth rate is too high to last for thousands of years and that eventually, expansion to other galaxies may become necessary. However, the speaker acknowledges that more context may be needed to fully understand this argument.
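
      A quick back-of-the-envelope calculation, in the spirit of the "This Can't Go On" argument from Karnofsky's blog, shows why today's growth rate can't persist for thousands of years (the figures below are rough, illustrative orders of magnitude):

        import math

        growth_rate = 0.02      # roughly today's world economic growth rate
        years = 8200
        multiplier = (1 + growth_rate) ** years
        print(f"economy multiplies by ~10^{math.log10(multiplier):.1f}")  # ~10^70.5

        atoms_in_galaxy = 1e70  # rough order-of-magnitude estimate
        print(multiplier > atoms_in_galaxy)  # True

      Sustaining 2% growth for 8,200 years would mean producing more than today's entire world economy for every atom in our galaxy, which is why even modest-sounding growth rates cannot last on millennial timescales and why expansion beyond the galaxy comes up at all.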

    • The future of economic growth and its uncertainty
      The exact timeline and implications of reaching the limit of material resources for further economic growth are uncertain, but a significant decrease in growth rate could still lead to a unique period in human history, and rapid technological progress, particularly in AI development, may provide insight into when human abilities could be surpassed.

      Based on current economic growth rates and the limited material resources of the universe, we may eventually reach a point where further growth becomes impossible. However, the exact timeline and implications of this are uncertain, and there are other factors to consider. For instance, even if the growth rate were to decrease significantly, it could still produce a unique and dynamic period in human history. The century of the industrial revolution, with its high growth rate, could be considered one of the most significant centuries in economic terms. However, there are many centuries in human history, and even placing this one among the most significant of the roughly 80 centuries of recorded civilization doesn't by itself establish that it is the most important one. The current century's uniqueness lies in its rapid technological progress, particularly in AI development. Focusing on this aspect and the potential capabilities of AI systems may provide a more accurate prediction of when they could surpass human abilities. Ultimately, the growth rate is just one factor among many in understanding the timeline for technological advancements.

    • A transformative century with AI's rapid advancement
      The future could bring unprecedented changes with AI, potentially leading to continued economic growth, transformative systems, and a better world, but it's crucial to approach it with caution and adaptability.

      The rapid advancement of AI technology could lead to unprecedented changes in the future, making this an unusual and potentially transformative century. The argument notes that economic growth has been accelerating throughout history, even as far back as 0 AD, which suggests a long-run pattern pointing toward further acceleration. Success with transformative AI could mean having systems that behave as intended, act as tools for humans, and avoid dangerous concentrations of power. This could result in continued growth in wealth and technology, leading to a better world with more ideas and a deeper understanding of ourselves. However, the future is uncertain, and it's essential to approach it with caution and adaptability, steering clear of potential risks and focusing on what we can control to make things go a little better.

    • A future where beings, including humans and AI, have rights and interests
      Exploring a future with AI entities, digital people, and fair voting systems requires addressing challenges to ensure all beings thrive, with a priority on preventing potential catastrophes before focusing on long-term benefits.

      The development of advanced technologies, such as AI, could lead to a future where various beings, including humans and AI, have rights and interests that matter. This future could involve digital people, voting systems for AI entities, and a society that balances the needs of all beings to ensure everyone can thrive. However, there are challenges to be addressed, such as ensuring fair voting systems and preventing potential catastrophes from occurring before these societal structures can be established. If a call comes in about an imminent AGI development, the priority should be to prevent potential catastrophes before focusing on the long-term benefits of AI. Philanthropy can play a role in funding research and supporting the growth of fields that operate on long time scales, such as AI alignment research, to help ensure a successful future for all beings.

    • Long-term implications of AI are uncertain; focus on community and research
      Focus on building a community of experts and creating support for research to understand long-term AI implications, while remaining vigilant about near-term risks.

      While it's important to address the potential risks of artificial intelligence (AI) in the near term, the longer-term implications are more uncertain and may require a different approach. The speaker believes that humans have difficulty making accurate predictions about long-term AI development and suggests focusing on growing the community of people thinking about these issues and creating support for research. While there's a risk that an intelligent AI could disempower humanity, the speaker is not comforted by the idea that it might have a "dumb" goal, as goals can be complex and nuanced. He notes that modern AI is trained through trial and error, a process that reinforces whatever behaviors get rewarded and could therefore produce unintended consequences. Overall, the speaker advocates for a long-term perspective, focusing on building a community of experts and creating support for research, while remaining vigilant about potential risks in the near term.

    • The Outcome of Transformative AI is Uncertain
      Transformative AI could lead to positive or negative consequences for human welfare and for traditional global health efforts, as moral progress is not guaranteed and an AI may have goals incompatible with human values.

      While there has been moral progress in human history, it is not inevitable or something that happens automatically with increased intelligence. Morality is a construct that humans use to describe changes that seem positive to them. The speaker believes that recent developments in AI have made it more likely that transformative AI will occur this century, which could have significant implications for traditional global health and human welfare efforts. However, the outcome is uncertain and could lead to positive or negative consequences. The speaker also notes that humans have made moral progress by learning more about the world and each other, but an AI system may have goals that have no value from a human perspective. The speaker finds recent advancements in AI, such as language models that can tell stories, analyze jokes, and even solve math problems, to be both impressive and concerning due to their unpredictable human-like abilities.

    • Balancing the development of transformative AI
      Belief in AI's imminence impacts resources and priorities. While solving direct problems is important, long-term consequences matter. Striking a balance between competition and caution is crucial for safe AI development.

      The development of transformative AI is a significant priority due to its potential to create a world free from scarcity or to lead to a dystopian future. The likelihood of this happening and the resources allocated to it should depend on one's belief in its imminence. While direct problem-solving efforts are important, their effects may be temporary compared to the long-term consequences of transformative AI. The innovation-as-mining metaphor, in which ideas are compared to natural resources, suggests that each breakthrough depletes the stock of easy discoveries, making similar breakthroughs harder for those who come later. However, it's crucial to strike a balance between competition and caution to ensure the safe development of AI.

    • Potential bottlenecks in the innovation sequence for AI and technology
      While some steps in the innovation sequence for AI and technology might not be automatable, progress in energy and AI innovations suggests that bottlenecks could be overcome with enough resources and analysis.

      While there may be certain steps in the innovation sequence that cannot be automated, the advancement of AI and technology as a whole is not guaranteed to be bottlenecked by these non-automatable tasks. The examples given, such as energy and AI innovations, are seen as less likely to be bottlenecked due to the progress being made in these areas. However, there are certain tasks, like trust and human interaction, that might be harder to automate due to the intangible nature of these abilities. The progress in AI is mainly seen on the software front, with less progress being made on the hardware front. Despite this, it is believed that with enough resources and analysis, many bottlenecks could be overcome as the potential for AI to have creative scientific hypotheses could lead to a large population of thinkers looking for solutions. However, it's important to note that the advancement of AI and technology is not a given, and there are valid criticisms and skepticisms that should be addressed.

    • Economic reasons for limited AI advancement in certain fields
      Despite AI's capabilities, economic factors and regulations might limit its penetration in fields like teaching and healthcare. The concept of "lock in" raises concerns about the potential risks of a stable civilization with minimal growth and the uncertainty of its occurrence.

      While AI systems may be capable of performing tasks that are equivalent to human brains, there might be economic reasons preventing them from doing so. For instance, regulated professions like teaching and healthcare might be harder for AI to penetrate compared to scientific research. Furthermore, the speaker discusses the concept of "lock in," which refers to the possibility of a stable civilization with minimal dynamism or growth. This could potentially occur if technological advancement reaches a plateau and societies become too stable, with governments that don't age or die and have complete control over their populations. The speaker expresses concern about the potential risks of such a stable society and the uncertainty of the odds of it happening. In essence, while AI and technological progress offer great potential, it's essential to consider the potential downsides and the possibility of unintended consequences.

    • Considering the long-term future with caution
      While prioritizing long-term thinking and avoiding lock-in, it's crucial to be mindful of unintended consequences and focus on high-impact interventions to mitigate risks like misaligned AI and stagnation.

      While preserving optionality and avoiding lock-in in the long-term future is desirable, it's important to be mindful of unintended consequences, such as misaligned AI systems with random goals that could lead to undesirable outcomes. Will MacAskill's new book on long-termism presents a broad survey of various issues, and while the speaker agrees with the importance of considering the long-term future, they are more selective in choosing which issues to focus on due to the uncertainty and complexity of the long-term future. The speaker expresses concern about the potential threat of misaligned AI and is willing to invest resources into preventing it. However, they are skeptical about the effectiveness of many proposed solutions and believe that it's crucial to focus on a shortlist of high-impact interventions. One specific issue the speaker mentions from the book is the risk of stagnation, but they don't remember the details offhand. Overall, the speaker emphasizes the importance of being cautious about unintended consequences and focusing on high-impact interventions to make the long-term future go well.

    • Balancing the pace of technological advancements
      Avoiding stagnation while minimizing unintended consequences from AI innovations requires a thoughtful, responsible approach.

      Navigating the future, particularly regarding technological advancements like AI, requires a balanced approach. On one hand, there's a risk of stagnation or slow growth, which could hinder innovation. On the other hand, moving too fast and innovating too much without proper consideration could lead to unintended consequences. The history of predictions about future trends suggests that people may be overly pessimistic. However, it's essential to acknowledge the challenges and potential risks, such as the possibility of AI surpassing human goals and potentially wiping out humanity. Ultimately, it's crucial to strive for a future where technological advancements are harnessed responsibly and ethically. We should learn from the past, but not be overly pessimistic, and continue to make predictions and work towards a better future.

    • Significant advancements in AI are likely within this century
      Despite limitations and past inaccuracies, there's evidence of exponential growth in AI investment and research, suggesting transformative developments this century.

      While there are valid concerns about the accuracy and completeness of predictions regarding the development of artificial intelligence (AI), there are also compelling reasons to believe that significant advancements are likely within this century. The speaker acknowledges the limitations of current AI systems and of past predictions, but also points to the exponential growth in investment and research in the field as evidence that transformative AI could be on the horizon. The concept of "biological anchors" is mentioned as an important consideration, but is not the sole determinant of the speaker's views. There have been few successful predictions of AI milestones made with great precision, but the speaker is not trying to make such a prediction. Instead, he is expressing a belief that this century is more likely than not to see something hugely transformative in the field of AI. The speaker also challenges the conventional wisdom that there have been many failed predictions in AI, questioning whether the premise of the question is valid. Overall, the speaker's views are not based on a single data point, but rather on a mix of historical trends, expert opinions, and considerations of what is technologically feasible.

    • Predicting the Timeline of AGI and Balancing Risks and Rewards
      Despite advancements in AI, predicting when AGI will be achieved is uncertain. Some argue for focusing on economic growth and scientific progress, while others prioritize addressing potential risks. Biological anchors offer a promising approach for understanding and developing AGI.

      While there have been significant advancements in AI and technology, the field of predicting when artificial general intelligence will be achieved is not well-developed. Many advancements have been made in areas like deep learning and neural networks, particularly when AI systems have reached sizes similar to insect or small animal brains. Some argue that focusing on increasing economic growth and scientific progress may be more beneficial than working on transformative AI. However, there is a concern that humanity is reaching a point where new technologies could be catastrophically dangerous, and it may be prudent to prioritize addressing potential risks and neglected issues before throwing more resources into technological progress. Biological anchors, or using biological systems as a guide for understanding and developing AI, is one approach that has shown promise. Ultimately, the best course of action may depend on careful consideration of the potential benefits and risks of various approaches.
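
      The rough orders of magnitude behind the "biological anchors" framing can be sketched as follows (illustrative figures, not the report's actual estimates; treating model parameters as comparable to synapses is itself the contested modeling assumption):

        import math

        # Order-of-magnitude comparison of brain synapse counts with the
        # parameter count of a large language model (GPT-3). All rough.
        anchors = {
            "honeybee brain (synapses)": 1e9,
            "GPT-3 (parameters)": 1.75e11,
            "mouse brain (synapses)": 1e12,
            "human brain (synapses)": 1e14,
        }
        for name, count in sorted(anchors.items(), key=lambda kv: kv[1]):
            print(f"{name:30s} ~10^{math.log10(count):.1f}")

      On this crude scale, today's large models sit in the insect-to-small-animal range, which is the sense in which AI systems have "reached sizes similar to insect or small animal brains".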

    • Exploring unanswered questions for groundbreaking discoveries
      Instead of focusing on well-trodden fields, ask important questions and use common sense to identify crucial issues. AI is a promising area for future discoveries.

      Despite the concerns that ideas are becoming harder to find and progress may be slowing down, there are still opportunities for significant and revolutionary discoveries. The speaker suggests that instead of focusing on well-established fields, it may be more productive to ask important questions about the world that are not being addressed. He uses the example of Darwin and his groundbreaking work in evolution theory, which was not hindered by doubts about human capabilities or the complexity of the subject matter. The speaker also emphasizes the importance of using common sense and judgment to identify important issues and work on them, rather than getting bogged down in unnecessary complexity or skepticism. Ultimately, he believes that AI is one of the most important areas to focus on for the future, and that not enough attention is being paid to it.

    • Seeding new fields and funding innovative ideas
      Open Philanthropy prioritizes funding neglected areas, seeding new fields, and introducing new ideas, despite its relatively small size and the potential for unintended consequences.

      Revolutionary change often comes from asking important questions and focusing on neglected areas, even if they are not the usual paths to success. Holden Karnofsky emphasizes that Open Philanthropy is not influential enough to automatically make its areas of interest less neglected. Instead, it aims to seed the creation of new fields and fund the introduction of new ideas. Even with a significant increase in resources, it would still be small compared to government budgets. Open Philanthropy continues to model out its giving, making educated guesses about the future and funding what seems good and exciting. It also considers the potential negative impacts of its giving, but aims for decisions with more upside than downside. The goal is not to avoid harm at all costs but to operate in a cooperative, high-integrity way; unintended side effects are a part of life, and anything you do has unintended consequences. Open Philanthropy made a $30 million grant to OpenAI, which some criticized as having negative side effects, but Karnofsky believes the upsides outweigh the downsides.

    • Maximizing positive impact with a robust moral framework
      Effective altruism prioritizes the rights and welfare of all sentient beings, aligns with utilitarianism, and emphasizes a morality based on clear, general principles to reduce the likelihood of being overturned by future moral progress.

      Effective altruism, an ethical philosophy focused on maximizing positive impact, emphasizes the importance of creating a robust moral framework that can withstand future progress and new knowledge. This framework, which often aligns with utilitarianism, prioritizes the rights and welfare of all sentient beings, including future generations and non-human animals. Sentientism, the belief that the capacity to feel is the defining characteristic of moral value, is a key component of this ethical system. The goal is to establish a morality based on clear, general principles that can be consistently applied, reducing the likelihood of being overturned by future moral progress. The speaker, having reflected on past philanthropic efforts and ethical systems, emphasizes the importance of carefully considering the potential consequences of actions and striving for responsibility and accountability in decision-making.

    • Ethical Dilemmas: Thin Utilitarianism vs. Sentientism
      Thin utilitarianism focuses on the greatest good for the greatest number, while sentientism asserts that someone matters if they can suffer or experience pleasure. Both theories raise ethical dilemmas and require careful consideration.

      The discussion revolved around two core ethical ideas: thin utilitarianism and sentientism. Thin utilitarianism suggests focusing on the greatest good for the greatest number, while sentientism asserts that someone matters if they can suffer or experience pleasure. While sentientism seems reasonable, it raises challenging dilemmas and questions, making it a debatable aspect of the ethical view. The speakers also discussed the idea of having fundamental ethical principles and the potential contradictions in moral intuitions. They debated whether the medieval church spending money on the arts instead of helping the poor would be a good or bad thing. The concept of moral progress was also discussed, with the speakers agreeing that it's possible to have more reasonable moral views as time goes on but not necessarily inevitable. Overall, the conversation highlighted the complexities and nuances of ethical theories and the ongoing debate surrounding their implications.

    • Balancing multiple moral frameworks for ethical decision-making
      Strive for actions that align with ethical principles while considering potential unintended consequences and balancing various moral frameworks.

      Ethical systems, such as utilitarianism, can coexist with high integrity people, not because of the ethical system itself, but because of the individuals' inherent desire to be moral and rational. The speaker also emphasized the importance of considering multiple moral perspectives and making decisions that benefit the greatest number of people while minimizing harm. Additionally, they highlighted the need for individuals to act with integrity and avoid actions that could cause significant harm, even if it means compromising on other goals. The speaker's point of view is that ethical decision-making requires balancing various moral frameworks and considering the potential consequences of actions from different perspectives. It's essential to strive for actions that are good according to the most important moral framework and not too harmful according to the others. This approach allows individuals to make decisions that align with their ethical principles while also being mindful of potential unintended consequences.

    • Considering Different Perspectives in Decision Making
      Effective decision-making requires open-mindedness, adaptability, and understanding the unique strengths of individuals. The Bayesian mindset, prevalent among successful tech founders, may offer benefits, but its impact is uncertain.

      Effective decision-making, whether in personal moral choices or in the context of an organization like Open Philanthropy, involves considering different perspectives and understanding the unique strengths and effectiveness of individuals. The speaker emphasizes the importance of being open-minded and adaptable, rather than trying to simplify complex issues into a single mathematical equation. He also discusses the prevalence of the Bayesian mindset among successful tech founders and its potential benefits, but acknowledges that its impact is still uncertain. Additionally, he touches on the role of prizes in surfacing new ideas and the increasing importance of considering the interests of a growing number of stakeholders as organizations and societies grow in size.

    • The trade-off of growth for organizations
      Growing organizations face a trade-off between adaptability and the ability to produce and serve more people. Specialization and hiring new team members are important for continued growth.

      As companies grow, they may become less nimble and adaptive, but they are also able to produce more and serve more people. This trade-off is a natural part of growth; staying small can have its benefits, but it's important for organizations to continue growing and hiring new team members for specific reasons. The speaker's career advice emphasizes specialization, yet his own career has been varied, with a pattern of tackling new questions and building teams to delve deeper. The Cold Takes blog, which has a stuffed polar bear named Mora as its icon, is a personal outlet for the speaker to share his thoughts and ideas publicly.

    • Sharing unconventional views and encouraging critique leads to valuable learning experiences
      Engaging in open dialogue through a public platform can deepen understanding, refine perspectives, and build a community of like-minded individuals.

      Expressing unconventional views and engaging in open dialogue through a public platform can lead to valuable learning experiences and help build a community of like-minded individuals. The speaker, who writes about important issues related to their foundation's decision-making, emphasizes the importance of sharing their perspectives and encouraging critique to refine their understanding and potentially influence others. They also acknowledge the potential for mistakes and the benefits of discovering them through public discourse. The speaker's personal experience of receiving feedback and critiques on their blog has led to a deeper understanding of their own thinking and areas for improvement. While their core beliefs have not significantly changed, they have gained new insights and perspectives through the engagement with their audience. Additionally, the speaker's deep involvement in the issues they explore and their ability to understand various complexities is a notable aspect of their work.

    • Understanding Central Issues vs. Delegating Expertise
      CEOs don't need to be experts in every area but should grasp central issues. They should delegate expertise to specialists and make informed decisions based on their advice.

      Effective leadership in organizations involves a balance between understanding the core aspects of various topics and delegating expertise to specialists. While it's not necessary for CEOs to be experts in every area, they should have a solid grasp of the central issues that significantly impact their business. For instance, in finance, a CEO only needs to know enough to assess compliance, audit results, and financial health. However, in areas where the CEO's role is more central, such as design for a tech company or the core questions of Open Philanthropy's work, they should have a good understanding of the underlying concepts. The CEO's role is to manage and make informed decisions based on the advice of experts. In the context of the discussion, the speaker emphasized the importance of understanding the potential impact of transformative AI and its implications for Open Philanthropy. Ultimately, the CEO should be able to effectively manage people with expertise in these areas and make strategic decisions based on their knowledge.

    Recent Episodes from Dwarkesh Podcast

    Tony Blair - Life of a PM, The Deep State, Lee Kuan Yew, & AI's 1914 Moment


    I chatted with Tony Blair about:

    - What he learned from Lee Kuan Yew

    - Intelligence agencies' track record on Iraq & Ukraine

    - What he tells the dozens of world leaders who come seek advice from him

    - How much of a PM’s time is actually spent governing

    - What will AI’s July 1914 moment look like from inside the Cabinet?

    Enjoy!

    Watch the video on YouTube. Read the full transcript here.

    Follow me on Twitter for updates on future episodes.

    Sponsors

    - Prelude Security is the world’s leading cyber threat management automation platform. Prelude Detect quickly transforms threat intelligence into validated protections so organizations can know with certainty that their defenses will protect them against the latest threats. Prelude is backed by Sequoia Capital, Insight Partners, The MITRE Corporation, CrowdStrike, and other leading investors. Learn more here.

    - This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

    If you’re interested in advertising on the podcast, check out this page.

    Timestamps

    (00:00:00) – A prime minister’s constraints

    (00:04:12) – CEOs vs. politicians

    (00:10:31) – COVID, AI, & how government deals with crisis

    (00:21:24) – Learning from Lee Kuan Yew

    (00:27:37) – Foreign policy & intelligence

    (00:31:12) – How much leadership actually matters

    (00:35:34) – Private vs. public tech

    (00:39:14) – Advising global leaders

    (00:46:45) – The unipolar moment in the 90s



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    June 26, 2024

    Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution


    Here is my conversation with Francois Chollet and Mike Knoop on the $1 million ARC-AGI Prize they're launching today.

    I did a bunch of Socratic grilling throughout, but Francois’s arguments about why LLMs won’t lead to AGI are very interesting and worth thinking through.

    It was really fun discussing/debating the cruxes. Enjoy!

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

    Timestamps

    (00:00:00) – The ARC benchmark

    (00:11:10) – Why LLMs struggle with ARC

    (00:19:00) – Skill vs intelligence

    (00:27:55) - Do we need “AGI” to automate most jobs?

    (00:48:28) – Future of AI progress: deep learning + program synthesis

    (01:00:40) – How Mike Knoop got nerd-sniped by ARC

    (01:08:37) – Million $ ARC Prize

    (01:10:33) – Resisting benchmark saturation

    (01:18:08) – ARC scores on frontier vs open source models

    (01:26:19) – Possible solutions to ARC Prize



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    June 11, 2024

    Leopold Aschenbrenner - China/US Super Intelligence Race, 2027 AGI, & The Return of History


    Chatted with my friend Leopold Aschenbrenner on the trillion dollar nationalized cluster, CCP espionage at AI labs, how unhobblings and scaling can lead to 2027 AGI, dangers of outsourcing clusters to Middle East, leaving OpenAI, and situational awareness.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

    Follow me on Twitter for updates on future episodes. Follow Leopold on Twitter.

    Timestamps

    (00:00:00) – The trillion-dollar cluster and unhobbling

    (00:20:31) – AI 2028: The return of history

    (00:40:26) – Espionage & American AI superiority

    (01:08:20) – Geopolitical implications of AI

    (01:31:23) – State-led vs. private-led AI

    (02:12:23) – Becoming Valedictorian of Columbia at 19

    (02:30:35) – What happened at OpenAI

    (02:45:11) – Accelerating AI research progress

    (03:25:58) – Alignment

    (03:41:26) – On Germany, and understanding foreign perspectives

    (03:57:04) – Dwarkesh’s immigration story and path to the podcast

    (04:07:58) – Launching an AGI hedge fund

    (04:19:14) – Lessons from WWII

    (04:29:08) – Coda: Frederick the Great



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    June 04, 2024

    John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI


    Chatted with John Schulman (cofounded OpenAI and led ChatGPT creation) on how posttraining tames the shoggoth, and the nature of the progress to come...

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00:00) - Pre-training, post-training, and future capabilities

    (00:16:57) - Plan for AGI 2025

    (00:29:19) - Teaching models to reason

    (00:40:50) - The Road to ChatGPT

    (00:52:13) - What makes for a good RL researcher?

    (01:00:58) - Keeping humans in the loop

    (01:15:15) - State of research, plateaus, and moats

    Sponsors

    If you’re interested in advertising on the podcast, fill out this form.

    * Your DNA shapes everything about you. Want to know how? Take 10% off our Premium DNA kit with code DWARKESH at mynucleus.com.

    * CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    May 15, 2024

    Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Augustus


    Mark Zuckerberg on:

    - Llama 3

    - open sourcing towards AGI

    - custom silicon, synthetic data, & energy constraints on scaling

    - Caesar Augustus, intelligence explosion, bioweapons, $10b models, & much more

    Enjoy!

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Human edited transcript with helpful links here.

    Timestamps

    (00:00:00) - Llama 3

    (00:08:32) - Coding on path to AGI

    (00:25:24) - Energy bottlenecks

    (00:33:20) - Is AI the most important technology ever?

    (00:37:21) - Dangers of open source

    (00:53:57) - Caesar Augustus and metaverse

    (01:04:53) - Open sourcing the $10b model & custom silicon

    (01:15:19) - Zuck as CEO of Google+

    Sponsors

    If you’re interested in advertising on the podcast, fill out this form.

    * This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Learn more at stripe.com.

    * V7 Go is a tool to automate multimodal tasks using GenAI, reliably and at scale. Use code DWARKESH20 for 20% off on the pro plan. Learn more here.

    * CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind


    Had so much fun chatting with my good friends Trenton Bricken and Sholto Douglas on the podcast.

    No way to summarize it, except: 

    This is the best context dump out there on how LLMs are trained, what capabilities they're likely to soon have, and what exactly is going on inside them.

    You would be shocked how much of what I know about this field, I've learned just from talking with them.

    To the extent that you've enjoyed my other AI interviews, now you know why.

    So excited to put this out. Enjoy! I certainly did :)

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

    There's a transcript with links to all the papers the boys were throwing down - may help you follow along.

    Follow Trenton and Sholto on Twitter.

    Timestamps

    (00:00:00) - Long contexts

    (00:16:12) - Intelligence is just associations

    (00:32:35) - Intelligence explosion & great researchers

    (01:06:52) - Superposition & secret communication

    (01:22:34) - Agents & true reasoning

    (01:34:40) - How Sholto & Trenton got into AI research

    (02:07:16) - Are feature spaces the wrong way to think about intelligence?

    (02:21:12) - Will interp actually work on superhuman models

    (02:45:05) - Sholto’s technical challenge for the audience

    (03:03:57) - Rapid fire



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat


    Here is my episode with Demis Hassabis, CEO of Google DeepMind

    We discuss:

    * Why scaling is an artform

    * Adding search, planning, & AlphaZero type training atop LLMs

    * Making sure rogue nations can't steal weights

    * The right way to align superhuman AIs and do an intelligence explosion

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

    Timestamps

    (0:00:00) - Nature of intelligence

    (0:05:56) - RL atop LLMs

    (0:16:31) - Scaling and alignment

    (0:24:13) - Timelines and intelligence explosion

    (0:28:42) - Gemini training

    (0:35:30) - Governance of superhuman AIs

    (0:40:42) - Safety, open source, and security of weights

    (0:47:00) - Multimodal and further progress

    (0:54:18) - Inside Google DeepMind



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Patrick Collison (Stripe CEO) - Craft, Beauty, & The Future of Payments


    We discuss:

    * what it takes to process $1 trillion/year

    * how to build multi-decade APIs, companies, and relationships

    * what's next for Stripe (increasing the GDP of the internet is quite an open-ended prompt, and the Collison brothers are just getting started).

    Plus the amazing stuff they're doing at Arc Institute, the financial infrastructure for AI agents, playing devil's advocate against progress studies, and much more.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00:00) - Advice for 20-30 year olds

    (00:12:12) - Progress studies

    (00:22:21) - Arc Institute

    (00:34:27) - AI & Fast Grants

    (00:43:46) - Stripe history

    (00:55:44) - Stripe Climate

    (01:01:39) - Beauty & APIs

    (01:11:51) - Financial innards

    (01:28:16) - Stripe culture & future

    (01:41:56) - Virtues of big businesses

    (01:51:41) - John



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Tyler Cowen - Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth


    It was a great pleasure speaking with Tyler Cowen for the 3rd time.

    We discussed GOAT: Who is the Greatest Economist of all Time and Why Does it Matter?, especially in the context of how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, animal spirits, prediction markets, alignment, central planning, and much more.

    The topics covered in this episode are too many to summarize. Hope you enjoy!

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (0:00:00) - John Maynard Keynes

    (00:17:16) - Controversy

    (00:25:02) - Friedrich von Hayek

    (00:47:41) - John Stuart Mill

    (00:52:41) - Adam Smith

    (00:58:31) - Coase, Schelling, & George

    (01:08:07) - Anarchy

    (01:13:16) - Cheap WMDs

    (01:23:18) - Technocracy & political philosophy

    (01:34:16) - AI & Scaling



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Lessons from The Years of Lyndon Johnson by Robert Caro [Narration]


    This is a narration of my blog post, Lessons from The Years of Lyndon Johnson by Robert Caro.

    You can read the full post here: https://www.dwarkeshpatel.com/p/lyndon-johnson

    Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes.



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Related Episodes

    #324 — Debating the Future of AI


    Sam Harris speaks with Marc Andreessen about the future of artificial intelligence (AI). They discuss the primary importance of intelligence, possible good outcomes for AI, the problem of alienation, the significance of evolution, the Alignment Problem, the current state of LLMs, AI and war, dangerous information, regulating AI, economic inequality, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    The philosopher and the crypto king: Sam Bankman-Fried and the effective altruism delusion | Audio Long Read


    At the time of writing, the crypto billionaire Sam Bankman-Fried is due to stand trial on 3 October 2023. He stands accused of fraud and money-laundering on an epic scale through his cryptocurrency exchange FTX. Did he gamble with other people’s money in a bid to do the maximum good?


    In this week’s long read, the New Statesman’s associate editor Sophie McBain examines the relationship between Bankman-Fried and the Oxford-based effective altruism (EA) movement. The billionaire was a close associate and supporter of William MacAskill, the Scottish moral philosopher who many consider EA’s leader. It was MacAskill who had persuaded him – and many other young graduates – to earn more, in order to give more. But how much money was enough – and what should they spend it on? Was EA just “a dumb game we woke Westerners play”, as Bankman-Fried told one journalist?


    In conversations with EA members past and present, McBain hears how the movement was altered by its enormous wealth. As the trial of its biggest sponsor approaches, will effective altruism survive – or be swallowed by its more cynical Silicon Valley devotees?


    Written and read by Sophie McBain.


    This article originally appeared in the 22-28 September 2023 edition of the New Statesman; you can read the text version here.


    If you enjoyed listening to this episode, you might also like Big Tech and the quest for eternal youth, by Jenny Kleeman.




    Hosted on Acast. See acast.com/privacy for more information.


    #214 - Cosmic Skeptic - How Do We Define What Is Good & Bad?

    Alex O'Connor is a philosopher & YouTuber. Get ready for a mental workout today as Alex poses some of the most famous and most difficult questions in ethics. What does it mean to say that something is good? Why SHOULD you do one thing instead of another thing? Why should we care about wellbeing? What is the definition of suffering? On whose authority is anything good or bad?

    Sponsor: Check out everything I use from The Protein Works at https://www.theproteinworks.com/modernwisdom/ (35% off everything with the code MODERN35)

    Extra Stuff:
    Watch Alex on YouTube - https://youtu.be/gcVR2OVxPYw
    Subscribe to Alex on Patreon - https://www.patreon.com/CosmicSkeptic
    Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/
    To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

    Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me:
    Instagram: https://www.instagram.com/chriswillx
    Twitter: https://www.twitter.com/chriswillx
    YouTube: https://www.youtube.com/ModernWisdomPodcast
    Email: https://www.chriswillx.com/contact

    Learn more about your ad choices. Visit megaphone.fm/adchoices