
    David Deutsch - AI, America, Fun, & Bayes

    January 31, 2022

    Podcast Summary

    • AGIs have the same fundamental intelligence as humans
      David Deutsch argues that AGIs and humans are subject to the same computational hardware limitations, and that abilities like emotions and comprehension can be replicated by artificial means.

      According to David Deutsch, the fundamental intelligence of AGIs will not be any different from that of humans, as both are limited by the same factors of speed and memory capacity in their computation hardware. The idea that AGIs may not be able to comprehend certain concepts or experience emotions as humans do is also refuted, as these abilities can be replicated through artificial devices that can compute and adjust signals and chemicals in the same way the human brain does. While the idea of expanding human memory and speed may be debatable, the fundamental capabilities of AGIs and humans in terms of intelligence remain the same.

    • The Complexity of Human Capability: Software vs. Hardware Limitations
      About 24% of Americans struggle with literacy tasks due to software issues, while some individuals may lack the motivation or conceptual foundation to learn. Universal explainers raise questions about human capability and the potential for apes to achieve human-level intellect, highlighting the complexity of the human brain and the need for further research.

      While all humans have the potential to understand and explain certain concepts, there are limitations to what some individuals can achieve due to software (learning and conceptual abilities) rather than hardware (brain function) issues. For instance, about 24% of Americans struggle with basic literacy tasks, suggesting a software issue. However, there are also cases where individuals, despite their desire to learn, may not have the necessary conceptual foundation or motivation to do so. For example, a person may want to learn Mandarin Chinese but lacks the drive to go through the process. In contrast, some individuals with brain damage may lack the hardware capacity to perform even the most basic tasks, requiring both hardware and software solutions. The idea of universal explainers raises questions about the boundaries of human capability, with some arguing that even apes could be programmed with human-level intellect, though this would require intricate changes at the neuron level. Ultimately, the discussion highlights the complexity of the human brain and the need for further research to understand the interplay between hardware and software in learning and intelligence.

    • Understanding the Complex Interplay of Genetic, Cultural, and Individual Factors in Cognitive Abilities and Job Demands
      Whether cognitive abilities and job demands reflect hardware or software limitations is not a clear-cut distinction, and it's essential to consider the complex interplay of genetic, cultural, and individual factors when interpreting them.

      Whether cognitive abilities and job demands reflect hardware or software limitations is not a clear-cut distinction. While some aspects of cognitive abilities may have a genetic basis, software can also be genetic and influence our choices. The correlation between functional literacy and cognitively demanding jobs may be due to cultural reasons, as companies often design their workplaces to accommodate the abilities of the majority of their employees. The assumption that jobs requiring less cognitive effort are inherently less demanding due to hardware limitations may be misleading. Furthermore, people's choices, including not attending school or choosing less cognitively demanding jobs, can be influenced by various factors, not just cognitive abilities. Therefore, it's essential to consider the complex interplay of genetic, cultural, and individual factors when interpreting cognitive abilities and job demands.

    • The impact of language and culture on accessing complex knowledge systems like quantum computing
      Biological factors, as well as underlying cultural norms, values, and ways of thinking, play significant roles in accessing complex knowledge systems. Learning the language is necessary, but it may not be sufficient if the underlying differences are vast.

      Language and culture play significant roles in accessing complex knowledge systems like quantum computing. While learning the language is a necessary step, it may not be sufficient if the underlying cultural norms, values, and ways of thinking are vastly different. The example of identical twins separated at birth and adopted into different families illustrates the limited impact of culture on IQ, suggesting that there are underlying biological factors at play. However, the extent of these factors and their correlation with IQ is still a topic of ongoing research. The presence of correlations between various factors and IQ is not surprising, as correlations are ubiquitous in our world. The surprising part is when we encounter correlations between seemingly unrelated things, and in the case of IQ, the factors that have been identified so far provide only a partial explanation.

    • Twin Studies and Environmental Factors
      Though genetics influence intelligence and creativity, environmental factors also play a significant role and require further exploration.

      While identical twin studies can provide insights into the role of genetics in intelligence, they don't account for all the environmental factors that could influence IQ. There might be unknown factors that researchers haven't identified yet but that are correlated with both IQ and twin status. These factors could be subtle, even unconsciously influencing parental treatment of children. For instance, parents might not be aware of certain traits that make them treat their children differently, yet these traits could have a significant impact on IQ. Moreover, creativity is not something that comes in increments, and what looks like creativity in animals is built into their instinctive behaviors. For example, a cat might figure out how to open a door by jumping on the handle, even if it hasn't seen another animal or human do it before. This ability to adapt and innovate is a result of the vast amount of knowledge encoded in an animal's genes, which enables it to thrive in new environments. In conclusion, while genetics play a role in intelligence and creativity, there are many environmental factors that are yet to be fully understood. The search for answers in science is an ongoing process, and it's essential to consider all possible explanations, even the ones that are not yet identified.

    • Robots and animals lack human creativity
      Robots follow pre-programmed instructions and lack original thought or imagination, while animals adapt to new environments but cannot tell stories or form explanations. Humans possess the unique ability to create and tell stories.

      While robots and animals can exhibit complex behaviors, they do not possess the ability to create stories or demonstrate true creativity as humans do. The discussion highlights that robots follow pre-programmed instructions based on external inputs, and their behavior is a response to specific situations, not a result of original thought or imagination. Animals, on the other hand, can learn to adapt to new environments and perform complex actions, but they do not possess the ability to tell stories or form explanations, which are creative activities. The speaker also emphasizes that AI, even if powerful, does not require creativity to function, and narrow objective functions can still lead to significant outcomes for their creators. The conversation also touches upon the differences in problem-solving abilities between animals and humans, with humans demonstrating the capacity for abstract thinking and creativity. Overall, the discussion underscores the unique ability of humans to create and tell stories, and the importance of understanding the limitations and capabilities of both animals and AI.

    • Ideas and Creativity Drive Value, Not Just Arbitrage Opportunities
      AI can't create ideas or knowledge, only assist in identifying potential trades. Ideas and creativity are essential for generating value.

      Ideas and creativity are the primary drivers of value in the economy, not just the ability to identify arbitrage opportunities. While AI can assist in identifying potential trades, it cannot create ideas or knowledge. Regarding virtual reality, it's possible for VR generators to input thoughts into our minds, but the consciousness model discussed is incorrect. The Turing Principle implies that any universal computer can simulate physical processes, but it cannot simulate the entire universe due to its size and the logical limitations of simulation. In essence, ideas and creativity are essential for generating value and understanding the limitations of technology is crucial.

    • Assessing the solvability of seemingly insoluble problems
      Universality in computation is crucial, and problems that seem unsolvable can't be declared insoluble without a valid explanation. In science, undecidable propositions exist, but we can still understand their properties.

      While we can't prove that all interesting problems are solvable, we also can't assume they're unsolvable without a valid explanation. In the realm of computation, universality (be it Turing or quantum) is crucial as it defines the concept of a universal computer. Regarding limitations, without a compelling explanation, they can't be considered absolute. For instance, the impossibility of simulating a human brain is not proven, and the same argument could be applied to other seemingly insoluble problems. In science, undecidable propositions exist, but we still understand their properties and the physical world they represent. Lastly, a government with distributed powers and checks and balances, like America's, doesn't necessarily hinder the fulfillment of Popper's criterion, but it does present challenges. Improvements, not just power grabs, are the key to bettering our systems.

    • The American political system: checks and balances to prevent monarchy
      The founding fathers created a democratic system with checks and balances to prevent a new monarch, but the same system dissipates responsibility and blame among the branches, weakening accountability.

      The creation of the American political system was a complex and necessary response to gaining independence from the British monarchy. The founding fathers wanted to retain elements of the British constitution but needed a new system to replace the king. They established checks and balances to prevent any one person or branch from seizing too much power, making the system democratic and preventing a new monarch. However, this system also means that responsibility and blame are dissipated among the various branches, leading to a lack of accountability. On a separate topic, there are theoretical limits to the amount of matter, computation, and economic value in the universe, which may constrain their indefinite expansion. The exact nature of these limits is unknown given the current state of our understanding of cosmology and quantum physics.

    • The growth of knowledge doesn't always lead to productivity
      Despite the exponential growth of knowledge, there have been periods of decreased productivity and research advancement. Sociological factors and academic mistakes may contribute to these lulls.

      The limitless potential for knowledge and discovery does not guarantee continuous progress or productivity. While there may be an exponential growth of knowledge, there have been observed periods of decreased productivity and research advancement in certain sectors. This could be due to sociological factors and specific mistakes made in academic life, rather than inherent limitations. Regarding the debate between Bayesianism and Popperianism, both approaches have their merits. In Bayesianism, the epistemic status of a theory can change as new evidence emerges, leading to an increase in credence. In contrast, Popperianism emphasizes the importance of falsifiability and the potential for a theory to be knocked down by new evidence. In the context of the many-worlds interpretation of quantum mechanics, a Popperian might argue that the discovery of an AGI reporting from multiple worlds could provide the ammunition to falsify alternative explanations, even if they have not yet been thought of. However, it's important to note that no theory is guaranteed to be the ultimate truth, and the horizons of knowledge are always expanding.
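
      For readers who want the mechanics behind "an increase in credence", here is a minimal sketch of Bayesian updating in Python. It is not from the episode; the probabilities, the theory T, and the function name are illustrative assumptions only.

      # Minimal sketch (illustrative numbers): how a Bayesian's credence in a
      # theory T rises as evidence that favors T accumulates.

      def update(prior: float, p_e_given_t: float, p_e_given_not_t: float) -> float:
          """Return the posterior credence in T after observing evidence E (Bayes' rule)."""
          p_e = p_e_given_t * prior + p_e_given_not_t * (1.0 - prior)
          return p_e_given_t * prior / p_e

      credence = 0.5  # starting credence in the theory
      for _ in range(3):  # each observation is twice as likely if T is true
          credence = update(credence, p_e_given_t=0.8, p_e_given_not_t=0.4)
          print(round(credence, 3))  # prints 0.667, then 0.8, then 0.889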

    • Exploring thought experiments in quantum theory
      Belief in scientific progress doesn't mean halting research, but prioritizing safety measures and addressing potential risks before proceeding.

      The success of a thought experiment in quantum theory could expand our repertoire of arguments and debunk misconceptions rooted in flawed methodologies. However, the belief in open-ended scientific progress doesn't mean halting research entirely, but rather prioritizing safety measures and addressing potential risks before proceeding. The decision to stop or continue research depends on the specific situation and the potential consequences. In the case of gain-of-function research, it might be reasonable to focus on improving lab safety before resuming, but it's crucial to consider the context and the state of the art in the field. Ultimately, the goal is to minimize risks while maximizing scientific progress.

    • Progress and Survival
      Throughout history, we've made significant progress in ensuring safety and survival despite potential threats. Innovation and progress generally cost less than destruction. Existential threats are unlikely in the near future.

      Despite the potential for destruction and the existence of threats to civilization, we have made significant progress in ensuring safety and survival throughout history. The cost of innovation and progress is generally lower than the cost of destruction. While the risk of existential threats cannot be completely ruled out, it is unlikely that we will destroy ourselves intentionally or accidentally in the near future. The concept of "fun" as used in the discussion refers to the creation of knowledge where different types of knowledge are in harmony with each other. It may be considered frivolous by some, but it is not inherently a different emotion from other positive feelings like eudaimonia or well-being. However, the precise definition of these concepts remains elusive until we have a better understanding of qualia and creativity. Overall, the discussion emphasizes the importance of innovation, progress, and the resilience of human civilization.

    • Exploring the Importance of Fun in Learning and Creating AGI
      Fun is a critical mode of criticism for improving knowledge, but it's not the only emotion that matters. Recognizing and challenging all forms of knowledge, including implicit and unconscious ones, is essential for growth.

      Fun, as a subjective experience, cannot be compulsorily enacted or mechanically defined. It is a mode of criticism that can't be arbitrarily privileged over other kinds of knowledge. Fun and pain are not mutually exclusive, as shown in the example of exercise. A theory that excludes the possibility of making something fun or the importance of acknowledging pain can lead to suffering and stasis. Fun is not the only emotion central to our goals, but it is important to recognize and challenge all forms of knowledge, both explicit and implicit, conscious and unconscious, to avoid shielding them from criticism or replacement. The creation of an AGI through evolution doesn't necessarily entail suffering, as a simulated being can be considered a general intelligence and the simulation can be stopped if necessary. The "fun criterion" refers to the importance of critically examining and challenging all forms of knowledge to improve and grow.

    • The Evolution of Creativity and the Experience Machine
      The evolution of creativity may have involved great suffering, since creativity was initially used for transmitting complex memes. The experience machine, a hypothetical virtual reality, offers pleasurable but potentially inauthentic experiences, and the decision to enter it depends on personal values and beliefs.

      The simulation of human evolution, where non-human entities evolve into people, could have been a time of great suffering despite the use of creativity. This is because the hardware required for creativity may have first been used for transmitting memes, and when the complexity of memes increased, it required creativity from the recipients. However, this creativity might have run out of resources quickly, leading to an unpleasant existence. As for the experience machine, a hypothetical virtual reality world proposed by Robert Nozick, the temptation to enter it would depend on whether one values the truth and authenticity of experiences over their pleasurability. In the experience machine, one would forget one's origins and believe the relationships and knowledge to be real, but the laws of physics would not be the actual ones, and one would be manipulated by the designer. Ultimately, the decision to enter such a world would depend on personal values and beliefs.

    • Avoid over-reliance on long-term goals
      Setting rigid goals can hinder error-correction and limit opportunities for improvement. Encourage open dialogue and allow others to make decisions based on arguments.

      The speaker stresses the importance of setting goals without becoming overly subordinate to long-term objectives: rigid subordination can hinder short-term error-correction and can mean missing opportunities for improvement or change in the long run. The relationship between the giver and receiver of advice can also be dangerous because of its authoritative nature. Instead, the speaker encourages contributing arguments and allowing others to make their own decisions based on those arguments. By avoiding a relationship of authority, individuals can foster a more ethical and open-minded exchange of ideas.

    Recent Episodes from Dwarkesh Podcast

    Tony Blair - Life of a PM, The Deep State, Lee Kuan Yew, & AI's 1914 Moment

    I chatted with Tony Blair about:

    - What he learned from Lee Kuan Yew

    - Intelligence agencies track record on Iraq & Ukraine

    - What he tells the dozens of world leaders who come seek advice from him

    - How much of a PM’s time is actually spent governing

    - What will AI’s July 1914 moment look like from inside the Cabinet?

    Enjoy!

    Watch the video on YouTube. Read the full transcript here.

    Follow me on Twitter for updates on future episodes.

    Sponsors

    - Prelude Security is the world’s leading cyber threat management automation platform. Prelude Detect quickly transforms threat intelligence into validated protections so organizations can know with certainty that their defenses will protect them against the latest threats. Prelude is backed by Sequoia Capital, Insight Partners, The MITRE Corporation, CrowdStrike, and other leading investors. Learn more here.

    - This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

    If you’re interested in advertising on the podcast, check out this page.

    Timestamps

    (00:00:00) – A prime minister’s constraints

    (00:04:12) – CEOs vs. politicians

    (00:10:31) – COVID, AI, & how government deals with crisis

    (00:21:24) – Learning from Lee Kuan Yew

    (00:27:37) – Foreign policy & intelligence

    (00:31:12) – How much leadership actually matters

    (00:35:34) – Private vs. public tech

    (00:39:14) – Advising global leaders

    (00:46:45) – The unipolar moment in the 90s



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    June 26, 2024

    Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution

    Here is my conversation with Francois Chollet and Mike Knoop on the $1 million ARC-AGI Prize they're launching today.

    I did a bunch of socratic grilling throughout, but Francois’s arguments about why LLMs won’t lead to AGI are very interesting and worth thinking through.

    It was really fun discussing/debating the cruxes. Enjoy!

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

    Timestamps

    (00:00:00) – The ARC benchmark

    (00:11:10) – Why LLMs struggle with ARC

    (00:19:00) – Skill vs intelligence

    (00:27:55) - Do we need “AGI” to automate most jobs?

    (00:48:28) – Future of AI progress: deep learning + program synthesis

    (01:00:40) – How Mike Knoop got nerd-sniped by ARC

    (01:08:37) – Million $ ARC Prize

    (01:10:33) – Resisting benchmark saturation

    (01:18:08) – ARC scores on frontier vs open source models

    (01:26:19) – Possible solutions to ARC Prize



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    June 11, 2024

    Leopold Aschenbrenner - China/US Super Intelligence Race, 2027 AGI, & The Return of History

    Chatted with my friend Leopold Aschenbrenner on the trillion dollar nationalized cluster, CCP espionage at AI labs, how unhobblings and scaling can lead to 2027 AGI, dangers of outsourcing clusters to Middle East, leaving OpenAI, and situational awareness.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

    Follow me on Twitter for updates on future episodes. Follow Leopold on Twitter.

    Timestamps

    (00:00:00) – The trillion-dollar cluster and unhobbling

    (00:20:31) – AI 2028: The return of history

    (00:40:26) – Espionage & American AI superiority

    (01:08:20) – Geopolitical implications of AI

    (01:31:23) – State-led vs. private-led AI

    (02:12:23) – Becoming Valedictorian of Columbia at 19

    (02:30:35) – What happened at OpenAI

    (02:45:11) – Accelerating AI research progress

    (03:25:58) – Alignment

    (03:41:26) – On Germany, and understanding foreign perspectives

    (03:57:04) – Dwarkesh’s immigration story and path to the podcast

    (04:07:58) – Launching an AGI hedge fund

    (04:19:14) – Lessons from WWII

    (04:29:08) – Coda: Frederick the Great



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    June 4, 2024

    John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

    Chatted with John Schulman (cofounded OpenAI and led ChatGPT creation) on how posttraining tames the shoggoth, and the nature of the progress to come...

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00:00) - Pre-training, post-training, and future capabilities

    (00:16:57) - Plan for AGI 2025

    (00:29:19) - Teaching models to reason

    (00:40:50) - The Road to ChatGPT

    (00:52:13) - What makes for a good RL researcher?

    (01:00:58) - Keeping humans in the loop

    (01:15:15) - State of research, plateaus, and moats

    Sponsors

    If you’re interested in advertising on the podcast, fill out this form.

    * Your DNA shapes everything about you. Want to know how? Take 10% off our Premium DNA kit with code DWARKESH at mynucleus.com.

    * CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
    Dwarkesh Podcast
    May 15, 2024

    Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Augustus

    Mark Zuckerberg on:

    - Llama 3

    - open sourcing towards AGI

    - custom silicon, synthetic data, & energy constraints on scaling

    - Caesar Augustus, intelligence explosion, bioweapons, $10b models, & much more

    Enjoy!

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Human edited transcript with helpful links here.

    Timestamps

    (00:00:00) - Llama 3

    (00:08:32) - Coding on path to AGI

    (00:25:24) - Energy bottlenecks

    (00:33:20) - Is AI the most important technology ever?

    (00:37:21) - Dangers of open source

    (00:53:57) - Caesar Augustus and metaverse

    (01:04:53) - Open sourcing the $10b model & custom silicon

    (01:15:19) - Zuck as CEO of Google+

    Sponsors

    If you’re interested in advertising on the podcast, fill out this form.

    * This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Learn more at stripe.com.

    * V7 Go is a tool to automate multimodal tasks using GenAI, reliably and at scale. Use code DWARKESH20 for 20% off on the pro plan. Learn more here.

    * CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind

    Had so much fun chatting with my good friends Trenton Bricken and Sholto Douglas on the podcast.

    No way to summarize it, except: 

    This is the best context dump out there on how LLMs are trained, what capabilities they're likely to soon have, and what exactly is going on inside them.

    You would be shocked how much of what I know about this field, I've learned just from talking with them.

    To the extent that you've enjoyed my other AI interviews, now you know why.

    So excited to put this out. Enjoy! I certainly did :)

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

    There's a transcript with links to all the papers the boys were throwing down - may help you follow along.

    Follow Trenton and Sholto on Twitter.

    Timestamps

    (00:00:00) - Long contexts

    (00:16:12) - Intelligence is just associations

    (00:32:35) - Intelligence explosion & great researchers

    (01:06:52) - Superposition & secret communication

    (01:22:34) - Agents & true reasoning

    (01:34:40) - How Sholto & Trenton got into AI research

    (02:07:16) - Are feature spaces the wrong way to think about intelligence?

    (02:21:12) - Will interp actually work on superhuman models

    (02:45:05) - Sholto’s technical challenge for the audience

    (03:03:57) - Rapid fire



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat

    Here is my episode with Demis Hassabis, CEO of Google DeepMind

    We discuss:

    * Why scaling is an artform

    * Adding search, planning, & AlphaZero type training atop LLMs

    * Making sure rogue nations can't steal weights

    * The right way to align superhuman AIs and do an intelligence explosion

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

    Timestamps

    (0:00:00) - Nature of intelligence

    (0:05:56) - RL atop LLMs

    (0:16:31) - Scaling and alignment

    (0:24:13) - Timelines and intelligence explosion

    (0:28:42) - Gemini training

    (0:35:30) - Governance of superhuman AIs

    (0:40:42) - Safety, open source, and security of weights

    (0:47:00) - Multimodal and further progress

    (0:54:18) - Inside Google DeepMind



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Patrick Collison (Stripe CEO) - Craft, Beauty, & The Future of Payments

    We discuss:

    * what it takes to process $1 trillion/year

    * how to build multi-decade APIs, companies, and relationships

    * what's next for Stripe (increasing the GDP of the internet is quite an open ended prompt, and the Collison brothers are just getting started).

    Plus the amazing stuff they're doing at Arc Institute, the financial infrastructure for AI agents, playing devil's advocate against progress studies, and much more.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00:00) - Advice for 20-30 year olds

    (00:12:12) - Progress studies

    (00:22:21) - Arc Institute

    (00:34:27) - AI & Fast Grants

    (00:43:46) - Stripe history

    (00:55:44) - Stripe Climate

    (01:01:39) - Beauty & APIs

    (01:11:51) - Financial innards

    (01:28:16) - Stripe culture & future

    (01:41:56) - Virtues of big businesses

    (01:51:41) - John



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Tyler Cowen - Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth

    It was a great pleasure speaking with Tyler Cowen for the 3rd time.

    We discussed GOAT: Who is the Greatest Economist of all Time and Why Does it Matter?, especially in the context of how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, animal spirits, prediction markets, alignment, central planning, and much more.

    The topics covered in this episode are too many to summarize. Hope you enjoy!

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (0:00:00) - John Maynard Keynes

    (00:17:16) - Controversy

    (00:25:02) - Friedrich von Hayek

    (00:47:41) - John Stuart Mill

    (00:52:41) - Adam Smith

    (00:58:31) - Coase, Schelling, & George

    (01:08:07) - Anarchy

    (01:13:16) - Cheap WMDs

    (01:23:18) - Technocracy & political philosophy

    (01:34:16) - AI & Scaling



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Lessons from The Years of Lyndon Johnson by Robert Caro [Narration]

    This is a narration of my blog post, Lessons from The Years of Lyndon Johnson by Robert Caro.

    You can read the full post here: https://www.dwarkeshpatel.com/p/lyndon-johnson

    Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes.



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
