
    Judea Pearl: Causal Reasoning, Counterfactuals, Bayesian Networks, and the Path to AGI

    December 11, 2019

    Podcast Summary

    • Pearl's work in AI and scientific understanding: Pearl's work in AI, including causality and Bayesian networks, has significant implications for both AI and our overall scientific understanding. He emphasizes that the meaning of life is something we create, and he makes his ideas accessible in his recent book, "The Book of Why".

      Pearl's work on probabilistic approaches to AI, including his ideas on causality and Bayesian networks, has significant implications not just for AI but for our overall scientific understanding. He emphasizes that the meaning of life is something we create, and his recent book, "The Book of Why," makes his ideas accessible to the general public. Pearl shared how he was hooked on science after learning about the connection between algebra and geometry in analytic geometry. Discovering this connection between different mathematical disciplines was, for Pearl, a formative experience that unlocked a whole new world. When asked which mathematical discipline is most beautiful, he couldn't choose between geometry and algebra, as each has unique strengths. The episode is presented by Cash App; listeners can use the code LexPodcast to get $10 and have $10 donated to FIRST, a highly effective charity that inspires students in over 110 countries through robotics and LEGO competitions.

    • A deep appreciation for the history and interconnectedness of mathematics and engineering: Understanding the people and periods behind mathematical theorems and scientific discoveries enriches our learning experience, making every exercise a personal and historical journey.

      The speaker's unique educational background in mathematics and engineering, shaped by brilliant teachers who fled Europe during the 1930s, instilled in him a deep appreciation for the history and interconnectedness of these disciplines. He emphasized the importance of understanding the people and periods behind mathematical theorems and scientific discoveries, which made every exercise in math and science a personal and historical journey. The speaker's career spanned various fields, from engineering and physics to computer science, and he marveled at the beauty and power of each discipline. He pondered the nature of the universe and the question of determinism versus stochasticity, acknowledging that current scientific understanding leans towards the latter but expressing a personal belief in a deterministic world. The speaker also touched on the topic of free will and the potential for AI to mimic it convincingly. Throughout the conversation, the speaker's passion for mathematics, engineering, and the sciences shone through, highlighting the profound impact of education and personal experiences on one's perspective and understanding of the world.

    • Understanding Correlation and Causation: Correlation doesn't imply causation, and it's essential to consider all relevant variables when making causal claims.

      The illusion of free will may itself be an essential aspect of intelligence, and it is hard to fake something that requires its presence. Probability, as a degree of uncertainty, is a useful concept for predicting future events and understanding the world around us. Correlation, the relationship between variables, is often thought of in causal terms, as underlying causes create observable effects. Conditional probability, which looks at how things vary when one factor is held constant, can create or destroy correlations, highlighting the importance of considering all relevant variables when making causal claims. However, it is crucial to be aware of the limitations of observing the world and drawing causal conclusions from correlations alone. In fields like psychology, where variables are difficult to account for, there is often a leap between correlation and causation. It is essential to approach these relationships with caution and consider the potential influence of unmeasured variables.
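      To see how conditioning can create a correlation out of nothing, here is a minimal simulation sketch in Python; the casting scenario and all variable names are hypothetical illustrations, not from the episode. Two causes that are independent in the population become correlated once we condition on a common effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two causes that are independent in the population
talent = rng.normal(size=n)
looks = rng.normal(size=n)

# A common effect of both (e.g., being cast in a film)
cast = (talent + looks + rng.normal(size=n)) > 1.5

# Conditioning on the common effect induces a correlation between the causes
print(np.corrcoef(talent, looks)[0, 1])              # ~0.00: independent overall
print(np.corrcoef(talent[cast], looks[cast])[0, 1])  # clearly negative among the selected
```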

    • Understanding causality, from ancient civilizations to modern machine learning: Despite progress in observing patterns and correlations, inferring causality remains elusive. Ancient civilizations tried, but the mathematics of causal relationships emerged only in the 1920s. Machine learning models estimate probabilities, not causation, so we need theories to build causal networks.

      While we have made significant strides in observing patterns and correlations through machine learning and statistical methods, the ability to infer causality remains a complex and elusive challenge. The issue is not new: ancient civilizations, including the Babylonians, attempted to understand causality through experiments, yet the mathematics to fully capture causal relationships was only developed in the 1920s. Today's machine learning models, such as deep neural networks, can be thought of as conditional probability estimators, but they do not inherently express causation. To build a causal network, we must ask what factors determine the value of a given variable (X), and these hypotheses come from theories; the difference lies in how we interrogate those theories. Understanding causality will be crucial for advancing knowledge across many fields.
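      To make that concrete, a causal network can be written down as explicit, theory-driven hypotheses about what determines each variable. Here is a minimal sketch in Python; the drug/recovery scenario and its parent sets are illustrative assumptions, not a model from the episode:

```python
# Hypotheses from theory: for each variable X, which factors determine it?
# (Illustrative drug/recovery example; every parent set here is an assumption.)
causal_graph = {
    "severity": [],                    # exogenous background condition
    "drug":     ["severity"],          # prescribing depends on severity
    "recovery": ["severity", "drug"],  # outcome depends on both
}

# A conditional-probability estimator (e.g., a deep net) fits P(recovery | drug)
# from data alone; the graph above is the extra ingredient data cannot supply.
```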

    • Constructing a solid knowledge base for causal reasoning: To create intelligent systems capable of causal reasoning, we need a solid knowledge base. This involves expressing initial qualitative understanding mathematically and constructing a model to guide discovery and enrichment.

      The foundation of creating intelligent systems capable of reasoning about causation lies in first constructing a solid knowledge base. This means starting with an initial qualitative understanding and representing it in a way that supports inference and querying. The science of causation is essential for answering complex research questions, but building the necessary knowledge base is challenging: it may require revisiting past work on automated knowledge construction and enriching the knowledge base through data analysis and querying. Even simple questions, such as determining the effect of a drug on recovery, can be difficult because inferring causes from effects is inherently hard. Ultimately, the process begins with identifying important research questions, expressing them mathematically, and constructing a model to guide the discovery and enrichment of knowledge.
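      Continuing the illustrative graph above, the qualitative model can be made generative, which also shows why even the simple drug question is hard: a naive observational comparison is confounded by severity. This is a toy sketch with made-up numbers, not a model from the episode:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Generative model following the hypothesized graph (all numbers made up)
severe = rng.uniform(size=n) < 0.5                       # exogenous condition
drug = rng.uniform(size=n) < np.where(severe, 0.8, 0.2)  # sicker patients get the drug more often
recovery = rng.uniform(size=n) < np.where(severe, 0.1, 0.9) + 0.1 * drug  # true effect: +0.1

# A naive observational comparison is confounded by severity:
print("P(recovery | drug)    =", recovery[drug].mean())   # ~0.36: the drug looks harmful...
print("P(recovery | no drug) =", recovery[~drug].mean())  # ~0.74: ...though it actually helps
```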

    • Understanding causality through the 'do-calculus' in statistics: The do-calculus helps identify causal effects by reasoning about interventions as well as observations, but the model's assumptions determine both accuracy and the risk of error.

      The "do calculus" in statistics allows us to make interventions and observe the effects, rather than just observing correlations. This is important because it helps us understand causality and make predictions based on interventions. However, conducting experiments to make these interventions can be difficult or impossible, so we often rely on observational studies and building models to make these inferences. The quality of these models depends on the accuracy of our assumptions about causal relationships, and adding more assumptions can increase the likelihood of identifying causal effects, but also increases the risk of errors. Therefore, it's important to encode as much wisdom as possible into these models while also acknowledging the limitations of our knowledge.

    • Understanding causality through counterfactual reasoning: Counterfactuals help us understand the impact of specific actions or events by considering what would have happened if things had been different, highlighting the causal factor responsible for a particular outcome.

      Counterfactual reasoning plays a crucial role in understanding causality and constructing explanations. Counterfactuals are hypothetical situations that help us understand the impact of specific actions or events by considering what would have happened if things had been different; they provide an explanation by identifying the causal factor responsible for a particular outcome. For instance, if aspirin relieves a headache, the counterfactual "had I not taken aspirin, I would still have a headache" highlights aspirin's role in removing the headache. Physicists use counterfactuals extensively, but machines lack this ability, so building a causal model is essential if robots are to learn and perform tasks. Babies learn causal relationships through playful manipulation and various sources of information, and the challenge lies in integrating these diverse data sources to form causal relationships.
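      A minimal sketch of how a machine could evaluate such a counterfactual, using the standard three-step recipe (abduction, action, prediction) on a toy structural model; the mechanisms and numbers below are purely illustrative assumptions, not from the episode:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Toy structural model (all mechanisms and numbers are illustrative)
u = rng.uniform(size=n)                 # unobserved background factors
aspirin = rng.uniform(size=n) < 0.5
cured_by_aspirin = aspirin & (u < 0.8)  # aspirin works for 80% of backgrounds
spontaneous = u < 0.1                   # some headaches clear on their own
headache = ~(cured_by_aspirin | spontaneous)

# Factual observation: took aspirin and the headache went away
factual = aspirin & ~headache

# Step 1, abduction: retain only the backgrounds u consistent with the facts
u_f = u[factual]
# Steps 2-3, action and prediction: set do(aspirin=0), recompute the outcome
headache_cf = ~(u_f < 0.1)  # without aspirin, only spontaneous remission remains

print("P(headache had I not taken aspirin | took it, it's gone) =", headache_cf.mean())  # ~0.875
```

      The result is not trivially 1.0: some headaches would have cleared on their own, and the abduction step is what keeps the counterfactual tied to the specific facts observed.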

    • Understanding Complex Systems through Metaphors: Metaphors act as expert systems, enabling us to understand unfamiliar concepts by mapping them to the familiar. They help us store and access knowledge efficiently and are essential for reasoning and learning.

      Our understanding of the world and our ability to learn from complex systems rely heavily on metaphors, on mapping the unfamiliar to the familiar. Metaphors act as expert systems, allowing us to understand concepts that are not directly familiar to us. For example, the ancient Greeks pictured the sky as an opaque shell with stars as holes, a model that let them reason about, and ultimately measure, the Earth's radius. Metaphors help us store answers explicitly and answer questions without having to derive them. This process is called reasoning by metaphor. While learning is often thought of as a narrow concept, it is a form of storing and accessing knowledge derived through metaphors. The challenge lies in turning this use of metaphor to bridge the unfamiliar and the familiar into an algorithm.

    • Harmony of human and machine problem-solving: Experts recognize complex patterns, machines derive quantitative answers, and the future of AI lies in their collaboration, particularly in complex domains like medical research; the goal is to build machines that reason qualitatively as well as quantitatively.

      Humans and machines approach problem-solving differently. Humans, particularly experts like chess masters, can recognize complex patterns and make connections through metaphor and reasoning. Machines, on the other hand, excel at deriving quantitative answers from given data. The future of AI lies in the harmony of these approaches, with humans providing initial qualitative models and machines taking over to derive quantitative answers. This collaboration is particularly powerful in complex domains like medical research, where drawing causal inferences from diverse data sources can lead to breakthroughs. It is important to note, however, that temporal precedence is not the same as causation: causation involves the logical relationships between events, which do not require a temporal component. This logical framing lets us reason about the order of events and their causes, but it is just one piece of the puzzle. The ultimate goal is to build machines that reason about the world in a way that is not only quantitatively accurate but also qualitatively insightful.

    • Expanding machine learning to include reasoning and intervention: Machines that can learn from random events, reason about rewards and punishments, and communicate effectively with humans are being explored as a path to more advanced machine intelligence.

      While we currently use machines to learn from facts and make decisions based on those facts, there is potential to expand machine learning to include the ability to reason about and intervene in situations, allowing for more complex decision-making. This could involve introducing random events to observe the machine's response, and using observational studies to infer underlying causes. The ultimate goal is a machine intelligence that can answer sophisticated questions, reason about reward and punishment, and communicate effectively with humans. A Turing test may not be the best way to measure this, since machine free will does not yet exist; instead, we should focus on improving communication between machines and humans as a means of conveying knowledge and making adjustments.

    • Aligning human and machine values through empathy and understanding: To build ethical AI, we must teach machines to empathize, understand human compassion, and imagine being human. This requires a deep understanding of human consciousness and values.

      Aligning values between humans and machines through cause-and-effect thinking is crucial for building ethical AI. The machine must empathize, understand human compassion, and imagine being human in order to build a model of us. Consciousness, for Pearl, is having a blueprint of one's own software. Still, there are concerns about the future of AI as a new, uncontrollable species with capabilities exceeding ours. Despite feeling helpless about this, Pearl emphasizes the importance of learning from past experiences, such as serving in the Israeli military and living on a kibbutz, which taught valuable lessons about survival and idealism.

    • Overcoming Challenges with Resilience and Education: In the face of adversity, investing in education and resilience helped triple a population and ensure food security, but hate and intolerance remain complex challenges.

      Despite facing war, austerity, and religious conflict, the speaker's country managed to triple its population while ensuring that no one went hungry, a testament to the power of resilience and investment in education. The speaker also acknowledges the complexity of the region and the potential for hate and intolerance to lead individuals to commit evil acts; education and indoctrination played a significant role in shaping people's beliefs and actions. His personal experience with the abduction and execution of his son, Daniel, serves as a reminder of the depth of hate and intolerance in the world and of how individuals can be transformed into perpetrators of evil under certain circumstances. His message in Daniel's memory was that terrorism should not be normalized, and that message remains relevant today.

    • Exploring the Acceptance of Evil in Society: Recognizing and calling out evil actions is crucial, as normalizing them can lead to acceptance and even glorification. Ask questions, follow your own path, and never take 'no' for an answer to make breakthroughs.

      Normalizing evil in society can lead to its acceptance and even glorification, a theme explored in the conversation with Judea Pearl. He shared his personal experiences and observations of how evil actions have been rebranded and accepted as part of political life, and he emphasized the importance of recognizing and calling out evil when we encounter it. Pearl also offered advice for young minds seeking to make breakthroughs in science and technology: ask questions, follow your own path, and don't take "no" for an answer. Looking back on his life's work, Pearl expressed hope that his ideas, particularly the fundamental law of counterfactuals, would continue to influence and inspire future generations. Overall, the conversation highlighted the importance of questioning, rebelling against conventional wisdom, and striving for progress despite the challenges.
