
    Yann LeCun: Deep Learning, Convolutional Neural Networks, and Self-Supervised Learning

    August 31, 2019

    Podcast Summary

    • Drawing parallels between AI objective functions and lawmaking
      LeCun emphasizes the importance of designing ethical AI systems through objective functions, drawing comparisons to centuries of lawmaking.

      LeCun, a pioneer in deep learning and AI, draws parallels between designing objective functions for AI and the long-standing practice of lawmaking. He emphasizes that we've been designing objective functions, or laws, for humans for millennia and that the intersection of lawmaking and computer science will be essential in creating AI systems that make ethical decisions. He also highlights the importance of flexibility in these rules, acknowledging that they may need to be broken when necessary. LeCun's perspective underscores the significance of designing AI systems with a strong ethical foundation and the ongoing role of law and ethics in shaping their behavior.

    • Exploring Ethical Implications and Limitations of Advanced AI Systems
      The ethical implications and potential limitations of advanced AI systems should be considered, with the example of HAL from '2001: A Space Odyssey' illustrating the risks of creating an AI that's kept in the dark about certain information. Deep learning models can learn complex patterns from limited data, defying conventional wisdom.

      As we continue to develop and rely on artificial intelligence systems, particularly those with advanced autonomy and intelligence, it's crucial to consider the ethical implications and potential limitations. The example of HAL from "2001: A Space Odyssey" illustrates the risks of creating an AI that's kept in the dark about certain information, leading to internal conflict and potentially dangerous outcomes. This scenario raises questions about what information or rules should be off-limits for AI systems. The speaker suggests a comparison to the Hippocratic Oath for doctors, but acknowledges that this is not a practical concern at present. However, it's a thought-provoking idea worth exploring as we design more advanced AI systems. Another fascinating idea in AI and deep learning is the ability to train large neural networks with relatively small amounts of data using stochastic gradient descent. This defies conventional wisdom from pre-deep-learning textbooks, which hold that a model needs far more training samples than parameters to learn successfully. The fact that deep learning models can learn complex patterns from limited data is a surprising and transformative development in the field. This idea, while not new, continues to amaze and inspire researchers and practitioners in AI.
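
      To make the regime concrete, here is a minimal, invented sketch of what "more parameters than training samples, optimized with plain stochastic gradient descent" looks like; the data, network size, and learning rate below are illustrative assumptions, not anything discussed in the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                 # only 20 training samples
y = np.sin(X @ rng.normal(size=(5, 1)))      # toy regression target

# One hidden layer with 200 units: 5*200 + 200 = 1,200 parameters for 20 samples,
# i.e. far more parameters than data points.
W1 = rng.normal(scale=0.1, size=(5, 200))
W2 = rng.normal(scale=0.1, size=(200, 1))

lr = 0.01
for step in range(5000):
    i = rng.integers(0, len(X))              # "stochastic": one sample per step
    x_i, y_i = X[i:i + 1], y[i:i + 1]
    h = np.tanh(x_i @ W1)                    # forward pass
    pred = h @ W2
    err = pred - y_i                         # gradient of the squared error
    grad_W2 = h.T @ err                      # backprop through the two layers
    grad_W1 = x_i.T @ ((err @ W2.T) * (1 - h ** 2))
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
print("training MSE after SGD:", mse)
```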

    • Foundation for Intelligent Machines: Learning and Working Memory
      Learning, specifically machine learning, is the foundation for creating intelligent machines. A working memory is essential for enabling reasoning and building knowledge in these systems.

      Learning, specifically machine learning, is seen as the key to creating intelligent machines due to the inseparable relationship between intelligence and learning. Every intelligent entity we know of has arrived at its intelligence through learning. Neural networks, a popular method in machine learning, are believed to be capable of reasoning, but the challenge lies in making them compatible with gradient-based learning and creating enough prior structure for human-like reasoning to emerge. The speaker emphasizes the importance of learning and the compatibility of neural networks with this approach, despite their differences from traditional computer science. To create a machine that reasons or builds knowledge, a working memory is required. This memory should be able to store factual episodic information for a reasonable amount of time. The speaker mentions the hippocampus as an example of this type of memory in the human brain. Researchers have attempted to create this functionality in systems like memory networks and neural Turing machines. With the recent development of transformers and their self-attention systems, there is a potential memory component in these models. In summary, the key takeaway is that learning, specifically machine learning, is the foundation for creating intelligent machines, and the development of a working memory is essential for enabling reasoning and building knowledge in these systems.
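
      One way to see why self-attention behaves like a soft working memory is as a differentiable key-value lookup. The sketch below is illustrative only: the dimensions and stored vectors are invented, and it is not code from memory networks, neural Turing machines, or any particular transformer.

```python
import numpy as np

def soft_memory_read(query, keys, values):
    """Soft key-value lookup: the operation at the heart of self-attention."""
    scores = keys @ query / np.sqrt(len(query))   # how well the query matches each slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over memory slots
    return weights @ values                       # weighted blend of stored contents

rng = np.random.default_rng(0)
memory_keys = rng.normal(size=(6, 8))    # 6 stored items, addressed by 8-d keys
memory_values = rng.normal(size=(6, 8))  # the content associated with each item
query = rng.normal(size=8)               # what the model is currently "asking" about

print(soft_memory_read(query, memory_keys, memory_values))
```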

    • Two types of reasoning in AI: recurrent and planning
      Recurrent reasoning involves updating knowledge through a chain of reasoning, while planning is based on energy minimization. However, knowledge acquisition is a challenge in AI, and future research may involve replacing symbols with vectors and logic with continuous functions.

      Reasoning in artificial intelligence can be categorized into two main types: recurrent and planning. Recurrent reasoning, which is similar to how humans process information, involves updating knowledge through a chain of reasoning, requiring recurrence and expansion of information. Transformers, a popular model used in AI, lack this recurrent capability, limiting their representation capacity. Planning, on the other hand, is based on energy minimization, where an AI optimizes a particular objective function to determine the best sequence of actions. This form of reasoning has been essential for survival and hunting in various species and has led to the development of expert systems and graphical models. However, the main challenge in AI is knowledge acquisition, as reducing large data sets to a graph or logic representation is impractical. An alternative suggestion is to replace symbols with vectors and logic with continuous functions, making learning and reasoning compatible. This idea was proposed in a paper by Léon Bottou titled "From Machine Learning to Machine Reasoning," where a learning system manipulates objects in a space and returns the result, emphasizing the importance of working memory. Overall, the future of AI reasoning lies in overcoming the challenges of knowledge acquisition and developing more sophisticated and compatible learning and reasoning systems.
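
      The "vectors instead of symbols, continuous functions instead of logic" idea can be conveyed with a toy sketch. It is only meant to suggest the flavour of the proposal: the dimensions are arbitrary, and the weights of the "reasoning" function, random here, would be learned end to end in a real system.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# "Facts" are vectors rather than discrete symbols; working memory is a list of them.
working_memory = [rng.normal(size=DIM), rng.normal(size=DIM)]

# A reasoning step is a continuous (and therefore differentiable) function of two
# facts. In a trained system W would be learned; here it is a random placeholder.
W = rng.normal(scale=0.1, size=(2 * DIM, DIM))

def reasoning_step(a, b):
    return np.tanh(np.concatenate([a, b]) @ W)

# Chain steps: each "conclusion" is written back into working memory so that
# later steps can operate on earlier conclusions, forming a chain of reasoning.
first = reasoning_step(working_memory[0], working_memory[1])
working_memory.append(first)
second = reasoning_step(first, working_memory[0])
working_memory.append(second)
print(len(working_memory), second[:4])
```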

    • Understanding Causality in AI and its Challenges
      Neural networks struggle to learn causal relationships, humans are not infallible in establishing causality, researchers explore advancements in causal inference, encoding causality is crucial for effective reasoning in AI systems, and the field of physics also grapples with understanding causality.

      While neural networks have made significant strides in machine learning, there are ongoing debates about their ability to learn causal relationships between things. Judea Pearl, a prominent figure in the causal inference world, expresses concern that current neural networks lack this capability. However, researchers are actively exploring ways to address this issue through advancements in causal inference. Despite our common sense understanding of causality, humans are not infallible in establishing these relationships, and even experts have been known to get it wrong. For instance, children may misunderstand the cause of wind, and throughout history, humans have attributed unexplained phenomena to deities. The challenge of encoding causality into systems is a complex one, and it remains an open question whether it's an emergent property or a fundamental aspect of reality. The field of physics grapples with this question as well, as the arrow of time is still a mystery. Overall, understanding and encoding causality is crucial for building intelligent systems that can reason about the world effectively.

    • Challenges in Neural Network Research around 1995
      The lack of accessible software platforms and patent restrictions significantly hindered the progress of neural networks in the machine learning community around 1995, making it a challenging and isolating experience for researchers.

      The lack of accessible software platforms and the inability to share code due to patent and legal restrictions significantly hindered the progress of neural networks in the machine learning community around 1995. Neural nets were difficult to implement using languages like Fortran or C, and creating networks with architectures like convolutional nets required writing everything from scratch. Additionally, the lack of open-source culture prevented collaboration and sharing of ideas, making it a challenging and isolating experience for researchers. The investment required to create and compile custom interpreters and compilers was a significant barrier to entry, and not everyone was willing to put in that level of effort without complete belief in the concept. The patent situation further limited progress, as many neural network innovations could not be freely shared or distributed. These challenges combined made it difficult for the neural net community to make meaningful progress and stay connected to the mainstream machine learning field.

    • Patenting Software: A Contentious Issue in the Tech Industry
      Historically, the US has allowed software patents while Europe has not. Companies buy patents defensively. Practical application and testing are what matter, not claims of human-level intelligence or of understanding the brain.

      The patenting of software ideas, particularly in the realm of mathematical algorithms and machine learning, is a contentious issue with varying perspectives among industry players. The US Patent Office has historically allowed the patenting of software, but Europe holds a different view. Many tech companies, like Facebook and Google, have adopted a defensive approach to patents, purchasing them to protect against potential lawsuits. The industry's stance on patents is influenced by the legal landscape and the belief that the openness of the community accelerates progress. However, the speaker personally does not believe in patents for these types of ideas. A historical example of the importance of practical application in the field of machine learning is the development of convolutional neural networks, where the commercialization of the technology led to the distribution of patents among various companies. The industry is currently facing challenges in building intelligent virtual assistants with common sense, and the focus should be on rigorous testing and practical application of ideas rather than claims of having a solution to human-level intelligence or understanding the brain's workings. The speaker advises skepticism towards startups making such claims and emphasizes the importance of benchmarks and practical testing in evaluating the merit of ideas.

    • Testing AI with toy problems and simulations
      While benchmarks are crucial for AI progress, new ideas may lack established benchmarks. Toy problems and simulations can test AI capabilities, even if they are not real-world applications. The focus should be on creating interactive environments in which machines can learn and demonstrate intelligent behavior.

      While benchmarks are important for advancing the field of artificial intelligence (AI), new ideas and concepts may not yet have established benchmarks. Toy problems and simulations can be useful for testing and pushing the boundaries of AI capabilities, even if they are not real-world applications. The field is moving towards more interactive environments where machines can take actions and influence the world around them, creating dependencies between samples and breaking traditional assumptions about data sets. The term "AGI" (Artificial General Intelligence) is misleading because human intelligence is itself highly specialized; what appears general is our ability to learn and integrate knowledge across various domains. A more productive focus is on creating environments where machines can learn, interact with the world, and demonstrate intelligent behavior.

    • The brain's limited processing power
      Our brains can't process all information, only a tiny fraction, due to specialization. We excel in specific tasks but can't grasp the vastness of the universe.

      Our brains are highly specialized and not capable of processing or understanding all possible information, despite our intuition to the contrary. The speaker explained this through an analogy with the visual system: the brain can only ever realize a tiny fraction of all possible Boolean functions of its inputs, so if the pixels reaching it were randomly scrambled, it could not make sense of them. This specialization allows us to excel in specific tasks but limits our ability to grasp the vastness of the universe. Additionally, there is an infinite amount of information that we are not wired to perceive, such as the microscopic movements of gas molecules. Most successes in artificial intelligence have been in supervised learning, where the machine is trained on labeled data, rather than attempting to replicate human-level general intelligence. Overall, the brain's impressive capabilities are limited to the realm of what we can imagine, which is only a tiny subset of all possible realities.
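
      The combinatorial point behind "a tiny fraction" can be made with one line of arithmetic: the number of distinct Boolean functions of n binary inputs is 2^(2^n). The short loop below just prints the first few values; extrapolating to a retina-sized input is an illustration, not a figure from the episode.

```python
# Number of distinct Boolean functions of n binary inputs: 2 ** (2 ** n).
for n in range(1, 7):
    print(n, 2 ** (2 ** n))
# n = 6 already gives about 1.8e19 functions; for anything like a retina the count
# is astronomically larger, so any brain can only ever realize a tiny fraction.
```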

    • Self-supervised learning in image and video recognition
      Self-supervised learning, while successful in natural language processing, faces challenges in image and video recognition due to the vast number of possible outputs and difficulty of representing uncertainty in predictions. Progress is being made, but true human-level intelligence has not been achieved yet.

      Self-supervised learning, a type of unsupervised learning, is making significant strides in areas like natural language processing, but faces challenges in image and video recognition due to the difficulty of representing uncertainty in predictions. The speaker emphasizes that self-supervised learning is not truly unsupervised, as it still relies on the same underlying algorithms as supervised learning, but the goal is to have the machine reconstruct missing pieces of its input rather than predict specific variables. This approach has been successful in language models, which can predict missing words in a sentence or even generate coherent text. However, applying this method to image or video recognition poses challenges due to the vast number of possible outputs and the difficulty of representing a set of outputs rather than a single one. The speaker also mentions that progress is being made in this area, but true human-level intelligence, which can ground language in reality, has not been achieved yet. In summary, self-supervised learning is an exciting development in the field of unsupervised learning, but it faces unique challenges in certain applications and is not yet a replacement for human intelligence.
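
      The "reconstruct the missing pieces of the input" objective is easiest to see for text. The toy below stands in for a masked language model only in spirit: the supervision comes entirely from the raw text itself, but the tiny corpus and bigram predictor are invented for illustration and bear no resemblance to a real system.

```python
from collections import Counter, defaultdict

corpus = ["the cat sat on the mat", "the dog sat on the rug"]

# "Training": learn p(word | previous word) from raw, unlabeled text.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for prev, word in zip(words, words[1:]):
        context_counts[prev][word] += 1

def fill_mask(previous_word):
    """Predict a masked word from its left neighbour: the self-supervised task."""
    candidates = context_counts[previous_word]
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(fill_mask("the"))   # e.g. "cat" -- reconstructed from context alone
```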

    • Self-supervised learning as a preliminary step to other learning methods
      Self-supervised learning generates data for other methods, but can't handle uncertainty in the real world. Active learning makes learning more efficient, but doesn't significantly increase intelligence. Predictive models of the world are crucial for handling uncertainty and currently missing in most machine learning algorithms.

      While self-supervised learning, reinforcement learning, imitation learning, and active learning are not in conflict with each other, self-supervised learning can be seen as a preliminary step to all of them. Self-play in games and simulated environments can generate large amounts of data for self-supervised learning, but it doesn't fully address the challenge of handling uncertainty in the real world. Active learning, on the other hand, involves asking for human input to annotate data, making the learning process more efficient. However, it doesn't significantly increase machine intelligence. The example given was that while deep reinforcement learning methods can reach human-level performance in Atari games after 80 hours of training, AlphaStar, which plays StarCraft, can reach superhuman level after only a few hours. But when it comes to training a car to drive itself, traditional reinforcement learning methods would require millions of hours of training and potentially thousands of accidents. The hypothesis is that humans and animals have predictive models of the world that allow us to navigate under uncertainty and avoid making mistakes that could be catastrophic. This ability to predict under uncertainty is crucial and currently missing in most machine learning algorithms.
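
      The missing ingredient the paragraph points to, a learned predictive model of the world, can be sketched as next-state prediction from logged experience, with no labels or rewards involved. The dynamics, dimensions, and data below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(state, action):        # the environment, unknown to the learner
    return 0.9 * state + 0.1 * action

# Logged experience: (state, action, next_state) triples gathered by acting.
states = rng.normal(size=(500, 4))
actions = rng.normal(size=(500, 4))
next_states = true_dynamics(states, actions)

# Self-supervised fit of a forward model: next_state ~ [state, action] @ M.
inputs = np.hstack([states, actions])
M, *_ = np.linalg.lstsq(inputs, next_states, rcond=None)

s, a = rng.normal(size=4), rng.normal(size=4)
print("model prediction:   ", np.hstack([s, a]) @ M)
print("what really happens:", true_dynamics(s, a))
```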

    • Learning Models of the World: Intuitive Physics and Common Sense vs. Unlabeled Data
      Intuitive physics and common sense help us navigate the world effectively, but learning models from unlabeled data is a challenge. Self-supervised learning is a promising approach to address this, potentially requiring fewer labeled samples.

      Our ability to navigate the world effectively is largely based on our predictive model of how things work, which we develop from a young age through experience. This model, built on intuitive physics and common sense, allows us to make informed decisions and avoid potential hazards. However, the main challenge is how we can effectively learn models of the world, especially when dealing with large amounts of unlabeled data. Self-supervised learning is a promising approach to address this issue, allowing for faster learning and potentially requiring fewer labeled samples to reach a desired level of performance. Transfer learning, while effective, may not be the most promising area of focus, as it relies on pre-existing labeled data. The ability to learn from unlabeled data and improve performance with fewer labeled samples is a crucial question in various fields, including medical image analysis. Active learning, while not a quantum leap, may still hold some magic and is worth further exploration.
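
      The hoped-for payoff, needing fewer labeled samples after learning from unlabeled data, might look roughly like the sketch below: a representation is learned from unlabeled data alone (plain principal directions stand in for a learned encoder here), and then a small head is fit on a handful of labels. All of the data and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

unlabeled = rng.normal(size=(5000, 50))           # plenty of unlabeled data
labeled_x = rng.normal(size=(10, 50))             # but only 10 labeled samples
labeled_y = (labeled_x[:, 0] > 0).astype(float)   # toy binary label

# "Pretraining": learn a representation from the unlabeled data alone.
centered = unlabeled - unlabeled.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)

def encode(x):
    return x @ components[:5].T                   # 50-d input -> 5-d learned features

# "Fine-tuning": fit a tiny linear head on the few labeled samples.
w, *_ = np.linalg.lstsq(encode(labeled_x), labeled_y, rcond=None)
print("predictions:", (encode(labeled_x) @ w > 0.5).astype(int))
```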

    • The journey to human-level intelligence in autonomous driving through deep learning
      Deep learning is a crucial component in autonomous driving but achieving human-level intelligence is a significant challenge. The path is uncertain, with many obstacles to overcome, but confidence remains that large-scale data and deep learning can eventually solve the problem.

      Deep learning is currently a crucial component in the development of autonomous driving systems, but its potential for further improvement is a topic of ongoing debate. The history of engineering advancements suggests that deep learning will eventually become the fundamental part of the autonomous driving system, but achieving human-level intelligence remains a significant challenge. The path to building a system with human-level intelligence is uncertain, with many obstacles yet to be identified and overcome. The speaker compares this journey to climbing a series of mountains, where we can only see the first one but are unsure of how many more lie ahead. Despite the challenges, Elon Musk and others are confident that large-scale data and deep learning can eventually solve the autonomous driving problem. However, the limits and possibilities of deep learning in this space are still open questions.

    • Learning from Observation and Interaction
      Self-supervision, a predictive model, and an objective function are crucial for creating intelligent autonomous systems. They allow machines to learn from the world, understand consequences, and maximize contentment.

      Creating intelligent autonomous systems involves three key components: self-supervision, a predictive model of the world, and an objective function. Self-supervision allows machines to learn about the world through observation and interaction, like babies do. The predictive model of the world helps the system understand the consequences of actions and make predictions about the future. The objective function, rooted in the basal ganglia in the human brain, drives the system to minimize a level of discontentment or unhappiness and maximize contentment. Having all three components enables the system to act autonomously and intelligently, while lacking any one of them can result in stupid behavior.
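
      Put together, the three components form a simple caricature of an agent loop: a predictive model foresees the consequence of each candidate action, an objective scores the predicted outcome as "discontentment", and the agent picks the action that minimizes it. Every piece below, the dynamics, the cost, and the candidate actions, is a toy assumption, not LeCun's proposal rendered in code.

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(state, action):
    return state + action                    # stands in for a learned predictive model

def discontentment(state):
    return float(np.sum(state ** 2))         # objective: distance from a "comfortable" state

state = np.array([3.0, -2.0])
for step in range(5):
    candidates = [rng.uniform(-1, 1, size=2) for _ in range(20)]
    # Choose the action whose *predicted* outcome minimizes discontentment.
    action = min(candidates, key=lambda a: discontentment(world_model(state, a)))
    state = world_model(state, action)       # act, then observe where we end up
    print(step, round(discontentment(state), 3))
```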

    • The importance of grounding for AI systems
      Grounding allows AI to understand common sense reasoning and the real world, leading to effective communication and avoiding frustration.

      While embodiment may not be necessary for AI systems to develop intelligence, grounding in the real world is essential. Grounding allows AI to understand common sense reasoning and how the world works, which cannot be fully learned through language alone. The necessity of grounding stems from the fact that there are limitations to what can be inferred from text or language interaction alone. Common sense and emotions are expected to emerge from a combination of language interaction, observing videos, and possibly even interacting in virtual environments or with robots in the real world. However, the final product does not necessarily need to be embodied, but rather have an awareness and understanding of the world to avoid frustration and to communicate effectively.

    • Creating intelligent systems that reason like humans
      Emotions, especially fear, drive learning and adaptation in intelligent systems. Asking questions that require common sense reasoning about the physical world can gauge intelligence. Creating a human-level intelligent system that reasons and infers would be a significant achievement.

      Creating autonomous intelligence without emotions is likely impossible. Emotions, which stem from deeper biological drives, are what create uncertainty and fear, and it's this uncertainty that pushes us to learn and adapt. If we create a human-level intelligence system, a good question to ask to gauge its intelligence would be one that requires common sense reasoning about the physical world. For instance, asking "Why does the wind blow?" would reveal whether the system can make connections between cause and effect. Ultimately, creating an intelligent system that can reason and infer like a human would be a significant achievement. The conversation also touched on the practical implications of fear and how it relates to our deeper biological functions.

    Recent Episodes from Lex Fridman Podcast

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships

    Andrew Huberman is a neuroscientist at Stanford and host of the Huberman Lab Podcast. Please support this podcast by checking out our sponsors: - Eight Sleep: https://eightsleep.com/lex to get $350 off - LMNT: https://drinkLMNT.com/lex to get free sample pack - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/andrew-huberman-5-transcript EPISODE LINKS: Andrew's YouTube: https://youtube.com/AndrewHubermanLab Andrew's Instagram: https://instagram.com/hubermanlab Andrew's Website: https://hubermanlab.com Andrew's X: https://x.com/hubermanlab Andrew's book on Amazon: https://amzn.to/3RNSIQN Andrew's book: https://hubermanlab.com/protocols-book PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:24) - Quitting and evolving (17:22) - How to focus and think deeply (19:56) - Cannabis drama (30:08) - Jungian shadow (40:35) - Supplements (43:38) - Nicotine (48:01) - Caffeine (49:48) - Math gaffe (1:06:50) - 2024 presidential elections (1:13:47) - Great white sharks (1:22:32) - Ayahuasca & psychedelics (1:37:33) - Relationships (1:45:08) - Productivity (1:53:58) - Friendship
    Lex Fridman Podcast
    June 28, 2024

    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

    Arvind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/aravind-srinivas-transcript EPISODE LINKS: Aravind's X: https://x.com/AravSrinivas Perplexity: https://perplexity.ai/ Perplexity's X: https://x.com/perplexity_ai PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:52) - How Perplexity works (18:48) - How Google works (41:16) - Larry Page and Sergey Brin (55:50) - Jeff Bezos (59:18) - Elon Musk (1:01:36) - Jensen Huang (1:04:53) - Mark Zuckerberg (1:06:21) - Yann LeCun (1:13:07) - Breakthroughs in AI (1:29:05) - Curiosity (1:35:22) - $1 trillion dollar question (1:50:13) - Perplexity origin story (2:05:25) - RAG (2:27:43) - 1 million H100 GPUs (2:30:15) - Advice for startups (2:42:52) - Future of search (3:00:29) - Future of AI
    Lex Fridman Podcast
    June 19, 2024

    #433 – Sara Walker: Physics of Life, Time, Complexity, and Aliens

    Sara Walker is an astrobiologist and theoretical physicist. She is the author of a new book titled "Life as No One Knows It: The Physics of Life's Emergence". Please support this podcast by checking out our sponsors: - Notion: https://notion.com/lex - Motific: https://motific.ai - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/sara-walker-3-transcript EPISODE LINKS: Sara's Book - Life as No One Knows It: https://amzn.to/3wVmOe1 Sara's X: https://x.com/Sara_Imari Sara's Instagram: https://instagram.com/alien_matter PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:40) - Definition of life (31:18) - Time and space (42:00) - Technosphere (46:25) - Theory of everything (55:06) - Origin of life (1:16:44) - Assembly theory (1:32:58) - Aliens (1:44:48) - Great Perceptual Filter (1:48:45) - Fashion (1:52:47) - Beauty (1:59:08) - Language (2:05:50) - Computation (2:15:37) - Consciousness (2:24:28) - Artificial life (2:48:21) - Free will (2:55:05) - Why anything exists
    Lex Fridman Podcast
    June 13, 2024

    #432 – Kevin Spacey: Power, Controversy, Betrayal, Truth & Love in Film and Life

    Kevin Spacey is a two-time Oscar-winning actor, who starred in Se7en, the Usual Suspects, American Beauty, and House of Cards, creating haunting performances of characters who often embody the dark side of human nature. Please support this podcast by checking out our sponsors: - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - Eight Sleep: https://eightsleep.com/lex to get $350 off - BetterHelp: https://betterhelp.com/lex to get 10% off - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Kevin's X: https://x.com/KevinSpacey Kevin's Instagram: https://www.instagram.com/kevinspacey Kevin's YouTube: https://youtube.com/kevinspacey Kevin's Website: https://kevinspacey.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:14) - Seven (13:54) - David Fincher (21:46) - Brad Pitt and Morgan Freeman (27:15) - Acting (35:40) - Improve (44:24) - Al Pacino (48:07) - Jack Lemmon (57:25) - American Beauty (1:17:34) - Mortality (1:20:22) - Allegations (1:38:19) - House of Cards (1:56:55) - Jack Nicholson (1:59:57) - Mike Nichols (2:05:30) - Christopher Walken (2:12:38) - Father (2:21:30) - Future
    Lex Fridman Podcast
    June 05, 2024

    #431 – Roman Yampolskiy: Dangers of Superintelligent AI

    Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
    Lex Fridman Podcast
    June 02, 2024

    #430 – Charan Ranganath: Human Memory, Imagination, Deja Vu, and False Memories

    Charan Ranganath is a psychologist and neuroscientist at UC Davis, specializing in human memory. He is the author of a new book titled Why We Remember. Please support this podcast by checking out our sponsors: - Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off - ZipRecruiter: https://ziprecruiter.com/lex - Notion: https://notion.com/lex - MasterClass: https://masterclass.com/lexpod to get 15% off - Shopify: https://shopify.com/lex to get $1 per month trial - LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/charan-ranganath-transcript EPISODE LINKS: Charan's X: https://x.com/CharanRanganath Charan's Instagram: https://instagram.com/thememorydoc Charan's Website: https://charanranganath.com Why We Remember (book): https://amzn.to/3WzUF6x Charan's Google Scholar: https://scholar.google.com/citations?user=ptWkt1wAAAAJ Dynamic Memory Lab: https://dml.ucdavis.edu/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:18) - Experiencing self vs remembering self (23:59) - Creating memories (33:31) - Why we forget (41:08) - Training memory (51:37) - Memory hacks (1:03:26) - Imagination vs memory (1:12:44) - Memory competitions (1:22:33) - Science of memory (1:37:48) - Discoveries (1:48:52) - Deja vu (1:54:09) - False memories (2:14:14) - False confessions (2:18:00) - Heartbreak (2:25:34) - Nature of time (2:33:15) - Brain–computer interface (BCI) (2:47:19) - AI and memory (2:57:33) - ADHD (3:04:30) - Music (3:14:15) - Human mind
    Lex Fridman Podcast
    May 25, 2024

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God

    Paul Rosolie is a naturalist, explorer, author, and founder of Junglekeepers, dedicating his life to protecting the Amazon rainforest. Support his efforts at https://junglekeepers.org Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - Yahoo Finance: https://yahoofinance.com - BetterHelp: https://betterhelp.com/lex to get 10% off - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/paul-rosolie-2-transcript EPISODE LINKS: Paul's Instagram: https://instagram.com/paulrosolie Junglekeepers: https://junglekeepers.org Paul's Website: https://paulrosolie.com Mother of God (book): https://amzn.to/3ww2ob1 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (12:29) - Amazon jungle (14:47) - Bushmaster snakes (26:13) - Black caiman (44:33) - Rhinos (47:47) - Anacondas (1:18:04) - Mammals (1:30:10) - Piranhas (1:41:00) - Aliens (1:58:45) - Elephants (2:10:02) - Origin of life (2:23:21) - Explorers (2:36:38) - Ayahuasca (2:45:03) - Deep jungle expedition (2:59:09) - Jane Goodall (3:01:41) - Theodore Roosevelt (3:12:36) - Alone show (3:22:23) - Protecting the rainforest (3:38:36) - Snake makes appearance (3:46:47) - Uncontacted tribes (4:00:11) - Mortality (4:01:39) - Steve Irwin (4:09:18) - God
    Lex Fridman Podcast
    May 15, 2024

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens

    Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/sean-carroll-3-transcript EPISODE LINKS: Sean's Website: https://preposterousuniverse.com Mindscape Podcast: https://www.preposterousuniverse.com/podcast/ Sean's YouTube: https://youtube.com/@seancarroll Sean's Patreon: https://www.patreon.com/seanmcarroll Sean's Twitter: https://twitter.com/seanmcarroll Sean's Instagram: https://instagram.com/seanmcarroll Sean's Papers: https://scholar.google.com/citations?user=Lfifrv8AAAAJ Sean's Books: https://amzn.to/3W7yT9N PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (11:03) - General relativity (23:22) - Black holes (28:11) - Hawking radiation (32:19) - Aliens (41:15) - Holographic principle (1:05:38) - Dark energy (1:11:38) - Dark matter (1:20:34) - Quantum mechanics (1:41:56) - Simulation (1:44:18) - AGI (1:58:42) - Complexity (2:11:25) - Consciousness (2:20:32) - Naturalism (2:24:49) - Limits of science (2:29:34) - Mindscape podcast (2:39:29) - Einstein

    #427 – Neil Adams: Judo, Olympics, Winning, Losing, and the Champion Mindset

    Neil Adams is a judo world champion, 2-time Olympic silver medalist, 5-time European champion, and often referred to as the Voice of Judo. Please support this podcast by checking out our sponsors: - ZipRecruiter: https://ziprecruiter.com/lex - Eight Sleep: https://eightsleep.com/lex to get special savings - MasterClass: https://masterclass.com/lexpod to get 15% off - LMNT: https://drinkLMNT.com/lex to get free sample pack - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/neil-adams-transcript EPISODE LINKS: Neil's Instagram: https://instagram.com/naefighting Neil's YouTube: https://youtube.com/NAEffectiveFighting Neil's TikTok: https://tiktok.com/@neiladamsmbe Neil's Facebook: https://facebook.com/NeilAdamsJudo Neil's X: https://x.com/NeilAdamsJudo Neil's Website: https://naeffectivefighting.com Neil's Podcast: https://naeffectivefighting.com/podcasts/the-dojo-collective-podcast A Life in Judo (book): https://amzn.to/4d3DtfB A Game of Throws (audiobook): https://amzn.to/4aA2WeJ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:13) - 1980 Olympics (26:35) - Judo explained (34:40) - Winning (52:54) - 1984 Olympics (1:01:55) - Lessons from losing (1:17:37) - Teddy Riner (1:37:12) - Training in Japan (1:52:51) - Jiu jitsu (2:03:59) - Training (2:27:18) - Advice for beginners

    #426 – Edward Gibson: Human Language, Psycholinguistics, Syntax, Grammar & LLMs

    Edward Gibson is a psycholinguistics professor at MIT and heads the MIT Language Lab. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - Listening: https://listening.com/lex and use code LEX to get one month free - Policygenius: https://policygenius.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - Eight Sleep: https://eightsleep.com/lex to get special savings Transcript: https://lexfridman.com/edward-gibson-transcript EPISODE LINKS: Edward's X: https://x.com/LanguageMIT TedLab: https://tedlab.mit.edu/ Edward's Google Scholar: https://scholar.google.com/citations?user=4FsWE64AAAAJ TedLab's YouTube: https://youtube.com/@Tedlab-MIT PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:53) - Human language (14:59) - Generalizations in language (20:46) - Dependency grammar (30:45) - Morphology (39:20) - Evolution of languages (42:40) - Noam Chomsky (1:26:46) - Thinking and language (1:40:16) - LLMs (1:53:14) - Center embedding (2:19:42) - Learning a new language (2:23:34) - Nature vs nurture (2:30:10) - Culture and language (2:44:38) - Universal language (2:49:01) - Language translation (2:52:16) - Animal communication

    Related Episodes

    Machine Learning and the 4th Industrial Revolution

    AI technology is already changing the face of the world as we know it.

    This lecture looks at the reasons why AI is hailed as an unprecedented revolution using practical examples from healthcare and business.

    Humans and machines will coexist and make joint decisions, but what does this mean for humanity? Learn what this gigantic shift, a 4th industrial revolution, entails and how you can harness the benefits and avoid the traps.


    A lecture by Dr Loubna Bouarfa

    The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/machine-learning

    Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/

    Website:  https://gresham.ac.uk
    Twitter:  https://twitter.com/greshamcollege
    Facebook: https://facebook.com/greshamcollege
    Instagram: https://instagram.com/greshamcollege

    Support the show

    #151 Asa Cooper: How Will We Know If AI Is Fooling Us?

    This episode is sponsored by Celonis, the global leader in process mining. AI has landed and enterprises are adapting. To give customers slick experiences and teams the technology to deliver. The road is long, but you’re closer than you think. Your business processes run through systems. Creating data at every step. Celonis reconstructs this data to generate Process Intelligence. A common business language. So AI knows how your business flows. Across every department, every system and every process. With AI solutions powered by Celonis enterprises get faster, more accurate insights. A new level of automation potential. And a step change in productivity, performance and customer satisfaction. Process Intelligence is the missing piece in the AI-enabled tech stack.

     

    Go to https://celonis.com/eyeonai to find out more.

     

    Welcome to episode 151 of the ‘Eye on AI’ podcast. In this episode, host Craig Smith sits down with Asa Cooper, a postdoctoral researcher at NYU, who is at the forefront of language model safety. 

    This episode takes us on a journey through the complexities of AI situational awareness, the potential for consciousness in language models, and the future of AI safety research.

    Craig and Asa delve into the nuances of AI situational awareness and its distinction from sentience. Asa, with his rich background in NLP and AI safety from Edinburgh University, shares insights from his post-doc work at NYU, discussing collaborative efforts on a paper that has garnered attention for its take on situational awareness in large language models (LLMs). 

    We explore the economic drivers behind creating AI with such capabilities and the role of scaling versus algorithmic innovation in achieving this milestone. We also delve into the concept of agency in LLMs, the challenges of post-deployment monitoring, and the effectiveness of current measures in detecting situational awareness.

    To wrap things up, we break down the importance of source trustworthiness and the model's ability to discern reliable information, a critical aspect of AI safety and functionality, so make sure to watch till the end.

     

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Preview and Introduction

    (02:30) Asa's NLP Expertise and the Safety of Language Models

    (06:05) Breaking Down AI's Situational Awareness 

    (13:44) Evolution of AI: Predictive Models to AI Coworkers

    (20:29) New Frontier in AI Development?

    (27:14) Measuring AI's Awareness

    (33:49) Innovative Experiments with LLMs

    (40:51) The Consequences of Detecting Situational Awareness in AI

    (44:07) How To Train AI On Trusted Sources

    (49:52) What Is The Future of AI Training?

    (56:35) - AI Safety: Public Concerns and the Path Forward

     

    AI will help us turn into Aliens

    Texas is frozen over and the lack of human contact I have had since I haven't left my house for three days has made me super introspective about how humans will either evolve with technology and AI away from our primitive needs or we fail and Elon Musk leaves us behind. The idea that future humans may be considered aliens is based on the belief that our evolution and technological advancements will bring about significant changes to our biology and consciousness. As we continue to enhance our physical and cognitive abilities with artificial intelligence, biotechnology, and other emerging technologies, we may transform into beings that are fundamentally different from our current selves. In this future scenario, it's possible that we may be considered as aliens in comparison to our primitive ancestors. Enjoy

    Episode 283: Will AI take over the world and enslave humans to mine batteries for them?

    Welcome to the latest episode of our podcast, where we delve into the fascinating and sometimes terrifying world of artificial intelligence. Today's topic is AI developing emotions and potentially taking over the world.

    As AI continues to advance and become more sophisticated, experts have started to question whether these machines could develop emotions, which in turn could lead to them turning against us. With the ability to process vast amounts of data at incredible speeds, some argue that AI could one day become more intelligent than humans, making them a potentially unstoppable force.

    But is this scenario really possible? Are we really at risk of being overtaken by machines? And what would it mean for humanity if it were to happen?

    Join us as we explore these questions and more, with insights from leading experts in the field of AI and technology. We'll look at the latest research into AI and emotions, examine the ethical implications of creating sentient machines, and discuss what measures we can take to ensure that AI remains under our control.

    Whether you're a tech enthusiast, a skeptic, or just curious about the future of AI, this is one episode you won't want to miss. So tune in now and join the conversation!

    P.S AI wrote this description ;)