
    #86 – David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning

    April 03, 2020

    Podcast Summary

    • David's passion for programming began with colorful names on a BBC microcomputer. Starting with BASIC on a BBC microcomputer, David's curiosity and passion for technology led him to a successful career in AI research

      David's first programming experience, writing his name in different colors on a BBC microcomputer, ignited a passion that led him to explore the endless possibilities of creating with technology. He viewed computers not just as puzzle-solving machines, but as tools to bring his imagination to life. This mindset propelled him to learn more, starting with BASIC and progressing to 6502 assembly, and eventually leading him to a successful career in artificial intelligence research. This conversation serves as a reminder of the magic and limitless potential that technology holds, especially when approached with curiosity and a passion for learning.

    • From young fascination to a life-long pursuit of AI. David Silver, inspired by his father's AI studies, fell in love with AI during undergrad and pursued a career creating human-like intelligence, eventually creating a Go-playing AI that could beat him.

      The speaker, David Silver, was deeply fascinated by artificial intelligence (AI) from a young age, influenced by his father's pursuit of a master's degree in AI. He fell in love with AI during his undergraduate studies at Cambridge University, where he questioned the potential goals of computer science and became determined to create a machine with human-like intelligence. His early experiences in the games industry involved building handcrafted AI for games, but he realized that this wasn't enough to satisfy his curiosity about intelligence. He went on to pursue a PhD, applying reinforcement learning to the game of Go, and eventually created a program that could beat him. This achievement, while not involving neural networks, was a profound and inspiring moment for Silver and a significant milestone in the history of AI.

    • The challenge of creating a computer program capable of world-class Go play. In the late 1990s and early 2000s, Go, with its deep complexity and vast search space, was considered unsolvable for AI using traditional methods. However, the dream of building a computer program capable of world-class Go play persisted, leading to significant advancements in AI.

      The game of Go, known for its deep complexity and vast search space, was considered unsolvable for AI using traditional methods in the late 1990s and early 2000s. Despite significant progress in other domains like chess, the best Go programs were far from human-level performance. The unique challenge of Go was its intuitive nature, where human players could evaluate positions and make judgments that computers struggled to replicate. This intuitive aspect, combined with Go's enormous search space, made it a significant hurdle for AI. However, the dream of building a computer program capable of world-class Go play was a compelling one, as it represented a potential giant leap forward for AI. This dream persisted even as other domains saw significant progress using classical AI methods. Ultimately, it took new approaches and techniques to crack the Go code, leading to significant advancements in AI.

    • Understanding the importance of intuition and learning in AI development. Intuition and learning are crucial for AI to surpass human-level performance in complex games like Go, enabling the system to make predictions, solve problems, and adapt to new situations.

      The development of AI, specifically in the context of mastering complex games like Go, requires a combination of intuition and learning. Intuition, or the ability to understand positional structure, is necessary for making predictions and solving problems. Learning, on the other hand, allows the machine to understand and apply knowledge for itself, rather than relying on pre-programmed rules or a large knowledge base. The realization that both intuition and learning were essential for surpassing human-level performance marked a profound moment in the development of AI. This shift from a rule-based approach to a learning-based one was a significant innovation, as it enabled the system to verify its own knowledge and adapt to new situations. The game of Go, with its simple rules but complex strategic depth, served as a challenging test bed for this approach.

    • The intriguing complexities of Go and the transformative impact of reinforcement learning. Go, a seemingly simple board game, offers profound strategies and immense complexity, leading researchers to explore reinforcement learning for AI and computer Go, pushing the boundaries of AI and machine learning.

      Go, a complex board game with simple rules, offers profound strategies and immense complexity, making it a significant part of Chinese, Japanese, and Korean cultures for thousands of years. Unlike chess, the evaluation of a static board position is not as reliable in Go due to its enormous search space and the difficulty of predicting how territories will form. This challenge led researchers to explore reinforcement learning as a solution. The speaker, who was drawn to the concept of intelligence and AI, discovered reinforcement learning through Sutton and Barto's seminal textbook. Despite its philosophical depth and early challenges, he believed it was the path to making progress in AI and computer Go. After connecting with Rich Sutton, he pursued his PhD in Go and reinforcement learning at the University of Alberta, where he found a supportive environment. In essence, the speaker's journey illustrates the intriguing complexities of Go and the transformative impact of reinforcement learning on the field of AI and computer Go. The game's simplicity belies its depth, and the challenges it presents have led researchers to push the boundaries of AI and machine learning.

    • Understanding Reinforcement Learning through Building Blocks: Value Function, Policy, and Model. Reinforcement learning (RL) is an AI approach that studies an agent interacting with an environment to maximize rewards. It's solved by combining value functions, policies, and models, with various methods making different choices regarding these components.

      Reinforcement learning (RL) is a crucial approach to understanding and building artificial intelligence (AI). RL is the study of an agent interacting with an environment, with the goal of maximizing rewards. The problem of RL is ambitious, aiming to capture all aspects of an agent's interaction with its environment. To solve this complex problem, researchers often decompose it into smaller pieces. Three common building blocks in RL solutions are a value function, a policy, and a model. A value function predicts future rewards, a policy decides on actions, and a model predicts environmental outcomes. Different RL approaches make various choices regarding these building blocks. For example, value-based methods focus on learning a value function, while policy-based methods focus on learning a policy. Model-based methods learn an explicit model of the environment, while model-free methods learn directly from the environment without an explicit model. In summary, RL is a fundamental approach to AI, focusing on an agent's interaction with an environment. Its solutions involve various combinations of value functions, policies, and models, offering different approaches to solving the RL problem. The field is still in its early stages, and further discoveries are expected.
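      As a rough, hedged illustration of these three building blocks (not code from the episode), the Python sketch below uses an invented toy environment: a tabular value function learned with Q-learning, an epsilon-greedy policy derived from it, and a simple learned model recorded as transition counts. All names, states, and parameters are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

# Hypothetical toy environment (invented for illustration): a 1-D chain where
# the agent starts at state 0 and earns reward +1 for reaching the last state.
class ChainEnv:
    def __init__(self, n_states=6):
        self.n = n_states
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action 0 moves left, action 1 moves right
        self.state = max(0, self.state - 1) if action == 0 else min(self.n - 1, self.state + 1)
        done = self.state == self.n - 1
        return self.state, (1.0 if done else 0.0), done

# Value function: Q(s, a), learned with one-step Q-learning.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def policy(state, n_actions=2):
    # Policy: epsilon-greedy with respect to the current value estimates,
    # breaking ties at random so early exploration is not biased.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    values = [Q[(state, a)] for a in range(n_actions)]
    best = max(values)
    return random.choice([a for a, v in enumerate(values) if v == best])

# Model: empirical transition counts (the "model" building block).
model = defaultdict(int)

env = ChainEnv()
for episode in range(500):
    s, done = env.reset(), False
    while not done:
        a = policy(s)
        s2, r, done = env.step(a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in range(2)))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        model[(s, a, s2)] += 1   # record the observed transition
        s = s2

print("Estimated value of moving right:", [round(Q[(s, 1)], 2) for s in range(env.n)])
```

      In the terms used above, a value-based method would keep only Q, a policy-based method would parameterize the policy directly, and a model-based method would plan with the recorded transitions instead of (or in addition to) learning from the environment alone.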

    • Recognizing the need for learning in complex problems. Deep reinforcement learning uses neural networks to learn and represent various components of the agent, making it a powerful tool for solving complex problems in reinforcement learning.

      The first step in approaching complex problems in reinforcement learning is recognizing the need for learning. Learning is essential for achieving good performance in complex environments. Deep reinforcement learning is a solution method that utilizes neural networks to represent various components of the agent, such as the value function, model, or policy. Deep learning's universality allows it to learn and represent any function, making it a powerful tool for reinforcement learning. The fact that neural networks can learn complex representations for policies, models, or value functions is surprising and beautiful, even if the success of reinforcement learning itself is not.
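      To make the "deep" part concrete, here is a minimal sketch (assuming NumPy, with invented one-hot states and stand-in value targets) in which the lookup table is replaced by a small neural network that approximates a value function by gradient descent. It is illustrative only, not the architecture used in AlphaGo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup invented for illustration: 10 states encoded as one-hot
# vectors, and an arbitrary target value curve the network should approximate.
n_states, n_hidden = 10, 16
W1 = rng.normal(scale=0.5, size=(n_states, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))

def forward(x):
    h = np.tanh(x @ W1)      # hidden layer
    return h, h @ W2         # predicted value V(s)

states = np.eye(n_states)                            # one-hot state encoding
targets = np.linspace(0.0, 1.0, n_states)[:, None]   # stand-in value targets

lr = 0.1
for step in range(2000):
    h, v = forward(states)
    err = v - targets                      # prediction error
    # Backpropagate the squared-error loss through both layers.
    grad_W2 = h.T @ err / n_states
    grad_h = err @ W2.T * (1 - h ** 2)
    grad_W1 = states.T @ grad_h / n_states
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("Learned values:", forward(states)[1].ravel().round(2))
```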

    • Neural networks' complexity enables continuous learning without getting stuck. The high-dimensional structure of neural networks lets learning keep making progress rather than getting trapped, a property that was surprising during the AI winter; combined with simple, effective algorithms like Monte Carlo tree search, this universal representational capacity and learning ability continues to drive progress.

      The complexity of high-dimensional neural networks, despite their nonlinear and seemingly bumpy loss surfaces, allows learning to keep making progress without getting stuck at poor local optima. This property was surprising during the AI winter, when people could only build small neural networks; now, with better theory and simple, effective algorithms, it seems that the universal representational capacity and learning ability of neural networks will carry us further into the future. The simplest ideas that have emerged often prove to be the most effective and longest-lasting. In computer Go, Monte Carlo evaluation revolutionized the way positions were assessed: the system plays a position out to the end of the game with random moves many times and takes the average outcome as its prediction of the position's value. Applied to every node of a search tree, this simple idea became Monte Carlo tree search.
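      The Monte Carlo evaluation idea just described fits in a few lines. The toy game below (players alternately add 1 or 2 to a running total; reaching 10 or more wins) is invented purely for illustration and stands in for Go.

```python
import random

# Toy stand-in game, invented for illustration (not Go): players alternately
# add 1 or 2 to a running total, and whoever reaches 10 or more wins.
TARGET = 10

def rollout(total):
    """Play random moves to the end; return +1 if the player to move wins, else -1."""
    player_to_move = 0
    while total < TARGET:
        total += random.choice((1, 2))
        if total >= TARGET:
            return 1 if player_to_move == 0 else -1
        player_to_move ^= 1
    return -1

def monte_carlo_value(total, n_rollouts=5000):
    # Monte Carlo evaluation: average the outcomes of many random playouts.
    return sum(rollout(total) for _ in range(n_rollouts)) / n_rollouts

for total in range(TARGET):
    print(f"position {total}: estimated value {monte_carlo_value(total):+.2f}")
```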

    • The use of randomization in Go-playing programs led to advancements but lacked deep understanding. AlphaGo, using deep learning, reached human master level at Go, marking a shift away from the search-dominated AI of the past.

      The use of randomization in computer programs, specifically in the context of playing the game of Go, led to significant advancements in the strength of these programs. This idea, first implemented in the MoGo program, allowed it to reach human master level on small boards. However, it was missing a key ingredient: a deeper understanding of the game's strategies. This brought about the development of AlphaGo, a project initiated at DeepMind to explore a new approach to building intuition in Go-playing programs. The deep learning revolution, which had proven successful in image recognition, inspired the team to apply this technology to Go. Their first AlphaGo paper demonstrated that a pure deep learning system could reach human master level at the full game of Go without any search at all. This marked a significant shift away from the search-dominated AI of the past and showed that deep learning had the potential to reach the top levels of human play.
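      A hedged sketch of the supervised step implied here, training a policy to predict expert moves with a softmax and cross-entropy loss, might look like the following. The board features and "expert" labels are random stand-ins invented for illustration, not real Go data or the actual AlphaGo networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-ins: random position features and random "expert" move labels.
n_features, n_moves, n_examples = 32, 19 * 19, 512
X = rng.normal(size=(n_examples, n_features))    # fake position features
y = rng.integers(0, n_moves, size=n_examples)    # fake expert move labels
W = np.zeros((n_features, n_moves))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.05
for step in range(300):
    probs = softmax(X @ W)                  # policy: distribution over moves
    onehot = np.zeros_like(probs)
    onehot[np.arange(n_examples), y] = 1.0
    # Gradient of the cross-entropy loss between the policy and expert moves.
    W -= lr * X.T @ (probs - onehot) / n_examples

accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
print(f"move-prediction accuracy on training positions: {accuracy:.2f}")
```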

    • Learning from expert games and self-play in AlphaGo. AlphaGo began with learning from human data, but the ultimate goal was to create a self-playing system, achieving a historic victory against a professional Go player in 2015, marking a significant milestone in AI history.

      The AlphaGo project, which aimed to create an AI that could beat the world's best Go players, began with learning from expert games using human data. This was a pragmatic step to help the team understand the system and build deep learning representations. The ultimate goal, however, was to build a system using self-play. The AlphaGo victory against the European champion in 2015 was a historic moment, marking the first time a Go program had ever beaten a professional player. The team's realization of the magnitude of this accomplishment came when they encountered the media attention and global audience the match attracted. Despite the initial use of human data, the team's long-term goal was to build a self-playing system, which they continue to work on today. The AlphaGo victory is considered a significant milestone in the history of AI, demonstrating the potential of deep learning and reinforcement learning in achieving human-level intelligence.

    • AlphaGo's historic victory over a human Go champion. AlphaGo's groundbreaking win showed AI surpassing human capabilities in complex games, while also emphasizing the importance of research focus and creativity.

      The development and public unveiling of AlphaGo, a computer program that defeated a human Go champion, was a groundbreaking moment in artificial intelligence research. The team behind AlphaGo was aware of the program's imperfections but chose to focus on their research without knowing the full implications of their work. They had varying levels of confidence in AlphaGo's abilities, with some team members predicting multiple wins against the human champion, while others predicted only one. The first game between AlphaGo and the human champion was historic, with AlphaGo making an audacious move that surprised the human player. The second game featured a move, known as move 37, that defied conventional Go wisdom and showed that computers could exhibit creativity. The team's experience of developing and revealing AlphaGo was both exhilarating and nerve-wracking, as they knew they were making history but were also aware of the program's limitations. Ultimately, the success of AlphaGo demonstrated the potential of artificial intelligence to surpass human capabilities in complex games, while also highlighting the importance of maintaining focus on research and not getting distracted by the potential implications of that research.

    • AlphaGo vs Lee Sedol: A transformational moment for AI and humanity. The match between AlphaGo and Lee Sedol showcased AI's incredible abilities, but also highlighted the unique strategies and moves only a human champion can employ. Lee Sedol acknowledged its significance, viewing it as a transformational moment for exploration and growth in AI.

      The match between AlphaGo and 18-time world champion Go player Lee Sedol showcased the incredible abilities of AlphaGo, but also highlighted the unexpected strategies and genius moves that only a human champion can employ. Lee Sedol, in his retirement announcement, acknowledged the significance of the match for AI and humanity, viewing it as a transformational moment that opened new possibilities for exploration and growth. During their panel discussion at AAAI, the conversation between Lee Sedol and Garry Kasparov, another legendary game player, likely revolved around their shared experiences, insights, and reflections on the impact of AI on their respective games and the broader implications for the future of artificial intelligence.

    • AlphaGo and AlphaZero: Redefining AI in Game Playing. AlphaZero's self-play capabilities surpassed human expertise, setting a new standard for AI, and offering potential applications beyond game playing.

      The advancements in AI, specifically in the field of game playing with AlphaGo and AlphaZero, have been profound. Garry Kasparov, a renowned chess grandmaster, holds a deep respect for these achievements, as they represent a shift towards systems that can learn and discover new principles for themselves, rather than relying on human-encoded knowledge. AlphaZero, which learned to play Go through self-play, surpassed human expertise and set a new standard for AI capabilities. Self-play is a crucial aspect of this development, allowing systems to learn strategies and understand complex situations without the need for human opponents. The potential applications of these advancements extend beyond game playing, offering possibilities for progress in various domains where knowledge is hard to extract or unavailable. Overall, the learning approach in AI represents a significant step towards systems that can understand and evaluate their world, leading to more effective and versatile AI solutions.

    • Learning from self-play in AI. AlphaZero, an AI system, uses self-play to adapt and surpass human expertise in games like Go and chess, demonstrating the potential of self-correction and error learning in complex systems.

      The development of AI systems such as AlphaZero is about creating systems that can adapt and succeed in various environments without requiring extensive human knowledge input. Self-play, a key concept in this field, involves training an algorithm to learn by playing against itself, leading to unexpected successes like beating world-class players in games such as Go and chess. The motivation behind self-play was the deeper scientific question of whether it could truly work and reach the same level as systems trained on human knowledge. Despite initial uncertainty, AlphaZero surpassed its predecessor, demonstrating the potential of self-correction and error learning in complex systems. The intuition behind this success lies in the system's ability to identify and correct its own errors, addressing issues that arise from various sources. This approach opens up new possibilities for AI research and development.
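      A minimal sketch of the self-play idea, again on an invented toy game rather than Go or chess: one learned value table plays both sides, and every visited position is nudged toward the final result from the perspective of the player to move. All details are assumptions made for illustration.

```python
import random
from collections import defaultdict

# Hedged self-play sketch on an invented toy game (add 1 or 2 to a total;
# reaching 10 or more wins), standing in for Go or chess.
TARGET = 10
V = defaultdict(float)        # V[total] = value for the player about to move
alpha, epsilon = 0.1, 0.2

def choose_move(total):
    # Pick the move that leaves the opponent in the worst position, with some
    # random exploration so that self-play keeps probing its own weaknesses.
    if random.random() < epsilon:
        return random.choice((1, 2))
    return min((1, 2), key=lambda m: -1.0 if total + m >= TARGET else V[total + m])

def play_game():
    total, player, history = 0, 0, []
    while True:
        history.append((total, player))
        total += choose_move(total)
        if total >= TARGET:
            return history, player      # `player` made the winning move
        player ^= 1

for _ in range(20000):
    history, winner = play_game()
    for state, player in history:
        outcome = 1.0 if player == winner else -1.0
        # Each visited position is updated toward the game's final result,
        # seen from the perspective of the player who was to move there.
        V[state] += alpha * (outcome - V[state])

print({s: round(V[s], 2) for s in sorted(V)})
```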

    • Self-improving systems like AlphaZero can progressively improve and reach optimal behavior in games. AlphaZero, a self-improving system, achieved superhuman performance in Go and chess without human intervention, demonstrating the potential of self-improving systems to progressively improve and reach optimal behavior in games.

      Self-improving systems, like AlphaZero, can correct their own errors and progressively improve, potentially indefinitely. The surprising aspect is that this process is monotonic: in patching old errors, it does not open up new ones, and it can lead to optimal behavior in single-agent settings and minimax-optimal behavior in two-player games. AlphaGo was initially trained on expert games; AlphaZero later learned to play Go and chess without human data, achieving superhuman performance in these games with no modifications to the algorithm. However, this is just a step towards truly cracking the deep problems of AI. The claim was supported by a falsifiable hypothesis: if someone runs AlphaZero with greater computational resources in the future, it should continue to make progress towards optimal behavior, and any new areas it opens up are still progress towards the best that can be done. AlphaZero's ability to learn from scratch and achieve superhuman performance in various games without human intervention showcases the power and potential of self-improving systems.

    • AlphaZero's self-learning ability in games. AlphaZero, an advanced AI, learned to play multiple games through trial and error, discovering patterns and strategies on its own, demonstrating the power of self-learning and creativity in uncertain environments.

      The world is a complex and messy place, and the ability to learn and adapt in such an environment is key to achieving goals. AlphaZero, an advanced AI system, mastered games like Go, chess, and shogi through pure self-play, and its successor MuZero extended this to Atari games without even being explicitly told the rules. These systems learn through trial and error, discovering patterns and strategies on their own. This self-learning ability is the essence of creativity, as it involves constant discovery of new ideas and behaviors. The process of reinforcement learning can be seen as a micro discovery happening millions of times, leading to new and unexpected strategies. This ability to learn and adapt in uncertain environments is crucial for AI systems to be applicable to various domains and to continue discovering new things, some of which may be considered creative by humans.
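      As a loose, hedged analogy to learning without being told the rules (not an implementation of MuZero), the sketch below has an agent build its own model of an invented toy environment from observed transitions and then plan entirely inside that learned model.

```python
import random
from collections import defaultdict

# Invented toy environment and parameters, for illustration only: the agent
# never reads the rules; it only observes transitions, fits a model to them,
# and then plans (value iteration) inside that learned model.
N, GOAL, GAMMA = 6, 5, 0.9

def true_step(state, action):
    # The real dynamics, hidden from the agent's planning procedure.
    nxt = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

# 1. Learn a model purely from observed transitions.
trans = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
rew_sum = defaultdict(float)                    # (s, a) -> summed reward
visits = defaultdict(int)                       # (s, a) -> visit count
for _ in range(5000):
    s, a = random.randrange(N), random.randrange(2)
    s2, r = true_step(s, a)
    trans[(s, a)][s2] += 1
    rew_sum[(s, a)] += r
    visits[(s, a)] += 1

# 2. Plan with value iteration using only the learned model.
V = [0.0] * N
for _ in range(100):
    for s in range(N):
        best = 0.0
        for a in range(2):
            n = visits[(s, a)]
            if n == 0:
                continue
            expected_next = sum(c / n * V[s2] for s2, c in trans[(s, a)].items())
            best = max(best, rew_sum[(s, a)] / n + GAMMA * expected_next)
        V[s] = best

print("Values planned from the learned model:", [round(v, 2) for v in V])
```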

    • AlphaZero discovers joseki patterns in Go. AlphaZero, a self-playing Go system, discovered and improved upon human-established joseki patterns, leading to new norms in top-level Go competitions and expanding potential applications beyond games.

      AlphaZero, a self-playing Go system, discovered and innovated upon human-established joseki patterns during its training. This discovery not only affirmed the quality of human knowledge but also led to new norms in top-level Go competitions. The potential of self-play extends beyond games, with aspirations for applications in robotics, safety-critical domains, and real-world problems like quantum computation and chemical synthesis. The flexibility and power of these tools can lead to unexpected and significant outcomes. While reinforcement learning typically requires specifying a reward function, an intriguing question is whether rewards can be discovered intrinsically when the objective isn't meticulously defined. Ultimately, the purpose of intelligence is to solve a clearly defined problem, and even if the system creates its own motivations, there should be an ultimate goal against which it is evaluated.

    • Understanding Intelligence Requires a Clear Goal or Problem. Intelligence is a system optimizing for a goal, and understanding it necessitates defining the problem or goal it's trying to solve.

      To understand or implement intelligence, it's crucial to have a well-defined problem or goal. The reward function, the purpose that drives an intelligent system, is a fundamental concept. The meaning of life, or the reward function for human existence, is a complex question that can be understood from various perspectives. Some view the universe as optimizing for entropy, with evolution as a mechanism for achieving this goal; at a lower level, evolution may be seen as optimizing for efficient energy dispersion, eventually leading to the development of brains and intelligence. Intelligence can be understood as a system optimizing for a goal, while also being a complex decision-making system.

    • Exploring the potential of artificial intelligence. David Silver discusses the advancements in machine intelligence, its ability to surpass human capabilities, and the exciting possibilities for the future.

      The pursuit of understanding intelligence and the meaning of life is a multi-layered, multi-perspective process. David Silver discussed the importance of creating artificial intelligence that can surpass human capabilities to achieve goals more effectively. This concept of creating intelligent systems that can learn and set sub-goals for themselves represents a new layer in the story of intelligence. Silver believes that machine intelligence is becoming increasingly capable of abilities previously thought exclusive to the human mind, such as intuition and creativity. This turning point in history is an exciting moment, as we continue to explore the potential of artificial intelligence and its role in our understanding of the meaning of life. Thank you, David, for your groundbreaking work in this field and for inspiring millions of people.

