
    #206 – Ishan Misra: Self-Supervised Deep Learning in Computer Vision

    July 31, 2021

    Podcast Summary

    • What is Self-Supervised Machine Learning?
      Self-supervised machine learning allows AI systems to understand the visual world with minimal human help. It discovers concepts and learns representations about the world without explicit human supervision, making it particularly useful for images and video.

      Self-supervised machine learning is a way to make AI systems understand the visual world with minimal human help. In supervised learning, humans provide labeled data for the system to imitate. In semi-supervised learning, some of the data is labeled and some is not. Self-supervised learning, on the other hand, aims to discover concepts and learn representations about the world without explicit human supervision. This learning method is particularly useful for images and video, where scaling labeled data is difficult. The term "self-supervised" emphasizes that the data itself serves as its own supervision, allowing the system to learn without significant human intervention.

    • Understanding Self-Supervised Learning for AI Development
      Self-supervised learning uses data to teach machines how to recognize patterns and relationships without explicit supervision. This method can improve natural language processing and computer vision, but more research is needed to unlock its full potential for AI development.

      Self-supervised learning is a type of machine learning where the data itself is used as a source of supervision signals to train algorithms to learn patterns and relationships in the data. This can be applied in many domains, including natural language processing and computer vision, by using tricks like masking words or cropping images so that the algorithm must predict the missing or held-out content. By using the consistency inherent in physical reality as a source of supervision, self-supervised learning unlocks important insights and can ultimately contribute to the development of more intelligent algorithms. However, there is still much to learn about this type of learning and its potential for advancing AI.

    • The Power of Self-Supervised Learning for Common Sense
      Self-supervised learning enables machines to learn common sense without relying on explicit labeling or human supervision, allowing them to observe and infer information about the world through their interactions.

      Self-supervised learning is a powerful way to learn common sense about the world without explicit labeling. While supervised learning is not scalable for labeling every aspect of the world, self-supervised learning can enable an agent to observe and infer a lot of information about the world through its interactions. Humans are not good sources of supervision as they are not consistent and may not be specific, leading to confusion. Creating a perfect taxonomy of objects in the world is a hopeless pursuit as compositional objects can always create new categories. Therefore, machines have to discover supervision in the natural signal and learn through observation and inference.

    • The Power of Similarity and Self-Supervised Learning in Deep Understanding
      Recognizing patterns and understanding similarity between objects can lead to a deeper understanding and help solve complex problems. Self-supervised learning prioritizes discovering underlying structure rather than annotating everything.

      Similarity between objects is a crucial concept for understanding and learning about them. It allows us to recognize patterns and relate new experiences to past ones, even if we don't have explicit knowledge or vocabulary for them. Categorization can be useful, but it can also be limiting and time-consuming to annotate everything. Self-supervised learning, which prioritizes discovering underlying structure rather than labeling everything, is emerging as a powerful alternative. Deep understanding involves embedding objects within a network of related concepts, not just categorizing them. Ultimately, similarity can help us grasp profound ideas and solve complex problems.

    • The Role of Self-Supervised Learning in Overcoming Challenges of Computer Vision
      Self-supervised learning helps to improve computer vision, but it is not a complete solution. Building a common sense understanding of concepts is crucial for effective communication, which cannot be achieved through supervised learning alone.

      This section discusses the challenges of computer vision and the role of self-supervised learning in addressing them. While self-supervised learning can play a crucial part in improving computer vision, it is not a solution to everything. The ultimate goal of computer vision is to communicate with humans, which requires a shared, human understanding of the concepts being used. Hence, building a base of common-sense concepts, or semantics, is crucial to achieving this goal, and supervised learning alone cannot provide the understanding needed for that communication. Computer vision remains a challenging domain, and self-supervised learning is just one part of the solution.

    • The Success of Self-Supervised Learning in Natural Language Processing
      The distributional hypothesis has been key in achieving success through techniques like masking. However, there is room for further exploration and other methods that can be leveraged for language modeling.

      Self-supervised learning in natural language processing has been successful largely because of the distributional hypothesis, which states that words appearing in similar contexts tend to have similar meanings. Masking, a technique in which words are removed from a sentence and the neural network is tasked with predicting what was originally there, has proven to be a powerful tool in language modeling. Masking is only one form of self-supervision, however, and there are likely other tricks and methods for language modeling still waiting to be explored.
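
      To make the masking idea concrete, here is a minimal sketch of masked-word prediction in PyTorch. The tiny vocabulary, model sizes, and example sentence are illustrative stand-ins, not anything from the episode:

      ```python
      import torch
      import torch.nn as nn

      # Toy masked-word prediction: hide one word and train the network to recover it.
      vocab = {"[MASK]": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}
      embed = nn.Embedding(len(vocab), 32)
      layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
      encoder = nn.TransformerEncoder(layer, num_layers=2)
      to_vocab = nn.Linear(32, len(vocab))

      sentence = torch.tensor([[1, 2, 3, 4, 1, 5]])   # "the cat sat on the mat"
      masked = sentence.clone()
      masked[0, 2] = vocab["[MASK]"]                  # hide "sat"

      logits = to_vocab(encoder(embed(masked)))       # a word prediction at every position
      loss = nn.functional.cross_entropy(logits[0, 2:3], sentence[0, 2:3])  # score only the masked slot
      loss.backward()                                 # the sentence itself supplies the supervision
      ```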

    • Advancements in Artificial Intelligence and Computer Vision
      As technology advances, AI and computer vision improve predictions in medicine and image analysis. While language and vision each present challenges, computer vision is arguably harder because it demands a kind of common-sense understanding without the structure a language system provides.

      Advancements in technology, specifically in artificial intelligence, have allowed for better prediction of outcomes. The use of neural networks, together with increasing amounts of training data, has led to improved predictions in areas such as medicine. In computer vision, masking and transformers, specifically self-attention models, allow a wider context to be considered in order to better understand the meaning of an image. While both language and vision present their own challenges, computer vision is considered more difficult because it requires understanding a sense that is common to many animals and does not come with a structured language system.
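
      As a rough illustration of the "wider context" that self-attention provides, here is a minimal single-head scaled dot-product attention sketch; the shapes and random inputs are purely illustrative:

      ```python
      import torch
      import torch.nn.functional as F

      def self_attention(x, w_q, w_k, w_v):
          # x: (seq_len, dim) features for image patches or words.
          # Each output position is a weighted mix of every position,
          # so the representation of one patch can depend on the whole image.
          q, k, v = x @ w_q, x @ w_k, x @ w_v
          scores = q @ k.T / (k.shape[-1] ** 0.5)   # pairwise compatibility between positions
          weights = F.softmax(scores, dim=-1)       # each position attends over the full sequence
          return weights @ v

      tokens = torch.randn(16, 64)                  # e.g. 16 image patches with 64-dim features
      w_q, w_k, w_v = (torch.randn(64, 64) for _ in range(3))
      context_aware = self_attention(tokens, w_q, w_k, w_v)   # shape (16, 64)
      ```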

    • Understanding the Differences Between Self-Supervised Learning for Vision and Language
      Language-based self-supervised learning is easier due to finite vocabulary and more context, while vision-based self-supervised learning is more difficult due to predicting large numbers of pixel values. Effective methods need to understand these differences.

      The success of self-supervised learning differs for vision and language because the two domains are fundamentally different. Language is more structured: it relies on a finite vocabulary, so a model can produce an explicit distribution over possible predictions. Vision is more challenging because predicting raw pixel values means modeling a distribution over an enormous output space, which quickly becomes intractable. Additionally, language carries more explicit context and structure, making it easier to pin down the meaning of words in different contexts. Overall, while both language and vision present challenges for AI, their differences must be understood to develop effective self-supervised learning methods for each.

    • Understanding Contrastive Learning and Energy-Based Models in Machine Learning
      Contrastive learning helps computers recognize patterns by contrasting related samples with unrelated ones, while energy-based models offer a unifying language for these methods by assigning a low energy (cost) to compatible inputs and a high energy to incompatible ones. These methods improve computer understanding, but there is more work to do for human-like comprehension.

      Contrastive learning is a way of teaching a computer to recognize patterns by contrasting them with unrelated patterns. For example, if we want the computer to learn what a pet is, we can show it pictures of a cat and a dog as the positive samples, and a banana as the negative sample. Energy-based models are used to describe how these methods work: they talk about the energy, or cost, assigned to a pairing of inputs, where compatible pairs should get low energy and incompatible pairs high energy. This framing makes the many different models used in machine learning seem less complex and easier to compare. Overall, these methods teach computers to understand language and images with impressive precision, but there is still work to do to achieve human-like understanding.
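
      A minimal sketch of that framing, assuming cosine distance as the energy and a simple margin loss; the "cat", "dog", and "banana" embeddings stand in for the outputs of some encoder:

      ```python
      import torch
      import torch.nn.functional as F

      def energy(a, b):
          # Low energy when two embeddings are compatible, high when they are not.
          return 1.0 - F.cosine_similarity(a, b, dim=-1)

      def contrastive_loss(anchor, positive, negative, margin=0.5):
          # Pull the positive pair's energy down; push the negative pair's energy
          # up until it clears the margin.
          return energy(anchor, positive) + F.relu(margin - energy(anchor, negative))

      # Illustrative embeddings standing in for encoder outputs of "cat", "dog", "banana".
      cat, dog, banana = (torch.randn(128, requires_grad=True) for _ in range(3))
      loss = contrastive_loss(cat, dog, banana)
      loss.backward()
      ```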

    • Understanding Data Augmentation and Self-Supervised Learning
      Data augmentation is a technique used to manipulate and enhance existing data sets. Self-supervised learning aims to learn features without labels. Contrastive learning, a popular method, compares two perturbations of an image to ensure similar feature extraction.

      Data augmentation is a process where we enhance or generate more data by applying transformations or manipulations to an existing dataset. It plays a crucial role in computer vision and self-supervised learning, where it increases the effective size of the dataset and generates examples that are similar to one another. The goal of self-supervised learning is to learn the features of the data without the use of labels. Contrastive learning is a popular method in self-supervised learning, where two perturbations of an image are compared to ensure the features extracted from both are similar. This mimics the way humans learn by observing multiple angles and perspectives of an object to understand it better.
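
      A minimal sketch of that two-view recipe, assuming torchvision is available; the specific transforms and the ResNet-18 encoder are illustrative choices, not the exact setup discussed in the episode:

      ```python
      import torch.nn.functional as F
      from torchvision import transforms
      from torchvision.models import resnet18

      # Two random perturbations (views) of the same image go through a shared encoder;
      # the training signal is simply that their features should agree.
      augment = transforms.Compose([
          transforms.RandomResizedCrop(224),
          transforms.RandomHorizontalFlip(),
          transforms.ColorJitter(0.4, 0.4, 0.4),
          transforms.ToTensor(),
      ])

      encoder = resnet18(num_classes=128)              # final layer doubles as a projection head

      def view_agreement_loss(pil_image):
          v1 = augment(pil_image).unsqueeze(0)         # first random view
          v2 = augment(pil_image).unsqueeze(0)         # second random view of the same image
          z1, z2 = encoder(v1), encoder(v2)
          # On its own this objective can collapse; negatives or the clustering
          # constraints discussed below are what keep the features informative.
          return -F.cosine_similarity(z1, z2).mean()
      ```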

    • The Role of Data Augmentation in Self-Supervised Visual Recognition
      Data augmentation helps improve machine learning, but current techniques have limitations and involve human bias. More breakthroughs are needed to create a more efficient and realistic approach for predicting features from images.

      Data augmentation plays a crucial role in self-supervised learning for visual recognition. However, current techniques are limited and do not involve much learning: the augmentations encode human-specific biases, baking a lot of human knowledge into the process. More breakthroughs are possible, such as learning mechanisms that predict features which are robust to changes in the image rather than hand-specifying those invariances. The challenge is to balance imagination with physical reality so that the learning process stays consistent. Current data augmentation is not parameterized or learned, and there is still work to be done to enable more generative and realistic possibilities.

    • What is Data Augmentation and How it Helps Improve Machine Learning Models
      Data augmentation is a technique of artificially generating new data samples from existing data, which can help improve the accuracy and robustness of machine learning models. It should be realistic and subtle, taking into account the physical realities of each domain. Tagging can also aid in discovering semantically similar images.

      Data augmentation is a technique of artificially generating new data samples from existing data in order to increase the size, diversity, and quality of the training dataset. It can help improve the accuracy and robustness of machine learning models. However, data augmentation should not be completely independent of the image or task at hand, but rather take into account the physical realities of each domain. Realistic and subtle data augmentation can give significant gains in performance, and can be more useful than relying on a large dataset of natural images alone. Moreover, tagging can also aid in discovering semantically similar images, but relying solely on human tags may not always make it a self-supervised learning process.

    • The SwAV Algorithm for Preventing Collapse in Self-Supervised Learning
      The SwAV algorithm combines contrastive-style view comparison with online clustering assignment to prevent neural network collapse and improve feature learning in self-supervised, non-contrastive, energy-based learning methods.

      The task of discovering strong supervision signals in human-generated data is important for teaching machines without extra labeling effort. One way to do this is through non-contrastive, energy-based self-supervised learning methods such as clustering and self-distillation. Unlike contrastive learning, these methods do not require access to a large set of negatives and instead work toward maximizing similarity between features. The main challenge is preventing collapse, where the neural network learns the same feature representation for every input. The improvement proposed in the paper on self-supervised learning of visual features is the SwAV algorithm, which combines ideas from contrastive learning with online cluster assignment to prevent collapse and improve feature learning.

    • SwAV: A Clustering Technique for Self-Supervised Learning
      SwAV computes clusters online and uses an equipartition constraint to avoid collapse, with soft clustering that can represent a large number of clusters. SwAV is more effective than previous self-supervised learning methods.

      The key takeaway from this section is that SwAV is a clustering technique for self-supervised learning. It computes clusters online rather than offline, and its key methodological step is imposing an equipartition constraint so that the samples in a batch are spread evenly across the K clusters. This rules out collapse, the degenerate solution in which all samples end up in a single cluster. SwAV also uses soft clustering, which allows a large number of clusters to be represented. The technique was demonstrated on ImageNet and shown to be more effective than previous self-supervised learning methods.
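
      A minimal sketch of the balanced-assignment idea, in the spirit of the Sinkhorn-style normalization used for the equipartition constraint; the batch size, cluster count, and iteration count here are arbitrary illustrative values:

      ```python
      import torch

      def balanced_soft_assignments(scores, n_iters=3, eps=0.05):
          # scores: (batch, K) similarities between each sample and each of K cluster prototypes.
          q = torch.exp((scores - scores.max()) / eps)          # subtract max for numerical stability
          q = q / q.sum()
          for _ in range(n_iters):
              q = q / q.sum(dim=0, keepdim=True) / q.shape[1]   # give each cluster total mass 1/K
              q = q / q.sum(dim=1, keepdim=True) / q.shape[0]   # give each sample total mass 1/batch
          return q * q.shape[0]                                 # rescale so each row sums to 1

      # Soft assignments where no single cluster can absorb the whole batch.
      assignments = balanced_soft_assignments(torch.randn(256, 300))   # 256 samples, 300 clusters
      ```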

    • Training a Model on a Billion Random Images Without Filtering for Better Performance
      Researchers used unfiltered internet images to train a model, and the efficient RegNet architecture proved successful at learning objects. Memory efficiency is crucial in designing effective network architectures.

      In a study, researchers trained a large convolutional model in a self-supervised way on a billion random internet images without filtering them. This means that whatever people upload, whether cartoons, memes, or actual photographs, can be used to train a model without sorting it out first. The model learned many types of objects and even outperformed an ImageNet-trained model on certain tasks. The researchers used a RegNet model, which is efficient in terms of compute and memory, and emphasized that designing network architectures that are memory-efficient matters more than efficiency in pure FLOPs (floating point operations) alone.

    • Optimization of Self-Supervised Learning Networks
      Self-supervised learning networks can be optimized for computational efficiency and memory usage. Data and data augmentation techniques matter more for accuracy than the choice of architecture. VISSL, a Python-based SSL library, helps in training and evaluating models efficiently.

      A recent study in self-supervised learning optimized networks for both computational efficiency and memory usage, resulting in powerful neural network architectures with many parameters but low memory usage. The key takeaway is that, in the era of self-supervised learning, data and data augmentation techniques have more impact on accuracy than the type of architecture used. To train such large neural networks effectively, the synchronization steps between all the chips involved in distributed training should be minimized to reduce communication costs. VISSL, a Python-based library for self-supervised learning (SSL), provides a common framework for training and evaluating self-supervised models, allowing researchers to build new techniques and evaluate them consistently.

    • Enhancing Image Training with Multimodal Learning
      Combining audio and video in multimodal learning can improve recognition of human actions and sounds. Further research in this area could benefit various fields, such as speech recognition and object detection.

      The VISSL project is built on benchmarking and self-supervised learning methods. However, smaller-scale setups for image training have proven challenging because observations drawn from those experiments do not always translate well to larger datasets. One promising area of research is multimodal learning, which involves learning common feature spaces for multiple modalities, such as audio and video. In a recent study, a powerful feature representation for video was learned using contrastive learning, with potential applications in recognizing human actions and different types of sounds. Further research in multimodal learning could lead to advancements in various fields, such as speech recognition and object detection.

    • Self-Supervised Video Network Learning to Recognize Human Actions and Objects
      A self-supervised video network can learn to recognize and distinguish different human actions and objects without any annotations by observing correlations between sounds and objects or actions in multiple videos, with vision being the main source of learning. Active learning can still have value within this context.

      Researchers have found that a self-supervised video network can learn to recognize and distinguish different human actions and objects. The network can also locate where a sound is coming from in a video, such as detecting the location of a guitar or a celebrity's voice. These associations are made by observing correlations between sounds and objects or actions in multiple videos, without any annotations. While multiple modalities can provide additional insight, most of the learning is based on vision. Active learning can still have value within a self-supervised context by selecting parts of the data for optimal learning benefit.

    • The Power of Active Learning for Efficient Data Use and Better Outcomes
      Active learning is a technique where models ask questions and learn from answers, making it useful for data labeling, self-supervised learning, and neural network deployment for data collection and annotation.

      Active learning is a powerful technique that can lead to more efficient use of data and better learning outcomes. It involves an interactive exploration of data where a model asks questions and learns from the answers. Active learning is particularly useful for models that have knowledge gaps or weak spots in certain areas. It can also be used for data labeling, where the model can learn from a selected set of images that are neither too similar nor too dissimilar to a labeled image. In addition, active learning can be used for self-supervised learning and discovery mechanisms through a function that determines the most useful image given current knowledge. The deployment of neural networks in the wild for data collection and annotation is another example of active learning.
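
      One simple way to realize the "model asks questions" idea is uncertainty sampling. The sketch below, with a hypothetical classifier and batch of unlabeled images, ranks the pool by predictive entropy and picks the examples the model is least sure about:

      ```python
      import torch

      def most_informative(model, unlabeled_images, k=32):
          # Rank an unlabeled pool by predictive entropy and return the k examples
          # the current model is least certain about, to be sent for annotation.
          model.eval()
          with torch.no_grad():
              probs = torch.softmax(model(unlabeled_images), dim=-1)
              entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
          return entropy.topk(k).indices
      ```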

    • Self-Supervised Learning for Improving Autonomous Driving
      Self-supervised learning can help improve autonomous driving technology by using prediction uncertainty to identify edge cases where the model fails and retraining the system based on those cases. With advancements in camera technology and computer vision, fully autonomous driving is expected within the next decade.

      Self-supervised learning, where predictive models are learned by looking at data and making predictions of what will happen next, is a promising approach to autonomous driving. This is especially true for edge cases, which are the main reason why autonomous driving has not become more mainstream. Utilizing prediction uncertainty to identify cases where the model fails and then retraining the system based on those cases is a smart way to improve autonomous driving technology. While the development of fully autonomous driving is challenging, recent advancements in camera technology and computer vision-based approaches, such as that used by Tesla, suggest that it may be possible within the next five to ten years.

    • The Promise and Challenges of Vision-Based Autonomous Driving
      While advancements in sensor technology and AI can improve driving, human-robot interaction presents a significant challenge that may require AGI to solve. A deeper understanding of human factors is needed to ensure the safety of autonomous driving.

      The potential for vision-based autonomous driving is promising given advancements in sensor technology. However, the interplay between human behavior and autonomous driving presents a human-robot interaction problem that may require AGI to solve. While AI and self-supervised learning can improve driving, the safety of human life requires a deeper understanding of human factors, psychology, and emotion. It may also take a significant amount of time for autonomous driving to cover most of the United States. Some cities or contexts may work well with autonomous driving, but the long tail of difficult cases leaves room for pessimism about it being fully realized.

    • The Importance of Societal Context in the Integration of AI Systems
      AI systems must consider societal context and navigate challenges like data efficiency, generalization, and machine learning algorithm guarantees. Successful integration with society hinges on regulation and collaboration with politicians and journalists.

      AI systems need to consider the societal context within which they operate, as they become integrated into society and face the challenges of navigating human nature. One major challenge for deep learning is data efficiency, as it requires multiple instances of a concept to generalize effectively. While humans are better at generalizing from a single example, they rely on their own domain knowledge and biases. Additionally, there are no clear guarantees for machine learning algorithms, and their correctness is often nebulous. As AI becomes more successful, the importance of regulation and integration with society, politicians, and journalists increases.

    • Challenges in Implementing Long-Term Memory Mechanisms in Neural Networks
      While neural networks excel at pattern recognition, they struggle with reasoning and handling complex problems. Continual learning paradigms are needed to improve generalization. Engaging with AI using human biases can hinder its natural learning.

      Neural networks are good at recognizing patterns but struggle with reasoning and composing information to solve complex problems. Current machine learning techniques lack the ability to characterize how well a model will generalize to unseen data, and there is a need for continual learning paradigms. While humans may not be aware of their background knowledge, they are exceptional at retaining information and building on it to reason and compose concepts. It remains an open problem whether long-term memory mechanisms and the storage of interrelated concepts in a single neural network can lead to more explainable AI. Ultimately, trying to understand AI with human biases can hinder its natural learning from data.

    • The Importance of Emotion, Self-Awareness, and Consciousness in Building Superhuman Intelligence Systems
      Emotion, self-awareness, and consciousness are key elements for creating a superhuman intelligence system. Including them can help create surprise, contextualize the system's role, and promote cautious behavior in relations with other living entities.

      Emotion, self-awareness, and consciousness are all important elements for building a superhuman intelligence system. Emotion, although not typically modeled in standard machine learning, is important for creating surprise, the mismatch between what is predicted and what is observed. Self-awareness is critical for contextualizing the system's role and limitations in relation to other entities. Consciousness, particularly the ability to display cautiousness, is essential for human connections with other living entities. It may also be necessary for the system to have some kind of embodiment or interaction with the physical world to fully understand it.

    • The Importance of Consciousness in AGI Systems Through Self-Supervised Learning
      AGI systems need to interact with humans naturally without external incentives. Self-supervised learning has been shown to automatically group objects and understand basic concepts, making it a promising approach for developing advanced AGI systems.

      The success of creating AGI systems lies in their ability to richly interact with humans in a natural and interesting way, without the need for external incentives or payments for interaction. This means that AGI systems must display consciousness, or the capacity to suffer and feel things in the world and communicate them to others. Self-supervised learning techniques have proven to be powerful in allowing machines to automatically group objects together and even understand fundamental concepts like object permanence. These emergent abilities suggest that even more complex ideas like symmetry and rotation could also emerge from self-supervised learning on billions of images, making it a promising approach for AGI development.

    • The Limits of Simulations for Training Machine Learning Systems
      Simulations have limitations in capturing the constantly changing real world and are expensive to build. While they have certain applications, real world training is essential for computer vision and machine learning.

      The speakers discuss the use of simulations to train machine learning systems, such as for autonomous driving, but express doubts about their effectiveness and feasibility. They note that simulations are expensive to build and may not capture the constantly changing real world with its many edge cases and human behaviors. Accurately simulating visual environments is itself a difficult task. While simulations may have certain applications, they may not transfer to many real-world concepts. Ultimately, the speakers believe that simulation is not a necessary prerequisite for computer vision or machine learning, and that real-world training is essential.

    • Choosing a Feasible Research Problem and Focusing on One Idea for Successful Paper Writing
      Pick an interesting and feasible research problem to sustain motivation. Focus on one idea and write early to clarify and strengthen it. Clear and simple papers with a central idea are successful.

      When it comes to research, it's important to pick a problem that is both interesting to you and feasible to make progress on within a reasonable timeframe. Passion for the problem is crucial to sustain interest and motivation throughout the research process. When it comes to writing papers, it's important to focus on one simple idea rather than cramming multiple ideas into a short paper. Writing early on in the research process can help to clarify and strengthen ideas while also revealing any gaps in the research. Many successful papers throughout history are short and simple, with a clear focus on one central idea.

    • Choosing the Best Tools for Machine Learning
      Python is the easiest and most widely used programming language for machine learning, and TensorFlow and PyTorch are two good frameworks for different types of projects. Dive deep into troubleshooting and embrace competition in the field.

      When starting a machine learning project, it is common to begin with an area of interest and then conduct research. Python is often the best programming language to learn for machine learning because it is easy and widely used, though there are other options like Swift, JavaScript, and R. When choosing a framework, PyTorch and TensorFlow are both good options, with PyTorch being easier to debug and TensorFlow being more popular for application-oriented, production machine learning. The competition between frameworks is healthy and benefits the field overall. For those new to the field, don't be afraid to get hands-on and dive deep into troubleshooting when things don't work.

    • Embracing Struggle for Personal and Professional Growth
      Embrace challenges, persevere through failures, and take the time to figure things out on your own. Failure is a part of the process; use it as a learning experience and embrace your hunger for success.

      The key takeaway from this section is to embrace struggle and persevere through it, whether it's spending hours debugging or pushing through failures. Googling for quick answers is helpful, but taking the time to figure things out yourself can lead to more learning and growth. Being driven and hungry for what you want is important, and committing to it despite a fear of missing out on other opportunities is necessary. Failure is a part of the process, and having a thick skin and using it as a learning experience can lead to success. Overall, embracing challenges and persevering through them can lead to personal and professional growth.

    • Exploring Life's Meaning and AI's Potential for Answers
      Life presents varying perspectives, while AI relies on objective functions; learning from and avoiding mistakes is vital. Questing for eternal answers with technology is magical and worthwhile.

      In this podcast conversation, the speakers discuss the meaning of life and the potential for AI to help us find answers. While the human ability to have different objective functions and perspectives is seen as a positive feature of our existence, AI operates under well-defined objective functions. The speakers also highlight the importance of learning from mistakes, even the mistakes of others. Ultimately, the conversation leaves us with the idea that technology can sometimes seem like magic, and while we may not have all the answers, the quest to find them is an endless pursuit worth undertaking.
