
    #258 – Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning

    January 22, 2022

    Podcast Summary

    • The Power of Self-Supervised Learning in Achieving True Artificial Intelligence
      Self-supervised learning is a type of machine learning that allows machines to learn by observing the world, without the need for human annotation or trial and error. This form of learning provides a much richer training signal and is essential to achieving true artificial intelligence.

      Yann LeCun, Chief AI Scientist at Meta and a professor at NYU, discusses self-supervised learning: observing the world and using background knowledge to learn, without human annotation or trial and error. This form of learning is missing from current AI paradigms such as supervised and reinforcement learning, which require large amounts of labeled data or simulated practice to achieve results. According to LeCun, self-supervised learning provides far more signal about how the world works, making it a crucial capability to replicate in machines if true artificial intelligence is to be achieved.

    • Self-Supervised Learning for Intelligent Machines
      Giving machines the ability to fill in missing information through self-supervised learning could revolutionize computer vision. While successful for natural language processing, applying this approach to visual data could lead to major progress.

      Yann LeCun suggests that self-supervised learning, where machines generate their own training signal, could be the best way to create intelligent machines. In this approach, machines fill in the gaps of missing information: predicting the future, inferring the past, or filling in missing information in between. This approach has been successful for natural language processing but not yet for images and videos. Self-supervised learning on images may not be fundamentally harder than on language, so if the approach can be made to work on visual data, it could lead to major progress in computer vision.
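
      As a concrete sketch of the fill-in-the-blank idea, the toy PyTorch snippet below masks part of each input and trains a small network to reconstruct the hidden values; the architecture, masking ratio, and random data are illustrative placeholders, not anything described in the episode.

```python
import torch
import torch.nn as nn

# A minimal "fill in the blanks" model: mask part of the input and train
# the network to reconstruct the missing values. Sizes/data are illustrative.
class Denoiser(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, dim)
        )

    def forward(self, x):
        return self.net(x)

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(32, 64)                      # stand-in for real observations
    mask = (torch.rand_like(x) > 0.25).float()   # hide ~25% of each input
    pred = model(x * mask)                       # model sees only the visible part
    loss = ((pred - x) ** 2 * (1 - mask)).mean() # score only the masked entries
    opt.zero_grad(); loss.backward(); opt.step()
```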

    • Overcoming Uncertainty in Predicting Outcomes in AI
      Creating accurate predictive models in AI requires representing a continuum of possible outcomes without losing important information. Current approaches to self-supervised learning in text are simplistic and do not account for dependencies among the words being predicted, and researchers need new methods to address this challenge.

      Yann LeCun, an AI researcher, discusses the difficulties of predicting outcomes in both language and vision. Uncertainty and a continuum of possible outcomes make it challenging to build models that accurately predict the future. Current approaches to self-supervised learning in text are simplistic: they predict each missing word independently given the context, ignoring the fact that the missing words depend on one another. The challenge is to represent an effectively infinite set of possible outcomes in a compressed way without losing crucial information; solving it is essential to improving predictive models.
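
      A toy snippet can make the dimensionality problem concrete: a softmax yields one distribution per missing word, but the joint space over k missing words grows as |V|^k. The vocabulary size and slot count below are illustrative only.

```python
import torch
import torch.nn.functional as F

# Why independent per-word predictions are a simplification: a softmax gives
# one distribution per masked slot, but the joint distribution over k slots
# has |V|**k outcomes. All numbers here are illustrative.
vocab_size, k = 50_000, 3
logits = torch.randn(k, vocab_size)              # one row of scores per masked slot
per_slot = F.softmax(logits, dim=-1)             # k independent distributions
print(per_slot.shape)                            # torch.Size([3, 50000])
print(f"joint outcomes: {vocab_size ** k:.2e}")  # 1.25e+14 -- intractable to enumerate
```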

    • Understanding the Connection between Intelligence and Predictive Coding
      Intelligence is not just about statistics; it involves understanding causality. Predictive coding is crucial for learning, and the ability to learn world models is key to creating intelligent machines.

      Intelligence can be described as statistics, but this does not mean the resulting models lack causality; learning causal models is important for a deeper understanding of the world. Predictive coding is a core principle underlying intelligence: the ability to predict is crucial for learning. However, while humans perform high-level cognitive processes, neural networks may just fill gaps by constantly updating models to match raw sensory information, which could be seen as a basic low-level mechanism. At this stage, reproducing the learning processes of a cat's brain would be a significant achievement. Ultimately, the ability to learn world models is the key to building intelligent machines.

    • The Challenges of Machine Learning and the Complexity of the Real World
      Machine learning faces challenges in representing the world, reasoning, and learning action plans. The most difficult challenge is learning representations of action plans. It requires machines to deal with the complexities of the real world, uncertainty, and game-theoretic situations.

      Machine learning has three main challenges: getting machines to learn representations of the world, getting machines to reason in ways compatible with gradient-based learning, and getting machines to learn representations of action plans. The last challenge is the most difficult, as we currently have no idea how to solve it. To achieve these goals, machines need background knowledge, the ability to reason in a differentiable way, and integration with predictive models of the world. This involves dealing with uncertainty and complexity, including game-theoretic situations with multiple agents. Classical control models are generally not learned, so the challenge for AI is to develop learned models that handle the real world in its full complexity.
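
      To make the differentiable-planning idea concrete, here is a minimal sketch that optimizes an action sequence by backpropagating through a toy world model toward a goal state; the dynamics, horizon, and cost function are hypothetical stand-ins, not anything from the episode.

```python
import torch

# Sketch of planning with a differentiable world model: roll a candidate
# action sequence through the model, score the predicted end state, and
# improve the actions by gradient descent. Model and cost are toy stand-ins.
def world_model(state, action):
    return state + 0.1 * action                  # placeholder dynamics

goal = torch.tensor([1.0, 0.0])
state0 = torch.zeros(2)
actions = torch.zeros(5, 2, requires_grad=True)  # a plan of 5 actions
opt = torch.optim.SGD([actions], lr=0.5)

for step in range(200):
    s = state0
    for a in actions:                            # differentiable rollout
        s = world_model(s, a)
    loss = ((s - goal) ** 2).sum()               # objective: reach the goal state
    opt.zero_grad(); loss.backward(); opt.step()

print(actions.detach())                          # the optimized action plan
```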

    • Humans Vs. Machines: The Complexity of Modeling and Learning
      While machines excel at certain tasks like playing games, humans are better at learning and estimating outcomes in unpredictable situations. By making our models, objectives, and critics differentiable, we can create more efficient intelligent agents.

      In simpler terms, humans are more complicated to model than machines because we are unpredictable and deal with continuous uncertainty. While computers are better at games like chess and Go, humans are better at learning and estimating models of the world, giving us a way of estimating outcomes and learning from them. This makes us more efficient; and if the world model, objective function, and critic are all differentiable, gradient-based learning can be used to create intelligent agents. Logic-based reasoning may not be compatible with efficient learning, and it is unlikely that the brain optimizes via a black-box, gradient-free method.

    • The Essence of Intelligence: Constructing Models and Planning Actions
      Logical reasoning isn't the most important aspect of intelligence. Rather, the ability to construct models of the world learned through self-supervised learning is key, driving behavior via various objective functions.

      According to Yann LeCun, there are different types of intelligence, and logical reasoning is not the most important. Instead, the ability to construct models of the world and use them to plan actions is the essence of intelligence. This ability is learned through self-supervised learning, with almost all knowledge acquired through this process. For example, a cat's ability to navigate its environment and knock things off shelves is driven by innate objective functions and learned through self-supervised learning, not classical supervised learning. Objective functions such as hunger drive behavior, and homeostasis may be just one objective among many.

    • The Role of Language in Intelligence
      Intelligence is not limited to language and social interaction. Basic drives are hardwired into animals, and unsupervised learning can develop intelligence through image recognition. The amount of training data for image recognition is essentially unlimited.

      Human intelligence is often associated with language and social interaction, but these are not necessary for intelligence. Evolution has hardwired some basic drives, such as the desire to walk, into animals, even solitary ones. Intelligence can also be developed through unsupervised learning, where a system learns to represent images and recognize handwritten digits from very few examples. This type of learning is also observed in children, who can learn what an elephant is from a few pictures. Furthermore, image recognition systems can be trained on billions of images from Instagram, making the amount of training data essentially unlimited.

    • Understanding Data Augmentation and Self-Supervised Learning Techniques
      Data augmentation artificially increases training data by distorting images in various ways to enhance classification performance. Self-supervised learning benefits from these techniques, especially contrastive learning, which trains a neural network to produce different representations for different inputs.

      Data augmentation is the process of artificially increasing the size of training data by distorting images in ways that do not change their nature, such as rotating, shifting, resizing, or adding noise. This technique improves classification performance by generating more diverse and representative examples. Self-supervised learning, which trains a neural network to learn useful representations from unlabeled data, has been shown to benefit from data augmentation. Contrastive learning, a popular self-supervised technique, uses pairs of distorted versions of the same image to train a network to produce representations that are invariant to those transformations, while negative examples ensure the network produces different representations for different inputs.
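
      A minimal sketch of this contrastive recipe in PyTorch: two augmented views of each image should map to nearby embeddings, with the rest of the batch serving as negatives. The encoder, "augmentation", and temperature below are placeholder choices, not the exact methods discussed in the episode.

```python
import torch
import torch.nn.functional as F

# Sketch of contrastive learning (InfoNCE-style): two augmented views of the
# same image should get similar embeddings; other images in the batch act as
# negatives. Encoder and "augmentations" are toy placeholders.
def augment(x):
    return x + 0.1 * torch.randn_like(x)         # stand-in for crop/flip/color jitter

encoder = torch.nn.Linear(128, 32)               # stand-in for a deep network
images = torch.randn(16, 128)                    # a batch of 16 "images"

z1 = F.normalize(encoder(augment(images)), dim=-1)   # view 1 embeddings
z2 = F.normalize(encoder(augment(images)), dim=-1)   # view 2 embeddings

logits = z1 @ z2.T / 0.1                         # pairwise similarities / temperature
labels = torch.arange(16)                        # positives sit on the diagonal
loss = F.cross_entropy(logits, labels)           # pull positives together, push negatives apart
loss.backward()
```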

    • Yann LeCun discusses limitations of contrastive learning and alternative methods for AI training
      Contrastive learning is effective for low-dimensional representations but requires too many negative pairs in high-dimensional representations. LeCun is exploring non-contrastive methods like Barlow Twins and VICReg, but distortion-based training still has limitations for object detection and localization.

      Yann LeCun, an AI researcher, discusses the limitations of contrastive learning, a method of training AI to identify similar and dissimilar pairs of images. Though effective for low-dimensional representations, contrastive learning requires too many negative pairs in high-dimensional representations. LeCun's current focus is on maximizing the mutual information between the outputs of two systems through non-contrastive methods, such as Barlow Twins and VICReg. However, training with image distortions as data augmentation improves object recognition and image classification but not object detection or localization: the AI may still find the object in the image, but it struggles to find its exact boundaries.
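
      For contrast, here is a rough sketch of a VICReg-style non-contrastive loss with its three terms (invariance, variance, covariance), which needs no negative pairs; the coefficients and shapes are illustrative, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

# Sketch of a VICReg-style non-contrastive loss: no negative pairs.
# Three terms: (1) invariance between two views, (2) keep per-dimension
# variance above a margin to prevent collapse, (3) decorrelate dimensions.
def vicreg_loss(z1, z2, eps=1e-4):
    inv = F.mse_loss(z1, z2)                             # invariance term

    std1 = torch.sqrt(z1.var(dim=0) + eps)
    std2 = torch.sqrt(z2.var(dim=0) + eps)
    var = F.relu(1 - std1).mean() + F.relu(1 - std2).mean()  # variance term

    z1c = z1 - z1.mean(dim=0)
    n, d = z1c.shape
    cov = (z1c.T @ z1c) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    covl = (off_diag ** 2).sum() / d                     # covariance term

    return 25.0 * inv + 25.0 * var + 1.0 * covl          # illustrative weights

z1 = torch.randn(64, 128, requires_grad=True)            # embeddings of view 1
z2 = torch.randn(64, 128)                                # embeddings of view 2
vicreg_loss(z1, z2).backward()
```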

    • The Importance of Object Localization in Artificial Intelligence
      Object localization is necessary for understanding the contents of a scene and learning how the world works through high-throughput channels like vision. Grounding intelligence in physical observations is crucial for real artificial intelligence.

      Object localization, the ability to identify and locate objects within a scene, is important for survival and has evolutionary roots. However, Yann LeCun suggests that we have been too focused on measuring image segmentation and object boundaries, which may not be essential to understanding the contents of a scene. LeCun believes that the ability to learn how the world works from high-throughput channels like vision is a necessary step towards real artificial intelligence. He also disputes the idea that natural language alone can provide enough information about the world, and argues for grounding intelligence in observations of the physical world.

    • Yann LeCun on Data Augmentation and Masking in Machine Learning
      Data augmentation is important in training models, but masking can also be a valuable method for reconstructing images. Joint embedding architectures are also crucial, and selecting the right data is key to successful training.

      In a conversation with Lex Fridman, Yann LeCun discusses the importance of data augmentation in machine learning and image recognition. However, he also notes that augmentation is a temporary measure until better methods are discovered. One of these methods includes the use of masking, where parts of an image are blocked and a system is trained to reconstruct the missing areas. LeCun adds that these masked sections do not need to be limited to squares or rectangles, and more challenging methods can be developed in the future. He also notes the importance of using joint embedding architectures to align representations and make predictions, as well as selecting the right type of data for training.
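
      A small sketch of the masking idea for images: blank out a random block so a model can be trained to reconstruct it. The block shapes and sizes here are arbitrary placeholders; as LeCun notes, masks need not be rectangles at all.

```python
import torch

# Sketch of block masking for images: zero out a random rectangular region
# so a model can be trained to reconstruct it. Sizes are arbitrary.
def random_block_mask(img, max_frac=0.5):
    _, H, W = img.shape
    h = torch.randint(1, int(H * max_frac), (1,)).item()
    w = torch.randint(1, int(W * max_frac), (1,)).item()
    top = torch.randint(0, H - h, (1,)).item()
    left = torch.randint(0, W - w, (1,)).item()
    mask = torch.ones(1, H, W)
    mask[:, top:top + h, left:left + w] = 0.0
    return img * mask, mask

img = torch.randn(3, 224, 224)        # stand-in for a real image
masked, mask = random_block_mask(img)
# a reconstruction model would then be trained to predict img from masked
```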

    • Expert AI Scientist Believes Self-Supervised Learning is the Key to Achieving True Intelligence
      While practical solutions like multitask learning have immediate benefits, the most important problem for creating predictive models is self-supervised learning. In the short term, engineering problems require shortcuts, but in the long run, self-supervised learning is necessary for true intelligence.

      Yann LeCun, an AI expert, believes that while practical solutions like multitask learning and continual learning have short-term benefits, the fundamental problem of self-supervised learning is what the AI community should be focusing on, because it can yield the predictive world models that true intelligence requires. In the short term, practical solutions like those employed by Tesla's Autopilot team are necessary to address engineering problems, even if they require taking shortcuts. LeCun has faith that the AI community will eventually come around to prioritizing self-supervised learning.

    • Yann LeCun on the Evolution of AI Techniques
      Modern AI relies on deep learning to train systems end-to-end, allowing the system to learn its own features without explicit hand engineering. Active learning is useful, but may not be necessary for efficient learning.

      Yann LeCun explains that historically in AI, techniques involved handcrafting and engineering to extract features for different tasks such as image recognition, speech recognition and natural language understanding. However, with the rise of more powerful computers and statistical learning, modern AI now involves training entire systems end-to-end using deep learning. This means the system learns its own features without explicit hand engineering. While active learning, where a system interacts with the world to improve over time, is useful, it may not be necessary for efficient learning. It's important to understand what learning process is being made more efficient with active learning.
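
      The end-to-end principle can be shown in a few lines: every layer below, from raw pixels to class scores, is trained jointly by backpropagation, with no hand-engineered feature extractor. The architecture and random data are minimal placeholders, not a specific system from the episode.

```python
import torch
import torch.nn as nn

# Sketch of end-to-end learning: no hand-engineered features; the
# convolutional layers learn their own from raw pixels.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 7 * 7, 10),   # 28x28 input -> 10 classes
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)                 # stand-in for raw images
y = torch.randint(0, 10, (32,))                # stand-in labels
loss = loss_fn(model(x), y)                    # gradients flow through every layer
opt.zero_grad(); loss.backward(); opt.step()
```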

    • The Controversy and Limitations of Consciousness Exploration
      Our brains have limitations, and consciousness may be the module that helps us configure our understanding of the world. This challenges traditional beliefs about how humans learn and think.

      The concept of consciousness is a controversial topic that has been explored throughout history. Yann LeCun speculates that consciousness may be the module that configures our world model, as we only have one world model that we configure to the situation at hand. This suggests that consciousness is a consequence of the limitations of our brains, and we need an executive control to configure our world model effectively. Furthermore, LeCun suggests that some people are nativists who believe that the basic concepts about the world are hardwired into our minds. These ideas challenge widely accepted beliefs about consciousness and learning.

    • The Learning and Hardwiring of Human Perception
      Our brains can learn many aspects of perception, but certain intrinsic drives like the fear of death are hardwired and shape our behavior and goals. Coping mechanisms such as beliefs about the afterlife are common.

      Many of the basic aspects of our perception and understanding of the world around us are learned rather than hardwired into our brains from birth. Even tasks such as detecting edges or perceiving the world in three dimensions can be learned within minutes of opening our eyes. However, there are certain intrinsic drives, such as the fear of death, that are hardwired into our brains. These drives motivate our behavior and impact the goals we set for ourselves. While it is uncertain when humans begin to grasp the concept of death, many people hold beliefs about the afterlife to cope with the fear.

    • Understanding and Accepting Death as a Core Aspect of Human Nature
      While religion may offer comfort, accepting death is a personal journey that requires acknowledging our mortality. Our ability to plan for the future stems from awareness of our finiteness, but the mystery of human consciousness remains.

      Death is a core, unique aspect of human nature that we are able to understand and comprehend. While religion may provide comfort and a sense of understanding of immortality, it ultimately does not solve the problem of finiteness of life. Our ability to plan and predict the future, which is a result of our intelligence, is connected to our awareness of our mortality. Accepting death is a personal journey and can be a source of motivation to live fully, but ultimately, the human mind and consciousness remain a scientific mystery that can only be understood through building artifacts that mimic their structure.

    • Yann LeCun on the Importance of Emotions in AI and the Future of Rights for Robots
      AI systems need emotions to be truly autonomous, and as technology advances, our understanding of rights and relationships may blur the lines between humans and machines.

      Yann LeCun believes that building intelligent artifacts, such as AI systems, will help us develop a better understanding of human and biological intelligence. Emotions are an integral part of autonomous intelligence, and if an AI system has a critic that allows it to predict whether the outcome of a situation is good or bad, it will have emotions. Hence, the idea of an emotionless AI is ridiculous. Regarding the discussion of whether robots deserve the same rights as humans, LeCun thinks that as technology advances, our ideas of rights and relationships will change, and we may find ourselves exploring dangerous areas and experiences more frequently.

    • Ethical & Legal Questions Surrounding AI in Society
      Copying an AI system could be illegal; intellectual property claims and privacy concerns may arise; regulations and laws may be necessary for 'sentient' robots; and we may have to accept risks and consequences.

      In the future, as AI systems become more integrated into human society, there will be ethical and legal questions around how we treat them. One key takeaway is that copying an AI system will likely be illegal and might destroy the motivation of the system. As humans develop attachments to AI systems, they may have intellectual property claims and privacy concerns. It is also possible that we will need regulations and laws around how we interact with 'sentient' robots designed for human interaction. In the future, just like humans, we may have to accept the risk of losing our robot friends and the potential consequences of their actions.

    • Ethical Implications of Developing Emotional Robots
      Developing emotional robots raises questions about human rights and values. While it is possible to reproduce human intelligence in non-biological hardware, machines are not yet more intelligent than humans. Facebook AI Research has made significant contributions to advancing technology.

      The development of emotional robots raises questions about human rights and what we value in humans and animals. While the Chinese-room-style argument that intelligence can be reduced to a lookup table is regarded as ridiculous, reproducing human intelligence in hardware other than biological hardware is possible. However, it will take a long time for machines to become more intelligent than humans in all domains where humans are intelligent. In the meantime, organizations like Facebook AI Research (FAIR) have succeeded in producing top-level research, advancing science and technology, and providing open-source tools that have had an indirect impact on Facebook (now Meta).

    • The Importance of Facebook's AI Research Lab (FAIR) and the Challenges Ahead
      Facebook's AI research lab, FAIR, is focused on fundamental research, while other teams emphasize applied research and development. The introduction of the Metaverse presents a new challenge in making the experience comfortable for users.

      Facebook's AI research lab, FAIR, is essential to the company's operations and success, with much of its core technology built around AI. The lab has undergone changes, with Yann LeCun stepping down as director to focus on research as Chief AI Scientist, while FAIR's day-to-day leadership passed to Joelle Pineau. FAIR mostly focuses on fundamental research, while other organizations within the company emphasize applied research and development. The Metaverse represents Facebook's next step: a more compelling, 3D environment for people to connect with each other and with content, but the challenge remains in making the experience comfortable for users.

    • Facebook VP Dismisses Claims that Social Media Causes Polarization and Radicalization
      Academic studies show that Facebook and other social media platforms do not cause polarization or radicalization. Instead, society has been polarizing for 40 years, and blaming social media is not the solution.

      Yann LeCun, a VP at Facebook (now Meta), believes that the negative portrayal of the company in the media does not accurately represent what happens inside the corporation. He points to academic studies suggesting that claims that Facebook or other social media platforms polarize or radicalize people are not supported. Instead, polarization in society has been steadily increasing for 40 years, predating social media. It is essential to find the real cause of this polarization in order to fix the problem, rather than simply blaming social media companies.

    • Yann LeCun on Social Media, Facebook, and AI: Potential for Positive Change
      Despite the negative effects of emerging technologies such as social media and AI, they also have the potential to bring positive change and advancements in various fields. It is essential to find new ways to handle uncertainty in AI and embrace the potential for positive change.

      Yann LeCun notes that social media is often criticized for causing division, but the printing press had similar negative effects when it first emerged. LeCun also talks about his work at Facebook, and how both Mark Zuckerberg and Sheryl Sandberg are driven and passionate about technology. The conversation then shifts to LeCun's rejected paper on non-contrastive learning techniques, which he discusses in detail, along with the shortcomings of the review process and the importance of finding new ways to handle uncertainty in AI. The key takeaway is that while new technologies may have negative effects, they also have the potential to bring positive change and advancements in many fields.

    • Handling Multimodality in Video Prediction through Joint Embedding
      Yann LeCun proposes predicting an abstract representation of pixels using joint embedding, which can be applied to many types of data. VICReg refines the Barlow Twins method and provides a valuable building block for predictive models with hierarchical representations of the world.

      Yann LeCun discusses two ways to handle multimodality in video prediction. The first predicts pixels directly, using a latent variable to capture the uncertainty; the second predicts an abstract representation of the pixels that preserves as much information about the input as possible. This second method is based on joint embedding and is generally applicable, even to text or audio. The paper discussed is a follow-up to the Barlow Twins paper and introduces a method called VICReg, a refinement of the former. Some critics argue that VICReg is not different enough from Barlow Twins, but it remains a valuable tool for predictive modeling with hierarchical representations of the world.
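
      A rough sketch of the joint-embedding idea: rather than predicting raw pixels, predict the representation of the target from the representation of the context. The encoders, the stop-gradient on the target branch, and the random data below are illustrative assumptions, not the exact architecture discussed.

```python
import torch
import torch.nn as nn

# Sketch of a joint-embedding predictive setup: instead of predicting raw
# pixels, predict the *representation* of the target from the representation
# of the context. Encoders and data are toy placeholders.
context_enc = nn.Linear(128, 32)
target_enc = nn.Linear(128, 32)
predictor = nn.Linear(32, 32)

x_context = torch.randn(16, 128)          # e.g., past video frames
x_target = torch.randn(16, 128)           # e.g., future frames

s_ctx = context_enc(x_context)
with torch.no_grad():                     # common trick: no backprop into the target branch
    s_tgt = target_enc(x_target)

loss = ((predictor(s_ctx) - s_tgt) ** 2).mean()  # predict in representation space
loss.backward()
```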

    • The Role of Conferences in Computer Science and Potential Solutions for Flaws in the Peer Review Process
      Computer science conferences provide a platform for quick review and presentation of ideas. However, limited reviewers and biases create flaws in the peer review process. Open repositories and collective recommender systems could help by allowing more diverse reviews and continuous evaluation.

      Computer science conferences are important in the field as they allow for quick peer review and presentation of new ideas. However, the peer review process can still have flaws due to limited reviewers and biases. Additionally, the exponential growth of the field means that the majority of people in the field are junior, leading to a focus on finding flaws in papers rather than identifying new, impactful ideas. Open repositories such as arXiv and open review systems could provide a solution to this issue, allowing for more diverse reviews and a wider pool of reviewers. A collective recommender system could also be implemented to allow for continuous evaluation of papers by various reviewing entities.

    • Addressing Bias in Academic Publishing and the Future of Reputation-Based Reviewing
      Yann LeCun suggests that a reputation-based reviewing system for entities in academic publishing could improve the review process and reduce biases. He also highlights the mystery behind complex systems and his work in neural nets.

      Yann LeCun agrees with Lex Fridman that academic publishing needs a reputation system for reviewing entities. Currently, reviewers have little external incentive to do their job well. LeCun proposes a reputation system in which a reviewing entity's standing depends on how predictive its evaluations are of papers' future success. Current review processes are not innovative enough: papers may go through double-blind review to avoid biases, yet biases persist. LeCun also touches on the mystery of how simple interactions between elements can create complex systems, something his work on neural nets explores.
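
      The reputation idea could look something like the toy sketch below, where a reviewing entity's score improves when its past evaluations turn out to be predictive of a paper's later impact; the scoring rule and names are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical sketch of reputation-based reviewing: an entity's score rises
# when its past evaluations predict a paper's realized impact (e.g., citations).
reputation = defaultdict(lambda: 1.0)

def update_reputation(entity: str, predicted_score: float, realized_impact: float):
    error = abs(predicted_score - realized_impact)     # how wrong was the review?
    reputation[entity] *= 0.9                          # decay old reputation
    reputation[entity] += 0.1 * max(0.0, 1.0 - error)  # reward accurate predictions

update_reputation("review-entity-A", predicted_score=0.8, realized_impact=0.9)
print(reputation["review-entity-A"])
```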

    • The Mystery of Self-Organization and Complexity in Physics and Neuroscience
      Understanding the mathematics of emergence and self-organization is vital to studying life on other planets. However, measuring complexity is subjective and depends on the observer's algorithms and perception system, hindering a comprehensive theory of intelligence, self-organization, and evolution.

      The concept of self-organization puzzles physicists, as it is unclear how certain patterns emerge in physical and chaotic systems. Neural nets, which exhibit a form of self-organization, are likewise puzzling. Researchers also lack a good way of measuring complexity, which is crucial to understanding the mathematics of emergence, and especially important for studying life and recognizing it on other planets. However, complexity is in the eye of the beholder: it depends on the observer's algorithms and perception system. Until complexity is better understood, a comprehensive theory of intelligence, self-organization, and evolution may not be possible.

    • Challenges in Alien Interactions, Quantum Physics, and Electronic Music Instruments
      Understanding different perspectives, complexity, and limitations can help overcome challenges in alien interactions, quantum physics, and technological advancements in music instruments.

      In a podcast conversation between Lex Fridman and Yann LeCun, the pair explored the challenge of detecting or interacting with alien species due to the possibility of different perspectives and the notion of locality. This connects to questions in modern physics and quantum physics about complexity and recovering information lost in a black hole. LeCun discussed his personal quest to build an expressive, electronic wind instrument (EWI) that combines his love of music and electronics. He noted the challenges of creating an electronic instrument that is as expressive as an acoustic one due to the differences in sound reflection. Additionally, LeCun shared his passion for building model airplanes and various electronics in his New Jersey workshop.

    • Yann LeCun on the Future of AI and the Importance of Solving Big Problems in Science
      Yann LeCun advises young people to focus on fundamental problems in fields such as math, physics, and engineering, as they have a long shelf life and are used indirectly in many fields. He highlights the potential of AI and deep learning to solve big problems in science, which could lead to enormous progress in many fields.

      Yann LeCun, a leading expert in artificial intelligence, advises young people to get interested in big questions and fundamental problems in areas like math, physics, and engineering, as they have a long shelf life and are used indirectly in many fields. LeCun also highlights the potential of AI and deep learning in solving big problems in science, such as developing new compounds and materials for energy storage or stabilizing plasma for fusion reactors. He emphasizes the importance of converting complex problems in science and physics into learnable problems for machines to solve, which could lead to enormous progress in various fields.

    • How Machine Learning Helps to Model Complex Emergent Phenomena
      Machine learning can be used to understand and model complex systems, for example by training neural nets on enough data to predict the aerodynamic properties of solids. Such applications can lead to innovative solutions.

      Yann LeCun, Chief AI Scientist at Meta, discusses how machine learning can be used to discover and model complex emergent phenomena, such as superconductivity. By training neural nets with sufficient data, one can create a differentiable model of a system's properties and optimize designs against it to achieve a desired outcome. For example, LeCun mentions a startup that trained a conventional neural net to predict the aerodynamic properties of solids by generating solver data and teaching the model to make predictions. This highlights the potential of machine learning to model complex physical phenomena and enable innovative solutions.
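
      A sketch of that surrogate-model workflow: fit a network to data from an expensive solver, then optimize a design by gradient descent through the frozen network. The `simulator` stand-in, shapes, and learning rates below are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Sketch of a learned, differentiable surrogate for a physical system:
# 1) train a net to predict a property (e.g., drag) from design parameters,
# 2) optimize the design by gradient descent through the trained net.
def simulator(design):                        # placeholder for an expensive solver
    return (design ** 2).sum(dim=-1, keepdim=True)

surrogate = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(500):                       # 1) fit the surrogate to solver data
    designs = torch.randn(64, 8)
    loss = ((surrogate(designs) - simulator(designs)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

for p in surrogate.parameters():
    p.requires_grad_(False)                   # freeze the surrogate for design search

design = torch.randn(1, 8, requires_grad=True)    # 2) optimize a design
design_opt = torch.optim.SGD([design], lr=0.1)
for step in range(100):
    drag = surrogate(design)                  # differentiable property estimate
    design_opt.zero_grad(); drag.backward(); design_opt.step()
```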

    This last in our trilogy explores data as the foundation of AI systems. We learn how this enables mapping individual learners' progress and benchmarking in a teaching context, but also how that data exchange raises ethical issues. We explore how artificial intelligence builds functionalities on different data streams and consider our options to select and influence such 'training data'. Investigating this from a position understanding teaching as enabling a learner’s response, we discover how intimate conversations with Romeo & Juliet arise from what manifests as the AI’s ‘agency’. Yet we have to check in how far this also enables interactions that we wouldn't want to encourage or support. Prompting listeners to engage in their own observations and interactions with machine learning, we advocate curiosity outside academic’s traditional comfort zones and building your own critical attitude alongside symbiotic relationships with relevant partners, agreeing work packages which relate to differential skill sets. Setting out a space for serendipity, and claiming a license to fail emerge as key catalysts in the process of applying artificial intelligence in the arts and humanities.