
    Jeremy Howard: fast.ai Deep Learning Courses and Research

    August 27, 2019

    Podcast Summary

    • Emphasizing practical application and hands-on exploration in learning deep learning. Jeremy Howard, founder of fast.ai, recommends the platform for its accessibility, focus on cutting-edge research, and lack of excessive hype. His background in music and programming shaped his love for data and practical learning.

      Jeremy Howard, the founder of fast.ai, emphasizes the importance of practical application and hands-on exploration in learning deep learning. He recommends fast.ai as a top resource for its accessibility, focus on cutting-edge research, and lack of excessive hype. Howard's path has intertwined music and programming from the start: his first program, written on a Commodore 64, searched for better musical scales. He has used a range of languages, including Visual Basic for Applications (VBA) and Microsoft Access, which he misses for the ease with which it let him build user interfaces and manage data. A love of data and programming has been a constant thread through his entrepreneurial, educational, and research work.

    • Delphi and J: Two Beloved Languages. The speaker admires Delphi's speed and ease of use, and J's expressiveness and focus on computation.

      The speaker has a deep interest in using programming languages to build useful tools and applications, with a particular fondness for Delphi and J. Delphi, a compiled language created by Anders Hejlsberg, was beloved for its speed and ease of use, qualities the speaker believes Swift could emulate. J, an array-oriented language inspired by Ken Iverson's "Notation as a Tool of Thought," is described as the most expressive and beautifully designed language the speaker has encountered. APL, J's predecessor, was also discussed as a highly influential array-oriented language that spawned two main branches: J and K. K, an extremely powerful language used by hedge funds, remains virtually unknown because of its high cost. The speaker expresses awe at the power and focus on computation these languages offer, which is often overlooked in more commonly used languages.
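
      For a rough flavor of the array-oriented style these languages pioneered, here is a sketch using NumPy as a stand-in (J itself is far terser; this example is illustrative, not from the episode):

```python
import numpy as np

# Whole-array thinking in the APL/J tradition: a single expression
# computes the per-column variance of a matrix, with no explicit loops.
x = np.random.default_rng(0).normal(size=(1000, 8))
col_variance = ((x - x.mean(axis=0)) ** 2).mean(axis=0)
print(col_variance.shape)  # (8,)
```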

    • Choosing the Right Programming Language for Data Science and Machine Learning Research. The right language depends on productivity, flexibility, and a strong community; Python is popular but has limitations, and Swift's hackability offers potential for innovation.

      Productivity and flexibility are key factors in choosing a programming language for data science and machine learning research. The speaker shares his experiences with Perl and Python and stresses the importance of a strong leader and community behind a language's success. He is hopeful about Swift because it aims to be infinitely hackable, allowing easy experimentation and innovation. He believes current limitations in Python, particularly around recurrent neural networks and natural language processing, hinder progress, and that a more hackable language is needed to close these gaps and let researchers and practitioners innovate more effectively. Ultimately, the choice of programming language plays a significant role in advancing deep learning and related fields.
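
      To make the RNN pain point concrete, here is a hedged sketch (not from the episode) of a custom recurrent cell in PyTorch; the Python-level loop is what makes it easy to experiment with, and also what makes novel recurrent architectures slow without compiler support:

```python
import torch

def rnn_forward(xs, h, W_xh, W_hh, b):
    """A hand-rolled RNN: easy to hack on, but every timestep is
    dispatched from Python, which is exactly the overhead a more
    hackable, compiled language could eliminate."""
    outputs = []
    for x_t in xs:  # xs has shape (seq_len, batch, in_dim)
        h = torch.tanh(x_t @ W_xh + h @ W_hh + b)
        outputs.append(h)
    return torch.stack(outputs), h

seq_len, batch, in_dim, hid = 50, 32, 16, 64
xs = torch.randn(seq_len, batch, in_dim)
h0 = torch.zeros(batch, hid)
W_xh, W_hh, b = torch.randn(in_dim, hid), torch.randn(hid, hid), torch.zeros(hid)
out, h_final = rnn_forward(xs, h0, W_xh, W_hh, b)
print(out.shape)  # torch.Size([50, 32, 64])
```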

    • Simplifying GPU programming through domain-specific languages. Projects like tensor comprehensions, MLIR, and TVM aim to make GPU programming simpler by letting developers write domain-specific languages for tensor computations and having compilers optimize them, potentially reducing code size and improving performance.

      Writing acceptably fast GPU programs today is overly complicated and requires extensive optimization, putting it out of reach for many developers. Projects like tensor comprehensions, MLIR, and TVM aim to simplify this by letting developers express tensor computations in domain-specific languages and leaving optimization to compilers. This research builds on Halide, a project that demonstrated a 10x reduction in code size at equivalent performance, and MLIR is working to make such domain-specific languages accessible through Swift. Although CUDA and NVIDIA GPUs dominate the market today, MLIR's ability to target multiple backends could introduce competition and potentially lower costs. Separately, the origin story of fast.ai traces back to a previous startup, Enlitic, which aimed to address the global shortage of doctors by applying deep learning to medical diagnosis and treatment planning, particularly where healthcare resources are limited; the biggest benefit of AI in medicine today is the potential to provide diagnostic and treatment services in regions with limited access to healthcare professionals.
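
      As a loose analogy for the declarative style these projects aim at (the real systems define richer DSLs and do far more aggressive optimization), np.einsum lets you state an index expression and leave the execution strategy to the library:

```python
import numpy as np

# A batched matrix multiply declared as an index expression:
# "what" to compute is stated once; no loops, tiling, or scheduling.
a = np.random.rand(8, 32, 64)   # (batch, i, k)
b = np.random.rand(8, 64, 16)   # (batch, k, j)
c = np.einsum('bik,bkj->bij', a, b)
print(c.shape)  # (8, 32, 16)
```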

    • Deep learning in healthcare in developing countries. Deep learning systems can improve the diagnosis process in developing countries by triaging high-risk cases, but implementation faces challenges such as expertise shortages, regulatory frameworks, and data privacy concerns.

      The integration of deep learning systems in healthcare, particularly in developing countries like India and China, can significantly improve the diagnosis process by triaging high-risk cases, allowing human experts to focus on complex cases. However, the implementation of such systems faces challenges such as a shortage of expertise, regulatory frameworks, and data privacy concerns. Despite these challenges, the potential benefits of increasing the productivity of human experts and improving patient outcomes make it a worthwhile pursuit. The regulatory challenges surrounding data privacy and usage, as exemplified by HIPAA, are reminiscent of those faced by the autonomous vehicle industry and require a shift in mindset from lawyers and hospital administrators towards embracing technology for the greater good.

    • Respecting privacy while utilizing data for deep learning innovation. Deep learning innovation should lean on transfer learning, existing data, and general models while respecting individual control over personal data.

      Respecting privacy while using data to drive deep learning innovation is a complex issue. Data fuels advances in the technology, but it is crucial to minimize the collection of personal data and instead lean on transfer learning and existing datasets to achieve impactful results. Companies such as Google and IBM often push for more data and computation to maintain their competitive edge, yet significant progress is possible with less. Recommender systems, medical diagnosis, and other applications can benefit from general models that do not require individual data from every person. There are exceptions, such as the cold-start problem in recommender systems, where new users must provide more data to receive personalized recommendations; in such cases, data sharing can be a viable solution, but individuals should control their data and decide who they share it with. Before starting fast.ai, Jeremy Howard spent a year researching the biggest opportunities for deep learning and found it becoming the go-to approach across many fields. With a focus on using less data and computation, impactful innovation is possible while respecting privacy.

    • Empowering domain experts to apply deep learning to real-world problems. fast.ai aims to bridge the gap between deep learning research and practical applications by providing tools and resources for domain experts to upskill and apply deep learning to their fields, maximizing societal impact.

      While deep learning research has the potential for significant practical impact, much of the field's output focuses on theoretical advances that may not directly benefit society. fast.ai was founded to address this by empowering domain experts to apply deep learning to their fields and solve real-world problems. Its founder, coming from an applied, industrial background, recognized the value of domain experts' knowledge and understood that most of them lack the coding skills and time for extensive formal education. By providing tools and resources that help domain experts upskill and apply deep learning to their areas of expertise, fast.ai aims to maximize societal impact. In his experience, potentially world-changing techniques such as transfer learning and active learning are understudied: practitioners often reinvent them when facing real-world problems, while academic incentives push researchers toward work that is familiar to their peers. His one published paper, which brought transfer learning to NLP, illustrates the practical impact possible when such techniques are applied to real problems.
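
      A minimal sketch of that NLP transfer-learning recipe using the fastai library (v2-style API assumed; dataset and hyperparameters are illustrative): start from a pretrained language model and fine-tune a classifier built on top of it:

```python
from fastai.text.all import *

# ULMFiT-style transfer learning: a text classifier built on a
# pretrained AWD-LSTM language model, fine-tuned on IMDb reviews.
path = untar_data(URLs.IMDB)
dls = TextDataLoaders.from_folder(path, valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)  # train the new head, then unfreeze and tune the rest
```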

    • Leveraging limited resources for impactful research. Even with limited resources, researchers can make significant discoveries and advances by being determined, creative, and effective with the tools available.

      Even research projects that seem modest can lead to important discoveries and advances in deep learning. The speaker describes entering DAWNBench, a Stanford training-speed competition, and achieving impressive results using fastai. The team joined late and with limited resources, but made the most of their existing knowledge and practices. They focused on the smaller CIFAR-10 dataset, which let them train a model in a few hours rather than the much longer times typical of larger datasets, used multiple GPUs for the first time to increase speed, and rented an AWS machine for more computing power. The experience showed that with determination and creativity, researchers can achieve significant results even with limited resources. The speaker also emphasizes making research accessible to as many people as possible rather than relying on vast infrastructure.

    • Using smaller datasets and simpler models. Smaller datasets and simpler models can deliver significant time savings and competitive results, are just as effective for research purposes, and can hold their own even against larger tech companies.

      Using smaller datasets and simpler models can yield significant time savings and competitive results, even against larger tech companies. The speaker, part of a team that won portions of a machine learning competition with this approach, found they could train models effectively on a single GPU in a fraction of the time required for datasets like ImageNet. They argue that smaller datasets are just as effective for research and encourage creativity by making deep learning accessible to more people, and they reject the notion that only large companies with fleets of GPUs can make meaningful contributions, predicting that most major AI breakthroughs of the next 20 years will be achievable with a single GPU. To that end, they released smaller datasets such as Imagenette and Imagewoof, compact ImageNet subsets, to help researchers train models more efficiently.
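
      A sketch of how such a dataset keeps iteration fast; fastai ships a download URL for Imagenette, though exact API details may vary by version:

```python
from fastai.vision.all import *

# Imagenette: a 10-class ImageNet subset small enough that a single
# GPU can train a credible model in minutes rather than days.
path = untar_data(URLs.IMAGENETTE_160)  # 160px version of the dataset
dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(160))
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(5)
```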

    • Advancements in machine learning and AI lead to improvements in image and audio processing. Techniques like super-resolution and colorization can now be achieved with a single GPU and a few hours of computation; opportunities exist for innovation in audio processing, and "super convergence" suggests neural networks can train faster with higher learning rates.

      Advancements in machine learning and artificial intelligence are leading to significant improvements in various fields, including image and audio processing. For instance, techniques like super-resolution and colorization, which were once the domain of expensive hardware and complex algorithms, can now be achieved using a single GPU and a few hours of computation. This is due to the development of effective loss functions and transfer learning methods. Moreover, there are opportunities for innovation in areas like audio processing, where the use of multiple sensors to improve quality has not been extensively explored. The potential for computational methods to enhance audio, such as noise reduction and background removal, is vastly underutilized. Another intriguing discovery is the concept of "super convergence," which suggests that certain neural networks can be trained much faster using higher learning rates. However, due to the current academic climate, it may be challenging for researchers to publish findings on such discoveries without a solid theoretical explanation. Overall, these advancements demonstrate the power of machine learning to revolutionize various industries and offer exciting possibilities for future research and development.
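
      The loss-function point can be sketched as a "feature loss": compare images in the feature space of a pretrained network rather than pixel by pixel, so outputs are judged on perceptual content. The layer cut-off and the omitted input normalization below are illustrative simplifications, not the episode's exact recipe:

```python
import torch
import torchvision

# Freeze an ImageNet-pretrained VGG16 up to a mid-level conv block
# and use it as a fixed feature extractor for the loss.
vgg = torchvision.models.vgg16(weights='DEFAULT').features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def feature_loss(pred, target):
    # Distance in feature space rewards perceptually plausible detail
    # (useful for super-resolution or colorization) over blurry averages.
    return torch.nn.functional.l1_loss(vgg(pred), vgg(target))

pred = torch.rand(2, 3, 128, 128)
target = torch.rand(2, 3, 128, 128)
print(feature_loss(pred, target))
```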

    • Learning rate magic: Optimizing deep learning performance. Carefully selecting and adjusting learning rates during training can significantly improve model performance and convergence; fixed learning rates may disappear entirely as optimizers and training dynamics become better understood.

      The field of deep learning is constantly evolving, with new discoveries being made and old assumptions being challenged. One such discovery is the importance of carefully selecting and adjusting learning rates during training. This process, the "learning rate magic" of the headline, has been shown to significantly improve model performance and convergence, and the research suggests that the notion of a single fixed learning rate may soon disappear as researchers better understand how optimizers work and how to interpret a model's changing confidence during training. Additionally, analyzing misclassified data and understanding a model's features can help domain experts become more knowledgeable and effective in their own work. Continuous exploration and refinement of deep learning techniques are essential for unlocking their full potential across industries.
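
      In fastai that workflow looks roughly like the following (assuming a `dls` built as in the earlier sketches; the name of `lr_find`'s suggestion attribute has varied across versions):

```python
from fastai.vision.all import *

# Probe a range of learning rates on a few mini-batches, pick one,
# then train with the one-cycle policy: the learning rate ramps up
# to lr_max and anneals back down, the schedule behind the
# "super convergence" results.
learn = vision_learner(dls, resnet18, metrics=accuracy)
lr = learn.lr_find().valley
learn.fit_one_cycle(5, lr_max=lr)
```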

    • Importance of debugging data for machine learning models. Nvidia GPUs, Google Cloud Platform (GCP), PyTorch, and fastai are effective tools for model development and data debugging; GCP's ready-to-go instances and fastai's simplified API make getting started easier.

      Understanding which parts of the data matter to a machine learning model, and using models to debug the data itself, is crucial. From a hardware perspective, Nvidia GPUs are currently the best choice due to their ease of use and extensive software support. Among platforms, Google Cloud Platform (GCP) and AWS both offer GPU access, but GCP's ready-to-go instances make it easier to get started. Among deep learning frameworks, PyTorch and fastai have been game-changers thanks to their interactive and debugging capabilities, making research and development more accessible. PyTorch's initial learning curve can be steep, but fastai's multi-layered API simplifies the process. Together, these advancements have made deep learning more accessible and effective for researchers and practitioners.
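
      A short sketch of the "use the model to debug the data" idea in fastai (assumes a trained `learn` such as the one above):

```python
from fastai.vision.all import *

# Rank validation items by loss: the highest-loss examples are often
# mislabeled, ambiguous, or otherwise worth a human look.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(7, 7))
interp.plot_confusion_matrix()
```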

    • Exploring Swift as an alternative to Python for machine learning. Python's limitations with complex ML models motivate Swift for TensorFlow; Swift's strong compiler support and MLIR foundation make it a promising alternative, but the lack of a community and libraries may delay practical use. fastai and PyTorch are recommended for beginners, and transitioning between libraries is easy once the fundamentals are in place.

      Python's limitations with complex machine learning models, particularly recurrent neural nets, have led some experts to explore alternatives like Swift for TensorFlow. The speaker expresses frustration with the current state of Python's TensorFlow library, citing its slow performance and disorganized code base. Swift, on the other hand, has the potential to be a superior alternative due to its strong compiler support and foundation in MLIR. However, it may still be several years before Swift becomes a practical choice for machine learning, given the lack of a robust Swift data science community and libraries. For new students, the recommendation is to start with fastai and PyTorch for a quicker grasp of the concepts and access to state-of-the-art techniques; the transition between libraries is relatively easy once the foundations are in place. Swift for TensorFlow offers an intermediate path by allowing Python code and libraries to be used from Swift. The time it takes to complete both fast.ai courses varies greatly, from two months to two years, depending on the individual's coding ability.

    • Anyone can learn deep learning. With dedication and persistence, anyone can succeed in deep learning, regardless of their background. Free resources like the fast.ai course provide practical tools and examples to help beginners build their skills and projects.

      Anyone can get started in deep learning with dedication and persistence, regardless of their background. The key to success is not just strong coding skills but tenacity and a willingness to experiment and learn from mistakes. The best way to understand deep learning is by doing: training and fine-tuning models and studying their inputs and outputs. This hands-on approach can produce impressive results even for beginners. Howard emphasizes that there is no need to be intimidated by the snobbish attitude that only certain people can learn deep learning; free resources such as the fast.ai course make it accessible to everyone, teaching the fundamentals while providing practical tools and examples for building projects and datasets. His experience teaching deep learning to a diverse group of students has reinforced his belief that anyone can succeed in the field.

    • Combine deep learning expertise with domain knowledge. To make a significant impact in deep learning, combine a strong foundation in the technology with deep domain expertise; for a successful startup, identify a real-world problem and use deep learning to solve it.

      To become an expert in deep learning and make a meaningful impact, combine a strong foundation in the technology with deep domain expertise: train lots of models in your specific area of interest and focus on solving real-world problems. Becoming an expert in the area you are passionate about lets you do that work better than others and make a significant contribution, and it also helps you understand deep learning's limitations. Someone studying self-driving cars without any practical experience, for instance, is like someone trying to commercialize a PhD thesis that doesn't solve an actual problem. Building a successful startup follows the same logic: rather than commercializing a thesis, identify a problem you understand and care about, then use deep learning to solve it, staying pragmatic and building a viable business without relying on venture capital funding.

    • Maintaining a lean business model and generating early revenue. Managing costs, selling smaller projects, and generating revenue early were crucial to reaching profitability within six months. Time-tested learning techniques like spaced repetition, and modern tools like Anki, can enhance personal growth.

      Maintaining low costs and generating revenue early were crucial to the speaker's entrepreneurial success. He kept expenses minimal and sold smaller projects to cover costs while making clients feel valued, reaching profitability within six months. He also shared his experience with venture capital, which felt scarier because of the uncertainty and the pressure to optimize for a large exit, and suggested that a lifestyle exit, selling the business for a smaller profit, can be a viable option for those who don't want to build something for the long term. On learning, the speaker emphasized spaced repetition, a technique rooted in Hermann Ebbinghaus's research on memory, in which material is reviewed at increasing intervals to maximize retention. He uses Anki, a program that implements this kind of algorithm, to learn new concepts and ideas effectively, and stressed treating the brain well with mnemonics, stories, and context. In short, a lean business model, early revenue, and learning techniques like spaced repetition were essential to the speaker's entrepreneurial and personal growth.
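
      The spacing idea is simple enough to sketch in a few lines; the following is a toy, SM-2-flavored scheduler (the algorithm family Anki descends from), with constants chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # gap until the next review
    ease: float = 2.5           # multiplier applied after each success

def review(card: Card, remembered: bool) -> Card:
    if remembered:
        card.interval_days *= card.ease        # push the next review further out
    else:
        card.interval_days = 1.0               # lapse: restart the ladder
        card.ease = max(1.3, card.ease - 0.2)  # and grow more slowly from now on
    return card

c = Card()
for _ in range(5):
    c = review(c, remembered=True)
    print(f"next review in {c.interval_days:.0f} days")
```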

    • Effective learning strategies for retaining information. Memorable stories, regular practice, allowing for failures, and applying knowledge to real-world problems all aid retention. Addressing labor force displacement caused by AI is a pressing societal concern.

      Effective learning involves making a conscious effort to remember information, whether through spaced repetition or understanding concepts deeply. The use of memorable stories and regular practice are key to retaining information. Additionally, allowing for failures and setbacks is important, as the brain is capable of quickly relearning previously learned material. Lastly, applying this knowledge to solve real-world problems is more valuable than focusing on predicting future technological breakthroughs. In the context of societally important issues, addressing labor force displacement caused by AI is a pressing concern.

    • The impact of technology on the middle class and the need for ethical considerations in data science. Data scientists must consider the human implications and ethical consequences of their research: keeping humans in the loop, avoiding runaway feedback loops, and ensuring an appeals process for those affected by algorithms.

      The changing workplace and the advance of technology, particularly deep learning, raise serious concerns about the future of the middle class and potential harm to society; Andrew Yang's presidential campaign highlights the issue, as students today face a less rosy financial future than previous generations. Data scientists working with this powerful technology have a responsibility to consider the human implications and ethical consequences of their research: how humans will be in the loop, how to avoid runaway feedback loops, and how to ensure an appeals process for people affected by their algorithms. They should see themselves as more than engineers and educate themselves and others about these human issues. Jeremy Howard's work in deep learning, and in inspiring others to join the field, has the potential to change the world, and it is crucial that this includes a focus on the ethical and societal implications of the technology.
