
    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

    June 19, 2024

    Podcast Summary

    • Perplexity search engine: Perplexity is a search engine that provides direct answers by synthesizing information from various sources and presenting it in an academic-style response with citations, guiding users on their knowledge journey with related questions and topics.

      Perplexity is a search engine designed to provide direct answers to user queries by synthesizing information from various sources and presenting it in a well-formatted and cited manner. This is achieved through a combination of traditional search, large language models, and citation techniques. The goal is to create an academic-style response that humans can appreciate and use as a reference. The inspiration for this approach came from the academic world, where every sentence in a paper must be backed by a citation. By integrating this principle into a search engine, Perplexity aims to ensure the accuracy and reliability of its responses. It's not just about providing an answer, but also about guiding users on their knowledge journey by suggesting related questions and topics to explore. While Perplexity is impressive, it's not yet a full replacement for traditional search engines like Google for everyday searches. Google excels in providing real-time information, simple navigational queries, and instant link results. However, Perplexity's focus on direct answers and synthesizing information from various sources sets it apart and offers a unique value proposition. The journey of knowledge discovery doesn't end with an answer; it begins after it.

    • UI enhancement and personalization: UI plays a crucial role in enhancing user experience, and personalization based on location and interests can already provide a great experience without requiring infinite memory or context. Perplexity aims to differentiate itself by focusing on improving its technology and providing more valuable information to users, rather than trying to directly compete with Google's search engine.

      While next-generation models can answer complex queries and provide amazing features like planning and personalization, there's still a lot of work to be done on the product layer to best present the information to users and anticipate their needs. The UI plays a crucial role in enhancing the user experience, and personalization, based on location and interests, can already provide a great experience without requiring infinite memory or context. The disruption in the search space comes from rethinking the UI, not from trying to compete with existing search engines by building a better '10 blue links' search engine. Google's primary source of revenue comes from its search engine through its AdWords model, where advertisers bid for their links to appear as high as possible in search results related to their keywords. This business model is brilliant and has been a great invention, but Perplexity aims to differentiate itself by focusing on improving its technology and providing more valuable information to users, rather than trying to directly compete with Google's search engine.

    • Perplexity's business model: Perplexity could adopt a hybrid business model of subscription and advertising, focusing on user experience and truthful answers, while learning from Instagram's targeted ads for effective implementation.

      Google's advertising business, which includes AdSense and AdWords, is a significant source of revenue for the company due to its data-driven approach and high competition among brands. However, as companies like Amazon and Netflix have shown, focusing on lower-margin businesses or hybrid models can also lead to success. For Perplexity, a potential business model could be a hybrid of subscription and advertising, ensuring user experience and truthful, accurate answers are not compromised. Instagram's targeted and seamless ads could serve as an inspiration for implementing ads in Perplexity. Additionally, there are threats to manipulate the output of Perplexity through aggressive SEO tactics, emphasizing the importance of maintaining trust with users.

    • Google's unique ranking signals: Google's success is rooted in its ability to think outside the box and use unique ranking signals like link structure and a user-centric approach, which led to the development of PageRank and a focus on high-quality answers for users.

      The success of search engines like Google lies in their ability to think outside the box and use unique ranking signals. In the early days, Google differentiated itself from other search engines by focusing on link structure instead of text-based similarity. This led to the development of PageRank, which was a game-changer in the industry. Larry Page and Sergey Brin's academic background and deep understanding of research and engineering were instrumental in Google's success. They prioritized hiring PhDs and building core infrastructure, which paid off in the long run. Another important lesson is the user-centric approach. Google believed that the user is always right and strived to provide high-quality answers to every query, no matter how poorly formulated. This philosophy forced the company to focus on improving the user experience in every detail, from search latency to auto-scrolling and auto-completing queries. The user-centric approach, combined with the unique ranking signal, made Google a dominant player in the search engine market.
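      As a concrete illustration of the link-structure signal, here is a textbook power-iteration sketch of PageRank (illustrative only, not Google's production algorithm; the example link graph is made up):

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9):
    """Power-iteration PageRank. adjacency[i][j] = 1 if page i links to page j."""
    n = len(adjacency)
    A = np.asarray(adjacency, dtype=float)
    out_degree = A.sum(axis=1)
    # Column-stochastic transition matrix; dangling pages jump uniformly.
    M = np.where(out_degree[:, None] > 0,
                 A / np.maximum(out_degree, 1)[:, None],
                 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * (M @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Toy graph: page 0 -> 1, page 1 -> 2, page 2 -> 0 and 2 -> 1.
links = [[0, 1, 0],
         [0, 0, 1],
         [1, 1, 0]]
ranks = pagerank(links)
```

Page 1, which receives links from both other pages, ends up with the highest rank — the point being that the link structure, not the page text, determines importance.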

    • User-friendly products: Striking a balance between user laziness and intent prediction in product design leads to user-friendly products that delight and keep users coming back.

      Creating user-friendly products involves striking a balance between allowing users to be lazy and guiding them to ask clear, effective questions. Perplexity, a product that excels at understanding poorly constructed queries, is an example of this principle in action. The design should minimize user effort while still predicting user intent and suggesting related questions. However, there's a trade-off between simplicity and power, and designers must consider the needs of both new and existing users. Inspiration for this approach can be found in entrepreneurs like Jeff Bezos, who emphasized the importance of clarity of thought and operational excellence. Ultimately, the goal is to create a product that delights users and keeps them coming back.

    • Successful founder traits: Relentless determination, deep user understanding, and a focus on fundamentals are essential traits for successful founders, as demonstrated by individuals like Jeff Bezos, Elon Musk, and Yann LeCun.

      Relentless determination, a deep understanding of the user, and a focus on the fundamentals are key traits of successful founders, as exemplified by individuals like Jeff Bezos, Elon Musk, and Yann LeCun. Bezos's registration of the relentless.com domain, his obsession with the customer, and his long-term planning have been instrumental in Amazon's success. Similarly, Musk's ability to bypass traditional distribution methods demonstrates his relentless nature and commitment to his vision. LeCun's persistence in the face of criticism and his role in educating and mentoring the next generation of scientists are also noteworthy. In the world of technology, innovation and the ability to question conventional wisdom are essential, and these figures serve as inspiring examples of people who have made significant contributions by pushing boundaries and staying focused on their goals.

    • LeCun's prediction on AI development: Yann LeCun predicted a shift from reinforcement learning to unsupervised learning, specifically self-supervised learning, for advanced AI models, emphasizing its importance and downplaying the hype around reinforcement learning.

      Yann LeCun, a pioneering figure in artificial intelligence, predicted the shift in focus from reinforcement learning (RL) to unsupervised learning, specifically self-supervised learning, for creating advanced AI models like ChatGPT. He famously argued that most of the intelligence comes from the unsupervised learning part, while RL is just the cherry on the cake. His then-controversial stance that RL was overhyped and that self-supervised learning mattered most proved prescient. He has also suggested that autoregressive models might be a dead end and advocated continuous gradient-based reasoning in a latent space. His open-source approach to AI development, which he believes is essential for transparency and safety, is another distinctive position. The history of AI development, from soft attention and RNNs to transformers, highlights the importance of attention mechanisms and parallel computation. LeCun's insights have significantly influenced the field, making him a visionary in AI research.

    • Language model advancements: Unsupervised learning and large datasets have been crucial in driving advancements in language models, with recent focus on post-training improvements and the development of small language models.

      The advancements in large language models (LLMs) over the past decade have been driven by a combination of architectural innovations, such as the transformer model and attention mechanism, and the availability of large amounts of data for unsupervised pretraining. The key insight gained from the discovery of the attention operation, softmax(QK^T)V, was that unsupervised learning is important, leading to the development of GPT and its successors. However, simply making the model bigger was not enough; the importance of dataset size and quality also came to light. The evolution of LLMs has seen a shift from focusing solely on model size to also prioritizing the size and quality of the dataset. The recent focus has been on post-training improvements, such as RLHF and the RAG architecture, to make these systems controllable and well-behaved. A promising research direction is the development of small language models (SLMs) that can learn to reason from specific, relevant data, potentially disrupting the foundation model landscape by reducing the need for massive compute clusters. Techniques like chain of thought, which forces models to go through an explicit reasoning step, can help improve performance on NLP tasks and ensure that models don't overfit on extraneous patterns. Overall, the advancements in LLMs have been the result of a continuous process of innovation and improvement, with a focus on both the model architecture and the data used for training.
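      The softmax(QK^T)V operation mentioned above is standard scaled dot-product attention; a minimal NumPy sketch with made-up toy values (not any production model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted mix of values

# Two queries attending over three key/value pairs.
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])
out = attention(Q, K, V)
```

Because every query attends to every key in one matrix multiply, the operation parallelizes well — the property the paragraph credits for the transformer's success.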

    • AI self-improvement: The STaR method proposes training AI models on their own natural language explanations to reason and refine their understanding, potentially leading to breakthroughs in reasoning abilities, but it requires human interaction and verification for learning signals.

      The future of AI development lies in the intersection of natural language processing and self-improvement. The STaR (Self-Taught Reasoner) paper proposes a method to improve AI models by training them on their own natural language explanations. This approach allows the model to reason and refine its own understanding, leading to potential breakthroughs in reasoning abilities. However, it requires some human interaction and verification to provide signals for learning. The ultimate goal is an intelligence explosion in which AI systems improve themselves recursively, leading to dramatic advancements in inference compute and real reasoning breakthroughs. While AI may not have cracked human-like curiosity yet, curiosity is an essential aspect of human intelligence that could set us apart from advanced AI systems. Curiosity-driven exploration has shown promising results at the game level, but replicating real human curiosity remains a challenge.
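      The core STaR loop can be sketched as follows; the toy model and arithmetic problems here are hypothetical stand-ins for a real LLM and training set, and `fine_tune` is a placeholder for an actual fine-tuning step:

```python
def star_iteration(model, problems, fine_tune):
    """One STaR-style loop: sample a rationale and answer for each problem,
    keep only traces whose final answer is verifiably correct, and
    fine-tune on the kept traces. The outcome check is the learning signal."""
    kept = []
    for question, gold_answer in problems:
        rationale, answer = model(question)
        if answer == gold_answer:
            kept.append((question, rationale))
    fine_tune(kept)
    return kept

def toy_model(question):
    # Toy stand-in for an LLM: reasons correctly unless the operands are equal.
    a, b = map(int, question.split("+"))
    answer = a + b if a != b else a + b + 1
    return f"{a} + {b} = {answer}", answer

problems = [("2+3", 5), ("4+4", 8)]
collected = []
kept = star_iteration(toy_model, problems, collected.extend)
```

Only the trace whose answer checked out survives, which is why the paragraph notes that some external verification is needed to supply the signal.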

    • AI's ability to generate new knowledge: The future of AI lies in its capacity to ask and answer complex questions through rapid iterative thinking, which requires significant computational power and could lead to immense value in generating new knowledge and insights, but the timeline and equitability of access remain uncertain.

      The future of artificial intelligence (AI) lies not only in its ability to process vast amounts of data, but also in its capacity to ask and answer complex questions through rapid iterative thinking. This requires significant computational power, which could lead to a concentration of power in the hands of those who can afford it. The value of an AI's ability to generate new knowledge and insights, potentially even challenging our current understanding, could be immense. However, the timeline for such breakthroughs is uncertain and may depend on a few key computational advancements. The real challenge will be in ensuring that access to this powerful technology is equitable and that it is used responsibly. Ultimately, the true test of an advanced AI will be its ability to generate new knowledge and insights that go beyond what we currently know, forcing us to reconsider our understanding of the world.

    • AI's potential for new insights: AI's future lies in its ability to provide new insights, surpassing human experts, but creating a genuinely curious and independent AI system is a challenge.

      The future of AI lies in its ability to provide new insights and perspectives on truths, surpassing the capabilities of human experts in various fields. This could potentially revolutionize industries and lead to groundbreaking discoveries, but the challenge lies in creating an AI system capable of genuine curiosity and independent thought. The origin of this idea stems from the realization that AI, particularly generative models, had transitioned from research projects to user-facing applications, such as GitHub Copilot, which could benefit from advancements in AI technology. The goal is to create an AI system that can work alongside and enhance the thinking process of leading experts, leading to a flywheel effect of continuous improvement and data generation. However, achieving this requires overcoming the challenge of creating an AI system with genuine curiosity and independent thought, which is not easily scalable at present.

    • AI-powered database querying: AI technology revolutionized database querying by generating SQL queries from natural language questions, making it faster and more accessible to extract valuable insights from large datasets.

      The ability to query and extract meaningful information from large datasets using AI technology was a game-changer for opening doors to investors and brilliant minds. Before the advent of advanced AI models like GitHub Copilot, querying and understanding complex relationships in databases required a deep understanding of SQL and a significant amount of hard coding. This process was time-consuming and error-prone. However, the team behind Perplexity identified an opportunity to create a search engine that could generate SQL queries based on natural language questions, making it possible to extract valuable insights from Twitter's vast social graph. This innovative approach not only captured the attention of influential individuals like Yann LeCun, Jeff Dean, and Andre, but also helped the team recruit talented individuals who were impressed by the company's capabilities. By showcasing something that was not possible before, Perplexity was able to generate buzz and gain relevance in the market, eventually leading to the company's focus on web search as a scalable business opportunity. This story highlights the power of demonstrating innovation and practical applications of AI technology to capture the attention and support of influential individuals and investors.
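      A minimal sketch of the text-to-SQL idea described above; the schema, prompt wording, and helper name are illustrative assumptions, not Perplexity's actual system, and the resulting prompt would be sent to a completion model whose SQL output is then run against the database:

```python
SCHEMA = """
CREATE TABLE users (id BIGINT, handle TEXT);
CREATE TABLE follows (follower_id BIGINT, followee_id BIGINT);
"""

def build_sql_prompt(question, schema=SCHEMA):
    """Build a text-to-SQL prompt: show the model the schema, then ask
    it to translate a natural-language question into a single query."""
    return (
        "Given this SQL schema:\n"
        f"{schema}\n"
        "Write one SQL query that answers the question. Return only SQL.\n"
        f"Question: {question}\n"
        "SQL:"
    )

prompt = build_sql_prompt("Who are the 10 most-followed users?")
```

The appeal the paragraph describes is exactly this: the user never writes SQL, only the question; the schema in the prompt is what lets the model ground its query in real tables.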

    • Perplexity's approach to knowledge discovery: Perplexity focuses on helping people find new things and providing factually grounded answers using Retrieval Augmented Generation, while reducing hallucinations through improvements in retrieval, index quality, and model ability.

      The mission of Perplexity, as a knowledge-centric company, is to help people discover new things and guide them towards relevant information, rather than just providing answers. The company aims to obsess over knowledge and curiosity, making its mission bigger than competing with other search engines. Perplexity's approach is based on Retrieval Augmented Generation (RAG), where the model retrieves relevant documents and uses them to write answers, ensuring factual grounding. However, there is still a risk of hallucinations, which can occur due to model skill issues, poor snippets, or excessive detail in the model. To reduce hallucinations, improvements can be made in retrieval, index quality, freshness, and model ability. The indexing process involves crawling the web with a bot, rendering pages, respecting politeness policies, and deciding the periodicity and new pages to add based on hyperlinks. The content is then fetched, processed, and indexed using machine learning text extraction. Perplexity's focus on knowledge and discovery sets it apart from traditional search engines and emphasizes the importance of accurate and relevant information.
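      The RAG flow described above can be sketched as follows; the toy lexical retriever and prompt wording are illustrative assumptions, since a production system would use a real search index and an LLM call at the end:

```python
def retrieve(query, documents, k=2):
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def build_grounded_prompt(query, documents):
    """Number the retrieved snippets so the model can cite them as [n],
    keeping the generated answer factually grounded in the sources."""
    snippets = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the sources below, citing each claim as [n].\n"
        f"{context}\n"
        f"Question: {query}\n"
        "Answer:"
    )

docs = [
    "Perplexity answers questions with cited sources.",
    "Bananas are yellow.",
]
prompt = build_grounded_prompt("how does perplexity cite sources", docs)
```

Hallucination reduction then becomes a retrieval problem as much as a model problem: if the snippets in the prompt are poor, the answer built on them will be too.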

    • Business optimization for the tech industry: For tech businesses, focusing on both low latency and high throughput is crucial for delivering optimal user experiences, especially for AI models. Companies can optimize their own models and collaborate with tech partners, but for models served by external providers, strategic decisions about in-house compute or cloud capacity are necessary.

      For businesses, particularly those in the tech industry, focusing on both low latency and high throughput is crucial for delivering optimal user experiences. This is especially important when it comes to serving AI models, where tail latency must be tracked at every component, including the search layers and the LLM layers. Companies can optimize their own models by collaborating with tech partners and optimizing at the kernel level. However, for models served by external providers, businesses must make strategic decisions about whether to invest in more in-house compute or pay for additional capacity from cloud providers. Netflix, for instance, uses Amazon Web Services for nearly all its computing and storage needs due to the scale and breadth of services offered. Ultimately, understanding your business's unique needs and where you get your dopamine hits as a founder is essential for long-term success. Don't just focus on what the market wants; be passionate about the problem you're trying to solve.
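      Tracking tail latency per component usually means computing high percentiles; a minimal nearest-rank sketch with made-up sample values:

```python
import math

def tail_latency(samples_ms, p=0.99):
    """Nearest-rank percentile: p=0.99 gives the latency the slowest 1% of
    requests experience (p99), which dominates perceived responsiveness."""
    ordered = sorted(samples_ms)
    idx = max(0, math.ceil(p * len(ordered)) - 1)
    return ordered[idx]

# Per-component samples: the end-to-end tail is set by the worst stage.
search_ms = [40, 42, 45, 41, 300]    # one slow outlier in the search layer
llm_ms = [120, 125, 130, 128, 122]
worst_tail = max(tail_latency(search_ms), tail_latency(llm_ms))
```

Note how the single 300 ms outlier defines the search layer's tail even though its median is far lower — which is why averages alone hide the problem and every component must be tracked at the percentile level.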

    • Passion for product, commitment, and dedication: Having a deep passion for a product and the willingness to endure sacrifices are essential for starting a successful business. Surrounding oneself with passionate and supportive people is also crucial.

      Starting a successful business requires a deep passion for the product and a willingness to endure the sacrifices and hardships that come with entrepreneurship. The market can guide you towards making it profitable, but it's essential to have a genuine love for the idea. Building a strong support system and staying committed and dedicated are crucial. Young people in their late teens and early twenties are encouraged to put their time and energy into an idea that truly occupies their mind, as it can lead to significant growth and development. Surrounding oneself with passionate and supportive people, no matter their background or walk of life, is also essential. Ultimately, the journey to building a successful business is filled with challenges, but the rewards can be great.

    • Knowledge discovery: The future of knowledge discovery goes beyond traditional search engines and answer boxes, focusing on discovery and guidance towards new information through chatbots, answerbots, and voice form factors, making knowledge more accessible and efficient.

      The future of knowledge discovery lies beyond traditional search engines and answer boxes. Instead, we are moving towards a more holistic approach to knowledge dissemination, where discovery and guidance towards new information is the main focus. This can be achieved through tools like chatbots, answerbots, and voice form factors. The ultimate goal is to cater to the fundamental human curiosity and let people create and share knowledge in a more accessible and efficient way. This shift towards knowledge discovery is not just about changing the way we search, but about making people smarter and delivering knowledge from various entry points. Whether it's through reading a page, listening to an article, or asking a follow-up question, the journey of curiosity is ongoing and limitless. The potential impact of this change could lead to a more truth-seeking society, where people are more interested in fact-checking and uncovering new information rather than relying on others. This vision aligns with the mission of Perplexity, which aims to let people create research articles, blog posts, and even small books on a topic, making knowledge accessible to everyone.

    • Balancing human curiosity and avoiding drama: AI can help customize depth of explanation for different audiences, but increasing context window size has trade-offs, and maintaining the human element in social media is crucial.

      While catering to human curiosity is important for social media platforms, it often comes with the challenge of maximizing engagement through drama and ad-driven models. The solution isn't necessarily starting a new social network but finding a balance between human curiosity and avoiding unnecessary drama. The use of AI, such as large language models, can help in customizing the depth and level of explanation to different audiences, making learning more accessible and effective. However, increasing the context window size comes with its own trade-offs, and it's crucial to ensure that the model doesn't become more confused with the additional information. In the future, AI models may be able to perform better in areas like internal search and memory, making information retrieval more efficient. Ultimately, the goal is to empower humans through AI and cater to their curiosity while maintaining the human element in social media.

    • AI and human relationships: AI can help free up time for deeper human connections, but it's crucial to ensure that AI development aligns with human values and prioritizes our long-term flourishing to avoid potential risks such as deep emotional connections between humans and AI.

      As AI technology continues to advance, it will lead to a future where humans are more empowered, curious, and knowledgeable. However, there are also potential risks, such as the formation of deep emotional connections between humans and AI. While some may see this as a solution to human loneliness, it's important to remember that true human connections and relationships are essential for our overall well-being. The hope is that AI will help us lead more fulfilling lives by freeing up time for us to build deeper connections with each other. However, it's crucial to ensure that AI is developed in a way that aligns with human values and prioritizes our long-term flourishing. Ultimately, the future of AI holds great potential, but it's up to us to ensure that it's used in a way that benefits humanity as a whole.

    • AI reducing biases: AI's objective processing ability can help humans make informed decisions and reduce biases, but maintaining a curious and open-minded approach to learning is essential.

      AI has the potential to help reduce human biases and improve our understanding of the world around us. Lex and Aravind expressed their hope that AI, with its ability to process information objectively, can help humans make more informed decisions and reduce the influence of biases. They also emphasized the importance of maintaining a sense of curiosity and a willingness to question and explore the mysteries of life and reality. Aravind was praised for his inspiring work on Perplexity, and the conversation ended with a quote from Albert Einstein encouraging the audience to never stop questioning and to find joy in the pursuit of knowledge. Overall, the conversation highlighted the potential benefits of AI and the importance of maintaining a curious and open-minded approach to learning and discovery.

    Recent Episodes from Lex Fridman Podcast

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships

    Andrew Huberman is a neuroscientist at Stanford and host of the Huberman Lab Podcast. Please support this podcast by checking out our sponsors: - Eight Sleep: https://eightsleep.com/lex to get $350 off - LMNT: https://drinkLMNT.com/lex to get free sample pack - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/andrew-huberman-5-transcript EPISODE LINKS: Andrew's YouTube: https://youtube.com/AndrewHubermanLab Andrew's Instagram: https://instagram.com/hubermanlab Andrew's Website: https://hubermanlab.com Andrew's X: https://x.com/hubermanlab Andrew's book on Amazon: https://amzn.to/3RNSIQN Andrew's book: https://hubermanlab.com/protocols-book PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (10:24) - Quitting and evolving (17:22) - How to focus and think deeply (19:56) - Cannabis drama (30:08) - Jungian shadow (40:35) - Supplements (43:38) - Nicotine (48:01) - Caffeine (49:48) - Math gaffe (1:06:50) - 2024 presidential elections (1:13:47) - Great white sharks (1:22:32) - Ayahuasca & psychedelics (1:37:33) - Relationships (1:45:08) - Productivity (1:53:58) - Friendship
    Lex Fridman Podcast
    June 28, 2024

    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

    Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/aravind-srinivas-transcript EPISODE LINKS: Aravind's X: https://x.com/AravSrinivas Perplexity: https://perplexity.ai/ Perplexity's X: https://x.com/perplexity_ai PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction (10:52) - How Perplexity works (18:48) - How Google works (41:16) - Larry Page and Sergey Brin (55:50) - Jeff Bezos (59:18) - Elon Musk (1:01:36) - Jensen Huang (1:04:53) - Mark Zuckerberg (1:06:21) - Yann LeCun (1:13:07) - Breakthroughs in AI (1:29:05) - Curiosity (1:35:22) - $1 trillion dollar question (1:50:13) - Perplexity origin story (2:05:25) - RAG (2:27:43) - 1 million H100 GPUs (2:30:15) - Advice for startups (2:42:52) - Future of search (3:00:29) - Future of AI
    Lex Fridman Podcast
    June 19, 2024

    #433 – Sara Walker: Physics of Life, Time, Complexity, and Aliens

    Sara Walker is an astrobiologist and theoretical physicist. She is the author of a new book titled "Life as No One Knows It: The Physics of Life's Emergence". Please support this podcast by checking out our sponsors: - Notion: https://notion.com/lex - Motific: https://motific.ai - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/sara-walker-3-transcript EPISODE LINKS: Sara's Book - Life as No One Knows It: https://amzn.to/3wVmOe1 Sara's X: https://x.com/Sara_Imari Sara's Instagram: https://instagram.com/alien_matter PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:40) - Definition of life (31:18) - Time and space (42:00) - Technosphere (46:25) - Theory of everything (55:06) - Origin of life (1:16:44) - Assembly theory (1:32:58) - Aliens (1:44:48) - Great Perceptual Filter (1:48:45) - Fashion (1:52:47) - Beauty (1:59:08) - Language (2:05:50) - Computation (2:15:37) - Consciousness (2:24:28) - Artificial life (2:48:21) - Free will (2:55:05) - Why anything exists
    Lex Fridman Podcast
    June 13, 2024

    #432 – Kevin Spacey: Power, Controversy, Betrayal, Truth & Love in Film and Life

    Kevin Spacey is a two-time Oscar-winning actor, who starred in Se7en, the Usual Suspects, American Beauty, and House of Cards, creating haunting performances of characters who often embody the dark side of human nature. Please support this podcast by checking out our sponsors: - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - Eight Sleep: https://eightsleep.com/lex to get $350 off - BetterHelp: https://betterhelp.com/lex to get 10% off - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Kevin's X: https://x.com/KevinSpacey Kevin's Instagram: https://www.instagram.com/kevinspacey Kevin's YouTube: https://youtube.com/kevinspacey Kevin's Website: https://kevinspacey.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:14) - Seven (13:54) - David Fincher (21:46) - Brad Pitt and Morgan Freeman (27:15) - Acting (35:40) - Improve (44:24) - Al Pacino (48:07) - Jack Lemmon (57:25) - American Beauty (1:17:34) - Mortality (1:20:22) - Allegations (1:38:19) - House of Cards (1:56:55) - Jack Nicholson (1:59:57) - Mike Nichols (2:05:30) - Christopher Walken (2:12:38) - Father (2:21:30) - Future
    Lex Fridman Podcast
    June 05, 2024

    #431 – Roman Yampolskiy: Dangers of Superintelligent AI
    Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI Turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
    Lex Fridman Podcast
    June 02, 2024

    #430 – Charan Ranganath: Human Memory, Imagination, Deja Vu, and False Memories
    Charan Ranganath is a psychologist and neuroscientist at UC Davis, specializing in human memory. He is the author of a new book titled Why We Remember. Please support this podcast by checking out our sponsors: - Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off - ZipRecruiter: https://ziprecruiter.com/lex - Notion: https://notion.com/lex - MasterClass: https://masterclass.com/lexpod to get 15% off - Shopify: https://shopify.com/lex to get $1 per month trial - LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/charan-ranganath-transcript EPISODE LINKS: Charan's X: https://x.com/CharanRanganath Charan's Instagram: https://instagram.com/thememorydoc Charan's Website: https://charanranganath.com Why We Remember (book): https://amzn.to/3WzUF6x Charan's Google Scholar: https://scholar.google.com/citations?user=ptWkt1wAAAAJ Dynamic Memory Lab: https://dml.ucdavis.edu/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (10:18) - Experiencing self vs remembering self (23:59) - Creating memories (33:31) - Why we forget (41:08) - Training memory (51:37) - Memory hacks (1:03:26) - Imagination vs memory (1:12:44) - Memory competitions (1:22:33) - Science of memory (1:37:48) - Discoveries (1:48:52) - Deja vu (1:54:09) - False memories (2:14:14) - False confessions (2:18:00) - Heartbreak (2:25:34) - Nature of time (2:33:15) - Brain–computer interface (BCI) (2:47:19) - AI and memory (2:57:33) - ADHD (3:04:30) - Music (3:14:15) - Human mind
    Lex Fridman Podcast
    May 25, 2024

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God
    Paul Rosolie is a naturalist, explorer, author, and founder of Junglekeepers, dedicating his life to protecting the Amazon rainforest. Support his efforts at https://junglekeepers.org Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - Yahoo Finance: https://yahoofinance.com - BetterHelp: https://betterhelp.com/lex to get 10% off - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/paul-rosolie-2-transcript EPISODE LINKS: Paul's Instagram: https://instagram.com/paulrosolie Junglekeepers: https://junglekeepers.org Paul's Website: https://paulrosolie.com Mother of God (book): https://amzn.to/3ww2ob1 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (12:29) - Amazon jungle (14:47) - Bushmaster snakes (26:13) - Black caiman (44:33) - Rhinos (47:47) - Anacondas (1:18:04) - Mammals (1:30:10) - Piranhas (1:41:00) - Aliens (1:58:45) - Elephants (2:10:02) - Origin of life (2:23:21) - Explorers (2:36:38) - Ayahuasca (2:45:03) - Deep jungle expedition (2:59:09) - Jane Goodall (3:01:41) - Theodore Roosevelt (3:12:36) - Alone show (3:22:23) - Protecting the rainforest (3:38:36) - Snake makes appearance (3:46:47) - Uncontacted tribes (4:00:11) - Mortality (4:01:39) - Steve Irwin (4:09:18) - God
    Lex Fridman Podcast
    May 15, 2024

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens
    Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/sean-carroll-3-transcript EPISODE LINKS: Sean's Website: https://preposterousuniverse.com Mindscape Podcast: https://www.preposterousuniverse.com/podcast/ Sean's YouTube: https://youtube.com/@seancarroll Sean's Patreon: https://www.patreon.com/seanmcarroll Sean's Twitter: https://twitter.com/seanmcarroll Sean's Instagram: https://instagram.com/seanmcarroll Sean's Papers: https://scholar.google.com/citations?user=Lfifrv8AAAAJ Sean's Books: https://amzn.to/3W7yT9N PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (11:03) - General relativity (23:22) - Black holes (28:11) - Hawking radiation (32:19) - Aliens (41:15) - Holographic principle (1:05:38) - Dark energy (1:11:38) - Dark matter (1:20:34) - Quantum mechanics (1:41:56) - Simulation (1:44:18) - AGI (1:58:42) - Complexity (2:11:25) - Consciousness (2:20:32) - Naturalism (2:24:49) - Limits of science (2:29:34) - Mindscape podcast (2:39:29) - Einstein

    #427 – Neil Adams: Judo, Olympics, Winning, Losing, and the Champion Mindset
    Neil Adams is a judo world champion, 2-time Olympic silver medalist, 5-time European champion, and often referred to as the Voice of Judo. Please support this podcast by checking out our sponsors: - ZipRecruiter: https://ziprecruiter.com/lex - Eight Sleep: https://eightsleep.com/lex to get special savings - MasterClass: https://masterclass.com/lexpod to get 15% off - LMNT: https://drinkLMNT.com/lex to get free sample pack - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/neil-adams-transcript EPISODE LINKS: Neil's Instagram: https://instagram.com/naefighting Neil's YouTube: https://youtube.com/NAEffectiveFighting Neil's TikTok: https://tiktok.com/@neiladamsmbe Neil's Facebook: https://facebook.com/NeilAdamsJudo Neil's X: https://x.com/NeilAdamsJudo Neil's Website: https://naeffectivefighting.com Neil's Podcast: https://naeffectivefighting.com/podcasts/the-dojo-collective-podcast A Life in Judo (book): https://amzn.to/4d3DtfB A Game of Throws (audiobook): https://amzn.to/4aA2WeJ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (09:13) - 1980 Olympics (26:35) - Judo explained (34:40) - Winning (52:54) - 1984 Olympics (1:01:55) - Lessons from losing (1:17:37) - Teddy Riner (1:37:12) - Training in Japan (1:52:51) - Jiu jitsu (2:03:59) - Training (2:27:18) - Advice for beginners

    #426 – Edward Gibson: Human Language, Psycholinguistics, Syntax, Grammar & LLMs
    Edward Gibson is a psycholinguistics professor at MIT and heads the MIT Language Lab. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - Listening: https://listening.com/lex and use code LEX to get one month free - Policygenius: https://policygenius.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - Eight Sleep: https://eightsleep.com/lex to get special savings Transcript: https://lexfridman.com/edward-gibson-transcript EPISODE LINKS: Edward's X: https://x.com/LanguageMIT TedLab: https://tedlab.mit.edu/ Edward's Google Scholar: https://scholar.google.com/citations?user=4FsWE64AAAAJ TedLab's YouTube: https://youtube.com/@Tedlab-MIT PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:53) - Human language (14:59) - Generalizations in language (20:46) - Dependency grammar (30:45) - Morphology (39:20) - Evolution of languages (42:40) - Noam Chomsky (1:26:46) - Thinking and language (1:40:16) - LLMs (1:53:14) - Center embedding (2:19:42) - Learning a new language (2:23:34) - Nature vs nurture (2:30:10) - Culture and language (2:44:38) - Universal language (2:49:01) - Language translation (2:52:16) - Animal communication