
    #416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

    March 07, 2024

    Podcast Summary

    • Yann LeCun on the coexistence of humans and AGI: Renowned AI scientist Yann LeCun believes AGI will coexist with humans, and emphasizes the importance of securing AI models as they become more prevalent and complex.

      Yann LeCun, a renowned AI scientist and proponent of open-source AI development, believes that the development of AGI will not lead to human extinction; rather, AGI will coexist with humans. He emphasizes the importance of securing AI models as they become more prevalent and complex. LeCun's optimistic perspective on AGI contrasts with those who warn of potential dangers, leading to intense online discussions. As AI development advances, securing these models becomes increasingly important for companies and individuals alike. This episode is sponsored by HiddenLayer, a company focused on securing AI models (hiddenlayer.com/lex), and LMNT (Element), an electrolyte drink (element.com/lex).

    • Large language models aren't the key to superhuman intelligence: Large language models like GPT-4 have limitations and can't compare to human or animal intelligence gained from sensory input and real-world experience.

      While large language models like GPT-4 have made significant strides in artificial intelligence, they are not the key to achieving superhuman intelligence. These models lack an understanding of the physical world, have limited memory, and cannot reason or plan. Though they are trained on vast amounts of text, that text does not compare to the amount of information humans and animals absorb through sensory input in their first few years of life. The comparison between sensory data and language is not straightforward, since language is already compressed and information-dense, but most of what we learn and know comes from observation of and interaction with the real world, not from language. It is therefore essential to keep the limitations of large language models in mind and to recognize the importance of sensory input and real-world experience in the development of intelligent systems. The episode is also sponsored by Shopify (a $1-per-month trial is available at shopify.com/lex) and AG1, an all-in-one daily nutrition drink.
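
      To put the sensory-data comparison above in concrete terms, here is a rough back-of-the-envelope calculation; the byte rates and corpus size below are illustrative assumptions, not numbers quoted in the episode:

      # Back-of-the-envelope comparison of sensory input vs. LLM training text.
      # All rates below are illustrative assumptions, not figures from the episode.

      SECONDS_PER_YEAR = 365 * 24 * 3600

      optic_nerve_bytes_per_s = 2_000_000      # ~2 million fibers, assume ~1 byte/s each
      years = 4
      visual_bytes = optic_nerve_bytes_per_s * years * SECONDS_PER_YEAR

      tokens_in_training_set = 1e13            # assumed LLM training corpus size
      bytes_per_token = 2                      # rough average
      text_bytes = tokens_in_training_set * bytes_per_token

      print(f"visual input over {years} years: ~{visual_bytes:.1e} bytes")
      print(f"LLM training text:            ~{text_bytes:.1e} bytes")
      print(f"ratio: ~{visual_bytes / text_bytes:.0f}x")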

    • Understanding the Limitations of Language Models: Language models predict words based on probabilities, but they don't possess a deep understanding of the world or plan their answers like humans do. Building a world model requires observing and understanding the world's evolution, not just predicting words.

      Large language models (LLMs) operate by predicting the next word from a probability distribution over all possible words, an auto-regressive process. This differs from human language use, where thinking and planning occur at a more abstract level before language is produced. An LLM doesn't possess a deep understanding of the world or plan its answers the way humans do; it simply generates one token at a time from the probability distribution. Still, if the underlying world model is sophisticated enough, the sequence of tokens it generates can reflect profound understanding. The fundamental question is whether a truly complete world model can be built through prediction. The answer is likely yes, but probably not by predicting words, because language provides only limited information. Building a world model instead requires observing the world and understanding how it evolves, with the additional component of predicting how the world will change as a result of one's actions.
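
      To make the auto-regressive process concrete, here is a minimal decoding loop; the `model` callable is a hypothetical stand-in for a trained LLM, not any particular system:

      import numpy as np

      def generate(model, prompt_tokens, n_new_tokens, temperature=1.0):
          """Minimal auto-regressive decoding loop: sample one token at a time.

          `model(tokens)` is a stand-in for any trained LLM and is assumed to
          return a numpy array of unnormalized scores (logits) over the vocabulary.
          """
          tokens = list(prompt_tokens)
          for _ in range(n_new_tokens):
              logits = np.asarray(model(tokens), dtype=float) / temperature
              probs = np.exp(logits - logits.max())
              probs /= probs.sum()                       # softmax -> probability distribution
              next_token = np.random.choice(len(probs), p=probs)
              tokens.append(int(next_token))             # each sample becomes context for the next
          return tokens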

    • Challenges of Applying Generative Models to Video Data: Generative models struggle to represent and predict video effectively because it is complex and continuous, unlike discrete text. Current research focuses on models with latent variables that represent unperceived information, but these face consistency and complexity challenges.

      While generative models have been successful with text, they face significant challenges when applied to video. Video is complex and continuous, which makes it difficult to predict and represent every relevant detail, unlike text, which is discrete. Traditional approaches that train models to reconstruct or predict video and then reuse the learned representations for tasks like object recognition or segmentation have not worked well; the difficulty lies in forming good representations of images and in maintaining consistency from frame to frame. Current research instead focuses on models with latent variables that attempt to capture the information about the world that is not directly perceived, but this approach has also been largely unsuccessful. The inconsistency and complexity of video present challenges that call for new approaches and techniques.
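
      For concreteness, the kind of latent-variable video predictor being described looks roughly like the toy sketch below; the module names and dimensions are illustrative, not a model from the episode:

      import torch
      import torch.nn as nn

      class LatentVideoPredictor(nn.Module):
          """Toy latent-variable predictor: next_frame ~ f(past_frames, z), where the
          latent z is meant to carry details of the future that the past cannot determine."""

          def __init__(self, frame_dim=1024, latent_dim=32, hidden_dim=512):
              super().__init__()
              self.encoder = nn.GRU(frame_dim, hidden_dim, batch_first=True)
              self.decoder = nn.Sequential(
                  nn.Linear(hidden_dim + latent_dim, hidden_dim), nn.ReLU(),
                  nn.Linear(hidden_dim, frame_dim),
              )

          def forward(self, past_frames, z):
              # past_frames: (batch, time, frame_dim); z: (batch, latent_dim)
              _, h = self.encoder(past_frames)
              return self.decoder(torch.cat([h[-1], z], dim=-1))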

    • Joint embedding architectures like JEPA encode both the input and a corrupted version for predictive learning: JEPA uses joint embedding to prevent the system from collapsing and extracts essential features while discarding unnecessary details, saving resources on the path toward advanced machine intelligence.

      Joint embedding architectures, such as JEPA (Joint Embedding Predictive Architecture), offer an alternative to self-supervised training methods based on labels or reconstruction. JEPA encodes both the full input and a corrupted version of it, then trains a predictor to map the representation of the corrupted version onto the representation of the full one. Trained carefully, this joint-embedding method avoids the collapse into constant representations that naive approaches suffer from, without relying on early contrastive learning methods. While JEPA is a promising first step toward advanced machine intelligence, it differs from generative architectures like LLMs in that it doesn't aim to predict every pixel of the input. Instead, it extracts the information that is easily predictable and discards unnecessary details, which saves resources and lets the model focus on essential features. It is important to note, however, that JEPA and other joint embedding architectures are not a guaranteed path to AGI, and ongoing research continues to explore new routes to advanced machine intelligence.
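
      A minimal sketch of that objective, assuming generic `encoder` and `predictor` networks (this illustrates the idea only, not Meta's implementation; real systems add extra machinery such as a momentum target encoder or regularization to prevent collapse):

      import torch
      import torch.nn.functional as F

      def jepa_style_loss(encoder, predictor, x_full, x_corrupted):
          """Predict the representation of the full input from that of a corrupted version.

          Prediction happens in representation space rather than pixel space, so
          unpredictable details never have to be reconstructed.
          """
          with torch.no_grad():                 # target branch: no gradients
              target = encoder(x_full)
          context = encoder(x_corrupted)        # trained branch
          prediction = predictor(context)
          return F.smooth_l1_loss(prediction, target)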

    • Learning abstract representations of the world with JEPA: JEPA is a self-supervised approach that abstracts and predicts patterns in perceptual inputs, helping machines understand the world before it is combined with language.

      Joint Embedding Predictive Architectures (JEPA) are a self-supervised learning approach that lets systems learn abstract representations of the world by abstracting and predicting patterns in data, especially in perceptual inputs like images, which have far more redundancy and structure than language, which is already abstract and less redundant. JEPA can be used to learn common sense and complex actions, but before combining it with language, the priority is helping machines understand how the world works. Distillation techniques, used in methods like I-JEPA and DINO, help by training predictors to predict the representation of an uncorrupted input from that of a corrupted one. Combining JEPA with language too early, however, risks cheating by leaning on language as a crutch for vision systems. Ultimately, the goal is to let machines learn how the world works first and only then combine that understanding with language for even greater intelligence.
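
      The distillation recipe can be sketched as follows, in a simplified DINO/I-JEPA-style form where the teacher is an exponential moving average of the student and only the student branch receives gradients; module names and the momentum value are placeholder assumptions:

      import copy
      import torch
      import torch.nn.functional as F

      def make_teacher(student):
          """The teacher starts as a frozen copy of the student network."""
          return copy.deepcopy(student).requires_grad_(False)

      @torch.no_grad()
      def ema_update(student, teacher, momentum=0.996):
          """Slowly move the teacher's weights toward the student's."""
          for p_s, p_t in zip(student.parameters(), teacher.parameters()):
              p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

      def distillation_step(student, teacher, predictor, x_full, x_corrupted, optimizer):
          """One step: the student sees the corrupted input, the teacher sees the full one."""
          with torch.no_grad():
              target = teacher(x_full)                  # teacher branch is never backpropagated
          prediction = predictor(student(x_corrupted))  # only this branch gets gradients
          loss = F.mse_loss(prediction, target)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
          ema_update(student, teacher)                  # teacher trails the student
          return loss.item()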

    • Learning from Corrupted Data with DINO and I-JEPA: Researchers at FAIR created techniques, DINO and I-JEPA, for training neural networks on corrupted data, allowing systems to learn good representations and make predictions despite masking or incompleteness. Potential applications include advanced AI capabilities like self-driving cars.

      Researchers at FAIR have developed techniques, DINO and I-JEPA, for training neural networks on corrupted data by backpropagating only through the branch that processes the corrupted input. Applied to images and videos, these techniques yield internal models that predict the representation of the original data from the corrupted version. This allows the system to learn good representations even when the data is incomplete or masked. The potential applications are broad, from predicting what is happening in a video from a partial view to building a world model that understands enough about the world to plan actions. For instance, a system trained to predict how a car's state will change given the angle of its steering wheel can make abstract predictions about what will happen next. This line of work could eventually enable advanced capabilities such as self-driving cars, though that will take time. Overall, these techniques are an important step toward AI systems that learn from incomplete or corrupted data and make predictions based on what they have learned.
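
      The car example amounts to action-conditioned prediction in representation space; a toy version might look like the sketch below, where every name and dimension is a hypothetical placeholder rather than code from FAIR:

      import torch
      import torch.nn as nn

      class AbstractWorldModel(nn.Module):
          """Toy action-conditioned world model: predict the next abstract state
          representation from the current representation plus an action such as
          a steering-wheel angle."""

          def __init__(self, state_dim=256, action_dim=1, hidden_dim=256):
              super().__init__()
              self.dynamics = nn.Sequential(
                  nn.Linear(state_dim + action_dim, hidden_dim), nn.ReLU(),
                  nn.Linear(hidden_dim, state_dim),
              )

          def forward(self, state_repr, action):
              # state_repr: (batch, state_dim) produced by a learned encoder
              # action:     (batch, action_dim), e.g. the wheel angle
              return self.dynamics(torch.cat([state_repr, action], dim=-1))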

    • Hierarchical planning for complex tasks: AI systems can assist with complex task planning but lack the low-level detail (down to muscle control) needed for full hierarchical planning, and must be trained to learn multiple levels of representation.

      Our minds build an internal model of the world that lets us plan sequences of actions toward specific goals using model predictive control. For complex tasks such as traveling from New York to Paris, however, hierarchical planning is necessary to break a high-level objective into manageable sub-goals. AI systems such as LLMs can help answer questions about these tasks, but they cannot currently provide the detailed low-level information, down to muscle control, required for full hierarchical planning; they answer at whatever level of abstraction their training covered. The open challenge is training AI systems to learn the appropriate multiple levels of representation for effective hierarchical planning.
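
      A toy sketch of the two ideas in this paragraph, model predictive control plus a two-level hierarchy; the `world_model`, `cost_for`, and `subgoal_generator` callables are assumed placeholders, not components of any described system:

      import numpy as np

      def plan_with_model(world_model, cost, start_state, horizon=10, n_candidates=256, action_dim=2):
          """Crude model-predictive control: sample candidate action sequences, roll each
          through the learned world model, and keep the cheapest one."""
          best_actions, best_cost = None, np.inf
          for _ in range(n_candidates):
              actions = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
              state, total = start_state, 0.0
              for a in actions:
                  state = world_model(state, a)   # predicted next state
                  total += cost(state)            # task objective, e.g. distance to the sub-goal
              if total < best_cost:
                  best_actions, best_cost = actions, total
          return best_actions

      def hierarchical_plan(subgoal_generator, world_model, cost_for, state, task):
          """Two-level planning: decompose the task into sub-goals ("get to the airport",
          "catch the flight", ...), then plan low-level actions toward each sub-goal."""
          plan = []
          for subgoal in subgoal_generator(task, state):
              actions = plan_with_model(world_model, cost_for(subgoal), state)
              for a in actions:                   # advance the predicted state
                  state = world_model(state, a)
              plan.append((subgoal, actions))
          return plan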

    • LLMs excel at language but struggle with physical tasks: Language models can generate plans and solve high-level problems, but only in situations they've been trained on; they can't invent new solutions from scratch. They excel at understanding and generating language but struggle with low-level tasks that offer no abstraction and require physical experience.

      While language models (LLMs) can generate plans and solve high-level problems, they are limited to situations they've been trained on and cannot invent new solutions from scratch. They excel at understanding and generating language but struggle with low-level tasks that offer little abstraction and require physical experience. The success of autoregressive LLMs comes from self-supervised learning, in which they learn to reconstruct missing parts of text by predicting words from the surrounding context. This has led to impressive advances in language understanding, translation, summarization, and question answering, yet their ability to plan and reason about the physical world remains limited.
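
      The self-supervised objective described here, reconstructing missing words from their context, can be illustrated with a toy example; real LLMs operate on subword tokens and learn a neural predictor, but the data construction principle is the same:

      def make_self_supervised_pairs(tokens, mask_token="<MASK>", span=1):
          """Turn raw text into (corrupted_input, target) pairs by hiding words and
          asking the model to recover them from the surrounding context. No labels
          are needed: the text supervises itself."""
          pairs = []
          for i in range(len(tokens) - span + 1):
              corrupted = tokens[:i] + [mask_token] * span + tokens[i + span:]
              pairs.append((corrupted, tokens[i:i + span]))
          return pairs

      pairs = make_self_supervised_pairs("the cat sat on the mat".split())
      print(pairs[1])  # (['the', '<MASK>', 'sat', 'on', 'the', 'mat'], ['cat'])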

    • The focus should shift from generative AI to learning and representing the world: Large language models can mimic human conversation, but they don't truly understand the world. Human-level AI calls for models that learn and represent data effectively through self-supervised learning.

      While large language models based on autoregressive transformers can be impressively fluent and convincing, they do not understand the world the way humans do. The Turing test, which measures a machine's ability to mimic human conversation, has been criticized as an inadequate measure of intelligence. The focus should instead be on models that learn and represent the internal structure of data, for example through self-supervised learning, which has driven major progress in multilingual translation, speech recognition, and real-time translation between hundreds of languages. It has also become clear that generative models, which aim to produce new data, are not effective at learning good representations of the real world. Those interested in human-level AI should therefore set aside generative AI and focus on models that learn and represent the world effectively. This has been a long-standing obsession of LeCun, who co-founded the International Conference on Learning Representations more than a decade ago and has spent almost 40 years in the field. Self-supervised learning has had remarkable successes, but much remains to be discovered and figured out.

    • LLMs lack common sense understanding: Language models struggle to grasp human experience and common sense reasoning because they lack sensory data and have no access to private, unwritten information.

      While language models (LLMs) have shown impressive capabilities in handling text data, they fall short when it comes to understanding common sense reasoning and the underlying reality that is not explicitly expressed in language. This is due to the fact that LLMs lack the low-level common sense experience that humans have, which is essential for building a consistent world model and understanding complex scenarios. The speaker argues that a significant portion of human knowledge is accumulated through sensory experiences, particularly in the first few years of life, and is not present in text. This missing data makes it difficult for LLMs to fully understand and navigate the world as humans do. Furthermore, the speaker points out that there is a vast amount of information, such as private conversations and humor, that is not available to LLMs, making it even more challenging for them to fully comprehend human experiences. In order to truly understand and generate language correctly, LLMs would need to be able to bridge the gap between low-level and high-level common sense, which is currently beyond their capabilities.

    • Hallucinations are a fundamental limitation of autoregressive prediction: Despite fine-tuning, LLMs can still produce nonsensical answers because of the autoregressive prediction process and their limited grasp of context and meaning, making them unreliable on complex or unusual prompts.

      Large language models (LLMs) have a fundamental limitation: they can produce hallucinations or nonsensical answers because of the autoregressive prediction process. Every token the model produces has some probability of being wrong, and if those errors are assumed to be independent, the probability that an answer remains error-free shrinks exponentially as more tokens are generated. Fine-tuning can cover a large share of the questions people are likely to ask, but the set of prompts never seen during training remains enormous. The system behaves well on prompts it has been trained or fine-tuned on, yet it can easily be thrown off by a prompt that falls far outside its conditioning, and the long tail of prompts humans might produce is too large to fine-tune for. In addition, the reasoning LLMs perform is primitive, since the amount of computation devoted to producing each token is constant, which limits their ability to reason and plan in complex ways. In essence, LLMs are limited by their inability to grasp the context and meaning behind prompts, leading to a high probability of nonsensical answers on unseen or unusual inputs.
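
      The divergence argument can be made concrete with a tiny calculation; the per-token error probabilities below are illustrative assumptions, not measurements:

      # If each generated token independently goes wrong with probability e, the
      # probability that an n-token answer contains no error at all is (1 - e)**n,
      # which decays exponentially with answer length.
      for e in (0.001, 0.01):
          for n in (10, 100, 1000):
              p_ok = (1 - e) ** n
              print(f"per-token error {e:.3f}, {n:4d} tokens -> P(no error) ~ {p_ok:.3f}")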

    • From Autoregressive Prediction to Optimization-Based Reasoning: Future dialogue systems may use energy-based models to gain planning and reasoning abilities, enabling more advanced and capable conversational AI.

      Current large language models (LLMs) are limited in their capacity for complex reasoning and planning. They rely primarily on autoregressive token prediction and lack the capability for deliberate planning or reasoning over an internal world model. Future dialogue systems may instead use energy-based models, with LLMs as a foundation, to represent abstract thoughts and generate answers by optimization. Such systems would plan and reason about their answers before converting them into text, bringing the underlying computation closer to Turing-complete rather than fixed-depth. This shift from autoregressive prediction to optimization-based reasoning is crucial for building more advanced and capable dialogue systems.

    • Optimizing answers in abstract representation space: Generating answers to complex questions can be framed as optimizing abstract representations by gradient-based inference over an energy-based model, which is more efficient than generating and ranking candidates in a large discrete token space.

      Generating answers by optimization in an abstract representation space works through gradient-based inference: encode a prompt into a representation, predict an initial answer representation, iteratively modify that representation to minimize a cost function, and finally convert the optimized representation back into text. This is more efficient than generating many candidate token sequences and selecting the best one from a huge discrete space. The cost can be given by an energy-based model, a function with a scalar output that measures the compatibility of an input (x) and a proposed answer (y). Such a model is trained by showing it compatible x-y pairs and adjusting its parameters so that those pairs receive low energy. Non-contrastive methods, which ensure low energy for compatible pairs from the training set while regularizing everything else, are preferred over contrastive methods for efficiency reasons. The goal, ultimately, is to optimize abstract representations so as to produce accurate, well-reasoned answers.
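
      A minimal sketch of that procedure, assuming a trained `energy` network that scores (prompt representation, answer representation) pairs; the decoder back to text is omitted and all names are placeholders:

      import torch

      def infer_by_optimization(energy, prompt_repr, answer_dim=256, steps=100, lr=0.1):
          """Gradient-based inference: find an answer representation y that minimizes
          the scalar energy E(x, y) for a given prompt representation x."""
          y = torch.zeros(1, answer_dim, requires_grad=True)   # initial guess for the answer
          optimizer = torch.optim.SGD([y], lr=lr)
          for _ in range(steps):
              optimizer.zero_grad()
              e = energy(prompt_repr, y).sum()   # low energy = compatible (prompt, answer)
              e.backward()
              optimizer.step()
          return y.detach()                      # would then be decoded back into text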

    • Regularization and Joint Embedding Techniques for Energy-Based Learning: Use regularization functions and joint embedding architectures to keep energy low only where it should be and high elsewhere, compressing reality efficiently. Generative, probabilistic, and contrastive methods each still have their uses, and RL should be minimized but not abandoned entirely.

      To get well-behaved energy landscapes and good outputs from machine learning models, particularly on language and visual data, it is important to use regularization techniques and joint embedding architectures. Regularization functions minimize the volume of space that can take low energy, preventing collapse and ensuring that energy stays high on data unlike the training set. Joint embedding architectures such as JEPA combine energy-based models with joint embeddings to compare representations of data, yielding an efficient and effective compression of reality. Abandoning generative models, probabilistic models, and contrastive methods entirely might seem appealing, but they still have their uses: generative models help on specific tasks, probabilistic models can provide valuable insights when used correctly, and contrastive methods can be effective for learning representations. As for reinforcement learning (RL), it should not be abandoned but minimized, used only when planning does not yield the desired outcome; in those cases, RL can be used to adjust the world model or the critic. Reinforcement learning from human feedback (RLHF) is not despised either, but it is considered inefficient because of the large number of samples it requires. A more effective approach is to first train good representations of the world and world models through observation, and then use RL for fine-tuning and adjustment.
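
      One concrete way to "minimize the volume of space that can take low energy" is to regularize the embeddings directly, for example with a VICReg-style variance/covariance penalty; the sketch below uses assumed coefficients and is only one member of this family of non-contrastive regularizers:

      import torch

      def anti_collapse_regularizer(embeddings, eps=1e-4):
          """Penalize collapsed representations: keep variance in every embedding
          dimension across the batch, and decorrelate dimensions from one another.
          Simplified VICReg-style sketch; embeddings has shape (batch, dim)."""
          z = embeddings - embeddings.mean(dim=0)
          # Variance term: hinge loss keeps the per-dimension std above 1.
          std = torch.sqrt(z.var(dim=0) + eps)
          variance_loss = torch.relu(1.0 - std).mean()
          # Covariance term: push off-diagonal covariance entries toward zero.
          n, d = z.shape
          cov = (z.T @ z) / (n - 1)
          off_diag = cov - torch.diag(torch.diag(cov))
          covariance_loss = (off_diag ** 2).sum() / d
          return variance_loss + covariance_loss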

    • The role of objective functions and world models in reinforcement learning: RL optimizes behavior against an objective function using a world model, and both can be inaccurate. Human feedback helps refine them but raises concerns about bias and censorship; encouraging diverse perspectives and free expression in AI development is essential.

      In the realm of artificial intelligence, particularly in Reinforcement Learning (RL), there are two crucial elements to consider: the objective function and the world model. The objective function represents what you want the AI to optimize, while the world model is its understanding of the environment. Both can be inaccurate, leading to incorrect predictions. RL is used to adjust these elements, either by exploring areas where the world model is known to be inaccurate (curiosity or play) or by using human feedback to refine the objective function or world model. Human feedback is a transformational tool in RL, enabling systems to learn and improve. However, the use of human feedback, particularly in large language models, has raised concerns regarding bias and censorship. It is impossible to create an entirely unbiased AI system due to the subjective nature of bias. Instead, the solution lies in promoting a free and diverse AI development, allowing various perspectives and opinions to shape the technology. This approach aligns with the principles of liberal democracy and the importance of free speech.
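
      The "use human feedback to refine the objective function" step is commonly implemented by fitting a reward model to human preference comparisons, roughly as in the sketch below (a generic Bradley-Terry-style loss; `reward_model` is an assumed placeholder that maps a response to a scalar):

      import torch.nn.functional as F

      def preference_loss(reward_model, preferred, rejected):
          """Learn an objective from human comparisons: the human-preferred response
          should receive a higher scalar reward than the rejected one."""
          r_good = reward_model(preferred)
          r_bad = reward_model(rejected)
          # Maximize the probability that the preferred response wins the comparison.
          return -F.logsigmoid(r_good - r_bad).mean()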

    • Ensuring diversity in AI systems through open source: Open-source AI systems promote diversity, access, and democracy by allowing fine-tuning by many groups, yielding a wide range of AI systems that serve different languages, cultures, values, and technical needs.

      As we increasingly rely on AI systems for our interactions with the digital world, it is crucial that these systems are diverse and not controlled by a small number of companies. These AI systems will constitute the repository of all human knowledge, and we cannot afford to have that controlled by a select few. The cost and complexity of training these systems make it difficult for many organizations to do so independently. However, by making these systems open source and allowing fine-tuning by various groups, we can ensure a diverse range of AI systems that cater to different languages, cultures, values, and technical abilities. This not only promotes democracy but also ensures access to important information for people in different parts of the world. The future of AI development lies in open-source platforms, with companies building specialized systems on top of these foundations, leading to a thriving AI industry with a wide range of applications and minimal bias.

    • Companies making money through open-source AI projects: Companies can generate revenue through various business models while releasing open-source AI projects, expanding their user base and creating potential customers. Diversity is essential to ensure unbiased AI systems.

      Companies like Meta can make money through various business models even when they release their large-scale AI models as open source. This approach expands the user base and creates potential customers, and it lets outside developers build applications and improvements that the company can bring back into its own products when they prove useful. The challenge lies in ensuring that these systems are perceived as unbiased by diverse groups, which is effectively impossible given the varying political leanings and expectations of different audiences; diversity in all respects is the only viable way to address this. Startups and open-source projects may have an advantage in avoiding the pressures and demands placed on large tech companies, making it easier for them to focus on producing unbiased, factually accurate AI products.

    • Managing Large Language Models: Challenges and Ethical Dilemmas. Large language models raise ethical dilemmas, including the potential to offend, but open source promotes diversity and innovation. Guardrails are needed for safety and non-toxicity, and access to the technology does not by itself make harmful activities easier.

      Creating and managing large language models (LLMs) involves significant challenges and ethical dilemmas. Because these models express positions on political questions, moral issues, and cultural differences, they will inevitably offend someone; minimizing the number of unhappy people is nearly impossible, and open source is a better way to enable diversity and innovation. There are still limits on what such systems should be authorized to produce, and guardrails should be implemented to ensure safety and non-toxicity. At the same time, having access to an LLM does not by itself make it much easier to build weapons or cause harm, since the actual implementation is far more complex. Ultimately, technology enables humans to make decisions, and it is up to us to establish ethical guidelines and regulations.

    • Combining software and hardware advances for human-level AI: Open-source projects like LLaMA push the software forward, but hardware innovation is crucial for the power efficiency and computational resources that advanced AI models demand.

      Achieving advanced AI capabilities such as human-level intelligence, planning, and reasoning requires a combination of software and hardware advances. Open-source projects like LLaMA are making significant strides on the software side, but there is still a long way to go in hardware capability and power efficiency: training these models requires vast computational resources, and hardware innovation is crucial to approach anything like the efficiency of the human brain. The excitement lies in the fact that researchers now see a path toward potentially human-level intelligence, though the journey is far from over and challenges remain, including the need for more competition and more efficient implementations of popular architectures. The open-source nature of projects like LLaMA allows collaboration and lets progress be monitored, with new versions and improvements on the horizon. Overall, the prospect of AI that can understand the world, remember, plan, and reason is exciting, and combining software and hardware advances is essential to making it a reality.

    • The complex journey toward Artificial General Intelligence: AGI will be a gradual process requiring significant hardware innovation and advances in learning, reasoning, and planning, not a single event.

      Achieving Artificial General Intelligence (AGI) is a complex and gradual process that requires significant hardware innovation as well as advances in learning from video, associative memory, reasoning and planning, and hierarchical planning. The idea of AGI as an event, where someone discovers a secret and switches on a machine with human-level intelligence, is a misconception; it will instead be a gradual progression, with each step taking time. Intelligence is multi-dimensional and cannot be captured by a single number like IQ. The recurring optimism about imminent AGI may reflect Moravec's paradox: the world is far harder to master than it appears, yet each generation of researchers rediscovers this while remaining optimistic. The AI-doomer perspective, which fears AGI leading to catastrophe, rests on false assumptions, such as AGI arriving as a single event and immediately escaping human control. In reality, AGI will be a complex system that takes decades to develop.

    • Creating Safe and Controllable AI Systems: AI is unlikely to develop a desire to dominate humans, but building safe and controllable systems still matters because of possible unintended consequences; a progressive, iterative approach and effective guardrails are needed for reliable, safe technology.

      The development of advanced AI systems is a complex, iterative process. Even if AI eventually reaches human-level intelligence, it is unlikely to want to dominate or eliminate humans, because the drive to dominate is something evolution built into social species and is absent in non-social ones; it will not appear in AI unless it is deliberately built in. Guardrails and controls will be put in place to ensure AI systems behave properly and adhere to human values, but unintended consequences remain possible, and designing effective guardrails will require a progressive, iterative approach. The comparison with turbojet design illustrates how reliable, safe technology is engineered over time rather than solved with a silver bullet; the focus should be on building better, more controllable AI systems. The potential for AI to be used as a weapon, for example an extraordinarily persuasive system that sways people's minds, is a real concern, but the comparison to nuclear weapons is not quite right given the differences in capability and application.

    • Continuous progress of AI technology: AI assistants will act as intermediaries that filter out manipulation and propaganda, and advances will be adopted quickly, but a natural human fear of new technology may lead to calls for regulation or bans.

      The development of AI technology will be a continuous process, not a singular event. With AI assistants acting as intermediaries, potential threats such as manipulation attempts and propaganda can be filtered out before reaching individuals. The rapid spread of information and innovation in the tech industry will ensure that advances are quickly adopted and replicated. Still, there is a natural human fear of new technology and its impact on society, and that fear can manifest as calls for regulation or outright bans as people worry about the consequences for their culture, their jobs, and their future.

    • New technologies and cultural phenomena face skepticism and fear: Embrace change, distinguish real dangers from imagined ones, and weigh the benefits of technologies like big tech and AI. Open-source platforms and diverse AI systems mitigate the risks, and humans can be trusted to build AI for human benefit.

      Throughout history, new technological innovations and cultural phenomena have faced skepticism and fear, with some attributing societal problems to these changes. However, it's crucial to distinguish real dangers from imagined ones and consider the benefits of embracing change. With the rise of big tech and AI, concerns include the potential misuse of power and the risk of a few entities controlling the technology. Open source platforms and diverse AI systems are proposed solutions to mitigate this risk and preserve democracy and diversity of ideas. Despite concerns about security, trusting humans to build AI systems that benefit humanity is essential, as the future may involve collaborative AI-human relationships and the presence of robots in our physical reality.

    • Limited progress in advanced robotics due to missing world models and self-learning: The robotics industry is betting on AI for progress toward true autonomy; until then, robots will excel at specialized tasks or assist humans in simpler environments, and students can make discoveries by working on world models.

      Significant progress in robotics is still limited due to the lack of advanced world models and self-learning capabilities. While we have made strides in areas like navigation and object recognition, true autonomy and understanding of complex environments and tasks are yet to be achieved. The robotics industry is betting on AI to make sufficient progress towards this goal, but until then, we can expect to see robots excel in specialized tasks or assist humans in simpler, more structured environments. For students interested in robotics research, focusing on developing world models through observation and planning with learned models could lead to innovative discoveries.

    • Machines cannot yet learn hierarchical representations of action plans on their own: This remains a challenge for advanced AI, but AI still holds great promise for amplifying human intelligence and making daily life more efficient.

      While we have made strides in hierarchical planning with AI, particularly in two-level planning, machines still cannot learn hierarchical representations of action plans on their own. This is a significant obstacle to advanced AI capabilities, such as a robot traveling from New York to Paris autonomously. Even so, the future of humanity holds great promise with AI, which has the potential to amplify human intelligence and make us all "bosses" of smart virtual assistants, increasing efficiency and productivity in our daily lives, both personally and professionally. The printing press is often used as an analogy for AI's potential impact: it made knowledge more accessible and transformed the world while also bringing challenges, and its overall impact was positive; the same can be expected of AI.

    • Impact of historical events on modern technology and employment: Historical episodes, like the Ottoman Empire's ban on Arabic printing presses, offer perspective on AI's impact on employment and industries. New professions will emerge, but it is important to understand historical context and embrace technology responsibly.

      Historical events, like the Ottoman Empire's ban on printing presses for the Arabic language, can provide valuable insight into the potential impact of modern technologies such as AI on employment and industries. That ban was driven less by a desire to preserve religious or political control than by the wish to protect the livelihoods of calligraphers, whose art held significant power and influence in the empire. Similarly, as AI and automation transform the job market, there are concerns about mass unemployment and calls for regulation; experts suggest instead that new professions will emerge, even if we cannot predict which ones. The open-source movement, by making AI research and models accessible to everyone, can empower individuals and foster innovation, on the assumption that people are fundamentally good and capable of using technology responsibly. The conversation ultimately highlights the importance of understanding historical context and embracing technology's potential while acknowledging the challenges it brings, and it is a reminder that the only way to discover the limits of the possible is to go beyond them into the impossible.

    Recent Episodes from Lex Fridman Podcast

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships
    Andrew Huberman is a neuroscientist at Stanford and host of the Huberman Lab Podcast. Please support this podcast by checking out our sponsors: - Eight Sleep: https://eightsleep.com/lex to get $350 off - LMNT: https://drinkLMNT.com/lex to get free sample pack - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/andrew-huberman-5-transcript EPISODE LINKS: Andrew's YouTube: https://youtube.com/AndrewHubermanLab Andrew's Instagram: https://instagram.com/hubermanlab Andrew's Website: https://hubermanlab.com Andrew's X: https://x.com/hubermanlab Andrew's book on Amazon: https://amzn.to/3RNSIQN Andrew's book: https://hubermanlab.com/protocols-book PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:24) - Quitting and evolving (17:22) - How to focus and think deeply (19:56) - Cannabis drama (30:08) - Jungian shadow (40:35) - Supplements (43:38) - Nicotine (48:01) - Caffeine (49:48) - Math gaffe (1:06:50) - 2024 presidential elections (1:13:47) - Great white sharks (1:22:32) - Ayahuasca & psychedelics (1:37:33) - Relationships (1:45:08) - Productivity (1:53:58) - Friendship
    Lex Fridman Podcast
    enJune 28, 2024

    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet
    Arvind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/aravind-srinivas-transcript EPISODE LINKS: Aravind's X: https://x.com/AravSrinivas Perplexity: https://perplexity.ai/ Perplexity's X: https://x.com/perplexity_ai PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:52) - How Perplexity works (18:48) - How Google works (41:16) - Larry Page and Sergey Brin (55:50) - Jeff Bezos (59:18) - Elon Musk (1:01:36) - Jensen Huang (1:04:53) - Mark Zuckerberg (1:06:21) - Yann LeCun (1:13:07) - Breakthroughs in AI (1:29:05) - Curiosity (1:35:22) - $1 trillion dollar question (1:50:13) - Perplexity origin story (2:05:25) - RAG (2:27:43) - 1 million H100 GPUs (2:30:15) - Advice for startups (2:42:52) - Future of search (3:00:29) - Future of AI
    Lex Fridman Podcast
    enJune 19, 2024

    #433 – Sara Walker: Physics of Life, Time, Complexity, and Aliens

    #433 – Sara Walker: Physics of Life, Time, Complexity, and Aliens
    Sara Walker is an astrobiologist and theoretical physicist. She is the author of a new book titled "Life as No One Knows It: The Physics of Life's Emergence". Please support this podcast by checking out our sponsors: - Notion: https://notion.com/lex - Motific: https://motific.ai - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/sara-walker-3-transcript EPISODE LINKS: Sara's Book - Life as No One Knows It: https://amzn.to/3wVmOe1 Sara's X: https://x.com/Sara_Imari Sara's Instagram: https://instagram.com/alien_matter PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:40) - Definition of life (31:18) - Time and space (42:00) - Technosphere (46:25) - Theory of everything (55:06) - Origin of life (1:16:44) - Assembly theory (1:32:58) - Aliens (1:44:48) - Great Perceptual Filter (1:48:45) - Fashion (1:52:47) - Beauty (1:59:08) - Language (2:05:50) - Computation (2:15:37) - Consciousness (2:24:28) - Artificial life (2:48:21) - Free will (2:55:05) - Why anything exists
    Lex Fridman Podcast
    enJune 13, 2024

    #432 – Kevin Spacey: Power, Controversy, Betrayal, Truth & Love in Film and Life

    #432 – Kevin Spacey: Power, Controversy, Betrayal, Truth & Love in Film and Life
    Kevin Spacey is a two-time Oscar-winning actor, who starred in Se7en, the Usual Suspects, American Beauty, and House of Cards, creating haunting performances of characters who often embody the dark side of human nature. Please support this podcast by checking out our sponsors: - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - Eight Sleep: https://eightsleep.com/lex to get $350 off - BetterHelp: https://betterhelp.com/lex to get 10% off - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Kevin's X: https://x.com/KevinSpacey Kevin's Instagram: https://www.instagram.com/kevinspacey Kevin's YouTube: https://youtube.com/kevinspacey Kevin's Website: https://kevinspacey.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:14) - Seven (13:54) - David Fincher (21:46) - Brad Pitt and Morgan Freeman (27:15) - Acting (35:40) - Improve (44:24) - Al Pacino (48:07) - Jack Lemmon (57:25) - American Beauty (1:17:34) - Mortality (1:20:22) - Allegations (1:38:19) - House of Cards (1:56:55) - Jack Nicholson (1:59:57) - Mike Nichols (2:05:30) - Christopher Walken (2:12:38) - Father (2:21:30) - Future
    Lex Fridman Podcast
    enJune 05, 2024

    #431 – Roman Yampolskiy: Dangers of Superintelligent AI

    #431 – Roman Yampolskiy: Dangers of Superintelligent AI
    Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
    Lex Fridman Podcast
    enJune 02, 2024

    #430 – Charan Ranganath: Human Memory, Imagination, Deja Vu, and False Memories

    #430 – Charan Ranganath: Human Memory, Imagination, Deja Vu, and False Memories
    Charan Ranganath is a psychologist and neuroscientist at UC Davis, specializing in human memory. He is the author of a new book titled Why We Remember. Please support this podcast by checking out our sponsors: - Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off - ZipRecruiter: https://ziprecruiter.com/lex - Notion: https://notion.com/lex - MasterClass: https://masterclass.com/lexpod to get 15% off - Shopify: https://shopify.com/lex to get $1 per month trial - LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/charan-ranganath-transcript EPISODE LINKS: Charan's X: https://x.com/CharanRanganath Charan's Instagram: https://instagram.com/thememorydoc Charan's Website: https://charanranganath.com Why We Remember (book): https://amzn.to/3WzUF6x Charan's Google Scholar: https://scholar.google.com/citations?user=ptWkt1wAAAAJ Dynamic Memory Lab: https://dml.ucdavis.edu/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:18) - Experiencing self vs remembering self (23:59) - Creating memories (33:31) - Why we forget (41:08) - Training memory (51:37) - Memory hacks (1:03:26) - Imagination vs memory (1:12:44) - Memory competitions (1:22:33) - Science of memory (1:37:48) - Discoveries (1:48:52) - Deja vu (1:54:09) - False memories (2:14:14) - False confessions (2:18:00) - Heartbreak (2:25:34) - Nature of time (2:33:15) - Brain–computer interface (BCI) (2:47:19) - AI and memory (2:57:33) - ADHD (3:04:30) - Music (3:14:15) - Human mind
    Lex Fridman Podcast
    enMay 25, 2024

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God
    Paul Rosolie is a naturalist, explorer, author, and founder of Junglekeepers, dedicating his life to protecting the Amazon rainforest. Support his efforts at https://junglekeepers.org Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - Yahoo Finance: https://yahoofinance.com - BetterHelp: https://betterhelp.com/lex to get 10% off - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/paul-rosolie-2-transcript EPISODE LINKS: Paul's Instagram: https://instagram.com/paulrosolie Junglekeepers: https://junglekeepers.org Paul's Website: https://paulrosolie.com Mother of God (book): https://amzn.to/3ww2ob1 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (12:29) - Amazon jungle (14:47) - Bushmaster snakes (26:13) - Black caiman (44:33) - Rhinos (47:47) - Anacondas (1:18:04) - Mammals (1:30:10) - Piranhas (1:41:00) - Aliens (1:58:45) - Elephants (2:10:02) - Origin of life (2:23:21) - Explorers (2:36:38) - Ayahuasca (2:45:03) - Deep jungle expedition (2:59:09) - Jane Goodall (3:01:41) - Theodore Roosevelt (3:12:36) - Alone show (3:22:23) - Protecting the rainforest (3:38:36) - Snake makes appearance (3:46:47) - Uncontacted tribes (4:00:11) - Mortality (4:01:39) - Steve Irwin (4:09:18) - God
    Lex Fridman Podcast
    enMay 15, 2024

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens
    Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/sean-carroll-3-transcript EPISODE LINKS: Sean's Website: https://preposterousuniverse.com Mindscape Podcast: https://www.preposterousuniverse.com/podcast/ Sean's YouTube: https://youtube.com/@seancarroll Sean's Patreon: https://www.patreon.com/seanmcarroll Sean's Twitter: https://twitter.com/seanmcarroll Sean's Instagram: https://instagram.com/seanmcarroll Sean's Papers: https://scholar.google.com/citations?user=Lfifrv8AAAAJ Sean's Books: https://amzn.to/3W7yT9N PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (11:03) - General relativity (23:22) - Black holes (28:11) - Hawking radiation (32:19) - Aliens (41:15) - Holographic principle (1:05:38) - Dark energy (1:11:38) - Dark matter (1:20:34) - Quantum mechanics (1:41:56) - Simulation (1:44:18) - AGI (1:58:42) - Complexity (2:11:25) - Consciousness (2:20:32) - Naturalism (2:24:49) - Limits of science (2:29:34) - Mindscape podcast (2:39:29) - Einstein

    #427 – Neil Adams: Judo, Olympics, Winning, Losing, and the Champion Mindset

    #427 – Neil Adams: Judo, Olympics, Winning, Losing, and the Champion Mindset
    Neil Adams is a judo world champion, 2-time Olympic silver medalist, 5-time European champion, and often referred to as the Voice of Judo. Please support this podcast by checking out our sponsors: - ZipRecruiter: https://ziprecruiter.com/lex - Eight Sleep: https://eightsleep.com/lex to get special savings - MasterClass: https://masterclass.com/lexpod to get 15% off - LMNT: https://drinkLMNT.com/lex to get free sample pack - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/neil-adams-transcript EPISODE LINKS: Neil's Instagram: https://instagram.com/naefighting Neil's YouTube: https://youtube.com/NAEffectiveFighting Neil's TikTok: https://tiktok.com/@neiladamsmbe Neil's Facebook: https://facebook.com/NeilAdamsJudo Neil's X: https://x.com/NeilAdamsJudo Neil's Website: https://naeffectivefighting.com Neil's Podcast: https://naeffectivefighting.com/podcasts/the-dojo-collective-podcast A Life in Judo (book): https://amzn.to/4d3DtfB A Game of Throws (audiobook): https://amzn.to/4aA2WeJ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:13) - 1980 Olympics (26:35) - Judo explained (34:40) - Winning (52:54) - 1984 Olympics (1:01:55) - Lessons from losing (1:17:37) - Teddy Riner (1:37:12) - Training in Japan (1:52:51) - Jiu jitsu (2:03:59) - Training (2:27:18) - Advice for beginners

    #426 – Edward Gibson: Human Language, Psycholinguistics, Syntax, Grammar & LLMs

    #426 – Edward Gibson: Human Language, Psycholinguistics, Syntax, Grammar & LLMs
    Edward Gibson is a psycholinguistics professor at MIT and heads the MIT Language Lab. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - Listening: https://listening.com/lex and use code LEX to get one month free - Policygenius: https://policygenius.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - Eight Sleep: https://eightsleep.com/lex to get special savings Transcript: https://lexfridman.com/edward-gibson-transcript EPISODE LINKS: Edward's X: https://x.com/LanguageMIT TedLab: https://tedlab.mit.edu/ Edward's Google Scholar: https://scholar.google.com/citations?user=4FsWE64AAAAJ TedLab's YouTube: https://youtube.com/@Tedlab-MIT PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:53) - Human language (14:59) - Generalizations in language (20:46) - Dependency grammar (30:45) - Morphology (39:20) - Evolution of languages (42:40) - Noam Chomsky (1:26:46) - Thinking and language (1:40:16) - LLMs (1:53:14) - Center embedding (2:19:42) - Learning a new language (2:23:34) - Nature vs nurture (2:30:10) - Culture and language (2:44:38) - Universal language (2:49:01) - Language translation (2:52:16) - Animal communication

    Related Episodes

    #151 Asa Cooper: How Will We Know If AI Is Fooling Us?

This episode is sponsored by Celonis, the global leader in process mining. AI has landed and enterprises are adapting to give customers slick experiences and give teams the technology to deliver them. The road is long, but you're closer than you think. Your business processes run through systems, creating data at every step. Celonis reconstructs this data to generate Process Intelligence, a common business language, so AI knows how your business flows across every department, every system, and every process. With AI solutions powered by Celonis, enterprises get faster, more accurate insights, a new level of automation potential, and a step change in productivity, performance, and customer satisfaction. Process Intelligence is the missing piece in the AI-enabled tech stack.

Go to https://celonis.com/eyeonai to find out more.

    Welcome to episode 151 of the ‘Eye on AI’ podcast. In this episode, host Craig Smith sits down with Asa Cooper, a postdoctoral researcher at NYU, who is at the forefront of language model safety. 

    This episode takes us on a journey through the complexities of AI situational awareness, the potential for consciousness in language models, and the future of AI safety research.

    Craig and Asa delve into the nuances of AI situational awareness and its distinction from sentience. Asa, with his rich background in NLP and AI safety from Edinburgh University, shares insights from his post-doc work at NYU, discussing collaborative efforts on a paper that has garnered attention for its take on situational awareness in large language models (LLMs). 

    We explore the economic drivers behind creating AI with such capabilities and the role of scaling versus algorithmic innovation in achieving this milestone. We also delve into the concept of agency in LLMs, the challenges of post-deployment monitoring, and the effectiveness of current measures in detecting situational awareness.

To wrap things up, we break down the importance of source trustworthiness and the model's ability to discern reliable information, a critical aspect of AI safety and functionality, so make sure to watch till the end.


    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Preview and Introduction

    (02:30) Asa's NLP Expertise and the Safety of Language Models

    (06:05) Breaking Down AI's Situational Awareness 

    (13:44) Evolution of AI: Predictive Models to AI Coworkers

    (20:29) New Frontier in AI Development?

    (27:14) Measuring AI's Awareness

    (33:49) Innovative Experiments with LLMs

    (40:51) The Consequences of Detecting Situational Awareness in AI

    (44:07) How To Train AI On Trusted Sources

    (49:52) What Is The Future of AI Training?

(56:35) AI Safety: Public Concerns and the Path Forward

    Hinton and Bengio on Managing AI Risks
A group of scientists, academics, and researchers has released a new framework for managing AI risks. NLW explores whether we're moving toward more specific policy proposals.

Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown

ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/

    Can Open Source AI Compete with Big Tech? A Look at TruthGPT, RedPajama and MiniGPT
Some argue that open source AI is the key to bringing AI's benefits to the entire world, as well as to making AI safer. Others think that open source can multiply risks. With a slate of new projects being announced this week, the conversation is heating up.

Discussed in this episode:
- Elon Musk's planned TruthGPT
- Dolly 2.0, an open source LLM from Databricks, based on the EleutherAI Pythia model
- RedPajama, an open source proxy of Facebook/Meta's LLaMA, from Together
- MiniGPT, an open source AI model that can extract information from images
- Stable Diffusion XL, the latest open source text-to-image model from Stability AI

    ChatGPT Adds Private Mode as HuggingFace Launches Open Source Competitor
Yesterday ChatGPT announced a number of important updates, including a privacy-preserving mode that doesn't record chat history (and doesn't train OpenAI models on chat history) as well as a forthcoming business offering. Many think the moves are at least in part a response to European scrutiny around GDPR compliance. Also yesterday, AI platform HuggingFace announced HuggingChat, an open source ChatGPT competitor.

    #132 - FraudGPT, Apple GPT, unlimited jailbreaks, RT-2, Frontier Model Forum, PhotoGuard

    Our 132nd episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai

    Timestamps + links: