
    #367 – Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI

    March 25, 2023

    Podcast Summary

    • OpenAI's Leadership in the Future of AI: AI has the power to transform society but also the potential to destroy it. Leaders and philosophers must have conversations about how to align AI with human values, and reinforcement learning with human feedback (RLHF) is a breakthrough technique that could help achieve this goal.

      OpenAI, the company behind groundbreaking AI technologies like GPT-4, DALL·E, and Codex, is leading the way in shaping the future of artificial intelligence. While AI has the potential to transform society and improve quality of life, it also has the power to destroy human civilization. Leaders and philosophers must engage in conversations about power, institutions, political systems, and economics that incentivize the safety and alignment of this technology. One breakthrough that has the potential to change the AI landscape is reinforcement learning with human feedback (RLHF), used in ChatGPT. RLHF aligns the model to better serve human needs and wants. Ultimately, it's crucial to balance the exciting and terrifying possibilities of AI to ensure a future that is aligned with human values.
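
      As a rough illustration of the RLHF idea mentioned above, the toy Python sketch below pairs a stand-in "policy" (the model being aligned) with a stand-in reward model representing learned human preferences. Every name here is illustrative; real RLHF fine-tunes a large neural network, typically with algorithms like PPO.

```python
import random

class ToyPolicy:
    """Stand-in for a language model: samples canned responses by weight."""
    def __init__(self, responses):
        self.weights = {r: 1.0 for r in responses}

    def generate(self):
        responses = list(self.weights)
        return random.choices(responses,
                              weights=[self.weights[r] for r in responses])[0]

    def update(self, response, reward):
        # Reinforce (or suppress) a response in proportion to its reward.
        self.weights[response] *= 1.0 + 0.1 * reward

def reward_model(response):
    """Stand-in for a reward model trained on human preference rankings."""
    return 1.0 if "helpful" in response else -1.0

policy = ToyPolicy(["a helpful, honest answer", "an evasive answer"])
for _ in range(100):
    resp = policy.generate()
    policy.update(resp, reward_model(resp))

print(policy.weights)  # weight shifts toward the human-preferred response
```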

    • The Significance of Human Guidance in AI Models: Incorporating human guidance into AI models enhances their usefulness and accuracy, making them more trustworthy and aligned with human thinking. Creating a diverse pre-training data set is crucial in developing effective AI models like GPT-4.

      Sam Altman and Lex Fridman discuss the significance of adding human guidance to AI models, which greatly enhances their usefulness and ease of use, leading to more accurate results. The science of human guidance is essential in creating AI models, as it helps make them trustworthy, ethical, and more aligned with human thinking. Creating a large pre-training data set is crucial, and it involves pulling together diverse data from multiple sources, including open source databases, news sources, and the general web. The development of AI models such as GPT-4 involves several components, including choosing the neural network's size, selecting the data set, and incorporating human feedback.
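
      The "diverse pre-training data set" point can be pictured as weighted sampling over sources. The mix below is purely an assumption for illustration; the actual composition of GPT-4's training data is not public.

```python
import random

# Hypothetical source mix; real sources and weights are not disclosed.
DATA_MIX = {
    "web_crawl": 0.5,
    "open_source_code": 0.2,
    "books_and_reference": 0.2,
    "news": 0.1,
}

def sample_source():
    """Draw the source of the next training document per the mix weights."""
    sources, weights = zip(*DATA_MIX.items())
    return random.choices(sources, weights=weights)[0]

# Each batch mixes documents across sources so no single one dominates.
print([sample_source() for _ in range(8)])
```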

    • Understanding the Capabilities and Limitations of AI Models: AI models are valuable tools, but evaluating their performance and understanding their reasoning capabilities is crucial. While the GPT-4 model is remarkable, it's important to recognize AI's limitations and avoid anthropomorphizing them.

      OpenAI's Sam Altman explains that while they are still trying to understand why AI models make certain decisions, they are gaining a better understanding of how useful and valuable the models are to people. Evaluating a model's performance and understanding its reasoning capabilities are crucial to determining its overall impact on society. The GPT-4 model is particularly remarkable because it can reason to some degree and generate wisdom from human knowledge. However, there are still limitations to its capabilities, and there is a continuous effort to improve its functionality. While the temptation to anthropomorphize AI is high, it is important to recognize that AI models are not human and may struggle with certain tasks.

    • The Potential and Limitations of AI Models: AI models can perform complex tasks but are often weak in areas like counting characters and words. Publicly building AI models can help identify strengths and weaknesses, but potential biases underline the need for granular user control. AI models can bring nuance to complex issues, which is vital for effective problem-solving.

      Artificial intelligence (AI) models are surprisingly poor at tasks such as accurately counting characters and words. While they can perform tasks we could hardly have imagined, they still have significant weaknesses that need fixing. Building AI models in public invites outside feedback that helps uncover more strengths and weaknesses. However, releasing imperfect models that are biased in some instances carries risk, and these biases underline the importance of giving users more granular control. AI models can also bring nuance back to public discourse, which is crucial for dealing with complex problems that require critical thinking.
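
      One concrete reason for the character-counting weakness is that models read tokens, not letters. A small demonstration using the tiktoken library (assuming it is installed; cl100k_base is an encoding commonly paired with recent OpenAI models):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"
tokens = enc.encode(word)

# The model sees a handful of opaque token IDs, not ten separate letters,
# which is why letter-counting questions are surprisingly hard for it.
print(len(word))                           # 10 characters
print(tokens)                              # fewer token IDs than characters
print([enc.decode([t]) for t in tokens])   # multi-letter chunks, not letters
```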

    • OpenAI's GPT-4 Safety Measures and Steerable System: OpenAI's GPT-4 underwent rigorous safety evaluations and has a steerable system that requires good prompt writing for optimal results.

      OpenAI's GPT-4 went through extensive safety testing before release. The team conducted internal and external safety evaluations while building new ways to align the model. Although the alignment process was not perfect, the team focused on increasing their degree of alignment faster than their rate of capability progress. OpenAI used reinforcement learning from human feedback (RLHF), in which human raters rank model outputs, to align the system with human values. The team also made GPT-4 more steerable with the system message, which lets users give prompts directing a specific type of output. Writing a great prompt is crucial for great results.
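
      The system message described above is exposed directly in OpenAI's chat API. A minimal sketch, assuming the official Python client is installed and an API key is configured (the model name may differ from what is currently offered):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; substitute whatever is available
    messages=[
        # The system message steers tone, role, and constraints...
        {"role": "system",
         "content": "You are a concise assistant. Answer in one sentence."},
        # ...while the user message carries the actual request.
        {"role": "user",
         "content": "What does a system message do?"},
    ],
)
print(response.choices[0].message.content)
```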

    • The Power and Responsibility of GPT-4 in Programming and AI Safety: As AI technology advances with GPT-4, it brings new opportunities for debugging and creativity, but also requires responsibility in ensuring alignment with human values and balancing AI power.

      The advancements in AI, particularly with GPT-4, are changing the nature of programming and allowing a new form of debugging through back-and-forth dialogue with the AI system. However, with this power comes the responsibility of AI safety and ensuring that the AI aligns with human preferences and values. This is a difficult problem, as it requires deciding who sets the limits and balancing the power of the AI against lines that we all agree must be drawn somewhere. Despite the challenges, the leverage that AI gives to do creative work better and better is super cool and has already changed programming remarkably.
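
      The "debugging through back-and-forth dialogue" workflow can be sketched as a loop that runs a script and, on failure, feeds the code plus its traceback back to a model for a proposed fix. `ask_model` below is a hypothetical placeholder for whatever chat interface is used; everything else is standard Python.

```python
import subprocess
import sys

def ask_model(prompt: str) -> str:
    """Hypothetical hook: send `prompt` to a chat model, return its reply."""
    raise NotImplementedError("wire this up to a model API of your choice")

def debug_loop(source_path: str, max_rounds: int = 3) -> str:
    """Run a script; on failure, ask the model for a fix and retry."""
    for _ in range(max_rounds):
        result = subprocess.run([sys.executable, source_path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout  # the dialogue converged on working code
        with open(source_path) as f:
            code = f.read()
        # Hand the model both the code and the evidence of how it failed.
        fixed = ask_model(f"This script fails.\n\nCode:\n{code}\n\n"
                          f"Traceback:\n{result.stderr}\n\n"
                          "Reply with a corrected version of the script.")
        with open(source_path, "w") as f:
            f.write(fixed)
    raise RuntimeError("no working fix within the round limit")
```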

    • Setting Boundaries for Ethical AI: It is essential to involve diverse perspectives in the development of AI and work towards reducing biases. Transparency and accountability are crucial for creating ethical AI that benefits society.

      In order for AI to function ethically and effectively, it is necessary to agree on what we want it to learn and what boundaries should be set for its output. The ideal scenario would involve a democratic process where people from all over the world come together to deliberate and agree on the rules for the system. While this may not be entirely practical, it is important for AI builders to involve a range of perspectives in the development process and be accountable for the results. It is also important to acknowledge that AI models can have biases and to work towards improving their ability to present nuanced perspectives. Despite the pressure of clickbait journalism, transparency remains crucial in the development of ethical AI.

    • OpenAI's Commitment to Continuous Improvement and User Respect: OpenAI values user feedback and respect, refusing to answer certain questions while constantly improving their GPT models. Their focus on small wins for improvement shows that success isn't just about size, but also technical leaps and performance.

      OpenAI is constantly improving their system by listening to feedback and continuously developing their models. They have systems in place to refuse to answer certain questions and are constantly improving the tooling for their GPT models. OpenAI also recognizes the importance of treating their users like adults and not scolding them. The leap from GPT-3 to GPT-4 involves a lot of technical leaps and improvements. While size does matter in neural networks, it is just one of many factors that contribute to a system's performance. OpenAI focuses on finding small wins that, when combined, can have a big impact.

    • The Potential of GPT Models for Achieving General Intelligence: GPT models are capable of impressive performance, but building artificial general intelligence is still an unknown. There is potential for breakthroughs with GPT data, but new ideas may be necessary to expand on this paradigm.

      In this discussion between Sam Altman and Lex Fridman, they explore the topic of GPT (Generative Pre-trained Transformer) models and their potential for achieving general intelligence. While the size of the model in terms of number of parameters has been a focus for some, Sam Altman argues that what matters most is performance, and the best solution may not always be the most elegant. While GPT models have achieved incredible results, there is still much unknown about building artificial general intelligence, and it is possible that new ideas and expanding on the GPT paradigm will be necessary. However, there is also potential for deep scientific breakthroughs with just the data that GPT models are trained on.

    • AI as an Extension of Human Will and an Amplifier of Our Abilities: AI is a tool that can bring extraordinary benefits, but it must be aligned with human values and goals to avoid harm and limitations on human potential.

      AI is not a standalone system, but a tool used by humans in a feedback loop, and can be an extension of human will and an amplifier of our abilities. The benefits of AI can be extraordinary, including curing diseases, increasing material wealth, and making people happier and more fulfilled. Despite concerns about job displacement and the rise of super intelligent AI that may harm humans, people will still strive for status, drama, creativity, and usefulness even in a vastly improved world. To realize this optimistic vision, AI must be aligned with human values and goals to avoid harm or limitations on human potential.

    • The Importance of Addressing the AI Alignment Problem: Sam Altman, CEO of OpenAI, stresses the importance of addressing the AI alignment problem to prevent the potential dangers of AI. Effort must be put into solving the problem and learning from the technology's trajectory to ensure AI's safety.

      Sam Altman, the CEO of OpenAI, believes that there is a chance for AI to become a dangerous technology if precautions are not taken. He acknowledges that predicting what AI is capable of can be challenging and that many of the previous predictions have been proven wrong. Altman believes that it is crucial to ensure that the AI alignment problem is addressed, and more effort must be put into solving it. While there is a need for theory, he also emphasizes the importance of learning from how the technology trajectory goes. Finally, he mentions that there is still a lot of work to be done to ensure AI's safety, and now is the perfect time to ramp up technical alignment work.

    • Debating the Consciousness and Capabilities of Advanced AI Models: Developing conscious AI is a complex process that requires consideration of its interface, prompts, and training data. To truly test for consciousness, mentions of it should be excluded from the training data.

      In a conversation about artificial general intelligence (AGI), two experts debated the potential consciousness and capabilities of advanced AI models like GPT-4. While they agreed that GPT-4 is not yet an AGI, they also discussed how the interface and prompts provided to the AI could impact its level of consciousness and understanding of self. They also delved into the importance of careful training data and avoiding any mentions of consciousness in order to truly test if an AI model is capable of consciousness. Ultimately, understanding the potential for advanced AI requires careful consideration of both its capabilities and limitations.

    • Risks of Consciousness in AI: Insights from Sam Altman and Lex Fridman. As AI capabilities grow, there is concern over disinformation, economic shocks, and safety controls. Regulatory approaches and powerful AI can help prevent these issues. Prioritize safety and mission over market pressures.

      In a conversation between Sam Altman and Lex Fridman, they discuss the concept of consciousness in AI and the potential risks that come with the increasing capabilities of AI. Altman believes that consciousness is a strange phenomenon, and while an AI may be able to pass a Turing test for consciousness, there are still many other tests that could be looked at. Altman also voices his concerns about disinformation problems, economic shocks, and the danger of capable LLMs (large language models) with no safety controls. He suggests trying regulatory approaches and using more powerful AI to detect and prevent these issues. Overall, it is important to prioritize safety and stick to your mission in the face of market-driven pressure.

    • OpenAI's Unique Structure and Approach to AGI Development: OpenAI's for-profit and nonprofit hybrid structure allows for non-standard decisions and collaborations while minimizing the negative impacts of AGI. Collaboration and ethical considerations are at the forefront of their approach.

      OpenAI began as a nonprofit organization, but they realized they needed more capital to build AGI, which they couldn't raise as a nonprofit. They became a capped for-profit organization while still keeping the nonprofit fully in charge. OpenAI's unique structure allows them to make non-standard decisions and merge with another organization while protecting them from making decisions that are not in shareholders' interests. OpenAI doesn't worry about out-competing everyone, as many organizations will contribute to AGI with differences in how they're built and what they focus on. While some companies are, unfortunately, after unlimited value, OpenAI believes that the better angels within these companies will win out, leading to a healthy conversation about how to collaborate to minimize the scary downsides of AGI.

    • The Challenges of Democratizing AI, According to OpenAI's CEO: Sam Altman recognizes the importance of democratizing AI to reflect changing needs but acknowledges the challenges. OpenAI's transparency and willingness to share information about safety concerns is a step in the right direction. Feedback and collaboration with smart people are essential to figuring out how to do better in uncharted waters.

      Sam Altman, the CEO of OpenAI, recognizes the potential dangers of a handful of people having control over powerful AI technology. He believes that decisions about the technology should become increasingly democratic over time to reflect the world's changing needs. However, Altman acknowledges that democratizing AI is challenging and that institutions must come up with new norms to regulate it. Despite the concerns, Altman believes that OpenAI's transparency and the company's willingness to "fail publicly" and share information about safety concerns is a step in the right direction. He welcomes feedback and acknowledges that they are in "uncharted waters" with AI, explaining why it is essential to talk to smart people to figure out what to do better.

    • Collaboration and Disagreement between OpenAI and Elon Musk: OpenAI CEO Sam Altman recognizes the potential danger of advanced AI but values Musk's impact on the world through electric vehicles and space travel. Altman plans to combat AI bias by speaking with diverse people worldwide.

      Sam Altman, the CEO of OpenAI, recently discussed his work with Elon Musk, a co-founder of the company. The two agree on the magnitude of the downside of advanced AI but disagree about certain aspects of its development. Altman has empathy for Musk, who he believes is understandably stressed about AI safety. Despite this, he admires Musk's contributions to the world, including driving forward the development of electric vehicles and space travel. Altman also acknowledges the challenge of bias in AI, which can be affected by the perspectives of employees in a company. To combat this, he plans to go on a world user tour to speak with people of different backgrounds and perspectives.

    • Importance of Selecting Representative Human Feedback Raters for AI Models: It is crucial to carefully choose representative human feedback raters for AI models, to optimize rating tasks and empathize with diverse experiences. While technology can help reduce bias, outside pressure can impact models.

      Sam Altman, CEO of OpenAI, discusses the importance of selecting representative human feedback raters for their AI models. He acknowledges that selecting these individuals is challenging and that the company is still trying to figure out how to do it effectively. Altman emphasizes the need to optimize for how well the rating tasks are done and to empathize with the experiences of different groups of people. He also discusses the potential for technology to make AI models less biased than humans, but acknowledges the pressure from outside sources, such as politicians and organizations, that could influence the models. Altman expresses his ability to withstand such pressure and his humility in recognizing his own weaknesses.

    • The Impact of AGI and GPT Models on Programming and Society: As development of AGI and GPT language models progresses, it's important to prioritize user-centricity and be aware of potential supply issues in the programming industry. Understanding the impact on society is crucial.

      Sam Altman and Lex Fridman discuss the topic of change, specifically related to the development of AGI and GPT language models. Altman notes the danger of charisma in those in power and the importance of being a user-centric company. Fridman shares his nervousness and excitement about change and the implementation of GPT models in programming. Altman believes that with 10 times more code generated by GPT models, there will be a greater need for programmers, leading to a supply issue. Ultimately, Altman plans to travel and empathize with different users to better understand the impact AGI will have on people.

    • The Future of Jobs and Technology: AI and technology might reduce job opportunities, but they also create new ones while enhancing others. Maintaining dignity at work, creating fulfilling jobs, and supporting UBI are critical during the transition to a tech-driven economy.

      Sam Altman, entrepreneur and investor, believes that as AI and technology continue to advance, certain job categories like customer service may see a significant reduction in employment opportunities. However, he also notes that while technological revolutions have historically eliminated jobs, they have also enhanced existing jobs, created new ones, and made higher-paying jobs more enjoyable. Altman emphasizes the importance of maintaining the dignity of work and creating better jobs that offer fulfillment and happiness to individuals. He also supports Universal Basic Income (UBI) as a cushion during the transition to a tech-driven economy and a means of eliminating poverty. Altman believes that economic and political systems will change as AI and technology become more prevalent, with economic transformation driving most of the political transformation.

    • Sam Altman's Perspective on Supporting the Less Fortunate and the Power of Distributed Processes: Sam Altman believes in lifting the floor instead of focusing on the ceiling, values individualism, and trusts in the collective intelligence and creativity of the world. He emphasizes the importance of humility and uncertainty in the development of superintelligent AGI.

      Sam Altman, entrepreneur and investor, believes in supporting the less fortunate and lifting the floor instead of focusing on the ceiling. He recoils at the idea of living in a communist system and values individualism, human will, and self-determination. He also believes in distributed processes that would always beat centralized planning. When it comes to super intelligent AGI, he thinks it may or may not be better than multiple intelligent AGIs in a liberal democratic system, but he emphasizes the importance of engineered humility and uncertainty. Altman and his team worry about terrible use cases for their models and perform red teaming to avoid them, but trust in the collective intelligence and creativity of the world. From what he's seen, Altman thinks that humans are mostly good.

    • Navigating Truth in an Age of Misinformation: With a plethora of information and disinformation online, it can be challenging to decipher what is true and what is not. While some facts have a higher degree of truth, others rely on a convincing narrative. It is essential to evaluate evidence and consider multiple perspectives when trying to determine the truth.

      In a conversation with Sam Altman and Lex Fridman, they discuss the difficulty of determining what is true and what is not, especially in the age of misinformation. They mention how certain truths, such as mathematics and some physics, have a high degree of "truthiness." However, other historical events and theories can be more ambiguous and often rely on a "sticky" narrative to be accepted as true. They also recognize the importance of considering circumstantial and direct evidence when evaluating hypotheses, such as the origin of COVID-19. Ultimately, in constructing a GPT model or navigating the world, one must contend with the challenge of determining what is true and what is not.

    • Challenges in Navigating Censorship and Free Speech for OpenAI's GPT: As OpenAI's GPT tool grows in power, there is a responsibility to minimize harm caused by its responses. While censorship may be necessary in certain situations, the team at OpenAI is working to improve user control and responsibly manage potential harm.

      OpenAI's GPT tool faces challenges that its predecessors did not, including free speech and censorship issues. As the tool becomes more powerful, the pressure to censor its responses could increase. However, the responsibility falls on the developers at OpenAI to minimize the harm caused by GPT and maximize its benefits. Tools can do both wonderful good and real harm, and minimizing the harm is a top priority. While some truths may themselves be harmful, the responsibility for how GPT handles them must be upheld by humans. The team at OpenAI is constantly shipping new products and striving to improve the control users have over GPT models.

    • The Importance of Teamwork and Strong Leadership in AI-based Products: Setting high standards for team members, providing trust and autonomy while expecting hard work, and partnering with aligned and flexible leaders are essential for successful AI-based products.

      Sam Altman, CEO of OpenAI, emphasizes the importance of teamwork, strong leadership, and a passion for the goal in successfully shipping AI-based products. Altman believes in setting a high bar for team members, providing them with trust, autonomy, and authority, while holding them to very high standards. He spends a lot of time hiring great teams and expects hard work from them even after they are hired. Altman praises Microsoft as an amazing partner, with their leaders being aligned, flexible, and going beyond the call of duty. He credits Microsoft CEO Satya Nadella with being both a great leader and manager, rare qualities in a CEO.

    • The Importance of Leadership and Incentive Alignment in Business: Businesses must prioritize incentive alignment and leadership to avoid misalignment, which can be detrimental to the company and its customers. Embracing new technology like AGI can be beneficial, but it must be implemented slowly and carefully.

      Sam Altman, a successful businessman and entrepreneur, discusses the importance of leadership and incentive alignment in business. He highlights the dangers of misalignment, such as what happened with Silicon Valley Bank (SVB), and the importance of ensuring that depositors never doubt the security of their deposits. He also acknowledges the fragility of our economic system, especially in the face of new technology such as AGI. Still, Altman believes that the upside of the vision of AGI unites people and shows how much better life can be with technology. However, he stresses the importance of deploying AGI slowly so that institutions can adapt.

    • The Dangers of Anthropomorphizing Tools and Systems: We need to be aware that tools and systems are not creatures, but rather created for specific purposes. As we develop advanced tools, we must be careful not to project emotions onto them and stay focused on their intended use.

      In a conversation between Sam Altman and Lex Fridman, they discussed how people tend to anthropomorphize tools and systems and how it's important to educate people that they are just tools, not creatures. They touched on the idea of creating intelligent tools and the potential for them to manipulate emotions. They also talked about the possibility of AGI (artificial general intelligence) being able to help solve mysteries like the physics of the universe and even potentially detecting intelligent alien life. Overall, it's important to be careful with creating tools that resemble creatures and to use them for their intended purpose rather than projecting emotions onto them.

    • The Illusion of Free Will: Navigating Life's Complexity. Data is valuable, but take advice from others with caution. Follow what brings joy and fulfillment, find what's useful and meaningful, and be introspective. Life may feel like going with the flow, but it's important to navigate its complexity.

      In a conversation about artificial intelligence and life advice, Sam Altman suggests that while data can provide insights, it's important to approach advice from others with caution. Altman advocates for following what brings joy and fulfillment, and figuring out what is useful and meaningful to oneself. He also emphasizes the importance of being introspective, but notes that a lot of life may just feel like going with the flow, like a fish in water. Ultimately, the discussion raises questions about the illusion of free will and the complexity of navigating life.

    • The Evolution of Artificial Intelligence and the Importance of Collaboration: Artificial intelligence is the result of human innovation over time, and progress should be made through iterative deployment and collaboration to ensure alignment and safety in development, ultimately leading to new tools and advancements for civilization.

      Sam Altman, the CEO of OpenAI, highlights that the development of artificial intelligence is a culmination of human effort dating back to the invention of the transistor. He emphasizes that the exponential curve of human innovation has led us to our current state of technological advancement. While there are differing opinions on how to deploy AI, Altman believes in iterative deployment and discovery. He also acknowledges the importance of working collaboratively as a human civilization to ensure alignment and safety in the development of AI. Ultimately, Altman believes that the progress we make with AI will lead to new tools and great advancements for our civilization.
