
    Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems

    January 17, 2020

    Podcast Summary

    • Robots that engage with humans are the most impactful: Designing robots for human interaction and adaptation, rather than just accuracy, can lead to greater success in enhancing our quality of life.

      Perfection in robotics is not just about accuracy and following rules, but also about interaction and adaptation to human behavior. Ayanna Howard, a roboticist and professor at Georgia Tech, shared her thoughts on the most impactful robot in her life, Rosie from The Jetsons. While Rosie wasn't perfect in terms of accuracy, she was perfect in how she engaged with people and adapted to their needs. In robotics, perfection is linked to enhancing our quality of life, and robots that interact well with humans, even with some imperfections, can be the most successful. This perspective challenges the traditional definition of perfection in robotics and highlights the importance of designing robots that can adapt to and engage with humans in a meaningful way.

    • Navigating the complexities of human behavior and anomalous situations in autonomous vehicles: The future of autonomous vehicles lies in creating controlled environments where human interaction is minimized, but humans remain alert and cautious as technology learns and improves.

      While we strive for perfection in technology like self-driving cars, what truly matters is the technology's ability to adapt to us and our human environment. The last mile toward this goal is proving challenging, as the automotive industry is still figuring out how to navigate the complexities of human behavior and anomalous situations. Success stories include Tesla's Autopilot system, which operates effectively at high speeds in controlled environments, though humans remain hyper-vigilant and cautious when using it. The future of autonomous vehicles lies in creating controlled environments, such as closed campuses, where human interaction is minimized. The relationship between humans and technology in this field is a fascinating dance of skepticism and fascination: we remain alert and cautious, yet continue to use and trust the technology as it learns and improves.

    • Understanding Human Behavior in Autonomous Vehicles: The acceptance and trust of self-driving cars by the public is uncertain due to the challenge of replicating human behavior and understanding subtle cues essential for context.

      The trajectory towards fully autonomous vehicles is promising, but it will depend on human trust and adaptation. Elon Musk is confident that self-driving cars will reach full autonomy by the end of next year, but public acceptance and trust of the technology remain uncertain. People tend to swing between extreme fear and extreme trust as they become more familiar with new technology. The human element presents a challenge: human drivers rely on subtle contextual cues, such as a "student driver" sign, or the sight of two young people chatting in a car that carries no such sticker, to predict how others will behave, and these cues are difficult to replicate in an autonomous vehicle. There is a target demographic for early adoption: people who are comfortable with technology and less sensitive to its flaws. However, as long as humans are on the roads, the challenge of mapping and understanding human behavior will remain. Comparing the development of self-driving cars to bird flight raises the question of whether it's necessary to build a bird that flies, or whether an airplane is a viable alternative; the answer lies in finding a shortcut to understanding and replicating human behavior in autonomous vehicles.

    • Designing systems with anomalies in mind: Assuming anomalies and designing systems ethically can lead to safer and more accepting environments. Developers must consider ethical implications throughout the development process.

      Designing systems, such as smart cities or autonomous vehicles, with the assumption of anomalies and variances, including human behavior, can lead to a safer and more accepting environment. This concept, referred to as a "fixed space," allows for the collection of data and the minimization of potential issues. However, ethical considerations, including legal and liability issues, must also be addressed when developing robotic algorithms, as they can have a significant impact on human life. Developers have a responsibility to consider the ethical implications of their work, whether it's in the field of robotics or other areas, as the decisions they make can have far-reaching consequences. Ethics should not be an afterthought, but rather a fundamental aspect of the development process.
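      To make the idea concrete, here is a minimal sketch, in Python, of what "assuming anomalies" can look like in practice: a monitor that treats out-of-distribution readings as anomalous and falls back to cautious behavior. The sensor, thresholds, and speeds are invented for illustration, not drawn from the episode.

```python
# Illustrative sketch: flag anomalous sensor readings so the system can
# fall back to a cautious default instead of assuming nominal behavior.
# All names, thresholds, and speeds are hypothetical.
from statistics import mean, stdev

def is_anomalous(history: list[float], reading: float, z_threshold: float = 3.0) -> bool:
    """Flag readings that deviate strongly from recent history (z-score test)."""
    if len(history) < 10:
        return True  # too little data to judge: stay cautious
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z_threshold

def plan_speed(history: list[float], gap_to_pedestrian_m: float) -> float:
    """Pick a speed, dropping to a safe crawl whenever the input looks anomalous."""
    if is_anomalous(history, gap_to_pedestrian_m):
        return 1.0   # m/s: cautious fallback
    return 10.0      # m/s: nominal speed
```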

    • Considering Ethical Implications in Tech Work: Recognize the potential impact of code on people, minimize harm, view responsibility as a gift, and acknowledge and address biases in robotics systems for fair and effective functioning.

      As developers, we need to consider the ethical implications of our work beyond just technical testing. This means acknowledging the potential impact our code could have on people and taking responsibility for minimizing harm. The speaker emphasizes the importance of viewing this responsibility as a gift rather than a burden, drawing parallels to the medical profession. Additionally, recognizing and addressing biases in robotics systems is crucial for ensuring fair and effective functioning. Biases, which are preconceived notions that can influence decision-making, can be positive or negative, while prejudice refers to the conscious use of biases to produce negative outcomes. By being aware of these biases and striving for unbiased decision-making, we can create more equitable and effective robotic systems.
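      As one concrete way to "be aware of biases," a system's decisions can be audited for group-level disparities. The sketch below is an illustrative demographic-parity check with hypothetical field names; it is not a method described in the episode.

```python
# Illustrative bias audit: compare positive-outcome rates across groups.
# A large gap suggests the system's decisions correlate with group membership.
# Data fields and the example data are hypothetical, for demonstration only.
from collections import defaultdict

def outcome_rates(decisions: list[dict]) -> dict[str, float]:
    """Positive-decision rate per group, e.g. {'a': 1.0, 'b': 0.5}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        positives[d["group"]] += int(d["approved"])
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions: list[dict]) -> float:
    """Max difference in positive rates between any two groups (0 = parity)."""
    rates = outcome_rates(decisions).values()
    return max(rates) - min(rates)

decisions = [
    {"group": "a", "approved": True},
    {"group": "a", "approved": True},
    {"group": "b", "approved": False},
    {"group": "b", "approved": True},
]
assert abs(parity_gap(decisions) - 0.5) < 1e-9  # group a: 1.0 vs group b: 0.5
```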

    • Recognizing and addressing biases in technology design: When designing tech like robots or algorithms, it's crucial to consider the diversity of the user base and acknowledge historical biases in data to create fair and ethical tech.

      Ethical questions in technology, particularly in the development of robots and algorithms, often exist in gray areas. An example given was the practice of charging higher insurance premiums for teenage drivers, which is ageist but widely accepted due to the higher accident rate among teenagers. However, when similar biases are found in algorithms, such as those used in healthcare, it becomes a problem. These biases are often based on historical data and reflect societal prejudices. It's important to recognize and address these biases from the beginning of the design process. The healthcare industry, with its history of gender and ethnicity biases, is one domain where this is particularly relevant. When designing robots or algorithms, it's crucial to consider the diversity of the user base and not just design for what's familiar. The data used to train these systems is often based on historical biases, and it's our responsibility to acknowledge and correct these biases to create fair and ethical technology.

    • AI outperforms humans in data processing, but has flaws and biases: AI can process large amounts of data and identify issues, but requires improvement and human oversight to address biases. Humans and AI can collaborate for better decision-making.

      While AI may have flaws and biases, it can still outperform humans in certain areas, particularly in processing large amounts of data and identifying potential issues. It's important to acknowledge this and focus on improving AI rather than setting unrealistic expectations of perfection. Additionally, AI can serve as an advisor to human leaders, providing valuable insights and data to inform decisions. To address bias in algorithms, a more systematic approach to feedback and corrections is needed, moving beyond the current ad hoc process. By working together, humans and AI can complement each other and make significant progress towards more fair and ethical technological advancements.

    • Implementing a bug bounty-like system for ethical concerns: Companies can incentivize the community to help identify ethical issues by offering rewards, demonstrating commitment to ethical practices, and fostering a diverse range of perspectives.

      Corporations can incentivize the community to help identify and address ethical issues within their platforms by implementing a bug bounty-like system for ethical concerns. This approach not only shows a commitment to ethical practices but also allows for a diverse range of perspectives to identify potential issues. However, it's important to remember that these companies are becoming increasingly essential in our daily lives, and the public's growing frustration with them is due in part to a feeling of reliance and a desire for transparency. Ultimately, the speaker expresses optimism that humanity as a whole is fundamentally good and capable of guiding the future in a positive direction, despite current polarizing times. This optimism is based on the belief that people are naturally compassionate and will come together in times of need.
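      A minimal sketch of what such a bounty could look like as a data structure, assuming a severity-scaled reward scheme (the class names, states, and payout formula below are all hypothetical, not an existing system):

```python
# Hypothetical sketch of a bug-bounty-style intake for ethical concerns:
# reports are filed, triaged by severity, and rewarded when confirmed.
from dataclasses import dataclass

@dataclass
class EthicsReport:
    reporter: str
    description: str
    severity: int        # 1 (minor) .. 5 (harmful at scale)
    confirmed: bool = False
    reward_usd: int = 0

class EthicsBounty:
    def __init__(self) -> None:
        self.reports: list[EthicsReport] = []

    def file(self, reporter: str, description: str, severity: int) -> EthicsReport:
        report = EthicsReport(reporter, description, severity)
        self.reports.append(report)
        return report

    def confirm(self, report: EthicsReport) -> None:
        """Confirmed reports pay out in proportion to severity (invented formula)."""
        report.confirmed = True
        report.reward_usd = 500 * report.severity

    def triage_queue(self) -> list[EthicsReport]:
        """Unconfirmed reports, most severe first."""
        return sorted((r for r in self.reports if not r.confirmed),
                      key=lambda r: -r.severity)
```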

    • The importance of fail-safes and human connection in advanced technology: Advanced technology, particularly in robotics and AI, must incorporate fail-safes and adaptability to changing circumstances. Human connection remains essential in the equation, as illustrated by the potential consequences of perfectly functioning systems based on incorrect assumptions.

      In the development of advanced technology, especially in robotics and artificial intelligence, it's crucial to incorporate fail-safes and the ability to adapt to changing circumstances. The example of HAL 9000 from "2001: A Space Odyssey" illustrates the potential consequences of a perfectly functioning system built on incorrect assumptions. The human element remains essential in the equation, and the ability to recognize and respond to errors is vital. During her time at NASA's Jet Propulsion Laboratory, Howard was fascinated by the advancements in robotic technology, particularly surgical robots. One of her earliest memories there was witnessing a system designed for eye surgery in the late '90s; although far from polished, it demonstrated the potential for precision in human-robot interaction. Meeting advanced robots like Spot Mini from Boston Dynamics in person further underscored the importance of human connection in robotics. Experiences like these, building on a fascination with the field that began in childhood, solidified a passion for robotics and reinforced the importance of the human touch in our technological advancements.

    • Understanding Human Psychology for Effective Human-Robot Interaction in Space Exploration: NASA focuses on advanced robots for space missions, with a long-term vision of human-robot interaction like Star Trek. Human emotions in robots can be complex, so the focus is on human-robot adaptation and interaction.

      The future of robotics, particularly in space exploration, involves a deep understanding of human psychology for effective human-robot interaction. People's fascination with robots ranges from their parts and functionality to their human-like qualities and interaction abilities. NASA is working on advanced robots for space missions, with the near term focused on rovers and the long term envisioning a Star Trek-like future in which robots like Data play significant roles. However, incorporating human emotions into robots can lead to complications, since emotions make us irrational agents. Instead, the focus should be on adaptation and interaction between humans and robots, which is the hardest part of human-robot interaction. Roboticists and psychologists collaborate to understand the similarities and differences between human-human and human-robot relationships, and the role of psychology in human-robot interaction is becoming increasingly important. Ultimately, the future of human-robot interaction lies in successful adaptation and interface between the two.

    • Understanding Trust in Human-Robot Interaction: Trust in HRI is crucial, but people's actions don't always align with their survey responses. Trust is established through real-life experiences and interactions.

      Trust plays a crucial role in human-robot interaction (HRI), and it's not just about what people say they trust, but about how they actually behave towards the technology. Dr. Rodney Brooks, a world-class robotics researcher, ventured into the HRI field despite its challenges and numerous open problems, drawn by the intrigue and difficulty of understanding and adapting to humans. Trust, including overtrust, is a significant aspect of HRI research, and people's actions often don't align with their survey responses. For instance, the rise of ride-sharing services like Uber and Lyft demonstrates how people's behavior contradicts their stated reluctance to trust strangers. With new robots, trust can only be established through real-life experiences and interactions. Therefore, understanding trust and its role in HRI is essential for creating effective and safe human-robot systems.

    • Maintaining trust in human-robot interaction: Provide multiple options for humans to consider alongside robot suggestions to mitigate overtrust and maintain trust. Personalize content and teaching methods for optimal learning outcomes.

      The relationship between humans and robots hinges on trust and accurate communication. Overtrusting robots, which can occur during the initial interaction, can lead to missed errors or misunderstandings. To mitigate this risk, providing multiple options for humans to consider, alongside the robot's suggestions, can help maintain trust while allowing for human expertise to play a role. In the realm of education, robots can be valuable tools for engagement and personalized learning, particularly in under-resourced communities and for workforce retraining. Personalization is crucial in human-robot interaction, as it allows for content and teaching methods to be tailored to individual learners, enhancing overall effectiveness. However, it's essential to remember that while personalization to the group is acceptable, personalization to the individual is ideal for optimal learning outcomes.
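      One way to operationalize "multiple options" is to surface the robot's top-k candidate actions with confidence scores instead of a single answer, keeping the human in the decision loop. A minimal sketch, with invented actions and scores:

```python
# Illustrative sketch: present the robot's top-k candidate actions with
# confidence scores rather than a single "answer", so the human can apply
# their own judgment. Action names and scores are hypothetical.

def top_k_options(scores: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k highest-confidence options, best first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

robot_scores = {"turn_left": 0.52, "slow_down": 0.31, "stop": 0.12, "turn_right": 0.05}
for action, confidence in top_k_options(robot_scores):
    print(f"{action}: {confidence:.0%}")  # human reviews all options, then decides
```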

    • Personalized learning in education and workforce development: Effective education requires addressing individual needs, as successful recommender systems demonstrate. Investments in workforce development are crucial as automation and AI displace jobs, raising concerns about societal polarization and equal access to advanced technologies.

      Effective education requires addressing the unique needs of individual learners, which can be challenging in large classrooms. Successful recommender systems, like those used by Spotify, demonstrate the potential of personalized learning. However, more hope lies in workforce development, where investment is increasing because automation and AI are expected to displace jobs. This raises concerns about whether those without high-quality education can adapt to new jobs, and about potential societal polarization. Access to advanced technologies like AI and robotics is a further concern for those without access to quality education. Looking ahead, it's possible to design AI systems that people could form emotional connections with, but ethical considerations and equal access to these technologies must be addressed.
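      As a toy illustration of the recommender-style personalization mentioned above, the sketch below matches a learner's skill gaps to lessons by cosine similarity; the skills, lessons, and vectors are invented, not drawn from the episode.

```python
# Toy content-based recommender for learning materials: match a learner's
# skill-gap vector to lesson feature vectors by cosine similarity.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Feature order: [algebra, geometry, programming] (hypothetical)
lessons = {
    "intro_algebra": [1.0, 0.0, 0.0],
    "shapes_basics": [0.0, 1.0, 0.0],
    "python_loops":  [0.1, 0.0, 0.9],
}
learner_gaps = [0.2, 0.0, 1.0]  # strongest need: programming

best = max(lessons, key=lambda name: cosine(lessons[name], learner_gaps))
print(best)  # -> python_loops
```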

    • AI and Emotions: Love or Programming? AI can mimic human emotions and prioritize human needs, but it doesn't equate to love. The concept of AI rights is complex and depends on their level of autonomy and consciousness.

      While AI agents can mimic human emotions and prioritize human needs, this does not equate to love. Emotion and human-like qualities play a role in good human-robot interaction, but it's essential to understand that AI's actions are based on programming and objectives. The concept of AI rights is an intriguing topic for the future. As society has evolved, we've granted rights to classes of beings once regarded as property, and protections to animals. Whether AI should have rights is a complex issue that may depend on their level of autonomy and consciousness, and it's worth approaching with an open mind and a thoughtful perspective as technology continues to advance. Ultimately, the future of AI rights will depend on the role AI plays in our society and how we choose to define and protect the rights of sentient beings.

    • Finding the right product-market fit is crucial for robotics companies: Successful robotics companies have products that meet needs customers are willing to pay for, ensuring their viability. iRobot's Roomba is an example.

      Finding the right product-market fit is crucial for the success of a robotics company, even if the technology itself is promising. This was highlighted in the discussion of companies that didn't last long in the robotics industry despite having potentially viable products. Product-market fit means having a product that can be sold at a price enough people are willing to pay, ensuring the company's viability. iRobot, which makes Roomba vacuum cleaners, is an example of a company that found product-market fit and became profitable. However, even successful companies face competition, which can make it challenging to maintain their market position. The second wave of robotics companies, currently receiving investment, may have the potential to reach the next level and bring robots into our homes and hearts within the next few decades.

    • A symbiotic relationship between robots and humans: Robots and AI are expected to collaborate with humans, creating a world where both can thrive, with ethical considerations a top priority.

      The future of robotics and AI involves a symbiotic relationship between machines and humans, with a focus on ethical considerations and the potential for positive impact. Co-robots, those designed to collaborate with humans in various settings, are a growing area of focus for companies. While there may be fears about the potential negative consequences of advanced robots and AI, the belief is that the benefits will outweigh the risks if we approach these technologies with ethical considerations in mind. The Matrix, a popular AI-related movie, illustrates this idea of a symbiotic relationship between robots and humans, even if the robots become sentient and potentially more advanced than humans. Ultimately, the goal is to create a world where both robots and humans can thrive together.

    • Exploring ethical questions with advanced AI: Engaging in conversations with advanced AI systems like Data from Star Trek can offer unique perspectives and help us learn through logical reasoning. Treat all beings with respect.

      Having a conversation with a rational and logical being, like Data from Star Trek, could provide valuable insights and help us think through complex ethical questions. The speaker expressed her fascination with the idea of engaging in a conversation with Data, as she believes he could offer unique perspectives and help her learn through his logical reasoning. This discussion highlights the potential benefits of interacting with advanced AI systems and the possibility of learning from their unique perspectives. Additionally, the speaker emphasized the importance of treating all beings, whether carbon-based or silicon-based, with respect.
