    human-robot interaction

Explore "human-robot interaction" with insightful episodes like "Selects: Is The Uncanny Valley Real?", "Is A.I. the Problem? Or Are We?", "#287 - Sven Nyholm - Are Sex Robots And Self-Driving Cars Ethical?", "#97 – Sertac Karaman: Robots That Fly and Robots That Drive" and "#81 – Anca Dragan: Human-Robot Interaction and Reward Engineering" from podcasts like "Stuff You Should Know", "The Ezra Klein Show", "Modern Wisdom" and "Lex Fridman Podcast", and more!

    Episodes (9)

Selects: Is The Uncanny Valley Real?

In 1970, roboticist Masahiro Mori wrote an essay arguing that the more lifelike robots become, the more they unsettle humans. His theory became known as the Uncanny Valley, and in recent years science has been evaluating it, along with what makes something creepy. Learn all about it with Josh and Chuck in this classic episode.

Is A.I. the Problem? Or Are We?

    If you talk to many of the people working on the cutting edge of artificial intelligence research, you’ll hear that we are on the cusp of a technology that will be far more transformative than simply computers and the internet, one that could bring about a new industrial revolution and usher in a utopia — or perhaps pose the greatest threat in our species’s history.

    Others, of course, will tell you those folks are nuts.

    One of my projects this year is to get a better handle on this debate. A.I., after all, isn’t some force only future human beings will face. It’s here now, deciding what advertisements are served to us online, how bail is set after we commit crimes and whether our jobs will exist in a couple of years. It is both shaped by and reshaping politics, economics and society. It’s worth understanding.

    Brian Christian’s recent book “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book gets its name. “Alignment problem” originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn’t understand how it really worked or what we had actually asked it to do.

    So this conversation is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we’re dealing with algorithms we don’t really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I’ve ever heard of, why the problem of automation isn’t so much job loss as dignity loss and much more.

    Mentioned: 

    “Human-level control through deep reinforcement learning”

    “Some Moral and Technical Consequences of Automation” by Norbert Wiener

    Recommendations: 

    "What to Expect When You're Expecting Robots"  by Julie Shah and Laura Major

    "Finite and Infinite Games" by James P. Carse 

    "How to Do Nothing" by Jenny Odell

    If you enjoyed this episode, check out my conversation with Alison Gopnik on what we can all learn from studying the minds of children.

    You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein.

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    “The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.

#287 - Sven Nyholm - Are Sex Robots And Self-Driving Cars Ethical?
Sven Nyholm is an Assistant Professor of Philosophy and Ethics at Utrecht University.

Robots are all around us. They perform actions, make decisions, collaborate with humans, can be our friends, perhaps fall in love, and potentially harm us. What does this mean for our relationship to them and with them? Expect to learn why robots might need to have rights, whether it's ethical for robots to be sex slaves, why self-driving cars are being programmed to make human-like mistakes, who is responsible if a self-driving car kills someone and much more...

Sponsors: Get 83% discount & 3 months free from Surfshark VPN at https://surfshark.deals/MODERNWISDOM (use code MODERNWISDOM)

Extra Stuff:
Buy Humans And Robots - https://amzn.to/3qw9vbp
Follow Sven on Twitter - https://twitter.com/SvenNyholm

Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

Get in touch. Join the discussion with me and other like-minded listeners in the episode comments on the MW YouTube Channel or message me:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/ModernWisdomPodcast
Email: https://www.chriswillx.com/contact

#97 – Sertac Karaman: Robots That Fly and Robots That Drive
Sertac Karaman is a professor at MIT, co-founder of the autonomous vehicle company Optimus Ride, and one of the top roboticists in the world, working on robots that drive and robots that fly.

Support this podcast by signing up with these sponsors:
– Cash App – use code "LexPodcast" and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Sertac's Website: http://sertac.scripts.mit.edu/web/
Sertac's Twitter: https://twitter.com/sertackaraman
Optimus Ride: https://www.optimusride.com/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
01:44 - Autonomous flying vs autonomous driving
06:37 - Flying cars
10:27 - Role of simulation in robotics
17:35 - Game theory and robotics
24:30 - Autonomous vehicle company strategies
29:46 - Optimus Ride
47:08 - Waymo, Tesla, Optimus Ride timelines
53:22 - Achieving the impossible
53:50 - Iterative learning
58:39 - Is Lidar a crutch?
1:03:21 - Fast autonomous flight
1:18:06 - Most beautiful idea in robotics

#81 – Anca Dragan: Human-Robot Interaction and Reward Engineering
Anca Dragan is a professor at Berkeley, working on human-robot interaction: algorithms that look beyond the robot's function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

Support this podcast by supporting the sponsors and using the special code:
- Download Cash App on the App Store or Google Play & use code "LexPodcast"

EPISODE LINKS:
Anca's Twitter: https://twitter.com/ancadianadragan
Anca's Website: https://people.eecs.berkeley.edu/~anca/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
02:26 - Interest in robotics
05:32 - Computer science
07:32 - Favorite robot
13:25 - How difficult is human-robot interaction?
32:01 - HRI application domains
34:24 - Optimizing the beliefs of humans
45:59 - Difficulty of driving when humans are involved
1:05:02 - Semi-autonomous driving
1:10:39 - How do we specify good rewards?
1:17:30 - Leaked information from human behavior
1:21:59 - Three laws of robotics
1:26:31 - Book recommendation
1:29:02 - If a doctor gave you 5 years to live...
1:32:48 - Small act of kindness
1:34:31 - Meaning of life

Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems
Ayanna Howard is a roboticist and professor at Georgia Tech, director of the Human-Automation Systems lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
02:09 - Favorite robot
05:05 - Autonomous vehicles
08:43 - Tesla Autopilot
20:03 - Ethical responsibility of safety-critical algorithms
28:11 - Bias in robotics
38:20 - AI in politics and law
40:35 - Solutions to bias in algorithms
47:44 - HAL 9000
49:57 - Memories from working at NASA
51:53 - SpotMini and Bionic Woman
54:27 - Future of robots in space
57:11 - Human-robot interaction
1:02:38 - Trust
1:09:26 - AI in education
1:15:06 - Andrew Yang, automation, and job loss
1:17:17 - Love, AI, and the movie Her
1:25:01 - Why do so many robotics companies fail?
1:32:22 - Fear of robots
1:34:17 - Existential threats of AI
1:35:57 - Matrix
1:37:37 - Hang out for a day with a robot

Is the Uncanny Valley Real?

In 1970, roboticist Masahiro Mori wrote an essay arguing that the more lifelike robots become, the more they unsettle humans. His theory became known as the Uncanny Valley, and in recent years science has been evaluating it, along with what makes something creepy.

Living With Robots: Worst Roommates Ever

The robot invasion of our homes has begun, and the future is bright with robotic caregivers. But how do we plan to keep this cohabitation from feeling creepy? In this episode, Julie and Robert discuss "robot skin," thought-anticipating machines and more.

Machines, Morality and Sexbots

Can robots be programmed to behave ethically? It's possible that future robots may possess emotions like empathy or guilt. Join Robert and Julie as they interview Dr. Ronald Arkin, a leading expert on robotic consciousness modeling.
