
    Podcast Summary

    • AI Transforming Industries and Personal Stories
      AI is revolutionizing industries like healthcare, retail, entertainment, and personal computing, while a child's cancer journey showcases the hope technology brings. Understanding that robots' actions depend on programming can help dispel fears of evil robots.

      Artificial intelligence (AI) will play a significant role in shaping the future, transforming industries such as healthcare, retail, entertainment, and personal computing. The Technically Speaking podcast from Intel, hosted by Graham Klass, explores these advancements and the minds behind them. Meanwhile, in a personal story, hope and technology intertwined when a child was diagnosed with cancer and the family's journey to St. Jude brought them closer to a cure. As for the concept of evil robots, it's essential to understand that these machines perform the specific functions they are programmed for, and their actions can be considered "evil" or "good" only in light of that programming. This idea was discussed in relation to the Terminator and research on deceptive robots at Georgia Tech. Overall, technology, whether AI or robots, has the power to change our world, offering both hope and challenges.

    • Exploring emotions in robotic systems for ethical military decisions
      Researchers are investigating whether guilt could enhance robots' ethical decision-making on the battlefield, potentially reducing the use of force and encouraging more humane actions, but ethical concerns remain.

      Researchers are exploring the integration of emotions, particularly guilt, into robotic systems for use on the battlefield. This is believed to enhance the robot's ability to make more ethical decisions and reduce the level of force used. Emotions, specifically moral emotions like guilt, can help robots follow rules and make distinctions, such as understanding the difference between civilians and combatants. This could potentially lead to more humane and proportional military actions. However, the use of emotions in warfare raises ethical concerns, particularly regarding the tolerance of civilian casualties. Despite these challenges, researchers are pushing forward with this technology, believing it could bring significant value to the battlefield.

    • Exploring guilt in robots through an IRT model
      Researchers identified guilt's components in humans and suggest programming it into robots for moral dilemmas, raising ethical questions and potential complications.

      Researchers Smits and De Boeck explored the concept of programming guilt into robots using their item response theory (IRT) model. They identified five components of guilt: responsibility, norm violation, negative self-evaluation, covert focus on the act, and inner rumination. Guilt arises when one feels personally responsible for a norm violation, leading to a negative self-evaluation and introspective thoughts focused on the act. The researchers studied these components in human subjects, and their findings suggest that programming guilt into robots could help them understand and respond to moral dilemmas. However, programming guilt into robots raises ethical questions and potential complications: the offhand idea of having teenagers program robots might not be practical or desirable, and the notion of robots experiencing guilt brings up complex issues, such as their emotional capabilities and the potential for them to develop human-like emotions and behaviors. Overall, the research highlights the potential for programming moral reasoning and emotions into robots, but also underscores the need for careful consideration and ethical guidelines in doing so.

    • Exploring the emotional connection between humans and technology
      Robots can be programmed with cognitive models of guilt and apologies, but understanding our emotional bond with technology is crucial for weighing its ethical implications in various contexts.

      Our emotional connections to technology, even when it isn't human, can be just as strong as those with other people or objects. During the discussion, it was noted how guilt and apologies relate to our own motivations and actions, and how this cognitive model was used to program robots. While this isn't an exact algorithm, it's an interesting step toward creating morality in robots. However, it's important to consider the larger context of our relationship with technology as a whole. Dr. Arkin, a robotics expert, emphasized the need to understand this connection and to consider the implications for robots in various contexts, from battlefields to our homes. He also pointed out that we have a natural inclination to form bonds with artifacts, as seen with Tamagotchis or in movies. Physical embodiment can alter the equation and create an extra level of concern as technology becomes more integrated into our lives. Ultimately, our emotions can be manipulated by almost anything, including technology, and it's crucial to address this as we move forward.

    • Our emotional connection to technology and robots
      The addition of affective components to robots increases human compliance and raises ethical questions about human-robot intimacy and sexuality, which are largely unexplored due to societal taboos and lack of research.

      Our emotional connection to technology, including robots, is a growing phenomenon. Psychologist Sherry Turkle's experience of developing a crush on a lab robot, and Dr. Arkin's call for a human-robot ethics of intimacy, illustrate this point. The addition of affective components to robots significantly increases human compliance. However, the ethical implications of human-robot intimacy and sexuality remain largely unexplored due to societal taboos and a lack of academic research and funding. Despite this, individuals and industries are pushing the boundaries, and it's crucial for society and the academic community to address the ethical implications of this growing trend and establish guidelines and regulations.

    • Ethical questions raised by advancing robotics technology
      As technology advances, ethical considerations of robot usage and potential misuse are crucial. Open discussions and considerations of various perspectives are needed to ensure ethical and sensitive use.

      As technology advances, particularly in robotics, it raises complex ethical questions that need to be addressed. The speaker shared examples of robots being repurposed for uses beyond their intended goals, such as turning a healthcare robot into a sex bot. These issues extend beyond the creation of robots to questions of access, usage, and potential misuse of the technology. The speaker emphasized the need for open discussions that consider various perspectives, including those of Neo-Luddites, to ensure that technology is used ethically and sensitively, and noted that the European community has addressed these issues earlier and more extensively than the US. The speaker also warned of a potential black market for technology if these ethical considerations are not addressed. The Unabomber's Neo-Luddite writings offer programmers a perspective on the technology they create and the potential consequences of their actions, and the manifesto's account of the industrial revolution and its consequences serves as a reminder of technology's far-reaching impacts on society.

    • Technology's risks and the Unabomber's concerns
      Be aware of technology's risks, consider opposing views, and practice critical thinking to make informed decisions.

      Technology, while bringing about significant advancements and increased life expectancy, also poses significant risks to society and the natural world. The Unabomber's manifesto, although extreme and harmful in its execution, raised valid concerns about the potential dangers of emerging technologies like genetics, nanotechnology, and robotics. These concerns, while not universally accepted, should be considered in our ongoing technological development. Additionally, humans have a natural tendency towards confirmation bias, which can hinder our ability to objectively evaluate information and consider opposing viewpoints. It's essential to be aware of these biases and make an effort to seek out diverse perspectives and ambiguous evidence to make informed decisions. Leonard Mlodinow's book "The Drunkard's Walk" sheds light on this phenomenon and emphasizes the importance of critical thinking and open-mindedness in interpreting data.

    • Maintaining a well-rounded understanding through diverse sources
      Expose yourself to various sources and ideas to broaden your perspective and make informed decisions.

      Having a diverse range of information and perspectives is important for maintaining a well-rounded understanding of complex topics. Consuming only one type of media on a particular topic can limit your viewpoint and potentially lead to a narrow or biased perspective. It's beneficial to expose yourself to various sources and ideas to broaden your horizons and make informed decisions. Another topic discussed was the future of social interaction with robots. The question of whether robots will feel guilt or have sexual desires was raised, but no definitive answers were given. The conversation also touched on the importance of ethics in robotics and the potential implications of creating advanced artificial intelligence. Additionally, the podcast mentioned various sponsors and promotions, including Visible, American Express, and St. Jude Children's Research Hospital. The latter emphasized the importance of supporting childhood cancer research and becoming a partner in hope. Overall, the discussion provided food for thought on a range of topics, from the importance of a diverse information diet to the ethical considerations of advanced robotics.

    Recent Episodes from Stuff To Blow Your Mind

    Smart Talks with IBM: AI & the Productivity Paradox


    In a rapidly evolving world, we need to balance the fear surrounding AI and its role in the workplace with its potential to drive productivity growth. In this special live episode of Smart Talks with IBM, Malcolm Gladwell is joined onstage by Rob Thomas, senior vice president of software and chief commercial officer at IBM, during NY Tech Week. They discuss “the productivity paradox,” the importance of open-source AI, and a future where AI will touch every industry.

    This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.

    Visit us at ibm.com/smarttalks

    See omnystudio.com/listener for privacy information.

    Weirdhouse Cinema: The Dungeonmaster


    In this episode of Weirdhouse Cinema, Rob and Joe return to the glorious world of 80s Charles Band productions with 1984’s “The Dungeonmaster,” a supernatural dreamscape with eight directors starring Jeffrey Byron, Richard Moll and Leslie Wing. It’s time to reject the devil’s reality and substitute your own! 


    Related Episodes

    #287 - Sven Nyholm - Are Sex Robots And Self-Driving Cars Ethical?

    Sven Nyholm is an Assistant Professor of Philosophy and Ethics at Utrecht University.

    Robots are all around us. They perform actions, make decisions, collaborate with humans, can be our friends, perhaps fall in love, and potentially harm us. What does this mean for our relationship to them and with them?

    Expect to learn why robots might need to have rights, whether it's ethical for robots to be sex slaves, why self-driving cars are being programmed to drive with human mistakes, who is responsible if a self-driving car kills someone and much more...

    Sponsors: Get 83% discount & 3 months free from Surfshark VPN at https://surfshark.deals/MODERNWISDOM (use code MODERNWISDOM)

    Extra Stuff:
    Buy Humans And Robots - https://amzn.to/3qw9vbp
    Follow Sven on Twitter - https://twitter.com/SvenNyholm

    Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/

    To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

    Get in touch. Join the discussion with me and other like-minded listeners in the episode comments on the MW YouTube Channel or message me...
    Instagram: https://www.instagram.com/chriswillx
    Twitter: https://www.twitter.com/chriswillx
    YouTube: https://www.youtube.com/ModernWisdomPodcast
    Email: https://www.chriswillx.com/contact

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    Is A.I. the Problem? Or Are We?


    If you talk to many of the people working on the cutting edge of artificial intelligence research, you’ll hear that we are on the cusp of a technology that will be far more transformative than simply computers and the internet, one that could bring about a new industrial revolution and usher in a utopia — or perhaps pose the greatest threat in our species’s history.

    Others, of course, will tell you those folks are nuts.

    One of my projects this year is to get a better handle on this debate. A.I., after all, isn’t some force only future human beings will face. It’s here now, deciding what advertisements are served to us online, how bail is set after we commit crimes and whether our jobs will exist in a couple of years. It is both shaped by and reshaping politics, economics and society. It’s worth understanding.

    Brian Christian’s recent book “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book gets its name. “Alignment problem” originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn’t understand how it really worked or what we had actually asked it to do.

    So this conversation is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we’re dealing with algorithms we don’t really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I’ve ever heard of, why the problem of automation isn’t so much job loss as dignity loss and much more.

    Mentioned: 

    “Human-level control through deep reinforcement learning”

    “Some Moral and Technical Consequences of Automation” by Norbert Wiener

    Recommendations: 

    "What to Expect When You're Expecting Robots"  by Julie Shah and Laura Major

    "Finite and Infinite Games" by James P. Carse 

    "How to Do Nothing" by Jenny Odell

    If you enjoyed this episode, check out my conversation with Alison Gopnik on what we can all learn from studying the minds of children.

    You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein.

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    “The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.

    Ep. 3 - Artificial Intelligence: Opening Thoughts on the Most Important Trend of our Era


    Artificial Intelligence has already changed the way we all live our lives. Recent technological advancements have accelerated the use of AI by ordinary people to answer fairly ordinary questions. It is becoming clear that AI will fundamentally change many aspects of our society and create huge opportunities and risks. In this episode, Brian J. Matos shares his preliminary thoughts on AI in the context of how it may impact global trends and geopolitical issues. He poses foundational questions about how we should think about the very essence of AI and offers his view on the most practical implications of living in an era of advanced machine thought processing. From medical testing to teaching to military applications and international diplomacy, AI will likely speed up discoveries while forcing us to quickly determine how its use is governed in the best interest of the global community.

    Join the conversation and share your views on AI. E-mail: info@brianjmatos.com or find Brian on your favorite social media platform. 

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2

    We continue part two of a really important conversation with the incredible Konstantin Kisin, challenging the status quo and asking the bold questions that need answers if we're going to navigate these times well. As we delve into this, we'll also explore why we might need a new set of rules – not just to survive, but to seize opportunities and safely navigate the dangers of our rapidly evolving world.

    Konstantin Kisin brings to light some profound insights. He delivers simple statements packed with layers of meaning that we're going to unravel during our discussion:

    • The stark difference between masculinity and power
    • Defining Alpha and Beta males
    • Becoming resilient means being unf*ckable with

    Buckle up for the conclusion of this episode filled with thought-provoking insights and hard-hitting truths about what it takes to get through hard days and rough times.

    Follow Konstantin Kisin:
    Website: http://konstantinkisin.com/
    Twitter: https://twitter.com/KonstantinKisin
    Podcast: https://www.triggerpod.co.uk/
    Instagram: https://www.instagram.com/konstantinkisin/

    SPONSORS:
    Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://bit.ly/AG1Impact.
    Right now, Kajabi is offering a 30-day free trial to start your own business if you go to https://bit.ly/Kajabi-Impact.
    Head to www.insidetracker.com and use code "IMPACTTHEORY" to get 20% off!
    Learn a new language and get 55% off at https://bit.ly/BabbelImpact.
    Try NordVPN risk-free with a 30-day money-back guarantee by going to https://bit.ly/NordVPNImpact.
    Give online therapy a try at https://bit.ly/BetterhelpImpact and get on your way to being your best self.
    Go to https://bit.ly/PlungeImpact and use code IMPACT to get $150 off your incredible cold plunge tub today.

    ***Are You Ready for EXTRA Impact?***

    If you're ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you. Want to transform your health, sharpen your mindset, improve your relationship, or conquer the business world? This is your epicenter of greatness. This is not for the faint of heart. This is for those who dare to learn obsessively, every day, day after day.

    • New episodes delivered ad-free
    • Unlock the gates to a treasure trove of wisdom from inspiring guests like Andrew Huberman, Mel Robbins, Hal Elrod, Matthew McConaughey, and many, many more
    • Exclusive access to Tom's AMAs, keynote speeches, and suggestions from his personal reading list
    • Access to 5 additional podcasts with hundreds of archived Impact Theory episodes, meticulously curated into themed playlists covering health, mindset, business, relationships, and more:
      • Legendary Mindset: Mindset & Self-Improvement
      • Money Mindset: Business & Finance
      • Relationship Theory: Relationships
      • Health Theory: Mental & Physical Health
      • Power Ups: Weekly Doses of Short Motivational Quotes

    Subscribe on Apple Podcasts: https://apple.co/3PCvJaz
    Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more): https://impacttheorynetwork.supercast.com/

    Learn more about your ad choices. Visit megaphone.fm/adchoices