
    ethical ai development

    Explore "ethical ai development" with insightful episodes like "Lessons from 'The Time Machine': AI Through the Looking Glass," "EP 148: Safer AI - Why we all need ethical AI tools we can trust," "Megathreat: Why AI Is So Dangerous & How It Could Destroy Humanity | Mo Gawdat Pt 2," and "TIME Asks: Is AI the End of Humanity?" from podcasts like "A Beginner's Guide to AI," "Everyday AI Podcast – An AI and ChatGPT Podcast," "Impact Theory with Tom Bilyeu," and "The AI Breakdown: Daily Artificial Intelligence News and Discussions" — and more!

    Episodes (4)

    Lessons from 'The Time Machine': AI Through the Looking Glass

    In this thought-provoking episode of "A Beginner's Guide to AI," Professor Gep-Hardt takes listeners on a captivating journey through the lens of H.G. Wells' classic science fiction novel, 'The Time Machine.' We explore the profound parallels between Wells' visionary tale and the contemporary world of AI, delving into themes of technology-driven class division, the loss of human agency, and the ethical implications of AI development.


    Discover how the novel's depiction of a future split into the Eloi and Morlocks offers a cautionary tale about the potential impact of AI on society. We discuss the risks of over-reliance on AI, the dangers of a power imbalance, and the importance of ethical considerations in AI advancement.


    Join us as we analyze a real-world case study on workforce automation, reflecting on how AI is reshaping the nature of work and the challenges of managing this transition. We also invite AI enthusiasts to contribute to the conversation, encouraging a deeper understanding of AI's complexities.


    This episode is not just an exploration of AI concepts; it's an invitation to ponder the future of technology and humanity. Tune in for an enlightening discussion that bridges the gap between fiction and reality, urging us to contemplate the patterns we create in the ever-evolving landscape of AI.


    This podcast was generated with the help of ChatGPT and Claude 2. We fact-check with human eyes, but there may still be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads.

    EP 148: Safer AI - Why we all need ethical AI tools we can trust

    Do you trust the AI tools that you use? Are they ethical and safe? The safety behind AI is easy to overlook, and it's something we should all pay attention to. Mark Surman, President of the Mozilla Foundation, joins us to discuss how we can trust and use ethical AI.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Mark Surman and Jordan questions about AI safety
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    [00:01:05] Daily AI news
    [00:03:15] About Mark and Mozilla Foundation
    [00:06:20] Big Tech and ethical AI
    [00:09:20] Is AI unsafe?
    [00:11:05] Responsible AI regulation
    [00:16:33] Creating balanced government regulation
    [00:20:25] Is AI too accessible?
    [00:23:00] Resources for AI best practices
    [00:25:30] AI concerns to be aware of
    [00:30:00] Mark's final takeaway

    Topics Covered in This Episode:
    1. Future of AI regulation
    2. Balancing interests of humanity and government
    3. How to make and use AI responsibly
    4. Concerns with AI

    Keywords:
    AI space, risks, guardrails, AI development, misinformation, national elections, deep fake voices, fake content, sophisticated AI tools, generative AI systems, regulatory challenges, government accountability, expertise, company incentives, Meta's responsible AI team, ethical considerations, faster development, friction, balance, innovation, governments, regulations, public interest, technology, government involvement, society, progress, politically motivated, Jordan Wilson, Mozilla, show notes, Mark Surman, societal concerns, individual concerns, authenticity, shared content, data, generative AI, control, interests, transparency, open source AI, regulation, accuracy, trustworthiness, hallucinations, discrimination, reports, software, OpenAI, CEO, rumors, high-ranking employees, Microsoft, discussions, Facebook, Germany, France, Italy, agreement, future AI regulation, humanity, safety, profit-making interests

    Megathreat: Why AI Is So Dangerous & How It Could Destroy Humanity | Mo Gawdat Pt 2

    Mo Gawdat is the former chief business officer for Google X and has built a monumental career in the tech industry, working with the biggest names to reshape and reimagine the world as we know it. From IBM to Microsoft, Mo has lived at the cutting edge of technology and has taken a strong stance that AI is a bigger threat to humanity than global warming. In this second-part episode, we're looking at the ethical dilemma of AI, the alarming truth about how vulnerable we actually are, and what AI's drive to survive means for humanity. And that's just the tip of the iceberg in this conversation with Mo. You are sure to walk away with a better view of the impact AI will have on human life, purpose, and connection. How can we best balance our desire for progress and convenience with the importance of embracing the messiness and imperfections of the human experience?

    Follow Mo Gawdat:
    Website: https://www.mogawdat.com/
    YouTube: https://www.youtube.com/channel/UCilMYYyoot7vhLn4Tinzmmg
    Twitter: https://twitter.com/mgawdat
    Instagram: https://www.instagram.com/mo_gawdat/

    SPONSORS:
    Get 5 free AG1 Travel Packs and a FREE 1-year supply of Vitamin D with your first purchase at https://bit.ly/AG1Impact.
    Get $300 into your brokerage account when you invest $5k within your first 90 days by going to https://bit.ly/FacetImpact.
    Head to www.insidetracker.com and use code "IMPACTTHEORY" to get 20% off!
    Sign up for a one-dollar-per-month trial period at https://bit.ly/ShopifyImpact.

    Are You Ready for EXTRA Impact? If you're ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you. Want to transform your health, sharpen your mindset, improve your relationships, or conquer the business world? This is your epicenter of greatness. This is not for the faint of heart. This is for those who dare to learn obsessively, every day, day after day.

    Subscription Benefits:
    Unlock the gates to a treasure trove of wisdom from inspiring guests like Andrew Huberman, Mel Robbins, Hal Elrod, Matthew McConaughey, and many, many more
    New episodes delivered ad-free
    Exclusive access to Tom's AMAs, keynote speeches, and suggestions from his personal reading list
    Access to 5 additional podcasts with hundreds of archived Impact Theory episodes, meticulously curated into themed playlists covering health, mindset, business, relationships, and more:
    Legendary Mindset: Mindset & Self-Improvement
    Money Mindset: Business & Finance
    Relationship Theory: Relationships
    Health Theory: Mental & Physical Health
    Power Ups: Weekly Doses of Short Motivational Quotes

    Subscribe on Apple Podcasts: https://apple.co/3PCvJaz
    Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more): https://impacttheorynetwork.supercast.com/

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    TIME Asks: Is AI the End of Humanity?

    TIME Magazine's current cover story is titled "The End of Humanity: How Real Is The Risk?" It's a marker of how much the AI risk and safety conversation has gone mainstream. On this episode, NLW reads two pieces:
    AI Is Not an Arms Race - Katja Grace
    The Darwinian Argument for Worrying About AI - Dan Hendrycks

    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/