
    Heartbeat DeepFake Detection, Robot Drug Tests, Ethics as a Service

    September 10, 2020
    What did researchers discover about detecting deepfakes?
    How is Google addressing AI ethics through new services?
    What technology does IBM integrate for drug synthesis?
    Why are ethical considerations important in AI development?
    What impact does IBM's RoboRXN have on drug discovery?

    • Detecting deepfakes through heartbeats and addressing ethical concerns in AI: Researchers discovered a method to detect deepfakes with high accuracy using heartbeats, while Google plans to offer AI ethics services to help companies navigate ethical issues. Ethical considerations and potential harms are crucial as AI technology advances.

      While AI technology continues to advance, it's important to address the ethical implications and potential risks. Last week, researchers found a way to detect deepfakes with 97% accuracy by analyzing heartbeats through photoplethysmography (PPG) signals extracted from a person's face. However, language models like GPT-3 can mimic human language but lack understanding, raising concerns about trustworthiness. Google plans to launch new AI ethics services to help companies navigate ethical issues, learning from its own past ethical dilemmas. Meanwhile, TikTok's attempted sale may face complications due to new restrictions on AI technology exports from China. Overall, it's crucial to prioritize ethical considerations and mitigate potential harms as AI technology continues to evolve.
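      The intuition behind the heartbeat technique can be sketched in a few lines: blood flow produces a faint periodic brightness change in facial skin, so averaging skin pixels per frame and checking whether the dominant frequency of that signal lands in the human heart-rate band gives a crude liveness cue. A minimal illustrative sketch, not the researchers' actual pipeline (the synthetic trace, band limits, and 30 fps rate are all assumptions):

```python
import math

def dominant_frequency(samples, fps):
    """Brute-force DFT scan: return the strongest frequency (Hz) in the
    42-240 bpm heart-rate band of a 1-D signal sampled at `fps` frames/sec."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # drop the DC component
    best_freq, best_mag = 0.0, 0.0
    freq = 0.7                              # 42 bpm
    while freq <= 4.0:                      # 240 bpm
        re = sum(c * math.cos(2 * math.pi * freq * t / fps) for t, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * freq * t / fps) for t, c in enumerate(centered))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_freq, best_mag = freq, mag
        freq += 0.05
    return best_freq

def looks_like_real_pulse(green_means, fps=30.0):
    """A genuine face video should carry a periodic brightness component
    whose dominant frequency sits in the human heart-rate band."""
    return 0.7 <= dominant_frequency(green_means, fps) <= 4.0

# Synthetic stand-in for a per-frame average of facial skin pixels:
# a faint 1.2 Hz (72 bpm) pulse riding on a constant baseline.
fps = 30.0
trace = [0.5 + 0.01 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(300)]
print(round(dominant_frequency(trace, fps), 2))  # 1.2
```

      Real detectors compare the spatial and temporal consistency of many such signals across face regions; a single-band check like this is only the starting intuition.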

    • US-China tech dispute and new rules limiting tech exports: Scientists discovered a new way to detect deep fake videos using heartbeat detection, offering a promising solution in the ongoing cat-and-mouse game between creators and detectors.

      The US-China tech dispute has escalated with new rules limiting the export of certain technologies, including those used by TikTok, which could lead to increased pressure on both sides. Meanwhile, in the world of AI research, scientists have found a new way to detect deep fake videos using heartbeat detection. This innovative approach, which involves interpreting residuals of biological signals, shows that sometimes, using prior knowledge and intuition can lead to effective neural net techniques. The cat-and-mouse game between deep fake creators and detectors continues, but this research offers a promising solution. It's a reminder that the field of AI is constantly evolving, and staying informed about the latest developments is crucial. Stay tuned for more in-depth discussions on these topics and others.

    • Advancements in deepfake detection and identifying source models: Researchers are making progress in detecting deepfakes and identifying the models responsible. Knowledge graphs, which extract and organize internet facts, are gaining importance in AI for enhancing reliability and consistency.

      Researchers have made strides in not only detecting deepfakes but also identifying the source models responsible for generating them. This is significant as it allows for more accurate identification and mitigation of deepfakes. Additionally, the use of knowledge graphs, which involve extracting and organizing facts from the internet, is becoming increasingly important in the field of AI. Companies like Diffbot are building AI systems that continuously crawl the web and create knowledge graphs, providing true and useful information for various purposes. Google is also investing in this technology, aiming to provide knowledge graphs for every query. While knowledge graphs may not be a new concept, they are gaining renewed attention due to their potential to enhance the reliability and consistency of AI systems. Google is even offering to help other companies navigate the ethical considerations of AI through a new service. These developments highlight the ongoing advancements and evolving priorities in the field of AI.
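      To make "extracting and organizing facts" concrete: a knowledge graph is, at its simplest, a store of (subject, predicate, object) triples with lookups over them. A toy sketch (the entities and relations below are invented for illustration, not Diffbot's or Google's actual schema):

```python
from collections import defaultdict

class KnowledgeGraph:
    """A minimal triple store: facts as (subject, predicate, object)."""

    def __init__(self):
        self.by_subject = defaultdict(set)  # subject -> {(predicate, object)}

    def add(self, subject, predicate, obj):
        self.by_subject[subject].add((predicate, obj))

    def query(self, subject, predicate):
        """All objects linked to `subject` by `predicate` (one-hop lookup)."""
        return {o for p, o in self.by_subject[subject] if p == predicate}

kg = KnowledgeGraph()
kg.add("Diffbot", "type", "company")
kg.add("Diffbot", "builds", "knowledge graphs")
kg.add("Google", "offers", "AI ethics services")

print(kg.query("Diffbot", "builds"))  # {'knowledge graphs'}
```

      Production systems add entity resolution, provenance, and multi-hop queries on top of this core, but the triple is still the basic unit of stored fact.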

    • Google's Ethics as a Service and IBM's AI-driven Chemical Synthesis: Google enters the market with Ethics as a Service, aiming to help organizations navigate ethical considerations in AI implementation. IBM demonstrates a complex system for chemical and drug synthesis, combining robotics, AI, and cloud computing to supercharge the process.

      Google is entering the market with an Ethics as a Service (EaaS) offering, aiming to help organizations navigate ethical considerations in AI implementation. This move comes after Google itself faced controversies and changed AI products in response. While the specifics of the service are vague, it may include classes and consulting. The idea of outsourcing ethics raises concerns, but Google's expertise in AI ethics and its potential impact on smaller organizations and governments are valuable. The service could be particularly useful for startups lacking the resources to consider ethical implications thoroughly. The article does not mention whether the service will be free. Meanwhile, in a separate development, IBM demonstrated a complex system for chemical and drug synthesis, combining robotics, AI, and cloud computing. This system is expected to supercharge the process, reducing time and costs significantly. The integration of these technologies is a significant step towards industrial-scale, automated chemical and drug synthesis. Both developments represent significant strides in their respective fields, with Google focusing on ethical considerations and IBM on efficiency and automation in chemical and drug synthesis. The intersection of technology and ethics is becoming increasingly important as AI becomes more prevalent across industries.

    • IBM's AI and robotics in drug discovery: IBM's AI and robotics integration in drug discovery aims to cut discovery and verification time in half, potentially revolutionizing the field during the pandemic.

      IBM is utilizing AI and robotics to revolutionize the field of drug discovery and chemical research. The AI component is used to predict chemical reactions and suggest potential experiments, while the robotics component automates the setup and running of these experiments. This integration aims to cut typical drug discovery and verification time in half. IBM's efforts in this area, specifically with their RoboRXN system, demonstrate how robotics can accelerate science and pave the way for future advancements. Despite IBM's past struggles with commercialized AI services, the potential impact of this technology on drug discovery is significant, especially during the ongoing COVID-19 pandemic. Other companies, like Daphne Koller's startup, are also exploring similar AI-driven drug discovery approaches. Overall, the combination of AI and robotics in this context represents a more embodied AI, going beyond typical software applications, and is a symbol of what may come in various sectors over the next few decades.

    • Experts Discuss the Role of AI in Crisis Management: AI and automation are vital tools in managing crises, but we must be cautious of unintended consequences and ensure responsible development and implementation.

      The experts discussed on this week's episode of Skynet Today's Let's Talk AI Podcast agree that AI and automation have become essential tools in managing and responding to crises, including pandemics. However, they also warn that we must be cautious and prepared for potential unintended consequences, such as job displacement and ethical dilemmas. They emphasized the importance of responsible AI development and implementation, as well as the need for ongoing education and collaboration between various stakeholders. In the end, they concluded that while AI can help us navigate current challenges, it's crucial to approach it with a long-term perspective and a commitment to ethical and sustainable use. To stay informed about the latest developments in AI and related topics, be sure to check out the articles we discussed on today's episode and subscribe to our weekly newsletter at skynettoday.com. Don't forget to subscribe to our podcast and leave us a review if you enjoyed the show. Tune in next week for more insights and discussions on AI and its impact on our world.


    Recent Episodes from Last Week in AI

    #182 - Alexa 2.0, MiniMax, Sutskever raises $1B, SB 1047 approved

    Our 182nd episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov and Jeremie Harris.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Sponsors:

    - Agent.ai is the global marketplace and network for AI builders and fans. Hire AI agents to run routine tasks, discover new insights, and drive better results. Don't just keep up with the competition—outsmart them. And leave the boring stuff to the robots 🤖

    - Pioneers of AI is your trusted guide to this emerging technology. Host Rana el Kaliouby (RAH-nuh el Kahl-yoo-bee) is an AI scientist, entrepreneur, author and investor exploring all the opportunities and questions AI brings into our lives. Listen to Pioneers of AI, with new episodes every Wednesday, wherever you tune in.

    In this episode:

    - OpenAI's move into hardware production and Amazon's strategic acquisition in AI robotics.
    - Advances in training language models with long-context capabilities and California's pending AI regulation bill.
    - Strategies for safeguarding open weight LLMs against adversarial attacks and China's rise in chip manufacturing.
    - Sam Altman's infrastructure investment plan and debates on AI-generated art by Ted Chiang.

    Timestamps + Links:

    • (00:00:00) Intro / Banter
    • (00:05:15) Response to listener comments / corrections
    Last Week in AI
    September 17, 2024

    #181 - Google Chatbots, Cerebras vs Nvidia, AI Doom, ElevenLabs Controversy


    Our 181st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov and Jeremie Harris

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.
    - Microsoft's Inflection usage caps for Pi, and new AI inference services by Cerebras Systems competing with Nvidia.
    - Biases in AI, prompt leak attacks, transparency in models, and distributed training optimizations, including the 'DisTrO' optimizer.
    - AI regulation discussions including California's SB 1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

    Timestamps + Links:

    Last Week in AI
    September 15, 2024

    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047


    Our 180th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    • Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
    • Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results. 
    • Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
    • Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

    Timestamps + Links:

    Last Week in AI
    September 03, 2024

    #179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist


    Our 179th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    - Grok 2's beta release features new image generation using Black Forest Labs' tech.

    - Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

    - Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

    - Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

    Timestamps + Links:

    Last Week in AI
    August 20, 2024

    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk


    Our 178th episode with a summary and discussion of last week's big AI news!

    NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.
    - OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.
    - Rapid advancements in humanoid robotics exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.
    - Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

    Timestamps + Links:

    Last Week in AI
    August 16, 2024

    #177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2


    Our 177th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    In this episode, hosts Andrey Kurenkov and Jon Krohn dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless cars rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.ai CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 11, 2024

    #176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2

    Our 176th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

     

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 03, 2024

    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts


    Our 175th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements including OpenAI's release of GPT-4o Mini and Mistral's open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models like Haiper 1.5, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI's Prover-Verifier Games initiative.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 25, 2024

    #174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues


    Our 174th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for xAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.

    See full episode notes here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 17, 2024

    #173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court


    Our 173rd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    See full episode notes here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.  

    Timestamps + links:

    Last Week in AI
    July 07, 2024

    Related Episodes

    167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson


    In this highly anticipated sequel to our 1st AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. 

    Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.

    ------
    ✨ DEBRIEF | Unpacking the episode: 
    https://www.bankless.com/debrief-robin-hanson  
     
    ------
    ✨ COLLECTIBLES | Collect this episode: 
    https://collectibles.bankless.com/mint 

    ------
    ✨ NEW BANKLESS PRODUCT | Token Hub
    https://bankless.cc/TokenHubRSS  

    ------
    In this episode, we explore:

    - Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
    - The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
    - Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
    - A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
    - Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

    Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

    ------
    BANKLESS SPONSOR TOOLS: 

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum 

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://bankless.cc/kraken 

    🦄UNISWAP | ON-CHAIN MARKETPLACE
    https://bankless.cc/uniswap 

    👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
    https://bankless.cc/phantom-waitlist 

    🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE
    https://bankless.cc/MetaMask 

    ------
    Topics Covered

    0:00 Intro
    8:42 How Robin is Weird
    10:00 Are We All Going to Die?
    13:50 Eliezer’s Assumption 
    25:00 Intelligence, Humans, & Evolution 
    27:31 Eliezer Counter Point 
    32:00 Acceleration of Change 
    33:18 Comparing & Contrasting Eliezer’s Argument
    35:45 A New Life Form
    44:24 AI Improving Itself
    47:04 Self Interested Acting Agent 
    49:56 Human Displacement? 
    55:56 Many AIs 
    1:00:18 Humans vs. Robots 
    1:04:14 Pause or Continue AI Innovation?
    1:10:52 Quiet Civilization 
    1:14:28 Grabby Aliens 
    1:19:55 Are Humans Grabby?
    1:27:29 Grabby Aliens Explained 
    1:36:16 Cancer 
    1:40:00 Robin’s Thoughts on Crypto 
    1:42:20 Closing & Disclaimers 

    ------
    Resources:

    Robin Hanson
    https://twitter.com/robinhanson 

    Eliezer Yudkowsky on Bankless
    https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky 

    What is the AI FOOM debate?
    https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate 

    Age of Em book - Robin Hanson
    https://ageofem.com/ 

    Grabby Aliens
    https://grabbyaliens.com/ 

    Kurzgesagt video
    https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s 

    -----
    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures 

    Ep. 3 - Artificial Intelligence: Opening Thoughts on the Most Important Trend of our Era


    Artificial Intelligence has already changed the way we all live our lives. Recent technological advancements have accelerated the use of AI by ordinary people to answer fairly ordinary questions. It is becoming clear that AI will fundamentally change many aspects of our society and create huge opportunities and risks. In this episode, Brian J. Matos shares his preliminary thoughts on AI in the context of how it may impact global trends and geopolitical issues. He poses foundational questions about how we should think about the very essence of AI and offers his view on the most practical implications of living in an era of advanced machine thought processing. From medical testing to teaching to military applications and international diplomacy, AI will likely speed up discoveries while forcing us to quickly determine how its use is governed in the best interest of the global community.

    Join the conversation and share your views on AI. E-mail: info@brianjmatos.com or find Brian on your favorite social media platform. 

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2
    We continue part two of a really important conversation with the incredible Konstantin Kisin, challenging the status quo and asking the bold questions that need answers if we're going to navigate these times well. As we delve into this, we'll also explore why we might need a new set of rules – not just to survive, but to seize opportunities and safely navigate the dangers of our rapidly evolving world. Konstantin Kisin brings to light some profound insights. He delivers simple statements packed with layers of meaning that we're going to unravel during our discussion:

    - The stark difference between masculinity and power
    - Defining Alpha and Beta males
    - Becoming resilient means being unf*ckable with

    Buckle up for the conclusion of this episode filled with thought-provoking insights and hard-hitting truths about what it takes to get through hard days and rough times.

    Follow Konstantin Kisin:
    Website: http://konstantinkisin.com/
    Twitter: https://twitter.com/KonstantinKisin
    Podcast: https://www.triggerpod.co.uk/
    Instagram: https://www.instagram.com/konstantinkisin/

    SPONSORS: Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://bit.ly/AG1Impact. Right now, Kajabi is offering a 30-day free trial to start your own business if you go to https://bit.ly/Kajabi-Impact. Head to www.insidetracker.com and use code "IMPACTTHEORY" to get 20% off! Learn a new language and get 55% off at https://bit.ly/BabbelImpact. Try NordVPN risk-free with a 30-day money-back guarantee by going to https://bit.ly/NordVPNImpact. Give online therapy a try at https://bit.ly/BetterhelpImpact and get on your way to being your best self. Go to https://bit.ly/PlungeImpact and use code IMPACT to get $150 off your incredible cold plunge tub today.

    ***Are You Ready for EXTRA Impact?*** If you're ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you. Want to transform your health, sharpen your mindset, improve your relationship, or conquer the business world? This is your epicenter of greatness. This is not for the faint of heart. This is for those who dare to learn obsessively, every day, day after day.

    - New episodes delivered ad-free
    - Unlock the gates to a treasure trove of wisdom from inspiring guests like Andrew Huberman, Mel Robbins, Hal Elrod, Matthew McConaughey, and many, many, more
    - Exclusive access to Tom's AMAs, keynote speeches, and suggestions from his personal reading list
    - You'll also get access to 5 additional podcasts with hundreds of archived Impact Theory episodes, meticulously curated into themed playlists covering health, mindset, business, relationships, and more:

    *Legendary Mindset: Mindset & Self-Improvement
    *Money Mindset: Business & Finance
    *Relationship Theory: Relationships
    *Health Theory: Mental & Physical Health
    *Power Ups: Weekly Doses of Short Motivational Quotes

    Subscribe on Apple Podcasts: https://apple.co/3PCvJaz
    Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more): https://impacttheorynetwork.supercast.com/

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    BONUS Episode “Scary Smart” Artificial Intelligence with Mo Gawdat


    You might have noticed over the last few episodes that I've been keen to discuss subjects slightly leftfield of nutrition and what I've traditionally talked about, but fascinating nonetheless. And I hope you as a listener, whose time and attention I value so greatly, will trust me as I take you on a bit of a ride. Because ultimately, I hope you agree that the topics I share are always very important.


    Mo Gawdat, who you may remember from episode #91, Solving Happiness, is a person whom I cherish and with whom I had a very impactful conversation on a personal level. He was the former Chief Business Officer of Google [X], which is Google’s ‘moonshot factory’, author of the international bestselling book ‘Solve for Happy’ and founder of ‘One Billion Happy’. After a long career in tech, Mo made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world on “Slo Mo: A Podcast with Mo Gawdat”.


    Mo is an exquisite writer and speaker with deep expertise of technology as well as a passionate appreciation for the importance of human connection and happiness. He possesses a set of overlapping skills and a breadth of knowledge in the fields of both human psychology and tech which is a rarity. His latest piece of work, a book called “Scary Smart” is a timely prophecy and call to action that puts each of us at the center of designing the future of humanity. I know that sounds intense right? But it’s very true.


    During his time at Google [X], he worked on the world’s most futuristic technologies, including Artificial Intelligence. During the pod he recalls a story of when the penny dropped for him, just a few years ago, and he felt compelled to leave his job. And now, having contributed to AI's development, he feels a sense of duty to inform the public on the implications of this controversial technology and how we navigate the scary and inevitable intrusion of AI as well as who really is in control. Us.


    Today we discuss:

    The pandemic of AI, and why the handling of COVID is a lesson to learn from

    The difference between collective intelligence, artificial intelligence, and superintelligence or artificial general intelligence (AGI)

    How machines started creating and coding other machines 

    The 3 inevitable outcomes, including the fact that AI is here and will outsmart us

    Machines will become emotional sentient beings with a Superconsciousness 


    To understand this episode you have to submit yourself to accepting that what we are creating is essentially another lifeform. Albeit non-biological, it will have human-like attributes in the way it learns as well as a moral value system which could immeasurably improve the human race as we know it. But our destiny lies in how we treat and nurture them as our own. Literally like infants with (as strange as it is to say it) love, compassion, connection and respect.


    Full show notes for this and all other episodes can be found on The Doctor's Kitchen.com website



    Hosted on Acast. See acast.com/privacy for more information.



    © 2024 Podcastworld. All rights reserved


    For any inquiries, please email us at hello@podcastworld.io