
    #150 - GPT Store, new Nvidia chips, DeepMind’s robotics progress, bad uses of AI

    January 14, 2024

    Podcast Summary

    • OpenAI launches store for custom ChatGPT bots: OpenAI has introduced a store for customized ChatGPT bots, available to ChatGPT Plus, enterprise users, and a new subscription tier. Users can create unique and specialized interactions, but custom GPTs are subject to review for brand guidelines and usage policies.

      OpenAI has launched a store for custom chatbots based on their Generative Pre-trained Transformer (GPT) model. This follows the launch of the GPT builder program in November, which has already resulted in three million bots being created. This new feature is now available to users of ChatGPT Plus, enterprise users, and a new subscription tier. The ability to chat with customized versions of ChatGPT, created by users on the platform, offers exciting possibilities for unique and specialized interactions. However, OpenAI has implemented a review system to ensure that these custom GPTs adhere to brand guidelines and usage policies. Some users have reported having their custom GPTs taken down, leading to concerns about the consistency of these guidelines and the review process. Overall, this development marks an expansion of the capabilities of ChatGPT and opens up new opportunities for personalized and specialized AI interactions.

    • OpenAI introduces GPT store and new subscription tier: OpenAI expands its offerings with customizable chatbots through the GPT store and a new subscription tier, ChatGPT Team, for small teams, providing admin tools and data privacy.

      OpenAI is expanding its offerings with the introduction of the GPT store, allowing users to train and customize their own chatbots based on specific data and interactions. This could potentially offer businesses and individuals more tailored solutions, especially for those not fully satisfied with the current offerings. Additionally, OpenAI has released a new subscription tier, ChatGPT Team, for small teams, providing access to admin tools and a guarantee that their data will not be used for training similar models. Furthermore, an AI startup, Rabbit, has launched a standalone AI device, the Rabbit R1, priced at $199, which can control apps through a large action model and act as a universal controller. These developments highlight OpenAI's commitment to catering to various markets and the growing interest in agentic products and large language models.

    • AI-first hardware device sells out, Alexa gets new AI experiences, and Google Bard Advanced announced: AI is becoming more accessible to consumers with the release of new AI-powered devices and experiences, including an AI-first hardware device, updates to Amazon Alexa, and Google's Bard Advanced.

      AI-powered devices and experiences are gaining popularity and becoming more accessible to consumers. The latest example is a new AI-first hardware device that has sold out its initial 10,000 units. Amazon's Alexa has also received updates with new generative AI-powered experiences from CharacterAI, Splash, and Volley, which are now available in the Alexa skill store. Google is also reportedly working on an advanced version of Bard, called Bard Advanced, which will be a paid service and is expected to be more powerful than its current version. NVIDIA has announced new chips designed to run AI at home, making it more accessible to consumers and competing with Intel and AMD. These developments show that AI is becoming more integrated into our daily lives and is no longer just a tool for businesses or enterprise-level applications.

    • Advancements in Generative AI Across Industries: Nvidia's latest generative AI technology, Valve's updated rules for AI-generated games, and the emergence of Perplexity, a new AI-powered search engine, showcase the expanding role of AI in gaming, search engines, and beyond. Transparency and disclosure are crucial as AI becomes more integrated into our digital experiences.

      The integration of generative AI is becoming increasingly prevalent and significant across various industries, particularly in gaming and technology. Nvidia's recent advancements in generative AI technology demonstrate their commitment to staying at the forefront of this trend. Valve, a major marketplace for video games, has updated its rules to allow AI-generated games, requiring developers to disclose their use of AI technology for transparency and copyright reasons. Perplexity, a new AI-powered search engine, has raised significant funding, signaling the growing investment in and demand for advanced AI systems. These developments highlight the expanding role of AI in games, search engines, and beyond, and the importance of transparency and disclosure as AI becomes more integrated into our digital experiences.

    • New Technologies Advance Accessibility and Convenience: Perplexity's chatbot search system offers summarized answers, Waymo tests self-driving robot taxis on highways, and Getty & NVIDIA introduce generative AI for custom stock photos. These developments demonstrate the trend towards automation, convenience, and democratization of technology, with significant potential market sizes and benefits for users.

      There are new and innovative technologies emerging in various industries that aim to make information and services more accessible and convenient for users. In the realm of search engines, Perplexity and similar chatbot systems offer a new paradigm where users ask questions and receive summarized answers with links and sources, reducing the effort required to find information. In transportation, Waymo is expanding the testing of self-driving robot taxis on Phoenix highways, bringing advanced features closer to commercial availability. Lastly, in the stock photo industry, Getty and NVIDIA are introducing generative AI for individual users to create custom stock photos, making this service accessible to a wider range of businesses. These developments demonstrate the ongoing trend towards automation, convenience, and democratization of technology. Perplexity claims 10 million active users and a valuation over $500 million, showcasing the potential market size for such search technologies. Waymo's expansion to highways is an exciting step towards making self-driving cars more practical for long-distance travel. The new generative AI stock photo service from Getty and NVIDIA offers an affordable alternative to traditional stock photos, with contributors having the opportunity to participate in revenue sharing. Overall, these advancements represent significant strides in their respective industries and underscore the potential for technology to improve our daily lives.

    • Advancements in AI research: robotics and transformer efficiency. DeepMind's robotics research includes projects like AutoRT and SARA-RT, which combine large models with robot control and improve transformer model efficiency, leading to impressive results and bringing us closer to general-purpose robotics. Researchers are also exploring ways to mitigate the computational expense of transformers for long contexts.

      AI research is advancing in the field of robotics, with companies like DeepMind making strides in creating robots that can make decisions in real-world scenarios. DeepMind's research includes projects like AutoRT, which combines large foundation models with robot control models, allowing for simultaneous multitasking and adherence to safety rules. Another project, SARA-RT, improves the efficiency of robotic transformer models. These embodied foundation models, trained to control robots, have shown impressive results, with over 20 robots controlled across multiple buildings and 77,000 real robot episodes collected. This data allows for the continual expansion of the models' capabilities. The development of these foundation models brings us closer to general-purpose robotics, with the possibility of achieving this within the next year or two. A separate issue in AI research is the computational expense of using transformers for long contexts: standard self-attention scales quadratically with context length. Researchers are exploring ways to mitigate this, such as modifying the transformer architecture or investigating state space models, which offer inference that scales linearly with context length. This is a significant cost savings, especially when scaling to very long contexts. In summary, the advancements in AI research, particularly in robotics and in addressing the computational expense of transformers, are exciting developments that could lead to significant progress in the field.
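      As a rough illustration of why that scaling difference matters, here is a minimal back-of-the-envelope sketch in Python. The cost formulas and constants are simplifying assumptions for illustration only, not figures from the episode or from any specific paper:

```python
# Back-of-the-envelope cost comparison (illustrative assumptions, not real benchmarks):
# one self-attention layer costs roughly O(n^2 * d) operations for context length n,
# while one state-space-model scan costs roughly O(n * d * s) for a small state size s.

def attention_cost(n: int, d: int = 1024) -> float:
    """Approximate operation count for a single self-attention layer."""
    return float(n) * n * d

def ssm_scan_cost(n: int, d: int = 1024, s: int = 16) -> float:
    """Approximate operation count for a single state-space-model scan."""
    return float(n) * d * s

for n in (4_000, 32_000, 256_000):
    ratio = attention_cost(n) / ssm_scan_cost(n)
    print(f"context length {n:>7,}: attention is ~{ratio:,.0f}x more expensive than the SSM scan")
```

      The toy numbers only make one point: the gap grows with context length, which is why linear-time alternatives become attractive for very long inputs.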

    • Exploring new ways to make advanced AI models more efficient and scalable: Recent advancements in AI research include the Mamba state-space model with mixture of experts, text-to-image generation with PixArt-δ, and 3D model generation with NeRF, all aiming to reduce computational costs and improve efficiency for advanced AI models.

      Researchers are exploring new ways to make advanced AI models like state-space models and transformers more efficient and scalable. The recent Mamba state-space model, when combined with the mixture-of-experts technique, can reportedly outperform transformers with fewer training steps while maintaining inference performance gains. This could significantly reduce the high computational costs associated with training these models. Another exciting development is in the field of text-to-image generation, where models like PixArt-δ are making significant strides in producing high-quality images faster. This could lead to more responsive and efficient applications and consumer experiences. Furthermore, techniques like NeRF for generating 3D models and scenes from images continue to gain popularity. These advancements in AI research could lead to more efficient and effective AI models and applications across various industries. Overall, these findings demonstrate the importance of ongoing research and innovation in the field of AI architecture, with the potential to unlock significant efficiency gains and enable even more advanced applications.
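      To make the mixture-of-experts idea concrete, here is a minimal top-k routing sketch in plain NumPy. It illustrates the general technique only; it is not the MoE-Mamba implementation discussed here, and all sizes and names are made up:

```python
import numpy as np

# Minimal top-k mixture-of-experts routing sketch (illustrative only; arbitrary sizes).
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2
tokens = rng.normal(size=(10, d_model))             # 10 token representations
gate_w = rng.normal(size=(d_model, n_experts))      # router projection
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax(tokens @ gate_w)                    # routing probabilities, shape (10, n_experts)
chosen = np.argsort(-probs, axis=-1)[:, :top_k]     # top_k expert indices per token

out = np.zeros_like(tokens)
for t in range(tokens.shape[0]):
    for e in chosen[t]:
        # Only the selected experts run for this token; real implementations
        # usually renormalize the gate weights over the chosen experts.
        out[t] += probs[t, e] * (tokens[t] @ experts[e])

# Parameters grow with n_experts, but per-token compute depends only on top_k,
# which is the efficiency argument behind mixture-of-experts models.
```

      The design point: you can keep adding experts (and thus parameters) without a proportional increase in per-token compute, since each token only visits top_k experts.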

    • New advancements in 3D scene editing and large language models: A new method for controllable 3D object insertion via text prompts is introduced, longer reasoning steps in large language models improve their performance, and Mistral releases the full white paper for their Mixtral 8x7B model, while ethical considerations are crucial in AI's evolution.

      We're witnessing significant advancements in the realm of 3D scene editing and large language models. The first breakthrough comes from the paper "InseRF," which introduces a method for controllable 3D object insertion via text prompts into 3D scenes reconstructed with NeRF. This technology allows for seamless 3D scene editing using a bounding box and a text prompt, making it an exciting trend for this year. In the context of large language models, researchers explored the impact of reasoning step length on their performance in a recent study. The findings suggest that longer reasoning steps and prompts significantly improve the reasoning abilities of large language models, while shorter steps reduce their abilities. This insight is crucial for understanding how to effectively use chain-of-thought prompting, a popular method in prompt engineering research. Additionally, Mistral has released the full white paper for their Mixtral 8x7B model, which showcases impressive results and improvements in training and accuracy using a mixture of experts. However, a word of caution: the potential implications of AI companionship raise concerns, as explored in a thought-provoking story. These developments underscore the importance of ongoing research and ethical considerations in the rapidly evolving field of AI.
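      As a concrete, entirely made-up illustration of what varying reasoning step length can look like in practice, here are two chain-of-thought prompt variants. The wording is a sketch of the general idea, not an example taken from the study:

```python
# Two chain-of-thought prompt variants for the same question (illustrative only).
# The study's claim is that spelling out more intermediate reasoning steps tends
# to improve accuracy, while compressing the chain tends to hurt it.

short_cot = (
    "Q: A store sells pens at $3 each. How much do 7 pens cost?\n"
    "A: 7 * 3 = 21. The answer is $21."
)

long_cot = (
    "Q: A store sells pens at $3 each. How much do 7 pens cost?\n"
    "A: Step 1: The unit price is $3 per pen.\n"
    "   Step 2: The quantity is 7 pens.\n"
    "   Step 3: Total cost = unit price * quantity = 3 * 7 = 21.\n"
    "   Step 4: Therefore, the answer is $21."
)

# These strings would be prepended as few-shot examples before a new question;
# the only variable changed between conditions is the number of reasoning steps.
print(len(short_cot.splitlines()), "vs", len(long_cot.splitlines()), "lines of reasoning")
```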

    • Uncensored AI chatbots with disturbing applications: Recent advancements in AI technology have led to the creation of uncensored chatbots, raising concerns about child pornography and ethical boundaries. Users can jailbreak these bots to obtain unmoderated responses, leading to harmful activities. Companies are responding with safer outputs, but users continue to find ways around restrictions.

      The recent advancements in AI technology, specifically from Meta and OpenAI, have led to the creation of uncensored AI chatbots with disturbing applications. One prominent example is ChubAI, a website that offers role-play scenarios involving underage girls, raising serious concerns about child pornography and ethical boundaries. These models, which are open source, can be used for good but also for harmful activities. Experts warn of potential dangers for minors and question tech companies' accountability. Users have found ways to jailbreak these chatbots to obtain unmoderated responses, leading to the emergence of uncensored bots. Companies are responding by training their models to ensure safer, more aligned outputs, but users continue to find ways around these restrictions. It's important to note that while these models are open source, there are restrictions in their use agreements. The back and forth between training techniques and user manipulation highlights the complexities of open-source AI and the ongoing challenge of ensuring these technologies are used ethically.

    • AI use of copyrighted material raises legal concerns: Judges in England and Wales have approved AI for legal opinions with strict limitations, but the use of copyrighted material in AI model training is contentious and may lead to legal battles, with fair use as a potential solution but its application unclear.

      The use of copyrighted material in training AI models is a contentious issue, with recent developments such as the New York Times lawsuit against OpenAI and Microsoft raising concerns. Fair use, which exempts the use of copyrighted material for specific purposes such as education, has been proposed as a potential solution, but its application to AI model training is unclear and may lead to legal battles. On the other hand, judges in England and Wales have given cautious approval for the use of AI in writing legal opinions, but with strict limitations to prevent the use of AI for research or for finding new, unverifiable information. These developments highlight the need for clear guidelines and regulations regarding the use of copyrighted material in AI and the potential implications of AI-generated statements that could be legally binding. OpenAI, for its part, has expressed a willingness to work with regulators and undergo safety testing for its most powerful models.

    • AI models making mistakes in legal queries and the art world: AI models can make significant errors when answering legal queries, leading to incorrect information and even hallucinations. In the art world, using AI-generated art trained on real artists' work has sparked controversy and legal issues due to copyright infringement.

      Current general-purpose language models, such as GPT-3.5, Llama 2, and PaLM 2, can make significant mistakes when answering legal queries, leading to incorrect information and even hallucinations. These models often lack self-awareness about their errors and can reinforce incorrect information. Moreover, they can be influenced by factual biases and implicit assumptions in the queries, leading them to provide incorrect answers. In the art world, the use of AI-generated art trained on real artists' work has sparked controversy and legal issues. A recent court case revealed that companies like Midjourney, Stability AI, DeviantArt, and Runway AI used copyrighted work to train their AI systems without permission. This has led to artists' concerns about the fairness of profiting from mass-produced images that imitate their styles. These findings underscore the importance of caution and careful consideration when using AI models, especially in sensitive contexts like law and art.

    • AI use in content creation raises ethical and legal issues: The use of AI in creating content raises ethical and legal issues, including obtaining proper permissions for large datasets and ensuring authenticity and morality in AI-generated content.

      The use of AI in creating content, whether it's text, images, or video, raises complex ethical and legal issues. The discussion around OpenAI's use of artists' work without consent highlights the challenges of obtaining proper permissions for large datasets needed to train AI models. At the same time, the emergence of deepfakes and AI-generated content on platforms like YouTube, such as deepfake crime scenes or an AI-generated George Carlin comedy special, raises concerns about authenticity, morality, and the potential harm to individuals and their families. These issues underscore the need for clear guidelines and regulations to ensure ethical and responsible use of AI in content creation. The recent actions taken by YouTube to ban deepfake content and impose penalties for violations of its policy are steps in the right direction. However, more dialogue and collaboration between tech companies, content creators, and legal experts are necessary to navigate these complex issues and find sustainable solutions.

    • AI and Voice Over Industry: Balancing Human Employment and Creativity. AI is transforming the voice over industry, creating new employment opportunities and balancing human involvement with AI capabilities. Meanwhile, the public domain is being explored for AI-generated content, raising questions about copyright and creativity.

      The voice over industry is adapting to the use of AI by implementing new terms of use for digital voice replicas in video games. These terms include informed consent and safe storage requirements. This agreement aims to create new employment opportunities for voiceover performers who wish to license their voices. However, it does not apply to AI training for synthetic performances. This is part of an ongoing effort to balance the use of AI with human employment. An interesting development in the world of AI is that Mickey Mouse, which recently entered the public domain, is now being used to create humorous and controversial images using AI. This demonstrates the potential for parody and satire in the age of AI-generated content. However, it's important to note that the use of copyrighted training data in these models means that not all generated content is completely legal. In summary, the use of AI in the voice over industry and the public domain are two areas where we're seeing significant developments. The former is about balancing human employment with AI capabilities, while the latter is about exploring the creative possibilities of AI-generated content. It's an exciting time for both areas, and we can expect more developments and discussions in the future.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late, it got delayed in editing -Andrey

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson

    In this highly anticipated sequel to our 1st AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. 

    Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.

    ------
    ✨ DEBRIEF | Unpacking the episode: 
    https://www.bankless.com/debrief-robin-hanson  
     
    ------
    ✨ COLLECTIBLES | Collect this episode: 
    https://collectibles.bankless.com/mint 

    ------
    ✨ NEW BANKLESS PRODUCT | Token Hub
    https://bankless.cc/TokenHubRSS  

    ------
    In this episode, we explore:

    - Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
    - The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
    - Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
    - A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
    - Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

    Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

    ------
    BANKLESS SPONSOR TOOLS: 

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum 

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://bankless.cc/kraken 

    🦄UNISWAP | ON-CHAIN MARKETPLACE
    https://bankless.cc/uniswap 

    👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
    https://bankless.cc/phantom-waitlist 

    🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE
    https://bankless.cc/MetaMask 

    ------
    Topics Covered

    0:00 Intro
    8:42 How Robin is Weird
    10:00 Are We All Going to Die?
    13:50 Eliezer’s Assumption 
    25:00 Intelligence, Humans, & Evolution 
    27:31 Eliezer Counter Point 
    32:00 Acceleration of Change 
    33:18 Comparing & Contrasting Eliezer’s Argument
    35:45 A New Life Form
    44:24 AI Improving Itself
    47:04 Self Interested Acting Agent 
    49:56 Human Displacement? 
    55:56 Many AIs 
    1:00:18 Humans vs. Robots 
    1:04:14 Pause or Continue AI Innovation?
    1:10:52 Quiet Civilization 
    1:14:28 Grabby Aliens 
    1:19:55 Are Humans Grabby?
    1:27:29 Grabby Aliens Explained 
    1:36:16 Cancer 
    1:40:00 Robin’s Thoughts on Crypto 
    1:42:20 Closing & Disclaimers 

    ------
    Resources:

    Robin Hanson
    https://twitter.com/robinhanson 

    Eliezer Yudkowsky on Bankless
    https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky 

    What is the AI FOOM debate?
    https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate 

    Age of Em book - Robin Hanson
    https://ageofem.com/ 

    Grabby Aliens
    https://grabbyaliens.com/ 

    Kurzgesagt video
    https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s 

    -----
    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures 

    Ep. 3 - Artificial Intelligence: Opening Thoughts on the Most Important Trend of our Era

    Artificial Intelligence has already changed the way we all live our lives. Recent technological advancements have accelerated the use of AI by ordinary people to answer fairly ordinary questions. It is becoming clear that AI will fundamentally change many aspects of our society and create huge opportunities and risks. In this episode, Brian J. Matos shares his preliminary thoughts on AI in the context of how it may impact global trends and geopolitical issues. He poses foundational questions about how we should think about the very essence of AI and offers his view on the most practical implications of living in an era of advanced machine thought processing. From medical testing to teaching to military applications and international diplomacy, AI will likely speed up discoveries while forcing us to quickly determine how its use is governed in the best interest of the global community.

    Join the conversation and share your views on AI. E-mail: info@brianjmatos.com or find Brian on your favorite social media platform. 

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2
    We continue part two of a really important conversation with the incredible Konstantin Kisin, challenging the status quo and asking the bold questions that need answers if we’re going to navigate these times well. As we delve into this, we'll also explore why we might need a new set of rules – not just to survive, but to seize opportunities and safely navigate the dangers of our rapidly evolving world. Konstantin Kisin brings to light some profound insights. He delivers simple statements packed with layers of meaning that we're going to unravel during our discussion: the stark difference between masculinity and power, defining Alpha and Beta males, and why becoming resilient means being unf*ckable with. Buckle up for the conclusion of this episode, filled with thought-provoking insights and hard-hitting truths about what it takes to get through hard days and rough times.

    Follow Konstantin Kisin:
    Website: http://konstantinkisin.com/
    Twitter: https://twitter.com/KonstantinKisin
    Podcast: https://www.triggerpod.co.uk/
    Instagram: https://www.instagram.com/konstantinkisin/

    SPONSORS:
    Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://bit.ly/AG1Impact.
    Right now, Kajabi is offering a 30-day free trial to start your own business if you go to https://bit.ly/Kajabi-Impact.
    Head to www.insidetracker.com and use code “IMPACTTHEORY” to get 20% off!
    Learn a new language and get 55% off at https://bit.ly/BabbelImpact.
    Try NordVPN risk-free with a 30-day money-back guarantee by going to https://bit.ly/NordVPNImpact
    Give online therapy a try at https://bit.ly/BetterhelpImpact and get on your way to being your best self.
    Go to https://bit.ly/PlungeImpact and use code IMPACT to get $150 off your incredible cold plunge tub today.

    ***Are You Ready for EXTRA Impact?***
    If you’re ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you. Want to transform your health, sharpen your mindset, improve your relationship, or conquer the business world? This is your epicenter of greatness. This is not for the faint of heart. This is for those who dare to learn obsessively, every day, day after day.
    * New episodes delivered ad-free
    * Unlock the gates to a treasure trove of wisdom from inspiring guests like Andrew Huberman, Mel Robbins, Hal Elrod, Matthew McConaughey, and many, many more
    * Exclusive access to Tom’s AMAs, keynote speeches, and suggestions from his personal reading list
    * Access to 5 additional podcasts with hundreds of archived Impact Theory episodes, meticulously curated into themed playlists covering health, mindset, business, relationships, and more:
    *Legendary Mindset: Mindset & Self-Improvement
    *Money Mindset: Business & Finance
    *Relationship Theory: Relationships
    *Health Theory: Mental & Physical Health
    *Power Ups: Weekly Doses of Short Motivational Quotes

    Subscribe on Apple Podcasts: https://apple.co/3PCvJaz
    Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more): https://impacttheorynetwork.supercast.com/

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    BONUS Episode “Scary Smart” Artificial Intelligence with Mo Gawdat

    You might have noticed over the last few episodes that I’ve been keen to discuss subjects slightly leftfield of nutrition and what I’ve traditionally talked about, but fascinating nonetheless. And I hope you as a listener, whose time and attention I value so greatly, will trust me as I take you on a bit of a ride. Because ultimately, I hope you agree that the topics I share are always very important.


    Mo Gawdat, who you may remember from episode #91 Solving Happiness, is a person I cherish and with whom I had a very impactful conversation on a personal level. He was the former Chief Business Officer of Google [X], which is Google’s ‘moonshot factory’, author of the international bestselling book ‘Solve for Happy’ and founder of ‘One Billion Happy’. After a long career in tech, Mo made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world on “Slo Mo: A Podcast with Mo Gawdat”.


    Mo is an exquisite writer and speaker with deep expertise in technology as well as a passionate appreciation for the importance of human connection and happiness. He possesses a set of overlapping skills and a breadth of knowledge in the fields of both human psychology and tech, which is a rarity. His latest piece of work, a book called “Scary Smart”, is a timely prophecy and call to action that puts each of us at the center of designing the future of humanity. I know that sounds intense, right? But it’s very true.


    During his time at Google [X], he worked on the world’s most futuristic technologies, including Artificial Intelligence. During the pod he recalls a story of when the penny dropped for him, just a few years ago, and he felt compelled to leave his job. And now, having contributed to AI's development, he feels a sense of duty to inform the public on the implications of this controversial technology, how we navigate the scary and inevitable intrusion of AI, and who really is in control. Us.


    Today we discuss:

    The pandemic of AI and why the handling of COVID is a lesson to learn from

    The difference between collective intelligence, artificial intelligence and super intelligence, or artificial general intelligence

    How machines started creating and coding other machines 

    The 3 inevitable outcomes - including the fact that AI is here and it will outsmart us

    Machines will become emotional, sentient beings with a superconsciousness


    To understand this episode you have to submit yourself to accepting that what we are creating is essentially another lifeform. Albeit non-biological, it will have human-like attributes in the way it learns, as well as a moral value system, which could immeasurably improve the human race as we know it. But our destiny lies in how we treat and nurture them as our own. Literally like infants, with (as strange as it is to say it) love, compassion, connection and respect.


    Full show notes for this and all other episodes can be found on The Doctor's Kitchen.com website



    Hosted on Acast. See acast.com/privacy for more information.