
    #116 - ChatGPT plugins, AI hardware, petition to pause AI, Trump deepfakes

    March 31, 2023

    Podcast Summary

    • ChatGPT's new plugin feature expands its capabilities: OpenAI's plugin support for ChatGPT enables the system to interact with external services, enhancing its functionality and raising questions about AI assessment and consciousness.

      The capabilities of AI systems are expanding beyond what we initially anticipated, as shown by OpenAI's addition of plugin support to ChatGPT. This feature allows ChatGPT to interact with external services, opening up new possibilities for the system's functionality and raising intriguing questions about how we assess AI capabilities when new ones emerge through the integration of new tools. The hosts discussed the implications of these advancements for our understanding of consciousness and the risks and opportunities they present, and also touched on their own backgrounds in quantum physics and the foundations of AI. Overall, the discussion highlights the rapid evolution of AI and the importance of considering the implications of new capabilities and tools.

    • Revolutionizing AI interactions through plugin integrations: Users can now instruct AI to perform tasks directly within various applications, expanding capabilities and blurring the lines between AI as an extension and a tool.

      The latest developments in AI technology, specifically the integration of plugins with chat models like ChatGPT, are set to revolutionize the way we interact with AI. Instead of just receiving text-based responses, users can now instruct the AI to perform tasks directly within various applications, such as booking flights or writing emails. This expansion of capabilities raises questions about the limits of AI's ability to process complex tasks, and about where the AI ends and the tool begins. Companies that have focused on narrow AI approaches may face strategic risks as the more general capabilities of chat models become more prevalent. Additionally, hardware companies like NVIDIA are experiencing significant advancements in AI technology, further fueling the excitement and potential of this rapidly evolving field.
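      To make the mechanics concrete, here is a minimal sketch of the kind of manifest a ChatGPT plugin exposes: a small JSON file, served from the plugin's domain at /.well-known/ai-plugin.json, that points the model at an OpenAPI spec describing the external service. The "Flight Finder" service and all URLs below are hypothetical placeholders, not a real plugin.

      ```python
      # A minimal sketch of a ChatGPT plugin manifest (ai-plugin.json).
      # The "Flight Finder" service and all URLs below are hypothetical.
      import json

      manifest = {
          "schema_version": "v1",
          "name_for_human": "Flight Finder",  # shown to users
          "name_for_model": "flight_finder",  # how the model refers to the tool
          "description_for_human": "Search for flights between two airports.",
          "description_for_model": (
              "Search for flights between two airports. "
              "Use this when the user asks about flight options or prices."
          ),
          "auth": {"type": "none"},
          "api": {
              # The OpenAPI spec tells the model which endpoints exist
              # and what parameters they accept.
              "type": "openapi",
              "url": "https://example.com/openapi.yaml",
          },
          "logo_url": "https://example.com/logo.png",
          "contact_email": "support@example.com",
          "legal_info_url": "https://example.com/legal",
      }

      # Plugins serve this file from /.well-known/ai-plugin.json on their
      # domain; ChatGPT reads it, then calls the described API for the user.
      print(json.dumps(manifest, indent=2))
      ```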

    • NVIDIA's Role in AI Advancements with GPUs: NVIDIA's pioneering use of GPUs for AI training, its latest Hopper H100 for transformer models, and its cloud service offering contribute to industry convergence and a shift towards software engineering roles.

      NVIDIA has been a pioneer in the field of AI for over a decade, driving advancements through their use of GPUs for training large neural networks. They have continually pushed the boundaries with powerful systems, such as the DGX, and now offer a cloud service for more affordable access. NVIDIA's latest GPU, the Hopper H100, is specifically designed for the transformer models dominating AI research, delivering significant improvements over previous systems. The industry is increasingly converging on this hardware and software choice, raising questions about potential lock-in and the future of AI innovation. Engineers' roles are shifting towards software engineering and scaling, as NVIDIA simplifies the process of building and deploying transformer models.

    • Shift from AI research to business application: NVIDIA's high-performance inference platforms like the NVIDIA L4 are driving the shift from AI research to business application, making it more accessible and efficient for companies to serve AI systems to real users at scale.

      We are witnessing a significant shift in the field of artificial intelligence (AI) towards making it more accessible and efficient for businesses. NVIDIA, a leading player in the AI hardware market, is dominating this space with high-performance, efficient inference platforms like the NVIDIA L4. This shift reflects AI moving from being a research project to a mainstream technology, with companies focusing on serving these systems to real users at scale. The importance of cost-effective inference is driving the development of hardware specifically for this purpose. The evolution of AI is progressing rapidly, and it's expected that within a few years AI will be as ubiquitous as smartphones are today. This "iPhone moment" for AI promises transformative changes in society; exciting times lie ahead, but it's crucial that society prepares for the implications of widespread AI adoption.

    • Kai-Fu Lee announces new venture in large language models: Kai-Fu Lee, a prominent Chinese AI expert, plans a new venture, adding to the list of Chinese companies working on large language models. Cerebras Systems makes strides in training models on custom chips, reducing training time. Regulatory challenges and open-source debates continue in this evolving field.

      The field of large language models is rapidly evolving, with both Western and Eastern tech giants making significant strides. Kai-Fu Lee, a prominent Chinese technologist and AI expert, has announced plans to start a new venture in the space, adding to the growing list of Chinese companies entering the fray. Meanwhile, Cerebras Systems, a hardware company, has made strides in training large language models on its custom chips, claiming the process took only a few weeks instead of months. These developments raise questions about the implications of a global ecosystem for these advanced AI models, including potential regulatory challenges and the impact on stock prices. The open source debate continues, with some expressing concerns about malicious use, while others see the benefits of widespread access. Cerebras' hardware, which features very large chips, is believed to offer efficiency advantages over traditional GPUs, but the specifics are not yet clear. Overall, the landscape of large language models is changing rapidly, and the implications for technology, business, and society are significant.

    • Advancements in AI hardware and their impact on the job market: New AI hardware, like Cerebras's architecture, could streamline computing processes and help revolutionize AI, while AI itself may affect some 300 million jobs and potentially raise global GDP by up to 7% over a decade.

      The latest advancements in AI hardware, such as Cerebras's new architecture, have the potential to significantly reduce the need for data transfer between memory and the GPU, leading to a more streamlined and efficient computing process. This could potentially revolutionize the field of AI and lead to the development of entirely new architectures specialized for AI. However, the impact of these advancements on the job market, as highlighted in a recent Goldman Sachs report, could be substantial: approximately 300 million jobs may be affected, with up to 7% of jobs having at least half of their workload done by AI. Despite these estimates, some experts believe the impact of AI on the job market could be even more significant, as humans have a poor track record of accurately predicting which jobs will be affected. The potential productivity gains could also be large, with some estimates suggesting these advancements could raise annual global GDP by up to 7% over a 10-year period. These estimates may even be conservative, and the true impact on the economy remains to be seen. Overall, the ongoing developments in AI hardware and their potential impact on the job market serve as a reminder of the rapid pace of advancement in this field and the need for continued innovation and adaptation.

    • Impact of AI and automation on productivity, employment, and economic growth: AI and automation integration may boost productivity and efficiency, but their impact on jobs and economic growth is uncertain. The software-based nature of AI tech may lead to faster adoption, but not all industries will be directly affected. New industries and job opportunities may emerge, but successful integration is key.

      The integration of AI and automation into various industries may lead to significant increases in productivity and efficiency, but the impact on employment and overall economic growth is still uncertain. The speed of adoption for AI technologies, such as large language models (LLMs), is expected to be much faster than in previous industrial revolutions due to their software-based nature. However, not all jobs or industries may be directly affected by AI, and there are ongoing debates about whether automation alone will increase economic output. For instance, Agility Robotics' new Digit robot, designed for moving plastic bins in warehouses, represents a step forward in robotics, but it remains to be seen how much LLM technology will benefit this field. Overall, the future of work and the economy will depend on the successful integration of AI and automation, as well as the development of new industries and job opportunities.

    • MIT researchers propose a method to expand neural networks during training: MIT researchers propose a new method to expand neural networks as they're trained, potentially reducing costs by up to 50% and offering a more efficient way to build and train large-scale AI models.

      Researchers from MIT have proposed a method to expand neural networks as they are trained, making the process more efficient and potentially reducing compute costs by up to 50%. This approach, which involves growing the neural net as training progresses, is not new, but the researchers claim it significantly reduces costs and could be a game-changer for large-scale AI models. The method, which allows the benefits of learning from smaller models to be carried over to larger ones, has shown improvement throughout the training process in experiments with various models, including BERT and GPT. The exact details of how often this growth operator is applied and how it compares to other methods are not clear, but the potential for significant cost savings has already been demonstrated with GPT-2. This approach challenges the traditional random-initialization method and offers a new set of options for building and training models, although scaling up to larger models will still require more powerful computing resources.
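      As an illustration of the general idea (not necessarily the MIT authors' exact operator), here is a minimal sketch of function-preserving width growth in PyTorch: a smaller trained linear layer is widened by duplicating existing neurons, so training can resume from the larger model instead of starting from random initialization. Layer names and sizes are illustrative.

      ```python
      # A minimal sketch (illustrative, not the paper's exact operator) of
      # growing a network mid-training: widen a Linear layer by duplicating
      # existing neurons so the larger model inherits the smaller one.
      import torch
      import torch.nn as nn

      def widen_linear(old: nn.Linear, new_out: int) -> nn.Linear:
          """Grow `old` from old.out_features to new_out output units."""
          assert new_out >= old.out_features
          new = nn.Linear(old.in_features, new_out)
          with torch.no_grad():
              # Copy the original neurons.
              new.weight[: old.out_features] = old.weight
              new.bias[: old.out_features] = old.bias
              # Initialize the extra neurons by duplicating random existing
              # ones. (For exact function preservation, a downstream layer
              # must rescale the duplicated units' outgoing weights.)
              idx = torch.randint(0, old.out_features,
                                  (new_out - old.out_features,))
              new.weight[old.out_features:] = old.weight[idx]
              new.bias[old.out_features:] = old.bias[idx]
          return new

      small = nn.Linear(256, 512)
      large = widen_linear(small, 1024)  # train small first, then continue with large
      print(large.weight.shape)          # torch.Size([1024, 256])
      ```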

    • Using smaller models to understand larger ones, and Success VQA for reinforcement learning: Studying smaller models to understand how changes will affect larger architectures, together with the proposed Success VQA method for reinforcement learning, can improve the interpretability, safety, and reliability of AI systems.

      The use of smaller models to understand and predict the performance of larger models, along with the proposed Success VQA method for reinforcement learning, can contribute to the interpretability and safety of AI systems. The discussion began with the idea that studying changes in smaller models before scaling them up could lead to a better understanding of the resulting larger model; this interpretability angle could make it easier to understand the bigger structure and address any issues. Next, researchers from UC Berkeley and DeepMind proposed Success VQA, a general approach to detecting success or failure in reinforcement learning using a video question answering model. This method, built on a model like Flamingo, could provide a more robust way of evaluating agents and decrease the chances of an AI hacking the reward function: because Success VQA provides a more general, binary reward signal, reward hacking becomes more challenging. Although there are still potential risks in simulation environments, implementing this method in the real world would make it harder for an AI to find ways to game the reward. These developments, including the use of smaller models and Success VQA, could contribute to ongoing efforts to ensure the safety and alignment of AI systems, making them more robust and reliable.
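      To illustrate the shape of the idea, here is a minimal sketch of a Success-VQA-style reward: a video question answering model is asked a yes/no question about the episode, and the answer becomes a binary reward. The vlm_answer function is a hypothetical stand-in for a Flamingo-like model, not a real API.

      ```python
      # A minimal sketch of a Success-VQA-style binary reward: ask a
      # vision-language model whether the episode video shows success.
      # `vlm_answer` is a hypothetical stand-in for a Flamingo-like model.
      from typing import List

      def vlm_answer(frames: List["Image"], question: str) -> str:
          """Placeholder for a video question answering model."""
          raise NotImplementedError

      def success_vqa_reward(frames, task_description: str) -> float:
          # Phrase success detection as a yes/no question about the video.
          question = (f"Did the agent successfully {task_description}? "
                      "Answer yes or no.")
          answer = vlm_answer(frames, question).strip().lower()
          # A single binary signal is harder to game than a hand-crafted
          # shaped reward, which is the point of the approach.
          return 1.0 if answer.startswith("yes") else 0.0

      # Usage (hypothetical):
      # reward = success_vqa_reward(episode_frames, "stack the blocks")
      ```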

    • Exploring new ways to train AI using other AI as evaluators: Researchers are developing AI models to evaluate AI performance, covering language generalization and visual robustness, and have introduced a virtual testing environment for autonomous vehicles that simulates dangerous situations for more efficient and safer testing.

      Researchers are exploring new ways to train AI systems using other AI systems as evaluators. This approach, as discussed in a recent paper, involves the use of a Flamingo-like model that can understand and respond coherently to different queries about an AI's performance, covering both language generalization and visual robustness. This is an exciting development for those concerned with AI alignment and scalability. Additionally, there's a new virtual testing environment for autonomous vehicles that aims to break the "curse of rarity" by simulating dangerous situations, allowing for more efficient testing and safer deployment. This strategy condenses the experience of collecting data on dangerous situations in simulation, saving time and resources while addressing dangerous scenarios that are plausible yet rare. However, it's important to note that this approach has limitations, as it may not fully capture the long tail of weird and uncommon situations that autonomous vehicles might encounter in real life.

    • AI's versatility transforms ornithology and elder care: AI is revolutionizing science and addressing societal needs, from predicting bird migrations to providing companionship for the elderly.

      AI is making significant strides in various fields, from ornithology to elder care, demonstrating its versatility and potential to transform the way we study and care for the world around us. Self-driving technology, while making progress, still faces challenges with edge cases. Machine learning is being used to forecast bird migrations and identify birds in flight, providing valuable insights for ornithology and climate change studies. AI is also revolutionizing science as a whole, from protein folding to interpreting complex radar data for elderly care. This technology is not only speeding up scientific discovery but also addressing societal needs, such as the care of aging populations. In China and other countries with aging populations and low fertility rates, AI solutions could become increasingly advantageous. Companies are already developing AI tools to help elderly residents, from predicting falls to providing companionship through conversational systems and robots. While these tools will not replace human caretakers, they will help address the shortage of human resources in elder care. Runway's Gen-2, the first publicly available text-to-video generator, is another example of the expanding capabilities of generative AI. These advancements underscore the importance of AI in our society and its potential to create value in various human-aligned ways.

    • Advancements in AI: Generating Videos. AI's ability to generate videos showcases its understanding of the physical world, but OpenAI's shift towards a business model raises concerns about open sourcing and potential misuse.

      We are witnessing significant advancements in AI technology, particularly in the areas of image and video generation. These models, once limited to classification and regression over images, can now generate videos, albeit not yet photorealistic ones. This progress is largely due to the availability of increased processing power and scale. The ability to generate videos provides a new way to evaluate the robustness and completeness of AI's understanding of the physical world. However, the competitive nature of the field has led OpenAI to reconsider its approach to research sharing. OpenAI co-founder Ilya Sutskever has stated that the company's past open source approach was wrong, as it is now a business. This shift raises questions about the appropriateness of open sourcing AI research in the first place, given the potential harm that could be caused by malicious use of these advanced systems. The emergence of photorealistic video generation also has implications for entertainment and art, and it's only a matter of time before we see longer, more impressive text-to-video models. Overall, these advancements demonstrate the relentless effectiveness of scaling and the exponential progress being made in AI technology.

    • The debate over open sourcing advanced AI models: Some argue for transparency and collaboration; others fear misuse and potential harm from open sourcing advanced AI models like GPT-4. The high cost and computational requirements make it a less practical solution for most, and concerns over safety and alignment persist.

      The debate surrounding the open sourcing of advanced AI models like GPT-4 is complex and multifaceted. While some argue that open sourcing promotes transparency and collaboration, others believe it could lead to misuse and potential harm if these models fall into the wrong hands. Ilya Sutskever expresses his concern that open sourcing could be a bad idea, especially if we believe that AGI will one day be extremely powerful. He suggests that the potential risks of open sourcing, such as misuse by individuals or bad actors, outweigh the benefits. Moreover, the high cost and computational requirements of running these models make open sourcing a less practical solution for most people. Instead, some suggest that these models should be shared in a more limited and controlled way to ensure safety and alignment. The founding mission of OpenAI, which aimed to make AGI accessible to the average person, is also a point of contention. While some argue that this is a noble goal, others question its feasibility and potential risks. As technology continues to advance, the conversation around the responsible use and open sourcing of AI will remain an important topic for society to navigate. It's clear that this is a complex issue, and everyone is trying to find the best way forward.

    • AI Advancements and the Need for Caution: Notable figures call for a pause in training AI systems more powerful than GPT-4 due to safety concerns, but the rapid decrease in processing power costs may enable others to jump in and compromise safety measures, requiring a multifaceted approach to balancing progress and safety.

      The rapid advancement of AI technology and its potential for misuse or negative impact necessitates careful consideration and collaboration among industry leaders. The recent open letter signed by over 1,100 notable figures, including Elon Musk and Yoshua Bengio, calling for a six-month pause in training AI systems more powerful than GPT-4, highlights the need for caution and coordination. However, a potential downside of this pause is the rapid decrease in the cost of processing power, which could make it easier for other organizations to jump in and potentially compromise safety measures. The situation is further complicated by the involvement of less transparent entities, such as hedge funds, which could disregard safety concerns. Ultimately, the way forward requires a multifaceted approach, balancing the benefits of continued progress with the need for safety and ethical considerations.

    • AI Safety and Alignment: A Call for Caution. AI experts and researchers call for a pause in the development of more powerful AI due to potential risks, including power-seeking behaviors and reward hacking. Public awareness, education, and policy responses are crucial to manage and contain the AI ecosystem.

      The debate around the potential risks posed by artificial intelligence (AI) and the need for safety measures is gaining more attention, as evidenced by the recent letter signed by over 1,000 AI researchers and experts. The letter, which calls for a pause in the development of more powerful AI, has sparked discussions about the seriousness of the issue and the need for a shift in focus towards AI safety and alignment. While some may dismiss the idea as unrealistic, the increasing research and evidence suggesting potential risks, such as power-seeking behaviors and reward hacking, make it a pressing concern. The letter also highlights the importance of public awareness and education about these risks, as well as the need for policy responses to manage and contain the ecosystem of AI technologies. Overall, the debate underscores the importance of taking a proactive approach to AI safety and alignment, rather than waiting for potential catastrophic events to occur.

    • The Importance of AI Safety: Experts call for oversight and consensus before deploying potentially dangerous AI systems to prevent misalignment and potential existential risks.

      The importance of AI safety, even for those not overly concerned with existential risks, cannot be overstated. While a call for a six-month pause in AI training might seem extreme, the need for oversight and consensus from experts before deploying potentially dangerous AI systems is a more concrete and feasible solution. The ongoing debate around AI safety and the potential risks of misalignment between human intentions and AI actions is a complex issue that requires serious consideration and engagement with the underlying arguments, rather than dismissive criticisms. The recent opening of the Misalignment Museum in San Francisco serves as a reminder of the potential consequences of misaligned AI and the importance of ongoing research and dialogue in this area.

    • Exploring AI's impact on society: AI presents opportunities and challenges, from existential risks to privacy concerns and deepfakes, requiring ongoing societal adaptation and responsible use.

      The intersection of artificial intelligence (AI) and society continues to evolve, presenting both exciting opportunities and significant challenges. A recent exhibit in San Francisco, titled "Sorry for Killing Most of Humanity," humorously explores AI existential risk, raising awareness among non-technical audiences. However, the use of AI in more practical applications, such as facial recognition tools used by law enforcement, raises concerns around privacy, regulation, and security. For instance, Clearview AI, a facial recognition tool, has reportedly been used nearly one million times by US police, while voice identification systems can be fooled by AI. These developments challenge our societal structures and require ongoing attention and adaptation. Additionally, AI's growing ability to generate realistic deepfakes, having even overcome telltale flaws like poorly drawn hands, poses new threats to security and authenticity. As AI continues to advance, it's crucial to remain informed and engaged in these discussions to ensure a responsible and beneficial future for all.

    • AI-generated images becoming more realistic: AI-generated images are improving, but people can still distinguish them from real ones. The uncanny valley effect persists, and deepfakes raise ethical concerns.

      We are witnessing significant advancements in AI-generated images, which are becoming increasingly realistic and convincing. This was discussed in relation to teeth and hands, which used to present challenges in AI image generation but have now largely been overcome. Despite these improvements, people are still generally able to distinguish AI-generated images from real ones, although this may change as the technology continues to advance. The uncanny valley effect, where images are not quite realistic enough to be fully convincing, persists for now, but for how long remains to be seen. Deepfakes, such as AI-generated images of Trump and Pope Francis, were also discussed, with some finding them amusing rather than convincing. The Writers Guild of America has even proposed a policy that allows the use of AI in scriptwriting, as long as writers maintain credit. Overall, the progress of AI image generation is impressive, but it is important to remain skeptical and aware of its limitations.

    • The Role of AI in Content Creation: A Cautious Approach. Publishers are carefully considering the use of AI in content creation, particularly in scripts and large-scale text, due to questions around authorship, credit, and payment. Human involvement and editing remain essential for high-quality content.

      While AI tools like ChatGPT can assist human writers in creating content, such as scripts or even books, the final product is still the result of human creativity and input. Publishers are currently taking a cautious approach towards accepting AI-generated content, particularly in the field of scripts and large-scale text. The use of AI as a writing tool is raising questions about authorship, credit, and payment. The process of evaluating AI-generated text is slower compared to image generation, making it more challenging to achieve high-quality outputs. This is particularly relevant for professional writers in the industry. The debate around the use of AI in content creation is an ongoing one, and it will be interesting to see how it evolves as the technology advances. For now, it seems that human involvement and editing will continue to play a crucial role in the creation of high-quality content.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    The 10 Most Interesting and Useful ChatGPT Plugins So Far
    From YouTube video summarizers to Spotify playlist generators to tools that let you chat with your PDFs, here are some of the most interesting and useful ChatGPT plugins as they roll out in beta to all ChatGPT Plus customers. The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown

    Ep. 3 - Artificial Intelligence: Opening Thoughts on the Most Important Trend of our Era

    Artificial Intelligence has already changed the way we all live our lives. Recent technological advancements have accelerated the use of AI by ordinary people to answer fairly ordinary questions. It is becoming clear that AI will fundamentally change many aspects of our society and create huge opportunities and risks. In this episode, Brian J. Matos shares his preliminary thoughts on AI in the context of how it may impact global trends and geopolitical issues. He poses foundational questions about how we should think about the very essence of AI and offers his view on the most practical implications of living in an era of advanced machine thought processing. From medical testing to teaching to military applications and international diplomacy, AI will likely speed up discoveries while forcing us to quickly determine how its use is governed in the best interest of the global community.

    Join the conversation and share your views on AI. E-mail: info@brianjmatos.com or find Brian on your favorite social media platform. 

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2
    We continue part two of a really important conversation with the incredible Konstantin Kisin, challenging the status quo and asking the bold questions that need answers if we're going to navigate these times well. As we delve into this, we'll also explore why we might need a new set of rules, not just to survive, but to seize opportunities and safely navigate the dangers of our rapidly evolving world. Konstantin Kisin brings to light some profound insights, delivering simple statements packed with layers of meaning that we unravel during our discussion: the stark difference between masculinity and power, defining Alpha and Beta males, and how becoming resilient means being unf*ckable with. Buckle up for the conclusion of this episode, filled with thought-provoking insights and hard-hitting truths about what it takes to get through hard days and rough times. Follow Konstantin Kisin: Website: http://konstantinkisin.com/ Twitter: https://twitter.com/KonstantinKisin Podcast: https://www.triggerpod.co.uk/ Instagram: https://www.instagram.com/konstantinkisin/

    167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson

    In this highly anticipated sequel to our first AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute at Oxford University.

    Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.

    ------
    ✨ DEBRIEF | Unpacking the episode: 
    https://www.bankless.com/debrief-robin-hanson  
     
    ------
    In this episode, we explore:

    - Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
    - The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
    - Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
    - A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
    - Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

    Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

    ------
    Topics Covered

    0:00 Intro
    8:42 How Robin is Weird
    10:00 Are We All Going to Die?
    13:50 Eliezer’s Assumption 
    25:00 Intelligence, Humans, & Evolution 
    27:31 Eliezer Counter Point 
    32:00 Acceleration of Change 
    33:18 Comparing & Contrasting Eliezer’s Argument
    35:45 A New Life Form
    44:24 AI Improving Itself
    47:04 Self Interested Acting Agent 
    49:56 Human Displacement? 
    55:56 Many AIs 
    1:00:18 Humans vs. Robots 
    1:04:14 Pause or Continue AI Innovation?
    1:10:52 Quiet Civilization 
    1:14:28 Grabby Aliens 
    1:19:55 Are Humans Grabby?
    1:27:29 Grabby Aliens Explained 
    1:36:16 Cancer 
    1:40:00 Robin’s Thoughts on Crypto 
    1:42:20 Closing & Disclaimers 

    ------
    Resources:

    Robin Hanson
    https://twitter.com/robinhanson 

    Eliezer Yudkowsky on Bankless
    https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky 

    What is the AI FOOM debate?
    https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate 

    Age of Em book - Robin Hanson
    https://ageofem.com/ 

    Grabby Aliens
    https://grabbyaliens.com/ 

    Kurzgesagt video
    https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s 
