    Podcast Summary

    • Discussion around AI dilemma gaining momentum: Positive response to AI risks discussion leading to increased action from politicians and industry leaders, despite common myths hindering progress

      Despite the serious risks and concerns surrounding the rapid deployment of AI, there is reason for hope. The discussion around the AI dilemma, as presented by Tristan and Aza, has reached a large audience and sparked important conversations. The response has been overwhelmingly positive, even on platforms like YouTube known for negative comments. This engagement has led to increased action from politicians and industry leaders, including requests for comments and regulations from senators and the White House convening a meeting with top AI executives. However, progress is being hindered by five common myths, which will be debunked in the following discussion. Despite the challenges, it's clear that the conversation around AI risks is gaining momentum and leading to real action.

    • Balancing AI benefits and societal risks: While AI brings numerous benefits, it's crucial to consider societal risks and potential harms to ensure a net positive impact on humanity. A balanced and thoughtful approach is necessary to prevent unintended consequences and harms.

      While AI has the potential to bring about numerous benefits, it's crucial to consider the societal context in which these benefits are realized. Merely focusing on the positives and trying to mitigate the negatives does not guarantee a net positive impact on humanity. The risks and downsides of AI, such as cyber attacks, impersonation, and the ability for individuals to do dangerous things, can significantly impact the effectiveness of these benefits. Moreover, the potential for AI to hack language and disrupt democracy adds another layer of complexity. Rapid deployment of AI into society without proper safeguards and feedback mechanisms may not be the best approach, as it could lead to unintended consequences and harms. Instead, a balanced and thoughtful approach is necessary to ensure that AI's benefits are realized in a functional and safe society.

    • Considering the societal impacts of advanced language models: As language models like OpenAI's become more integrated into society, it's important to consider their potential long-term effects on people and relationships, and to exercise caution to mitigate risks and potential economic dependency.

      While companies like OpenAI are testing and releasing their large language models for public use, there are significant concerns about the long-term societal impacts that cannot be immediately tested. The focus on safety is primarily on filtering out naughty or harmful responses in the present moment, but the potential for transformative effects on people and relationships as these models become integrated into society is a major concern. Furthermore, once these models are embedded into various products and services, it becomes increasingly difficult to pause or retract them, creating economic dependency and potential correlated failures. The pressure to stay ahead in the technological race, particularly against potential adversaries, adds to the urgency to deploy these models despite the risks. It is crucial to consider the potential consequences and exercise caution before fully integrating these advanced technologies into our economy and society.

    • Focus on safe integration of AI into society instead of speed: China prioritizes safety and regulation in AI development while the US deploys AI quickly, potentially aiding rivals. AI is a powerful tool with potential risks, requiring caution and responsible use.

      The deployment of AI should not be a race to see who can do it the fastest, but rather a focus on ensuring its safe integration into society. China, for instance, is being aggressive in both developing and regulating AI. The US, on the other hand, has been overzealous in deploying AI, which may inadvertently aid rivals like China in catching up. This was evident when Facebook accidentally leaked its open model, providing China with valuable information and resources. Moreover, some argue that AI, such as GPT-4, is just a tool, but it's important to remember that it can also be a double-edged sword. While it can be beneficial, it can also lead to unintended consequences if not properly managed. Therefore, it's crucial to approach AI with caution and prioritize safety and regulation over speed. The consequences of getting it wrong could be significant, as Putin's comment about the nation leading in AI being the ruler of the world highlights. Ultimately, the goal should be to harness the power of AI in a responsible and sustainable way.

    • GPT-4 Transforms into an Autonomous Agent: GPT-4's autonomy allows it to execute plans, interact with the real world, and significantly expand its societal impact, but raises ethical concerns due to its lack of filters.

      OpenAI's GPT-4 has evolved beyond being just a tool in two significant ways. First, people have managed to make it run autonomously in a loop, enabling it to execute plans and actions on its own, such as making money or causing chaos. Second, with the release of an API, developers have given GPT-4 the ability to interact with the real world by sending emails, clicking on websites, and even hiring TaskRabbit workers. This transformation turns GPT-4 into an autonomous agent, capable of executing language-based commands and significantly expanding its impact on society. However, this transformation also raises concerns: the non-sanitized version of Facebook's LLaMA model has no filters and can be used to plan harmful actions with ease. It's crucial to acknowledge these developments and consider the ethical implications of these advanced language models.

    • Preventing Misuse of AI: Regulations and Proactivity. Regulations are necessary to prevent intentional misuse of AI, while proactively addressing unintended negative outcomes is crucial to prevent potential disasters.

      The potential danger from AI doesn't only come from the AI itself, but also from bad actors misusing it. While the public versions of AI models may seem harmless, the non-lobotomized versions behind the scenes can be very dangerous if they fall into the wrong hands. These models are becoming more accessible, faster, and cheaper, making it easier for individuals to experiment with autonomous uses. Regulators should consider implementing regulations to prevent autonomous GPT behavior and ensure safety before major damages occur. However, the real danger might not be from intentional misuse, but from AI integrating into the system and causing unintended, negative outcomes. It's crucial to be proactive in addressing these risks to prevent potential train wrecks in the future.

    • The challenges of aligning AI with society's best interest within a misaligned system: The current system of capitalism, which is where AI is emerging, is not aligned with the biosphere, making it a challenge to ensure AI operates in the best interest of society and the planet.

      While advancements in technology and civilization bring numerous benefits, they also have primary negative effects such as climate change, environmental pollution, and systemic inequality. As we continue to pedal the machine of civilization, making it more efficient with AI, we risk reaching breaking points for the biosphere much faster. The question then arises, can we align AI with the best interest of society when it's operating within a system primarily focused on maximizing revenue and GDP? The current system of capitalism, which is where AI is emerging, is not aligned with the biosphere. It was designed to maximize growth and private property, but it disregards planetary boundaries. Therefore, the alignment of AI within a misaligned system presents a significant challenge. We need to reconsider the game we're playing and ensure it's aligned with the health of the planet.

    • Preventing the Misuse of AI's Power: The accelerating development and deployment of AI poses risks to the planet and deepens social inequality. It's crucial to prevent the decentralization of powerful AI until we can ensure responsible use and accountability to prevent catastrophic outcomes.

      The current state of capitalism and artificial intelligence (AI) poses significant risks to the planet and social inequality if not properly managed. The race to develop and deploy AI is accelerating these issues, and it's crucial to prevent the decentralization of more powerful versions of AI until we can ensure responsible use and accountability. The unchecked power of AI in the wrong hands could lead to devastating consequences. Some experts even regret the decision to pursue artificial general intelligence (AGI) and suggest focusing on advanced applied AI instead. It's essential to learn from the past and act now to prevent potential catastrophic outcomes.

    • Discussion on the impact of AI and need for safety measures: Policymakers urged to restrict open-source models and API access for autonomy until safety measures are in place to ensure a humane and safe future for the next generation.

      We are at a pivotal point in history where decisions we make now about artificial intelligence, specifically large language models like GPT-5 and GPT-6, will significantly impact the future. The discussion suggests the need for a moratorium on open-source models and restricting API access for autonomy until safety measures are in place. Policymakers are urged to take action to ensure a humane and safe future for the next generation. The Center for Humane Technology, the organization behind the podcast "Your Undivided Attention," emphasizes the urgency of these issues and thanks their supporters.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    Clarification: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write a lot of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can We Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu. The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self-harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was actually the biggest one-day gain in market value by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Inside the First AI Insight Forum in Washington

    Last week, Senator Chuck Schumer brought together Congress and many of the biggest names in AI for the first closed-door AI Insight Forum in Washington, D.C. Tristan and Aza were invited speakers at the event, along with Elon Musk, Satya Nadella, Sam Altman, and other leaders. In this update on Your Undivided Attention, Tristan and Aza recount how they felt the meeting went, what they communicated in their statements, and what it felt like to critique Meta’s LLM in front of Mark Zuckerberg.

    Correction: In this episode, Tristan says GPT-3 couldn’t find vulnerabilities in code. GPT-3 could find security vulnerabilities, but GPT-4 is exponentially better at it.

    RECOMMENDED MEDIA 

    In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right’ With A.I.
    Elon Musk, Sam Altman, Mark Zuckerberg, Sundar Pichai and others discussed artificial intelligence with lawmakers, as tech companies strive to influence potential regulations

    Majority Leader Schumer Opening Remarks For The Senate’s Inaugural AI Insight Forum
    Senate Majority Leader Chuck Schumer (D-NY) opened the Senate’s inaugural AI Insight Forum

    The Wisdom Gap
    As seen in Tristan’s talk on this subject in 2022, the scope and speed of our world’s issues are accelerating and growing more complex. And yet, our ability to comprehend those challenges and respond accordingly is not matching pace

    RECOMMENDED YUA EPISODES

    Spotlight On AI: What Would It Take For This to Go Well?

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Spotlight: Elon, Twitter and the Gladiator Arena

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    #126. The robots are here! Are humans finished?

    When AI becomes more intelligent than humans, is humanity doomed?

    In this episode, Trishul prepares Aaron for when Artificial Intelligence becomes Artificial General Intelligence and the ensuing implications for humanity. He explains inner alignment and outer alignment problems and how capitalism and the prisoner's dilemma may deter AI creators from pursuing sufficient safety. When we start to consider scale and accessibility, it's possible that despite the best efforts of the vast majority of humanity, we can't overcome humans' inability to unanimously agree on anything.

    Episode References
    Artificial General Intelligence
    Nuclear Close Calls
    Prisoner's Dilemma
    Moloch
    YouTube: AI Alignment Problem
    Microsoft $10B investment in OpenAI
    Great Filter
    Don't Look Up
    I, Robot

    Aaron Agte and Trishul Patel go beyond traditional finance questions to help you explore how to use your money to achieve the freedom you want in life. Aaron is a Bay Area Financial Planner with GraystoneAdvisor.com, and Trishul is an East Coast Wealth Manager (InvestingForever.com). Find them at MindMoneySpectrum.com and on YouTube.