
    AI in the US State and Federal Governments with the hosts of the AI Today Podcast

    July 28, 2021
    What is the purpose of the AI Today podcast?
    Who are the hosts of the AI Today podcast?
    What methodology does the podcast emphasize for AI projects?
    How is AI used in the federal government?
    What is the role of the GSA Center of Excellence?

    • Exploring the Reality of AI Adoption in Various Sectors: The AI Today podcast offers insights into the practical implementation of AI in various sectors, providing a balanced perspective and addressing budgets, resources, skill sets, and knowledge gaps.

      The AI Today podcast, which has been running for four years with over 200 episodes, provides unique insights into the adoption and education of artificial intelligence (AI) in various sectors, including government at international, federal, state, and local levels. The hosts, Kathleen and Ron, have interviewed numerous thought leaders from both the public and private sectors to gain a balanced perspective on AI. They started the podcast as analysts, wanting to understand the reality behind the hype and what organizations are actually implementing in terms of AI technologies. The podcast offers valuable information on budgets, resources, skill sets, and knowledge gaps in the AI industry. By interviewing a diverse range of guests, the AI Today podcast provides a much-needed reality check in the ever-evolving world of AI.

    • Real-world challenges of AI implementation in government: Despite the hype, AI implementation in government faces challenges like data governance, security, ownership, and efficiency.

      While there is a significant amount of hype surrounding AI and its progress, the reality of AI implementation in practice, particularly in government agencies, can be quite different. The Bureau of the Fiscal Service, which manages the inflows and outflows of government funds, is an example of an agency dealing with massive amounts of data, making AI and machine learning crucial for extracting value and addressing challenges such as fraud, efficiency, optimization, and automation. However, the implementation of AI in government faces challenges including data governance, data security, data ownership, and related areas. These issues are especially relevant in the current discussion around data ownership. Overall, the podcast aims to provide informed perspectives on AI and dispel hype by discussing real-world applications and challenges.

    • Implementing AI projects: Follow a well-defined methodology. Define the problem, identify data needs, prepare the data, build the model, evaluate it, and operationalize it using a methodology like CRISP-DM to ensure project success.

      When implementing AI projects, whether through vendors or in-house, it's crucial to have a well-defined methodology in place that covers the entire pipeline, from defining the problem to operationalizing the model. A commonly used methodology is CRISP-DM (Cross Industry Standard Process for Data Mining), which has been in use since the late 1990s. It emphasizes starting with the problem, identifying the required data, preparing the data, building the model, evaluating it, and finally operationalizing it. Neglecting the early data-preparation stages is a common cause of failed projects: without properly prepared data, all effort spent on model building and evaluation is wasted. Prioritizing a methodology and ensuring it is actually followed is therefore vital for successful AI projects.
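
      To make those phases concrete, the following is a minimal sketch of what a CRISP-DM-style pipeline might look like in Python with pandas and scikit-learn. The file name, columns, and fraud-detection framing are illustrative assumptions, not details from the podcast.

```python
# Minimal CRISP-DM-style pipeline sketch (illustrative only; file and columns are hypothetical).
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# 1. Business understanding: frame the problem, e.g. flag likely fraudulent payments.
# 2. Data understanding: load and inspect the data.
df = pd.read_csv("payments.csv")  # hypothetical extract of payment records
print(df.describe())
print(df.isna().sum())

# 3. Data preparation: drop incomplete rows and select features.
df = df.dropna(subset=["amount", "account_age_days", "is_fraud"])
X = df[["amount", "account_age_days"]]
y = df["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 4. Modeling: fit a simple baseline classifier.
model = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
model.fit(X_train, y_train)

# 5. Evaluation: check performance against the business goal before going further.
print(classification_report(y_test, model.predict(X_test)))

# 6. Deployment / operationalization: persist the fitted pipeline for serving.
joblib.dump(model, "fraud_model.joblib")
```

      The point of the sketch is the ordering: most of the work, and most of the risk, sits in the data understanding and preparation steps rather than in the modeling step itself.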

    • Addressing the high failure rate of AI projects with CPMAI: CPMAI, an iterative version of CRISP-DM, provides specific guidance for machine learning projects, focusing on data preparation, feature augmentation, and model operationalization, increasing chances of success in AI projects.

      The failure rate of AI projects is high, estimated to be around 78%, often due to insufficient attention paid to data quality. To address this issue, an iterative version of the CRISP-DM methodology, called CPMAI, was developed. CPMAI provides more specific guidance for machine learning projects, focusing on data preparation, feature augmentation, and model operationalization, which are unique challenges in the AI context. By using a well-defined methodology like CPMAI, organizations can increase their chances of success in AI projects. The federal and state governments, which have been approaching AI initiatives in an ad hoc manner, could greatly benefit from implementing a structured methodology like CPMAI. Several government agencies, including the IRS, US Postal Service, and Department of Energy, have already begun using this methodology with positive results.
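
      Since poor data quality is cited as the leading cause of failure, one lightweight way to act on that emphasis is an automated data-quality check that runs before any modeling begins. The sketch below is a generic illustration, not an official CPMAI artifact; the threshold and example columns are assumptions.

```python
# Illustrative pre-modeling data-quality gate (not part of CPMAI itself).
import pandas as pd

def data_quality_report(df: pd.DataFrame, max_missing_frac: float = 0.05) -> dict:
    """Summarize duplicate rows and columns whose missing-value fraction exceeds a threshold."""
    missing = df.isna().mean()
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "columns_over_threshold": missing[missing > max_missing_frac].to_dict(),
    }

# Toy example; in practice this would run on the project's actual training data.
toy = pd.DataFrame({"amount": [10.0, None, 25.5], "agency": ["IRS", "USPS", None]})
print(data_quality_report(toy))
```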

    • Government Emphasizes Proven Methodologies for AI Implementation: Federal and state governments are investing in AI, with the federal government pushing for methodology certification and documentation, while state and local governments rely on purchasing solutions due to resource limitations. The GSA acts as a 'center of excellence' to promote efficient spending and prevent unnecessary costs.

      Both at the federal and state/local levels, governments are investing in artificial intelligence (AI) but are emphasizing the importance of vendors following proven methodologies for AI implementation. The federal government, with its larger budget, is pushing for methodology certification and documentation from vendors, while state and local governments often rely on purchasing AI solutions due to talent and resource limitations. The General Services Administration (GSA) serves as a "center of excellence" for the government, overseeing contracts and promoting efficient spending. They have a center specifically for AI, which aims to prevent unnecessary spending and ensure effective implementation. Overall, the emphasis is on ensuring transparency and accountability in AI adoption by government entities.

    • AI in Federal Government: Improving Data Efficiency and Accuracy: The GSA Center of Excellence is leading the way in implementing AI in the federal government, focusing on data-centric issues and removing humans from certain processes through technologies like NLP and RPA.

      The use of AI, particularly in the federal government, is focused on data-centric issues, specifically data availability and quality. The GSA Center of Excellence is leading the way in establishing best practices for implementing AI, with a goal of removing humans from the loop in certain processes. While robotic process automation (RPA) is a common starting point, it's important to note that it's not true AI. Instead, true AI applications include natural language processing (NLP), which is being used in various federal agencies for tasks such as document analysis and chatbot communication. For example, USCIS has a chatbot named Emma that can communicate in Spanish and English, and the USPTO is using machine learning for patent search. The Bureau of Labor Statistics has even used NLP for injury classification, moving away from manual coding and survey-based data collection. Overall, the implementation of AI in the federal government is focused on improving efficiency and accuracy, particularly in areas where data processing is a significant challenge.
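
      As an illustration of the kind of text-classification work described above (for example, coding injury narratives), here is a minimal scikit-learn sketch. The narratives, category labels, and model choice are invented for demonstration and are not the BLS system.

```python
# Minimal text-classification sketch, loosely analogous to coding injury narratives.
# The narratives and labels are invented examples, not BLS data or categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "worker slipped on wet floor and fell",
    "employee strained back lifting heavy boxes",
    "technician received electric shock from exposed wiring",
    "clerk tripped over loose cable near desk",
]
labels = ["fall", "overexertion", "electrical", "fall"]

# TF-IDF features plus a linear classifier: a common baseline for short-text coding.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(narratives, labels)

print(clf.predict(["laborer hurt shoulder moving pallets"]))
```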

    • State governments face unique challenges in implementing AI and ML: California's Chief Data Officer, Joy Bonaguro, is fostering collaboration and data sharing to address the unique challenges of implementing AI and ML in state government, while the federal government focuses on cutting-edge research and exploration.

      Artificial intelligence (AI) and machine learning (ML) are making significant impacts at both the federal and state levels of government. While the federal government may focus on cutting-edge research and exploration, such as using ML for analyzing satellite data or implementing conversational patterns for customer service, state governments face unique challenges. California, for instance, is a large and complex entity with a significant population and numerous systems. State governments are budget-constrained and have limited control over local jurisdictions, which often means crucial data is locked up at the county and city levels. However, California's Chief Data Officer, Joy Bonaguro, is working to address this by fostering collaboration and data sharing between various entities. Despite the differences, both federal and state governments are harnessing the power of AI and ML to improve services, enhance decision-making, and address pressing issues.

    • Governments Prioritizing Self-Service, Automation, and Collaboration for Data Needs During COVID-19: Governments are adapting to budget constraints by prioritizing self-service, automation, and collaboration with small companies, students, and colleges to extract more value from their data during the COVID-19 pandemic.

      The COVID-19 pandemic has significantly impacted the data systems and processes of both large and small governments. The focus has been on health information, unemployment, business shutdowns, and work from home environments. States, in particular, are facing budget constraints and need to prioritize their resources. As a result, there's a growing trend towards self-service, automation, and collaboration with small companies, students, and colleges to extract more value from their data. For instance, North Dakota, with its unique circumstances, is also making strides in this area. The CDO panel at the AI and Government community event provided valuable insights from the perspectives of Connecticut, Virginia, and Arkansas. Overall, governments at all levels are recognizing the importance of data in addressing the challenges of the pandemic and beyond.

    • The importance of effective data management and utilization in government: The 23 states without a chief data officer should consider hiring and investing in one to support data-driven decision-making; focusing on data rather than applications or systems demonstrates its power in public health and safety contexts.

      Despite geographical and population differences among states, the need for effective data management and utilization is a common challenge and priority for all levels of government. While there may be unique obstacles, the efforts of chief data officers and other officials to address these issues and leverage technology demonstrate the importance of data in driving informed decision-making. With the increasing awareness and value placed on data, it's crucial for the remaining 23 states without chief data officers to consider hiring and investing in this role to keep up with the data-driven landscape. The focus on data rather than applications or systems is a testament to its power and potential to help make better decisions, especially in the context of public health and safety. Overall, the conversations with various government levels offer valuable insights into the ongoing efforts to harness the power of data and technology for the benefit of their constituents. For more in-depth discussions on this topic, listeners are encouraged to explore the interviews on the AI Today podcast.

    • Impact of AI on various industries and its transformative power: AI enhances customer service, revolutionizes healthcare, automates finance tasks, requires human-AI collaboration, and raises ethical concerns.

      Kathleen and Ron shared valuable insights about the current state and future potential of AI in various industries during this episode of Skynet Today's Let's Talk AI podcast. They discussed the impact of AI on customer service, healthcare, and finance industries, among others. Kathleen emphasized the importance of human-AI collaboration, while Ron highlighted the potential for AI to automate repetitive tasks and improve efficiency. They also touched upon the ethical considerations and challenges associated with AI implementation. Overall, the conversation underscored the transformative power of AI and its growing presence in our daily lives. To learn more about the topics discussed, listeners can check out the articles on Skynet Today's website and subscribe to their weekly newsletter. Don't forget to listen to all of their episodes and subscribe to both Skynet Today's and AI Today's podcasts. Remember to leave reviews and ratings to help more people discover these informative podcasts.


    Recent Episodes from Last Week in AI

    #182 - Alexa 2.0, MiniMax, Sutskever raises $1B, SB 1047 approved

    Our 182nd episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov and Jeremie Harris.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Sponsors:

    - Agent.ai is the global marketplace and network for AI builders and fans. Hire AI agents to run routine tasks, discover new insights, and drive better results. Don't just keep up with the competition—outsmart them. And leave the boring stuff to the robots 🤖

    - Pioneers of AI, is your trusted guide to this emerging technology. Host Rana el Kaliouby (RAH-nuh el Kahl-yoo-bee) is an AI scientist, entrepreneur, author and investor exploring all the opportunities and questions AI brings into our lives. Listen to Pioneers of AI, with new episodes every Wednesday, wherever you tune in.

    In this episode:

    - OpenAI's move into hardware production and Amazon's strategic acquisition in AI robotics.
    - Advances in training language models with long-context capabilities and California's pending AI regulation bill.
    - Strategies for safeguarding open weight LLMs against adversarial attacks and China's rise in chip manufacturing.
    - Sam Altman's infrastructure investment plan and debates on AI-generated art by Ted Chiang.

    Timestamps + Links:

    • (00:00:00) Intro / Banter
    • (00:05:15) Response to listener comments / corrections
    Last Week in AI
    September 17, 2024

    #181 - Google Chatbots, Cerebras vs Nvidia, AI Doom, ElevenLabs Controversy

    Our 181st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov and Jeremie Harris

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.
    - Microsoft's Inflection usage caps for Pi, and new AI inference services by Cerebras Systems competing with Nvidia.
    - Biases in AI, prompt leak attacks, and transparency in models and distributed training optimizations, including the 'DisTrO' optimizer.
    - AI regulation discussions including California's SB 1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

    Timestamps + Links:

    Last Week in AI
    September 15, 2024

    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047

    Our 180th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    • Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
    • Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results. 
    • Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
    • Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

    Timestamps + Links:

    Last Week in AI
    September 03, 2024

    #179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist

    Our 179th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    - Grok 2's beta release features new image generation using Black Forest Labs' tech.

    - Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

    - Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

    - Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

    Timestamps + Links:

    Last Week in AI
    August 20, 2024

    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk

    Our 178th episode with a summary and discussion of last week's big AI news!

    NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.
    - OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.
    - Rapid advancements in humanoid robotics exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.
    - Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

    Timestamps + Links:

    Last Week in AI
    August 16, 2024

    #177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2

    Our 177th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    In this episode, hosts Andrey Kurenkov and Jeremie Harris dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless cars rollout in San Francisco, and NVIDIA’s chip delays. They also review Meta's AI Studio, Character.ai CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 11, 2024

    #176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2

    Our 176th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

     

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 03, 2024

    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts

    Our 175th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements including OpenAI's release of GPT-4o Mini and Mistral’s open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models like Haiper 1.5, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI’s Prover-Verifier Games initiative.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 25, 2024

    #174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues

    Our 174th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for xAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 17, 2024

    #173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court

    Our 173rd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    See full episode notes here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.  

    Timestamps + links:

    Last Week in AI
    July 07, 2024

    Related Episodes

    SolarWinds Tech Predictions for 2022: Data Governance and Risk Aversion

    Join our Head Geeks in Part 2 of our 2022 Tech Predictions TechPod series where they share their thoughts on the rise of Chief Data Officers and data governance principles, risk aversion, and the need for tech pros to fine-tune nontechnical skill sets for career advancement. 

    This podcast is provided for informational purposes only.

    © 2022 SolarWinds Worldwide, LLC. All rights reserved.

    Legal Challenges for In-House Corporate Counsel

    Guest Host Michael Cohen will feature International Crossroads guest Ross Veltman, Executive Director of the Silicon Valley Association of General Counsel, with regular co-hosts Mitchel Winick and Stephen Wagner, to discuss the in-house legal role, and the current issues America’s leading chief legal officers in California face across the vast national and multinational landscape. For information about SVAGC, go to www.svagc.org.

    The Department of Transportation

    The next department in the series is also part of the Great Society, the Department of Transportation (DoT).  Aughie explains how the many subagencies of the department work together in various ways to support public transportation, from the Federal Aviation Administration to the Federal Highway Administration. Aughie also reminds listeners that this department is a particularly good example of cooperative federalism. Discussion ends with secretaries and criticisms of the department.

    Two cultures: can policy makers and academic institutions ever work together effectively?

    David Cleevely was appointed the Centre for Science and Policy’s Founding Director in 2008. As co-founder of networking organisations such as Cambridge Network, Cambridge Wireless and Cambridge Angels, David brought a unique perspective to the age-old issue of the exchange of insights between Policy Makers and Academic Institutions. The result was a unique organisation based on networking between peers, rather than attempting to formulate policy directly. Now, ten years on, David is standing down after completing his term as the inaugural Chair of CSaP’s Advisory Council. To honour David’s foundational contribution, we are inviting people who have played a significant part in the CSaP’s development over the past ten years to join with us in celebrating David’s achievements.

    Spotlight: How Biden’s Regulatory Blunders Are Crushing American Ingenuity

    Administration regulators have tightened water-use rules and pushed for energy-efficiency standards, and the war on fossil fuels continues. All these unnecessary rules from Washington are making life less pleasant, more irritating, and more expensive! Steve Forbes on how Biden's regulatory blunders are crushing American ingenuity and why government interference is only making things worse.

    Steve Forbes shares his What’s Ahead Spotlights each Tuesday, Thursday and Friday.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
