    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    August 31, 2023
    What drives the US-China competition in AI?
    How does China aim to surpass the US in AI?
    Why is measuring technological lead misleading?
    What challenges does China face in advanced technology areas?
    How can the US and China coordinate on AI development?

    Podcast Summary

    • US-China AI Competition: Economic Growth and Geopolitical Influence
      The US and China are competing in AI for economic growth and productivity, with China focusing on areas like computer vision due to government investment and surveillance. The US is currently ahead in AI development, but the stakes are high for both countries, potentially impacting military and geopolitical influence.

      The US-China competition in AI is driven by economic growth and productivity, with China aiming to catch up and potentially surpass the US in certain areas like computer vision, thanks to government investment and an emphasis on surveillance. The overall assessment, however, is that the US is currently ahead in AI development, and the stakes are high for both countries, with potential implications for military and geopolitical influence. Experts Jeff Ding and Karen Hao provide insights into China's progress in AI, highlighting areas where China may be ahead or behind, and discussing the significance of AI in the US-China competition. The conversation suggests that the perceived threat from China may be inflated and sheds light on the complexities of comparing the two countries' capabilities in this field.

    • China's Challenges in Large Language Models
      Despite progress, China faces hurdles in large language models due to data and resource limitations, while the US holds an edge. The relationship between Chinese companies and the government adds further complexity.

      While China is making strides in AI research, it faces significant challenges in large language models due to a lack of high-quality data and computing resources. The US, with its abundance of English-language data and advanced computing hardware, holds an advantage in this area, and Chinese researchers playing catch-up are still unable to match the performance of US models. Furthermore, the relationship between Chinese companies and the government is complex, with some companies enjoying close ties and others facing scrutiny. These factors, among others, complicate the notion that China is simply racing ahead in AI development and undercut the argument that the US cannot afford to regulate its own AI sector.

    • Chinese tech companies navigate a complex relationship with government
      Chinese tech firms sometimes resist handing over data, align with the government during crises, and face strict information-control regulations, but they may struggle to keep up with US advances in large language models.

      Chinese tech companies engage in a complex relationship with their government, often pushing back against requests for data and information while also complying with regulations that prioritize information control. One example of this dynamic: a company that refused to hand over digital data to local authorities and instead delivered it on physical paper. There are also instances of alignment between companies and the government, particularly during times of economic crisis. The Chinese Communist Party's control over information is a major concern, leading to stringent regulations on technology platforms and foundation models. These regulations aim to ensure that the technologies are implemented in a way that aligns with the party's interests and censorship goals, and the government's reaction to new technologies that could shift the information ecosystem is often swift and fearful, as seen in the compliance checks currently being conducted on foundation models. Despite these regulations, some argue that China may not be able to keep up with the rapid advances in large language models, leaving the US with an opportunity to maintain its technological edge.

    • China's AI progress vs US edge
      The US maintains an edge in AI thanks to its resources and talent, but faster US deployment could accelerate China's progress. Smart regulation in both countries could ensure safe and trustworthy AI adoption.

      The balance between stability concerns and the drive for competitiveness, economic growth, and military might in China's pursuit of AI technology is complex. Given China's data, talent, and resource limitations, its economic slowdown, and the government's reservations about the societal instability advanced AI models could bring, the US maintains an edge thanks to its greater resources and concentration of talent. Regulation in the US could actually lead to more sustainable development of AI over time, while also potentially slowing China's progress in the short term. Contrary to popular belief, the US's faster deployment of AI technology may accelerate China's progress rather than hinder it. Some still argue that China's rapid advancement, reflected in the growing number of AI papers published there, should alarm the US. Smart, prudent regulations in both countries could ensure the safe and trustworthy adoption of AI, ultimately leading to a more sustainable and beneficial impact on society.

    • Chinese researchers driven by societal improvement, not just competition
      Despite lagging in AI research output and graduates, China's focus on societal improvement drives its researchers, but retaining talent remains a challenge.

      While China currently lags behind the US in the number of AI papers published and the number of engineers and researchers graduating in the field, it is important to remember that Chinese researchers are not motivated solely by geopolitical competition or nationalism; many are driven by a desire to improve aspects of society such as education and healthcare. China's ability to tap into and retain this talent, however, remains a challenge. It's also crucial to consider multiple perspectives and sources when evaluating China's progress in AI: the more one engages with people on the ground and reads Chinese-language reports, the less likely one is to view China as being on the verge of surpassing the US as an AI superpower. Ultimately, the competition between the US and China in AI is complex, and it's essential to avoid oversimplifying the situation based on incentives or biases.

    • Measuring a country's tech lead by scientific papers alone is misleading
      Historically, countries with leading innovations struggled to spread them throughout their economies, while China, a middling power, excels in diffusion, making it a formidable competitor. Be wary of perceived tech gaps.

      Measuring a country's technological lead solely by the number of scientific papers it produces can be misleading. It's essential to also consider its capacity to diffuse technology throughout its economy, as highlighted in Jeff's seminal paper. Historical examples such as the US in the early 20th century and the Soviet Union during the Cold War show that countries can produce leading-edge innovations yet struggle to spread them effectively through their economies. Conversely, China, despite ranking as a middling scientific and technological power, excels in the diffusion process, making it a formidable competitor. The speakers also caution that perceived technological gaps, like the Cold War-era missile gap, have often proved illusory and should be treated with skepticism.

    • US's open ecosystem fuels tech advancements
      The US's open ecosystem and experimental environment enable it to implement advanced technology, but the heavy investment required for large language models and access to chips remain challenges.

      The US's ability to diffuse and implement advanced technologies such as AI is a key factor in its success, thanks to a strong open-source ecosystem and an environment that lets companies experiment without excessive government intervention. A potential bottleneck, however, is the significant investment required to develop large language models, and lack of access to chips is currently a greater concern for investors than censorship or regulatory issues. China, which is currently behind in chip production, may struggle to compete without finding a way to overcome these challenges. The effectiveness of export controls on chips is also questionable, as some Chinese labs have reportedly found ways to access high-end chips through rentals and cloud computing.

    • Race for dominance in the semiconductor industry
      Both the US and China see the semiconductor industry as central to future technological advances, but globalization and the risk of intellectual property theft pose challenges for China, while the US faces similar constraints.

      Both the US and China recognize the importance of dominating the semiconductor industry for future technological advances, but the complexity and globalization of the industry make it difficult for either country to go it alone. China has invested heavily in its semiconductor industry, yet reliance on foreign components and the risk of intellectual property theft pose significant challenges; the US, which is investing heavily to reshore chip manufacturing, faces similar constraints. The US military's integration of new technologies is a methodical process, and there may be little incentive for China to steal AI technology when much of it is open source, though frontier AI developments may be harder to steal because they depend on tacit knowledge and experience. Ultimately, the race for dominance in the semiconductor industry is a complex and ongoing battle, with both countries facing significant challenges.

    • China's challenges in advanced tech despite alleged blueprint theft
      Even with allegedly stolen blueprints, China struggles to make effective use of advanced technology like stealth fighters and AI because it lacks the necessary tacit and managerial knowledge. International coordination, led by the US and China, is crucial to ensure safe and beneficial AI development.

      Despite China's efforts to catch up in advanced technology areas like stealth fighter development and AI, there are significant challenges due to the lack of tacit and managerial knowledge. This was discussed in an article about China's inability to build effective stealth fighters despite alleged blueprint theft. As AI technology continues to advance at an unprecedented pace, there's a need for international coordination to ensure its safe and beneficial use. The US and China, the world's leading powers in AI development, could set aside differences and find ways to coordinate. There are already some efforts towards this, such as international dialogue forums and technical committees focusing on AI controllability. Historical precedents, like the cooperation between the US and the Soviet Union during the Cold War on nuclear safety technology, could serve as guiding points. The stakes are high, and the potential risks associated with AI development necessitate a collective, coordinated approach.

    • Collaboration between US and China on AI could be more effective at the track 2 level
      Despite political tensions, cooperation between US and Chinese engineers, researchers, and CEOs on AI development could lead to safer and more humane technology in the long run.

      While high-level cooperation between the US and China on AI regulation and climate commitments may be challenging due to low trust and political baggage, coordination at the track 2 level between engineers, researchers, and CEOs could be more effective and productive. China is seen as a fast second mover in AI development, so the US going slower could potentially slow China's progress as well. A more nuanced view recognizes that there are different types of AI and that each country has distinct strengths and challenges: the US may have an edge in language models, while China leads in computer vision. Ultimately, the optimism stems from the potential for coordination among the technology builders, which could lead to humane and safe development of AI in the long run.

    Recent Episodes from Your Undivided Attention

    AI Is Moving Fast. We Need Laws that Will Too.

    AI is moving fast. And as companies race to roll out newer, more capable models with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The CHT Framework for Incentivizing Responsible AI Development

    Further Reading on Air Canada’s Chatbot Fiasco 

    Further Reading on the Elon Musk Deep Fake Scams 

    The Full Text of SB1047, California’s AI Regulation Bill 

    Further reading on SB1047 

    RECOMMENDED YUA EPISODES

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Can We Govern AI? with Marietje Schaake

    A First Step Toward AI Regulation with Tom Wheeler

    Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.

    Esther Perel on Artificial Intimacy (rerun)

    [This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that another harmful “AI” is on the rise: Artificial Intimacy, which is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

    RECOMMENDED MEDIA 

    Mating in Captivity by Esther Perel

    Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire

    The State of Affairs by Esther Perel

    Esther takes a look at modern relationships through the lens of infidelity

    Where Should We Begin? with Esther Perel

    Listen in as real couples in search of help bare the raw and profound details of their stories

    How’s Work? with Esther Perel

    Esther’s podcast that focuses on the hard conversations we're afraid to have at work 

    Lars and the Real Girl (2007)

    A young man strikes up an unconventional relationship with a doll he finds on the internet

    Her (2013)

    In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

    RECOMMENDED YUA EPISODES

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    The Three Rules of Humane Tech

    Digital Democracy is Within Reach with Audrey Tang

     

    CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.
     

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

    Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody’s book on the history of lobbying.

    The Code: Silicon Valley and the Remaking of America - Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill

    More information on the Google antitrust ruling

    More Information on KOSPA

    More information on the SOPA/PIPA internet blackout

    Detailed breakdown of Internet lobbying from Open Secrets

     

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Can We Govern AI? with Marietje Schaake

    The Race to Cooperation with David Sloan Wilson

     

    CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

     

    The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.

     

    This Moment in AI: How We Got Here and Where We’re Going

    It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

     

    RECOMMENDED MEDIA

    The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI.

    Info Sheet on KOSPA: More information on KOSPA from FairPlay.

    Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.

    AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva. 

    Using AlphaFold in the Fight Against Plastic Pollution: More information on Google’s use of AlphaFold to create an enzyme to break down plastics. 

    Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.

     

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Can We Govern AI? with Marietje Schaake 

    The Three Rules of Humane Tech

    The AI Dilemma

     

    Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.

     

     

    Your Undivided Attention
    August 12, 2024

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

    RECOMMENDED MEDIA

    Sculpting Evolution: Information on Esvelt’s lab at MIT.

    SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis.

    The Framework for Nucleic Acid Synthesis Screening: The Biden admin’s suggested guidelines for DNA synthesis regulation.

    Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress.

    The AlphaFold Protein Structure Database

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.

    How to Think About AI Consciousness With Anil Seth

    Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

    In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

    RECOMMENDED MEDIA

    Frankenstein by Mary Shelley

    A free, plain-text version of Shelley’s classic of gothic literature.

    OpenAI’s GPT-4o Demo

    A video from OpenAI demonstrating GPT-4o’s remarkable ability to mimic human sentience.

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

    The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

    What It’s Like to Be a Bat

    Thomas Nagel’s essay on the nature of consciousness.

    Are You Living in a Computer Simulation?

    Philosopher Nick Bostrom’s essay on the simulation hypothesis.

    Anthropic’s Golden Gate Claude

    A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

    RECOMMENDED YUA EPISODES

    Esther Perel on Artificial Intimacy

    Talking With Animals... Using AI

    Synthetic Humanity: AI & What’s At Stake

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks, and they lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored this bold reinterpretation of economics and history that will fundamentally change how you see the world.

    Can We Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path.

    Rethinking Capitalism: In Conversation with Daron Acemoglu: The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives.

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Spotlight on AI: What Would It Take For This to Go Well?

    Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.

    NOTE: Tristan refers to journalist Maria Ressa and mentions that she received 80 hate messages per hour at one point. She actually received more than 90 messages an hour.

    RECOMMENDED MEDIA 

    Musk, Zuckerberg, Gates: The titans of tech will talk AI at private Capitol summit

    This week will feature a series of public hearings on artificial intelligence. But all eyes will be on the closed-door gathering convened by Senate Majority Leader Chuck Schumer

    Takeaways from the roundtable with President Biden on artificial intelligence

    Tristan Harris talks about his recent meeting with President Biden to discuss regulating artificial intelligence

    Biden, Harris meet with CEOs about AI risks

    Vice President Kamala Harris met with the heads of Google, Microsoft, Anthropic, and OpenAI as the Biden administration rolled out initiatives meant to ensure that AI improves lives without putting people’s rights and safety at risk

    RECOMMENDED YUA EPISODES 

    The AI Dilemma

    The AI ‘Race’: China vs the US with Jeffrey Ding and Karen Hao

    The Dictator’s Playbook with Maria Ressa

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Tech News: The Supreme Court Hands Epic a Setback

    The US Supreme Court declined a recent request from Epic regarding the company's ongoing legal battles with Apple. Disney is getting ready to hike subscription fees (again) on Disney+ and other streaming services. And some astronomers worry that slamming spacecraft into oncoming asteroids might not actually remove the risk of catastrophe.

    Industry Leaders Make Big Push for Small AI

    In this edition of the Embedded Insiders, Brandon and Rich discuss the semantics of AI and intelligent technology and what qualifies as a smart system these days.

    Have marketing engines turned these into overused terms? Are they even being used correctly?

    Later, the Insiders take a deeper dive into the practicalities of Edge AI with Bill Pearson, Vice President of the Internet of Things group at Intel. Together, they investigate the market challenges of realizing ROI on initial AI deployments and the hardware/software gaps preventing developers from launching commercial-grade solutions faster.

    Finally, Alex Harrowell, a Senior Analyst in Enterprise AI, speaks with Perry in this week’s Tech Market Madness to discuss a new report from Omdia Research titled “Artificial Intelligence for Edge Devices Report.”


    For more information, visit embeddedcomputing.com

    Investing in an AI-driven future

    As new AI technology such as ChatGPT gives us a glimpse into the future, what should investors and advisors know to stay ahead of the curve? • Learn more at thriventfunds.com • Follow us on LinkedIn • Share feedback and questions with us at podcast@thriventfunds.com • Thrivent Distributors, LLC is a member of FINRA/SIPC and a subsidiary of Thrivent, the marketing name for Thrivent Financial for Lutherans.
