    U.S. Public Opinion about AI with Professor Paul Brewer and co-authors

    September 02, 2020
    What is the main focus of the survey conducted by researchers?
    How do media portrayals influence public opinion about AI?
    What are some fears people have regarding AI?
    Why do researchers emphasize accurate media coverage of AI?
    What potential benefits of AI do most Americans recognize?

    Podcast Summary

    • Media Shapes Public Perception of AI: Media portrayals significantly influence people's perceptions of AI, with some expressing optimism and others fearing job loss or privacy invasion. Accurate and nuanced media coverage is essential to counteract negative stereotypes and foster informed public discourse.

      The survey paper "Media Messages and US Public Opinion About Artificial Intelligence" reveals that people's perceptions of AI are shaped by media portrayals. The researchers, led by Professor Paul Brewer of the University of Delaware, conducted a survey to understand Americans' general opinions about AI and its role in society, along with their hopes and fears. Before discussing the project, Paul, James Bingaman, and Ashley Paintsil shared their own experiences with AI as filtered through movies and media. Paul, the senior member of the group, recalled iconic films such as 2001: A Space Odyssey and The Terminator, which often depict AI as malfunctioning or threatening humans. The researchers undertook the survey because of AI's increasing presence in society and the need to understand public opinion about it. The results showed that media messages significantly shape public perception of AI, with some respondents expressing optimism and others fearing job loss or privacy invasion. The researchers emphasized the importance of accurate and nuanced media coverage to counteract negative stereotypes and foster a more informed and productive public discourse on AI.

    • Understanding the complex reality of AI beyond media hype: This study delves deeper into the news coverage of AI to provide a more informed perspective from a researcher's point of view, separating hype from reality, and addressing timely issues in American society.

      While there is widespread fear and hype surrounding artificial intelligence (AI) due to media portrayals, the reality is more complex. The motivation behind a recent study on AI was to delve deeper into the news coverage and understand the nuances of AI, particularly in light of its increasing presence in society and the controversy surrounding its use in areas like law enforcement. The researchers were intrigued by the contrast between the sensational headlines and the more nuanced perspectives of experts like Stephen Hawking, Bill Gates, and Elon Musk. They wanted to provide a more informed perspective on AI from a researcher's point of view and separate the hype from the reality. Additionally, the study was motivated by the timely issues surrounding AI in American society, such as concerns about police brutality and the use of facial recognition technology. Both the researchers and the podcast aim to provide a more nuanced and informed discussion on AI, beyond the headlines.

    • Understanding the Impact of Media on Attitudes Towards AI: A nationally representative survey of 2,000 participants revealed mixed feelings towards AI, with concerns and potential benefits. Researchers identified a gap in understanding the relationship between media use and attitudes towards AI and aimed to shed light on this issue.

      The researchers conducted a nationally representative survey on artificial intelligence (AI) through the National Opinion Research Center, with 2,000 participants. They designed the survey by looking at previous studies on AI attitudes and identified a gap in understanding the relationship between media use and attitudes towards AI. The survey included questions about media habits, technology use, and technology exposure, which were weighted to reflect the US population. Additionally, an experiment was embedded in the survey, grounded in past literature on emerging technologies and AI. The survey results reveal mixed feelings towards AI, with concerns but also potential benefits. The researchers aim to shed light on the role of media and technology use in shaping public attitudes towards AI.

    • Americans hold complex views on AI's potential and risks: Most Americans see AI as a double-edged sword, with potential benefits and fears, and call for regulation while expressing mistrust in the government's ability to manage it. Many misunderstand AI due to media depictions, leading to fear-based perceptions.

      The American public holds nuanced and complicated views on artificial intelligence (AI), with a significant portion expressing both hope and fear. The survey results showed that most Americans believe AI has the potential to improve everyday life but also pose a threat to human existence. There is a desire for regulation, yet mistrust in the federal government's ability to manage AI development. Open-ended responses revealed that many people associate AI with robots and technology from media depictions, indicating a lack of understanding of the true capabilities of AI. This misunderstanding can lead to fear-based perceptions, such as the fear of robots taking over the world. It is essential to bridge the gap between reality and perception to foster a more informed and balanced public discourse on AI.

    • Media and Education Shape Public Perception of AI: While media and personal experiences influence public perception of AI, a significant number of people lack sufficient knowledge about the technology. Accurate and informative reporting on AI can help bridge the knowledge gap and foster a more nuanced understanding of the technology.

      People's perceptions and understanding of artificial intelligence (AI) are significantly influenced by the information and media they consume. Those who follow technology news, watch science fiction shows, or use AI technologies like Siri and Alexa, tend to hold more favorable attitudes towards AI. Additionally, education plays a crucial role in shaping public opinion on AI. However, a concerning finding is that a large percentage (67%) of respondents admitted they don't know much about AI, despite having formed opinions on the technology. This knowledge gap highlights the importance of accurate and informative media reporting on AI to help shape public perception and reduce fear or misconceptions. As more people interact with AI technologies and see positive applications, there is a possibility that the popular image of AI may shift away from science fiction portrayals towards more realistic and beneficial uses. Nonetheless, ongoing education and active efforts are necessary to bridge the knowledge gap and foster a more informed and nuanced understanding of AI.

    • Frames and Images Influence Perception of AI: The way AI is described and depicted through frames and images can significantly impact public perception. Presenting AI as a helpful tool for social progress with benign images increases support, while negative frames and scary images decrease it.

      The way people perceive artificial intelligence (AI) is influenced both by the frames used to describe it and by the accompanying images. The frames can be categorized as presenting AI either as a helpful tool for social progress or as a Pandora's box leading to problems. The study found that respondents who received the social progress frame with benign images were more supportive of AI than those who received the Pandora's box frame with frightening images, and that pairing the Pandora's box frame with scary movie depictions of AI had a particularly strong negative effect on opinions. These findings highlight the importance of understanding how multimodal communication shapes public perception of emerging technologies like AI.

    • Media portrayal of AI through sensationalized images: While sensationalized images of AI in news stories may attract readers, they often misrepresent reality and can negatively influence public opinion. Researchers are studying how opinions towards AI change and what factors influence those changes, including media representation.

      The use of sensationalized images, such as the Terminator, in news stories about AI is a common issue raised by AI researchers when it comes to journalistic coverage. However, editors and journalists often prioritize clickable headlines and intriguing images to attract readers. As more people become familiar with AI through positive experiences, such as Siri and self-driving cars, there may be a shift towards more realistic representations in the news. But, it's important to note that this is a snapshot in time and attitudes towards AI are still evolving. Researchers are currently conducting a study to explore how opinions change and what influences those changes, including the use of manipulated images in the context of facial recognition technology.

    • Public Opinion on AI Applications During COVID-19: During COVID-19, using AI for disease diagnosis is the most accepted application, while self-driving cars are the least. Public opinion on facial recognition discrimination remains to be seen.

      Public opinion towards the use of AI, particularly in healthcare and facial recognition, may have shifted since the onset of the COVID-19 pandemic and the subsequent increase in news coverage and events related to AI. The March 2020 survey revealed that using AI to help diagnose diseases was the most popular application, while self-driving cars were the least popular. People were not overly concerned about facial recognition discriminating based on race or sex at the time. However, given the recent publicity on the topic, it will be interesting to see if public opinion has changed. Researchers are excited to explore these changes and will be conducting follow-up surveys to gather more data. The results of the March 2020 survey, including detailed charts and illustrations, are publicly available in a report for anyone to read and understand.

    • Media coverage on AI: Shaping perceptions vs. reality. Media often sensationalizes AI, leading to a skewed perspective. AI applications, like predicting consumer trends in fashion, can be distorted by media's focus on novelty and drama.

      While public perception of AI is shaped by media messages, it often doesn't align with the reality of the technology as understood by experts. The study of media coverage on AI revealed a tendency toward sensationalism, drama, and novelty, leading to a skewed perspective on the technology. This distortion can have implications for various industries, such as fashion, where AI is being used to predict consumer trends and optimize supply chains. The research also highlighted the uncanny valley phenomenon, in which human-like robots grow more appealing as they look more human, but become unsettling once they come too close to human appearance without quite matching it. Overall, the study deepened the speaker's understanding of AI and its applications, as well as the role of media in shaping public opinion.

    Recent Episodes from Last Week in AI

    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047

    Our 180th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    • Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
    • Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results. 
    • Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
    • Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.


    Last Week in AI
    September 03, 2024

    #179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist

    Our 179th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    - Grok 2's beta release features new image generation using Black Forest Labs' tech.

    - Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

    - Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

    - Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.


    Last Week in AI
    August 20, 2024

    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk

    Our 178th episode with a summary and discussion of last week's big AI news!

    NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.

    - OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.

    - Rapid advancements in humanoid robotics, exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.

    - Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.


    Last Week in AI
    August 16, 2024

    #177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2

    Our 177th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    In this episode, hosts Andrey Kurenkov and Jeremie Harris dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless cars rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.ai CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 11, 2024

    #176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2

    Our 176th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

     

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 03, 2024

    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts

    Our 175th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements, including OpenAI's release of GPT-4o Mini and Mistral's open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI's Prover-Verifier Games initiative.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     


    Last Week in AI
    July 25, 2024

    #174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues

    Our 174th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for xAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.

    See full episode notes here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     


    Last Week in AI
    July 17, 2024

    #173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court

    Our 173rd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    See full episode notes here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.  


    Last Week in AI
    July 07, 2024

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai


    Last Week in AI
    June 24, 2024

    Related Episodes

    Episode #130 ... Dewey and Lippman on Democracy

    Today we talk about a famous debate from the early 20th century. Thank you so much for listening! Could never do this without your help.
    Website: https://www.philosophizethis.org/
    Patreon: https://www.patreon.com/philosophizethis
    Instagram: https://www.instagram.com/philosophizethispodcast
    X: https://twitter.com/iamstephenwest
    Facebook: https://www.facebook.com/philosophizethisshow

    Rupert Murdoch: News vs the truth

    What happened inside Fox News in those critical weeks following Donald Trump’s election defeat in 2020?


    Tortoise is a news start-up devoted to slow journalism. To access more of our journalism and get invites to exclusive events, you can join Tortoise as a member. Visit www.tortoisemedia.com/friend and use the code Slow60 for a special offer today.





    Hosted on Acast. See acast.com/privacy for more information.


    Why do newspaper endorsements still matter?

    Have the newspapers decided who they are going to back at the next general election, and if they have, will it actually have any impact? The New Statesman's media correspondent, Will Turvill, joins Rachel Cunliffe to discuss his research into the main papers' editorials to understand what they might say at the next election and why it still matters.


    They talk about how endorsements can set the broadcast media agenda, whether papers follow readers or lead them, and why Murdoch was unhappy about the “Sun Wot Won It” headline in 1992.


    Subscribe to Morning Call





    The Power of Language in Cultural Discourse | Oli London

    In this episode of The Kathy Barnette Show, Kathy sits down with Oli London to shed light on the intricacies of identity in the digital age. London speaks candidly about his journey through various identity changes, the influence of social media, and the societal challenges faced by those grappling with identity issues. They also dive into the roles of the media and elites in molding public opinion, and how the art of debating is disappearing.

    Guest Bio:

    Oli London is known for his journey through various identity changes, including transracial and gender transitions. He is a singer, an activist, and the author of the book Gender Madness: One Man's Devastating Struggle with Woke Ideology and His Battle to Protect Children. Listeners can check out more from Oli at his website https://www.oli-london.com/, on IG @londonoli, and YouTube @OliLondon  

    Resources: 

    Please visit our great sponsors: My Pillow: https://mypillow.com/kathy. It's the bedding sale you don't want to miss. Refresh your rest with My Pillow

    Kathy’s book: Nothing to Lose, Everything to Gain: Being Black and Conservative in America

    Oli’s book: Gender Madness: One Man's Devastating Struggle with Woke Ideology and His Battle to Protect Children 

    Vivek’s book: Woke, Inc.: Inside Corporate America's Social Justice Scam

    Show Notes: 

    • [0:00] Welcome back to The Kathy Barnette Show. Kathy introduces guest, Oli London to the listeners 

    • [0:40] Nothing to Lose, Everything to Gain: Being Black and Conservative in America

    • [1:30] “I prefer to be true to myself, even at the hazard of incurring the ridicule of others,  rather than be false and incur my own abhorrence.”

    • [5:30] Oli speaks on his views on race and identity  

    • [10:40] Finding acceptance of self within, not from changing gender identity 

    • [11:30] The impact of social media on youth and identity 

    • [16:30] Cultural shifts and societal values from traditional achievements to superficial online presence

    • [23:00] Discussing the power of language and societal conditioning  

    • [30:30] Mainstream media is stripping away the ability to independently think 

    • [34:45] The art of debating  

    • [37:30] Finding middle ground in polarized topics 

    • [46:30] Are our leaders deliberately destroying our country? 

    • [50:00] Oli speaks on his book Gender Madness 

    • [54:00] Chloe Cole’s Story | A Mission to End Child Gender Transition Procedures

    • Thanks for listening to this episode of The Kathy Barnette Show. Don't forget to subscribe for more insightful conversations, share this episode with those interested in understanding the deeper aspects of our government, and provide your feedback for future topics.