    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    May 03, 2024

    Podcast Summary

    • AI transforming workflows and enhancing productivity: Listeners shared stories of using AI for content creation, automating tasks, and increasing productivity. However, they also raised concerns about privacy and ethical dilemmas.

      AI is a versatile tool with a wide range of applications in various workplaces. As shared in the podcast, listeners sent in numerous stories about their experiences using AI at work, revealing a mix of fear, delight, and productivity gains. From using AI for content creation to automating repetitive tasks, these stories highlight the potential of AI to transform workflows and enhance productivity. However, they also underscore the importance of understanding and addressing the concerns and challenges that come with AI adoption, such as privacy concerns and ethical dilemmas. Overall, the stories illustrate the need for ongoing exploration and dialogue around the role of AI in the workplace, and the importance of embracing its potential while being mindful of its limitations.

    • Using AI to replicate CEO roles: AI can generate strategic feedback like a CEO, but it cannot replace authentic leadership or set organizational tone.

      AI is being explored as a tool to help streamline decision-making processes in businesses, even potentially replicating the role of CEOs. Alec Beckett, from Nail Communications, shared a creative example where they trained a custom GPT with a CEO's strategic plan and speeches to create a synthetic version of the CEO. This AI provided valuable, strategic feedback to their hesitant client, encouraging faster decision-making. However, the implications of this technology raise questions about the accuracy and authenticity of AI-generated feedback, as well as the potential for new workplace dynamics. Managers and CEOs often engage in synthesizing and predicting patterns, which AI excels at, but they also have responsibilities that AI cannot replicate, such as leadership and setting the tone for an organization. The use of synthetic CEOs could provide median CEO feedback, but it also opens up new avenues for workplace conflict and raises ethical concerns about the authenticity of AI-generated feedback.

    • AI automation in jobs: Increased productivity, potential displacement: AI can increase productivity, but it may also lead to job displacement. Workers who effectively use AI to augment their skills may have an advantage in the job market for a while.

      AI is increasingly being used in various industries and jobs to automate tasks and increase productivity. However, this automation raises concerns about the potential displacement of human workers. An extreme example is a freelance writer, Jane Endicott, who has managed to automate every single part of her job using bots. While her productivity has significantly increased, there's a risk that her employers might eventually replace her with the bots entirely. This is a common concern as AI continues to advance and become capable of handling more complex tasks. However, there's also a possibility that workers who can effectively use AI to augment their skills and focus on creative aspects of their jobs may have an advantage in the job market for a while. Another listener, Rick Robinson, shared how he uses AI to navigate difficult situations with colleagues by using a DISC profile assessment tool. Despite the potential benefits of AI, there are valid concerns about the impact on the workforce and the importance of maintaining a human touch in creative industries.

    • Using AI to navigate workplace conflicts: Reflect on individual personalities before acting, consider using AI for conflict simulation, and be thoughtful and intentional in workplace interactions.

      Using AI as a tool to better understand and navigate workplace conflicts can be an effective strategy. The discussion revolves around a listener, Rick, who developed an AI bot using GPT to provide suggestions on handling difficult situations based on coworkers' personalities. While the AI component is intriguing, the speaker emphasizes that the most crucial part of this approach is taking a moment to reflect and consider the individual's personality traits and potential reactions before acting. This mindful approach can help prevent conflicts from escalating and lead to better outcomes. Additionally, the speaker suggests the potential use of AI for conflict simulation, such as testing out responses in a virtual environment before posting in a group chat like Slack. Overall, the conversation highlights the importance of being thoughtful and intentional in workplace interactions and the potential benefits of using AI as a tool to support these efforts.

    • Using AI to streamline report card narratives: AI can reduce teacher workload by generating draft narratives, but educators should still review and edit them to give students personalized feedback and guidance. Potential concerns include biases in AI grading systems.

      AI is proving to be a valuable tool in education, particularly in streamlining time-consuming tasks such as writing report card narratives. James Deck, a high school design and technology teacher, shared his experience of using AI to generate draft narratives for his students, reducing his workload from 20 to 30 hours to just 2 to 5 hours. This tool has been well-received by his colleagues, making a significant impact on their workload. However, it's essential that educators using AI for generating evaluations still put thought and effort into reviewing and editing these drafts. The narratives should not replace the important role of teachers in providing personalized feedback and guidance to their students. A potential concern is the use of AI in grading standardized tests, such as in Texas, where computers are being used to grade written answers. While this may initially seem like an efficient solution, there is a risk that the AI system could be biased against certain student demographics. This highlights the importance of ensuring that AI tools are thoroughly tested and monitored for potential biases before they are implemented on a large scale. In conclusion, AI has the potential to revolutionize education by automating time-consuming tasks and freeing up teachers' time for more meaningful interactions with their students. However, it's crucial that educators continue to provide personalized feedback and guidance, and that AI tools are used responsibly and ethically.

    • AI in the workplace: Equity, oversight, and balance: AI tools can be beneficial but also lead to inequitable outcomes, lack of human oversight, and feelings of workplace overload. It's important to consider the role of AI in the workplace and maintain a balance between human and AI capabilities.

      While AI tools like ChatGPT can be useful for ideation and drafting, their increasing use in the workplace can lead to inequitable outcomes and feelings of workplace overload. The discussion highlighted concerns about biases in AI grading, lack of human oversight, and the potential for AI to replace human jobs. A listener's experience of being constantly compared to ChatGPT in a meeting was described as a dystopian and demotivating situation. Another listener shared how the increasing demands of their job and lack of support have led them to rely on AI tools to manage their workload. These stories underscore the need for thoughtful consideration of the role of AI in the workplace and the importance of maintaining a balance between human and AI capabilities.

    • Trust issues between workers and management over AI's impact on workload: Companies should focus on building trust by communicating AI's role and distributing workload fairly, while training and education can help alleviate concerns and foster a positive working environment.

      While AI can be a valuable tool for reducing workloads and helping workers manage tight deadlines, there is a concern that it may lead to increased expectations and workloads from managers. A study by Accenture revealed a significant disparity between workers and bosses regarding the potential impact of AI on stress and burnout. Workers fear that AI will lead to more work, while bosses view it as a productivity enhancer. This trust issue between workers and management is a significant concern and may hinder the successful implementation of AI in the workplace. Companies should focus on building trust by ensuring clear communication about AI's role and the distribution of workload. Additionally, training and education for both workers and management about the capabilities and limitations of AI can help alleviate concerns and foster a positive working environment.

    • Open dialogue about AI in the workplace: Encourage open and inclusive discussions about AI, involving everyone from individual contributors to the CEO, to effectively integrate AI into jobs and mitigate risks in rapidly changing industries.

      It's important for companies to have open and inclusive discussions about the use of AI in the workplace. Instead of implementing top-down rules, a more organic and collaborative approach should be taken where everyone from individual contributors to the CEO has a voice. This can lead to a better understanding of the capabilities and limitations of AI, and how it can be effectively integrated into jobs. Hank Green, a legendary content creator, shares similar sentiments about the importance of open dialogue, especially in the rapidly changing world of online video and media creation. He emphasizes the need for creators to adapt and diversify their platforms to mitigate the risks of potential changes or bans. Overall, the conversation highlights the value of open communication and collaboration in navigating the complexities of technology in the workplace and creative industries.

    • TikTok's algorithm limits creator diversity and monetization: The algorithm can decrease reach and views for creators trying new content or monetizing their audience, leaving them feeling trapped and undervalued.

      TikTok's algorithm can negatively impact creators who try to diversify their content or monetize their audience. The algorithm is sensitive to changes in engagement and can significantly decrease the reach and views of such content. Creators are often left feeling trapped in a box, making the same content to reach new audiences, while the content that connects with their existing audience has less reach but is more valuable for building long-term relationships. Additionally, TikTok's lack of transparency regarding earnings and inconsistent payouts has left many creators feeling undervalued, leading to a relatively quiet response from them regarding the platform's potential bans or other issues.

    • Understanding the Impact of TikTok on Culture and Interactions: TikTok's unique features and attention economy fuel rapid cultural creation, shaping how people see the world and interact with others. However, concerns over censorship and algorithmic control highlight the importance of balancing individual creativity and platform control.

      TikTok is more than just a social media platform – it's a significant part of people's lives, shaping how they see the world and interact with others. The rapid cultural creation on TikTok, fueled by the attention economy and the platform's unique features, has led to spontaneous self-organization and the emergence of new trends and creative projects. However, the ephemeral nature of content on TikTok, combined with concerns over censorship and algorithmic control, highlights the importance of understanding the role these platforms play in our lives and the potential implications for free speech and cultural expression. The lesson of TikTok may be that the speed and spontaneity of culture creation are becoming increasingly important, but it also raises questions about the balance between individual creativity and platform control.

    • TikTok's power in content discovery: TikTok's algorithm promotes content during the moment it matters, giving creators a chance even with low views. Investing deeply in work and building a loyal following are keys to success in this new media landscape.

      The ability of algorithms to identify and promote content during the moment it matters is a game-changer, particularly on platforms like TikTok. This is because TikTok excels at discovery, giving content a chance even with low views, unlike YouTube which requires more human engagement. The creator economy is evolving, and journalists are being encouraged to adopt influencer strategies to stand out. However, not all approaches to content creation are equal. Those who invest deeply in their work and build a loyal following have a higher potential for success in this new media landscape. The debate between individual creators and big institutions continues, but it's clear that trust in individuals is on the rise. Ultimately, the future of media will likely be a blend of both, with individuals driving engagement and institutions providing resources and stability.

    • The Role of Trust in Social Media: Perspectives from Hank Green and Nilay Patel: Hank Green and Nilay Patel discuss the importance of trust in social media, with differing perspectives on the role of individuals and structures. They agree on the significance of trust-building institutions and individuals in a world of automated content creation, and express unease about AI's potential impact on fact-checking and content creation.

      While there may be disagreements on various aspects of the social media landscape, such as the role of algorithms and the impact of AI on journalism, there is a shared concern for the importance of trust and the potential consequences of these advancements. Hank believes that people make conscious choices when engaging with content, while Nilay sees it as a structural issue. Both agree on the significance of trust-building institutions and individuals in a world where content creation is increasingly automated. Regarding AI, there's a sense of unease about its role in replacing human content creators and fact-checkers, with a recognition that it can be a helpful tool when used responsibly. Ultimately, the discussion highlights the need for a nuanced understanding of the complex relationship between technology, content creation, and trust.

    • Navigating the magic and challenges of the internet: Improve understanding of algorithms, promote education and awareness to combat misinformation and deepfakes, and focus on long-term solutions despite technology's rapid evolution.

      The Internet, while presenting numerous challenges such as misinformation and the manipulation of algorithms, still holds pockets of delight and magic where people create and share good content. The solution to these issues lies not in suppressing speech, but in improving our collective understanding of how algorithms should function and being better as a society. The rapid evolution of technology, however, makes it challenging to keep up and find long-term solutions. The conversation also touched upon the issue of deepfakes and their potential impact on national issues, but the focus shifted to the emergence of deepfakes on a smaller scale, affecting individuals and communities. This highlights the need for ongoing education and awareness to combat these issues effectively.

    • Deepfakes in schools: A new challenge: Deepfakes, though not authentic, can cause outrage and backlash. Greater awareness, education, fact-checking and investigating origins are crucial.

      AI deepfakes are becoming more accessible and are being used to create problems, even in smaller settings like schools. A recent incident at Pikesville High School in Baltimore County, Maryland, involved deepfake audio recordings of the school's principal making inflammatory racist and anti-Semitic comments. The recordings caused outrage and backlash, but it was later discovered that they were not authentic. Instead, they were created by the athletic director as retaliation for an investigation into his mishandling of school funds. The ease of creating and spreading deepfakes highlights the need for greater awareness and education about this technology, as well as the importance of fact-checking and investigating the origins of such content.

    • Deepfakes in Local Communities: A Threat to Trust and Authenticity: Deepfakes, increasingly accessible and convincing, pose a significant threat to individuals and communities, particularly in areas with scarce journalism resources. Ease of use and low cost raise concerns about potential misuse, as seen in the case of a principal falsely incriminated by a deepfake audio.

      As AI technology advances, creating deepfakes has become increasingly accessible and convincing, posing a significant threat to individuals and communities, particularly in areas where journalism resources are scarce. The case of Dazhon Darien exploiting this technology to falsely incriminate a principal is a stark reminder of this issue. The deepfake audio of the principal was so realistic that it was difficult to distinguish it from an authentic recording, even for experts. The technology has advanced significantly in just a few years, allowing for realistic synthetic voice creation with minimal voice samples and insertion of natural speech cadence and background noise. This ease of use and low cost raises concerns about the potential misuse of deepfakes in local communities, where trusted authorities to determine authenticity may be lacking. This incident serves as a warning of the need for heightened awareness and skepticism towards potentially manipulated evidence.

    • Is this story too good to be true? Be skeptical of emotionally charged stories without credible context. Support local journalism for trustworthy information.

      In an era of advanced AI technology and deepfakes, it's increasingly challenging to distinguish between real and manipulated media. To avoid falling victim to misinformation, ask yourself two questions: Is this story confirming a belief I want to hold? And does it evoke a strong emotional response? Be skeptical of media that triggers these feelings without merit. Additionally, consider the context – does the content align with the person or situation? Unfortunately, deepfakes make it easier for liars to get away with deceit, a phenomenon known as the "liar's dividend." To combat this, support local journalism, which provides trustworthy information and a deeper understanding of community context. Tech companies also have a role to play in preventing the misuse of their tools for malicious purposes.

    • Preventing Misuse of Synthetic Voice Technology: Tech companies should require permission to clone voices, use audio watermarks, and implement content moderation and financial transaction flagging to prevent misuse of synthetic voice technology. Individuals should establish secret codes with loved ones to verify identity in case of potential voice impersonation scams.

      As voice synthesizing technology advances, it's crucial for tech companies to implement measures to prevent misuse, such as requiring permission to clone someone's voice or using audio watermarks. Additionally, content moderation and flagging suspicious financial transactions can help prevent scams. It's essential for individuals to establish secret passphrases or questions with their loved ones to verify their identity in case of potential voice impersonation scams. This Mother's Day, consider having a conversation about this topic and creating a secret code word or phrase to ensure the safety of your family. Tech companies and individuals must work together to mitigate the risks associated with synthetic voice technology.

    • Sending Tips to The New York Times: Follow ethical journalistic practices when sharing information with reputable news organizations to maintain transparency and uphold journalistic integrity.

      The New York Times has a dedicated email address for submitting tips, and on the show it was jokingly suggested that listeners send it a deepfake of Kevin making an inflammatory statement. However, it's important to note that the Times explicitly asked not to send such emails, emphasizing the importance of respecting journalistic integrity and ethical standards. This interaction highlights the importance of following guidelines when submitting information to reputable news organizations. It's crucial to ensure that any information shared is accurate, relevant, and obtained ethically. In the digital age, where misinformation can spread rapidly, it's more important than ever to uphold journalistic values and maintain transparency. So, while it may be tempting to share juicy or inflammatory information, it's essential to consider the potential consequences and ensure that any actions align with ethical journalistic practices.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Additional Reading:

     

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     

    Additional Reading:

     

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Additional Reading: 

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google IO, including the launch of A.I. overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Additional Reading:

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:

    Related Episodes

    'Another Body' Review - Late Night Thoughts

    Welcome to The CB Media Network. In this video, Khalil reacts to the documentary ‘Another Body’, which releases in theatres mid-November. If you are a fan of true crime, technology or CGI, this is the documentary for you!

    Other Podcasts: Check out Movie Madness with Khalil Jamal: https://ciut.fm/shows-by-day/movie-madness/

    Subscribe: If you haven't already, make sure to subscribe to CB Media Network for more instant reactions, news breakdowns and amazing podcasts! Click the bell icon to receive notifications whenever we upload new videos.

    Connect with Us: Instagram: @comicboys_ • X: @comicboys_ • Website: https://cbmedianetwork.wixsite.com/thecbmn

    About CB Media Network: We're your backstage pass to the world of cinema and entertainment. Our team is dedicated to bringing you exclusive interviews with actors, directors, and industry insiders. But that's not all: we're here to expand your cinematic horizons, sharing unique perspectives and uncovering hidden gems that may have slipped under your radar.

    What's in Store: Interviews with actors, directors, and film industry experts. Spotlighting movies you may have never considered. Exploring unique perspectives on beloved classics. Reviews, recommendations, and behind-the-scenes insights. A dash of humor and a whole lot of cinephile enthusiasm!

    #AnotherBody #AI #Documentary #Deepfake #Pornography #Review #CBMediaNetwork #Movie #Scifi #Thriller #Robot #Crime

    Israel Vows Response, China Growth Fades & Criminalising Deepfakes

    Your morning briefing, the business news you need in just 15 minutes.

    On today's podcast:

    (1) Top Israeli military officials reasserted that their country has no choice but to respond to Iran's weekend drone and missile attack, even as European and US officials boosted their calls for Israel to avoid a tit-for-tat escalation that could provoke a wider war.

    (2) China announced faster-than-expected economic growth in the first quarter – along with some numbers that suggest things are set to get tougher in the rest of the year.

    (3) Federal Reserve Bank of New York President John Williams has told Bloomberg the central bank will likely start lowering interest rates this year if inflation continues to gradually come down.

    (4) Goldman Sachs's back-to-basics approach is paying off as it posted profits that vaulted past expectations.

    (5) The UK will criminalise the creation of sexually explicit deepfake images as part of plans to tackle violence against women.  

    Folge 43: Bei Risiken und Nebenwirkungen fragen Sie meine künstliche Intelligenz

    A photo of Donald Trump's arrest and the Pope in a thick white puffer jacket: these are among the works of artificial intelligence that look so real they can barely be distinguished from reality. But AI is not only used to generate images; it is also applied in many other areas of life, such as medicine, pharmacy, the working world, and our everyday routines. AI has strongly shaped our lives in a very short time and remains ever-present. But is AI an enrichment for us, or do the dangers outweigh the benefits? In this episode, Lena and Perdita talk about how AI works in general and what influence it has on us, but also how we humans shape AI. They discuss the pros and cons of using AI: the opportunities it offers and the dangers that can arise, for example through so-called deepfakes. More information and research links: https://www.futter-fuers-hirn.de/podcast/

    How an AI pope pic fooled us

    An AI-generated image of Cool Pope in immaculate drip went viral over the weekend and most everyone thought it was real. The Verge’s James Vincent explains how we should navigate our new internet reality.

    This episode was produced by Amanda Lewellyn, edited by Matt Collette, fact-checked by Avishay Artsy and Siona Peterous, engineered by Paul Robert Mounsey, and hosted by Sean Rameswaram. Transcript at vox.com/todayexplained

    Support Today, Explained by making a financial contribution to Vox! bit.ly/givepodcasts

    Learn more about your ad choices. Visit podcastchoices.com/adchoices