
    The Times Sues OpenAI + A Debate Over iMessage + Our New Year’s Tech Resolutions

    January 05, 2024

    Podcast Summary

    • New York Times sues OpenAI and Microsoft for copyright infringement during Kevin Roose's vacation: The New York Times sued OpenAI and Microsoft for using its articles without permission in AI model training, disrupting Kevin Roose's vacation and sparking a debate over AI use of copyrighted material.

      During the holiday season, Kevin Roose, a tech columnist at The New York Times, received unexpected news while on vacation at a bird sanctuary: The New York Times was suing OpenAI and Microsoft for copyright infringement, specifically for using millions of copyrighted New York Times articles to train AI models without permission. The news came as a surprise to Roose, who was enjoying his break holding cups of seeds for wounded birds and watching them land on him to eat. Despite the disappointing news, Roose had a productive vacation, finishing several books, including "The Wager" and "The Spy and the Traitor." Meanwhile, Eric Migicovsky, the CEO of Beeper, joined the podcast to discuss his company's hack of iMessage, which briefly gave Android users blue bubbles. The lawsuit highlights the ongoing debate around AI use of copyrighted material and the potential consequences for the companies involved.

    • New York Times Sues OpenAI and Microsoft Over AI Models: The New York Times has taken legal action against OpenAI and Microsoft for using its copyrighted materials in ChatGPT and Copilot, marking the first time a major US news organization has sued over copyright issues in AI. The lawsuit raises concerns about AI models functioning as substitutes for authentic journalism and about potential compensation for publishers.

      The New York Times has filed a lawsuit against OpenAI and Microsoft over the use of its copyrighted materials in the development and output of their AI models, specifically ChatGPT and Copilot. This marks the first time a major American news organization has taken legal action against AI companies over copyright issues. The Times argues that the companies have created products that function as substitutes for the Times and may draw audiences away from authentic journalism. The lawsuit also raises concerns about the training and ongoing output of AI models using copyrighted materials, and whether publishers should be compensated for their use. The Times had reportedly been in negotiations for a licensing deal with OpenAI and Microsoft, but those talks appear to have stalled. The lawsuit is a significant development in the ongoing debate about the use of copyrighted materials in AI models and the potential impact on traditional media companies.

    • Impact of AI models on NYT's brand and reputation: AI models like ChatGPT may generate inaccurate or made-up info, diluting NYT's brand built on authority, trust, and accuracy. NYT argues these models don't learn like humans and may not qualify for fair use.

      The New York Times is concerned about the impact of AI models like ChatGPT on its brand and reputation. The Times argues that when these models generate inaccurate or made-up information and attribute it to the New York Times, it dilutes the value of the brand, which is built on authority, trust, and accuracy. The Times also believes that these models are not learning in the same way humans do, but rather reproducing and compressing copyrighted information with the intention of building a product that competes with its journalism. Fair use, a doctrine in copyright law that allows limited use of copyrighted material without permission, may not apply in this case as the New York Times argues that its journalism is a creative work that requires real effort to produce, rather than just a list of facts. The implication is that AI models should not be able to read and learn from copyrighted material without permission or compensation.

    • The Debate Over AI-Generated Content from Copyrighted Materials: The New York Times argues that AI-generated content from copyrighted materials could harm demand and revenue, while OpenAI and Microsoft believe their systems create new content under fair use. The debate revolves around the content's transformative nature and its market impact.

      The use of AI to generate content from copyrighted materials is a complex issue with valid arguments on both sides. The New York Times argues that such AI-generated content could harm the demand for the original work, potentially leading to a loss in revenue. On the other hand, OpenAI and Microsoft argue that their AI systems do not create exact copies of copyrighted works, but rather learn from them to generate new content. They believe this falls under fair use and cite the Google Books case as an example. The debate revolves around the transformative nature of the AI-generated content and its potential impact on the market for the original work. However, it's important to note that both parties have declined to comment extensively on the matter, and the specifics of the lawsuit, such as the extent of copying and the exact contents of the disputed works, are still under discussion.

    • New York Times v. OpenAI/Microsoft: Copyright and AI. The New York Times lawsuit against OpenAI/Microsoft over AI model use of copyrighted material could set a precedent for the industry, potentially forcing significant changes in how AI companies handle copyrighted works.

      The ongoing lawsuit between the New York Times and OpenAI/Microsoft raises important questions about the use of copyrighted material in training artificial intelligence models. The New York Times argues that the models are not transformatively using their content, but instead memorizing and regurgitating it verbatim. OpenAI and Microsoft counter that the models are not typically used in such a way and that they are making efforts to prevent this behavior. However, the debate goes beyond just the "Snow Fall" example and hinges on the broader issue of whether training AI models on copyrighted works falls under fair use. The outcome of this lawsuit could set a precedent for how AI companies handle copyrighted material in their models, with potential consequences for the industry as a whole. The case is expected to take months to resolve, with possible outcomes ranging from a settlement to a ruling in favor of the New York Times that could force AI companies to significantly alter their practices.

    • Possible financial consequences for publishers from AI companies using their data without consent: Publishers could face negotiations or "link taxes" if AI companies use their content without consent, potentially impacting the open web principle and journalism revenue.

      The future of AI companies using publisher data without consent could lead to significant financial consequences for publishers, potentially resulting in a "link tax" or negotiation requirement for the use of their content. This precedent was set with the deals between publishers and tech giants like Google and Meta over the past decade. Publishers felt they were losing ad revenue due to these companies' superior advertising engines, leading to regulations that forced negotiation for the right to display links. If similar regulations apply to AI companies, they may have to negotiate with publishers to show links, impacting the open web principle. Even if publishers don't win this current copyright case, the potential loss of journalism revenue and subsequent decrease in journalism production is a significant concern. Unlike social media and search engines, where publishers benefited from increased exposure and potential ad revenue, it's unclear if publishers gain the same value from having their data used to train AI systems.

    • AI and Copyrighted Material: A Pressing Issue for Publishers and AI Companies. Europe should engage in discussions about AI use of copyrighted material, as the current fair use model may not be sufficient and copyright laws may not fully address AI technology's unique aspects. Potential solutions include ad-supported models, paying data sources, or metered usage, but the legal landscape is still unclear.

      As AI technology continues to evolve, particularly in the generative AI industry, the relationship between AI companies and publishers regarding copyrighted material is a pressing issue that requires societal decision-making. The current model of claiming fair use may not be sufficient, and existing copyright laws may not fully address the unique aspects of AI technology. Europe, in particular, should engage in this discussion, as the outcome will impact us all. If the New York Times or similar entities succeed in their lawsuits against AI companies, it could potentially disrupt the business model for the generative AI industry. Ad-supported models could be a viable solution, but open-source communities and smaller entities might face challenges. Paying every data source for usage could be impractical due to the vast amount of websites involved, leading to metered usage and potentially small payments. However, it's essential not to jump to extreme conclusions before the legal landscape becomes clearer.

    • AI-generated content sparks legal and ethical debates: The New York Times case raises questions about ownership, compensation, and potential lawsuits. The divide between iMessage and Android users highlights the need for clear guidelines and regulations regarding AI and interoperability.

      The use of AI in generating content, whether through text-to-image models or news articles, raises complex legal and ethical questions. The New York Times case has sparked debates about ownership, compensation for contributors, and potential lawsuits. Publishers are closely watching this case to understand its implications for their own organizations. Moreover, in the realm of consumer technology, the divide between iMessage users with blue bubbles and Android users with green bubbles continues to create tension. Apple's decision to keep iMessage exclusive to its platform has led to exclusion and bullying of Android users in group chats. A new app, Beeper, aims to unite various chat applications, but its impact on this long-standing issue remains to be seen. These discussions highlight the need for clear guidelines and regulations regarding AI-generated content and interoperability between different platforms. As technology continues to evolve, it is crucial to ensure that it benefits all users equitably and respects their rights and privacy.

    • Apple's Walled Garden and Interoperability Debate: Apple's tight control over iMessage sparks debate on interoperability with other platforms, with regulators scrutinizing tech companies' practices in 2024.

      Tech companies, specifically Apple, face increasing pressure to open up their walled gardens and allow interoperability with other platforms. This issue came to the forefront when Beeper, a company that reverse engineered iMessage to let Android users send messages on the platform, was met with swift action from Apple. The debate is expected to gain significant attention in 2024, as regulators worldwide scrutinize tech companies' practices regarding app stores, payment systems, and communication bubbles. Eric Migicovsky, the co-founder of Beeper, joined the conversation to discuss the project's history and the skirmish with Apple. He shared his own experience with the fragmentation of messaging platforms and his motivation for creating Beeper: frustration with the proliferation of chat apps and a desire to unify communication channels. Having long used WhatsApp himself, he acknowledged that integrating iMessage was not part of the original plan. The conversation touched on the growing interest from regulators in addressing these walled gardens, the challenges of navigating the complex landscape of tech regulation, and the potential implications for companies like Apple and Google.

    • Discovering a way to send iMessages from Android leads to creation of Beeper Mini: A 16-year-old's discovery enabled iMessage on Android, but Apple's response led the team to continue improving messaging experiences for other networks.

      The creation and release of Beeper Mini, a new app designed to bring iMessage functionality to Android devices, was made possible by a discovery by a 16-year-old named James Gill, who had figured out a way to send iMessages from Android. Apple had previously shut down similar attempts, but this time the team behind Beeper saw an opportunity to improve the iMessage experience for Android users without requiring Apple to make any changes. They believed that Apple would appreciate their efforts, as iMessage is the default texting app on iPhones and a significant portion of the market. However, instead of a thank-you note, Apple responded by taking action against Beeper Mini. The team had known they were poking a bear, but they were committed to providing a better encrypted messaging experience for Android users. Despite the challenges, they continued to support 15 different chat networks, including iMessage, on their original Beeper app.

    • Apple blocks inter-platform communication with Beeper app: Apple's prioritization of iMessage for its own users may limit interoperability and communication between iPhone and Android users, potentially as a strategic move to nudge Android users toward buying iPhones.

      Apple's prioritization of its messaging app, iMessage, for its own users has led to a situation where inter-platform communication between iPhone and Android users is compromised. Beeper, an app aimed at enabling encrypted messaging between these platforms, was met with resistance from Apple, which blocked the app from working with iMessage. Apple's justification for this action was based on security concerns, but some argue it is a strategic move to keep iMessage exclusive and nudge Android users toward buying iPhones. This situation highlights the potential tension between a company's desire to offer exclusive features to its own user base and the need for interoperability and communication between different platforms.

    • Apple's iMessage and its Impact on Competition: iMessage's competitive impact stems from its default status on iPhones and deep integration into the ecosystem, making it difficult for competitors to replicate the same user experience. Regulations like the EU's Digital Markets Act aim to address this by mandating interoperable interfaces, reducing the dominance of walled gardens.

      The debate around Apple's iMessage and its impact on competition primarily revolves around the fact that iMessage is the default messaging app on iPhones, which cannot be changed. Apple defenders argue that users have alternatives like WhatsApp and Signal for interoperable messaging. However, the sticking point is that iMessage's deep integration into the iPhone's ecosystem makes it difficult for competitors to replicate the same level of user experience. This issue is further compounded by Apple's control over its App Store and default settings. The European Union's Digital Markets Act aims to address such issues by mandating large tech companies to open interoperable interfaces for their networks and services. This is a step towards reducing the dominance of walled gardens in the tech industry. Ultimately, the future of user experiences depends on the choices we make as consumers. If we want seamless interoperability and communication across different platforms, we need regulations that encourage competition and innovation. The recent trend of regulatory interventions in the tech industry suggests that we may be past the peak of walled gardens. However, companies will continue to fight back, and it remains to be seen how these regulations will shape the tech landscape in the coming years. As consumers, we must consider the experiences we want and advocate for policies that foster a more open and interoperable digital world.

    • Reflecting on Technology Use and Setting Goals for the New Year: Consider reflecting on past experiences with technology resolutions or goals, and set achievable goals for the upcoming year to improve focus and productivity.

      The ease of communication in the future might come at the cost of a corporate monopoly. Meanwhile, when it comes to personal goals for the new year, the hosts discussed their preferences between resolutions and goals. The hosts, Casey and Kevin, shared their experiences with past resolutions and goals, particularly those related to technology use. Casey's goal last year to use his phone less did not go as planned, while Kevin's goal to use his phone more held steady. For the upcoming year, Casey aims to limit background noise from technology, such as keeping a YouTube video playing while reading emails, to improve focus and productivity. The hosts encourage listeners to reflect on their relationships with technology and set achievable goals for the new year.

    • Mindfully engaging with YouTube and other platforms: Disabling auto-play and intentional video selection on YouTube can reduce mindless consumption and promote more meaningful engagement. Explore alternative activities for balanced screen time.

      Limiting passive consumption of content on YouTube and other platforms can help regain control over time and attention. The speaker shared how they found themselves mindlessly watching videos, even when not fully engaged, leading to hours spent on the platform. To address this, they suggested disabling the auto-play feature on YouTube, which requires intentional selection of the next video, creating a speed bump in the consumption process. This simple change can help reduce mindless consumption and allow for more intentional engagement with content. Additionally, the speaker encouraged exploring other activities, such as reading books or taking walks, as alternatives to passive screen time. Overall, the conversation emphasized the importance of being mindful of digital habits and making conscious choices to prioritize attention and time.

    • Shift focus to "more delight, less fright" with technology: Intentionally filling your phone with joyful apps and images, rather than sources of anxiety and negativity, leads to a healthier relationship with technology.

      Instead of focusing on reducing screen time or feeling guilty about phone use, aiming for "more delight, less fright" can lead to a healthier relationship with technology. This approach involves intentionally filling your phone with apps, widgets, and images that bring joy and positivity, rather than anxiety and negativity. By shifting the emotional experience of using your phone, you may find yourself using it more mindfully and appreciatively. This concept was inspired by the idea of noticing and cultivating delight in everyday life, as advocated by author Catherine Price. Try creating a "Delights" album on your phone, filling it with images that bring you joy, and setting it as your home screen to start your day with a positive and delightful experience.

    • Organize apps by emotional impact for better digital well-being: Sorting apps onto screens based on whether they bring joy, productivity, anxiety, or negative emotions can improve digital well-being. Be honest with yourself and trust your instincts to make conscious choices about the apps and technologies you use.

      Organizing your phone's apps into screens based on their emotional impact can help reduce anxiety and improve your digital well-being. The speaker shares her strategy of keeping apps that bring joy and productivity on the first screen, while moving those that cause anxiety or negative emotions to the second screen. She also emphasizes the importance of setting realistic and achievable tech goals, trusting your instincts, and allowing for flexibility. By being honest with yourself and listening to your instincts, you can make conscious choices about the apps and technologies you use, and fill your digital space with things that bring you delight rather than stress.

    • Collaboration in Video Production: Effective video production requires a team effort, with various roles contributing to the process. Engaging with the audience builds a stronger connection.

      Effective video production requires a team effort. In the discussion, Ryan Manning and Dylan Bergeson were recognized for their work on the Hard Fork YouTube channel. They were joined by Paula Szuchman, Pui-Wing Tam, and Jeffrey Miranda, who contributed to the production process. This highlights the importance of collaboration and the various roles necessary to bring a video project to life. Additionally, the team encouraged viewers to share their resolutions with them, emphasizing engagement and interaction with their audience. Lastly, a light-hearted joke was made about the inconvenience of receiving text messages from Android users. Overall, this conversation underscores the value of teamwork, creativity, and audience connection in video production.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT


    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming


    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT


    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.



    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out


    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     


    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check


    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic


    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs


    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.


    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT


    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines of the week in another round of HatGPT.


    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends


    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.


    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School


    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.


    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer


    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab


    Related Episodes

    The Valley Current®: Are BigCPA firms now competing against BigLaw?


     To what degree will artificial intelligence accelerate the merging of legal and financial services? There is already a lot of overlap between the two, given the nature of the industries and the fact that both are highly regulated by the government. Overseas, there is already news of CPA firms hiring more lawyers and creating law firms under the CPA brand, but here in the US there are laws that keep the two separate. Could the incorporation of AI tools be the catalyst that forces these two industries to expand their services and incentivize flexible regulation? Host Jack Russo asks CPA Steve Rabin whether the United States will follow international trends and start seeing law firms competing directly with CPA firms.

     

    Click here to read more about the role of AI in legal services: https://www.law360.com/pulse/articles/1701718/kpmg-legal-services-head-relishes-role-with-ai

    And this article on auditing the auditors: https://www.wsj.com/articles/we-audit-the-auditors-and-we-found-trouble-accountability-capital-markets-c5587f05

    "Humans are still more creative than AI."

    This podcast episode is about artificial intelligence, ChatGPT, and the interplay between humans and machines. What changes will systems of this kind bring to our lives and our work? Which activities and which jobs will machines and AI take over? Which industries will be hit hardest? We discuss this with Holger Volland. He is an information scientist who worked as an internet pioneer at pixelpark, one of the first multimedia agencies, in Berlin and New York, and as vice president of the Frankfurter Buchmesse. Today he heads the brand eins publishing house in Hamburg, which publishes the monthly business magazine of the same name. Holger Volland is a communications and digital expert, lecturer, keynote speaker, and author of the books "Die kreative Macht der Maschinen" ("The Creative Power of Machines: Why Artificial Intelligences Will Determine What We Feel and Think Tomorrow") and "Die Zukunft ist smart. Du auch?" ("The Future Is Smart. Are You? 100 Answers to the Most Important Questions About Our Digital Everyday Life"). We ask him whether we are currently living through a technological revolution, whether ChatGPT is a gigantic plagiarism machine, and whether the comparison to a "stochastic parrot" is apt. We explore the consequences AI will have for authorship, copyright, and intellectual property. What can AI take off our hands, and how can it simplify our lives and help us concentrate on our genuinely human abilities? How creative is AI? We clarify what we humans still do better and talk about the dangers of deepfakes. What might a fact-checker look like? Will there be, or can there be, a kind of watermark for verified, true content and information? What central rules do we need for dealing with AI? What risks and what opportunities does AI offer?

    First lawsuits against ChatGPT for copyright infringement

    On July 10, the digital magazine chortle.co.uk reported that a comedian, Sarah Silverman, had sued the companies behind two of the most powerful artificial intelligence engines for infringing one of her works.
    >> Also read the article: https://tinyurl.com/bdeva7eh
    >> Discover all of Altalex's podcasts: https://bit.ly/2NpEc3w

    Episode 79 - Artificial Intelligence Regulation Around the World

    Episode 79 - Artificial Intelligence Regulation Around the World

    The launch of ChatGPT has forced organizations and institutions across the economy to ask themselves what generative AI means for them in terms of risks and opportunities. Governments and regulators find themselves asking similar questions. Artificial intelligence “going mainstream” because of ChatGPT accelerated regulatory efforts and shaped the policy and political conversation on AI.  

     The Director of the Center for Data Innovation, Daniel Castro, joins us on the podcast to discuss how countries around the world are going about regulating artificial intelligence while stimulating innovation. He offers his insight on the different approaches to AI in Europe, Asia, and North America.