
    Casey Goes to the White House + The Copyright Battle Over Artificial Intelligence + HatGPT

    November 03, 2023

    Podcast Summary

    • Dentist Visit Due to a Broken Tooth on 'Dots' Candy: Casey visited the dentist after breaking a tooth on 'Dots' candy during Halloween, expressing disappointment and frustration and jokingly calling for a ban.

      Casey had to visit the dentist after accidentally breaking a tooth on a candy called Dots, which he believes should be regulated because of its hard, sticky texture. The incident occurred during Halloween, and Casey expressed his disappointment and surprise, as the candy was the only one in stock at the store. He also vented about the annual panic surrounding Halloween candy safety and called on the Biden administration to consider banning Dots. Meanwhile, Kevin Roose, a tech columnist for The New York Times, shared his experience of visiting the White House to discuss the president's new executive order on artificial intelligence.

    • White House Orders Regulation of Next-Gen AI Models: The White House issued an executive order mandating disclosures and safety tests for next-generation AI models, signaling increased government oversight and involvement in the industry.

      The White House recently issued an executive order aimed at regulating artificial intelligence (AI), particularly the training of next-generation models. The order, developed with input from industry leaders and government officials, requires companies to notify the federal government when they train such models and to disclose the results of safety tests. The industry was caught off guard, as the trigger for these requirements is a threshold of computing power used in training. The order also addresses potential harms of AI, such as discrimination, bias, fraud, and disinformation, and directs government agencies to prevent AI from amplifying these problems. Overall, the executive order signals to the AI industry that Washington is paying close attention and will be involved from the early stages.

    • The Debate over Reporting Large AI Models to the Government: Open source AI advocates argue for safer, democratized tech and transparency, while closed-approach supporters prioritize rigorous testing and safety measures. The controversy centers on a government order requiring reporting of large AI models for safety testing, which some fear could be a regulatory-capture barrier for smaller competitors.

      The ongoing debate in the tech industry revolves around the controversial executive order requiring that large AI models be reported to the government. The order has sparked controversy over its implications for open source AI versus a more closed approach. Open source proponents argue that openness leads to safer, better tech, democratization, and transparency. Closed-approach advocates, fearing potential harm, prioritize rigorous testing and safety measures. The controversy intensified with the requirement to report the training of any model larger than a certain size, which some view as a step toward regulatory capture and a barrier for smaller competitors. It's important to note, however, that the order doesn't restrict the development or open sourcing of large models; it only requires reporting and safety testing for potential hazards.

    • The Debate on AI Regulation, Closed vs. Open Access: Industry leaders call for AI regulation due to safety concerns, but some view it as a cynical move. The government's involvement is inevitable given AI's widespread applications and potential risks.

      The debate surrounding AI regulation is intensifying, with some advocating a closed approach to prevent potential risks while others argue for open access to foster innovation. Concern over AI safety is not new; industry leaders like Dario Amodei and Sam Altman have expressed worries for years. The recent push for regulation, however, has raised questions about the true intentions behind these calls: some argue it's a cynical move toward self-enrichment, while others believe it's a necessary step to ensure safety. Government involvement was inevitable given AI's widespread applications and potential impact. During his visit to the White House, Kevin discussed the open source issue with Arati Prabhakar, who heads the Office of Science and Technology Policy. Prabhakar acknowledged the dual nature of AI: it democratizes technology while also proliferating potential risks. The debate will continue, and the path forward will involve ongoing discussion and learning from various perspectives.

    • Balanced Approach to AI Regulation: The Biden White House aims to address the potential harms and benefits of AI through a balanced regulatory approach, recognizing both positive use cases and potential risks.

      The Biden White House is approaching AI regulation with a balanced perspective, acknowledging both the potential harms and the benefits. There is optimism about upsides such as microclimate forecasting and renewable-energy development, alongside recognition of risks like bioweapons and cybersecurity threats. This balance is a pleasant surprise compared with some past proposals that overlooked positive use cases, such as calls to eliminate Section 230 of the Communications Decency Act. It remains to be seen how effective the executive order will be in practice, and whether it goes far enough to address existential threats from AI. The speaker's view of AI hasn't necessarily changed, but their view of the regulatory response may have shifted toward recognizing the need for balance and AI's potential to bring about positive change.

    • Addressing potential risks of advanced AI technologies: Governments should proactively address the potential harms of advanced AI technologies by identifying risks and developing mitigations now.

      While it's challenging to predict the exact future implications of advanced AI technologies, it's crucial for governments to begin addressing potential harms and risks proactively. Regulation against theoretical future harms is difficult, and history shows that significant regulations often arise in response to catastrophic events. However, the recent executive order in the US is a step in the right direction, focusing on developing content authenticity standards and addressing potential bioweapon risks. Europe, which is more advanced in AI regulation, might set the standard for the rest of the world. The key is to start identifying potential risks and developing mitigations now, before the situation becomes more serious.

    • Artists Sue Stability AI Over Copyright Infringement: A recent ruling allows a copyright infringement claim against Stability AI to proceed, while most claims against Midjourney and DeviantArt were dismissed due to unregistered copyrights. The case highlights the need for clear guidelines and regulations in the rapidly evolving field of AI and intellectual property rights.

      The legal battle between artists and AI companies over copyright infringement continues, with a recent ruling allowing a claim against Stability AI to move forward. The case, brought by a group of artists including cartoonist Sarah Andersen, alleges that Stability AI's Stable Diffusion image generator directly infringed on their copyrighted works. Most of the claims against the other defendants, Midjourney and DeviantArt, were dismissed because the copyrights involved were unregistered. The ruling marks a significant development in the ongoing debate over who owns the rights to creative works used to train AI models. The core question of whether artists have been wronged in a way that warrants financial compensation remains unresolved, underscoring the need for clear guidelines and regulations in the rapidly evolving field of AI and intellectual property rights.

    • AI-generated content and copyright: Established copyright principles apply to AI-generated content, but human involvement and authorship are contentious issues. Simple outputs might not be eligible for copyright, while complex outputs could be considered human-authored.

      The legal status of AI-generated content, particularly in the realm of copyright, is a complex and evolving issue. According to Rebecca Tushnet, a copyright law expert and professor at Harvard Law School, the established copyright principles can be applied to AI-generated content, but there are nuances and debates surrounding human involvement and authorship. For instance, simple outputs like a banana image generated by an AI might not be eligible for copyright protection due to the lack of human authorship. However, more complex prompts that result in distinct outputs, such as a banana dressed as a 1940s detective, might be considered more human-authored and thus eligible for copyright. The question of whether to count the prompt as human authorship is a contentious one, with some arguing that it should be enough to secure copyright. Another issue is what happens to the rejected outputs. These debates underscore the need for ongoing discussions and potential legal changes to address the unique challenges posed by AI-generated content in the copyright landscape.

    • Copyright implications of training AI models on copyrighted material: The legal landscape around AI models using copyrighted material is evolving, with some arguing it's a violation and others fair use. The Google Books Project serves as an analogy, but the output of AI models may look like copyrighted works without compensation. The responsibility for preventing unauthorized use lies with creators and platforms.

      The copyright implications of training AI models on copyrighted material are complex, with some arguing that such training violates copyright while others believe it falls under fair use. The Google Books Project serves as an analogy: Google copied existing works to build a searchable index, but does not reproduce them wholesale. The output of AI models such as Stable Diffusion, however, may resemble copyrighted works without the original creator receiving any compensation. Whether these models truly transform their inputs or simply collage them remains a point of contention. The legal landscape is still evolving, and it will be important to watch how courts handle these cases. Ultimately, responsibility for preventing unauthorized use of copyrighted material rests with creators and platforms, and the conversation around safe design and fair use in the age of AI must continue.

    • Use of artists' work in AI training and copyright law: The use of artists' work in AI training raises complex copyright issues, with outcomes uncertain and expensive to litigate. Fair use may apply, and the law does not require licensing everything.

      The use of artists' work as training data for AI models raises complex copyright questions. While some argue that this use falls under fair use, artists can still pursue claims based on direct infringement; the outcome of such cases is uncertain and expensive to litigate. The law has historically not required companies to license all the data used in their models, though some companies choose to license anyway. Fair use can be a costly legal battle, but that doesn't mean licensing everything is the only solution. The debate around the use of copyrighted material in AI training continues, and it's essential for companies and artists to stay informed about developments in this area.

    • Actions for artists to protect their work in the AI era: Artists can consider opt-outs, ethical considerations, and moral arguments to protect their work. Traditional licensing deals may not be feasible, and compensation models have limitations. The size and scope of AI models also affect their exposure to lawsuits.

      While there are ongoing debates about the use of AI systems in creating art and the legal protections for artists, there are some actions artists and creators can take. Voluntary opt-outs, similar to Google's respect for robot exclusion headers, can limit the use of one's work without legal requirements. However, the effectiveness and fairness of these opt-outs are subject to debate. Regarding compensation, traditional licensing deals may not be feasible for individual artists due to the high costs involved. Instead, ethical considerations and moral arguments may play a more significant role in the future of protecting artists' rights. However, skepticism exists regarding the effectiveness of models that compensate artists, as the majority of the revenue may not reach the artists themselves. The size and scope of AI models can also impact their protection from potential lawsuits, as larger models with more data may be less susceptible to infringement claims. The ongoing Westlaw case highlights the limitations of licensing deals for individual creators, as the people who wrote the summaries at Westlaw do not see any additional compensation even if the company prevails in the lawsuit. Ultimately, the use of AI in creating art raises complex legal and ethical questions, and while there are no clear-cut answers, creators and artists can take actions to protect their work and advocate for their rights.
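The robot-exclusion mechanism mentioned above works through a site's robots.txt file. As a rough sketch, a site that wants to opt its work out of AI training might serve something like the following; the crawler names shown (GPTBot for OpenAI, Google-Extended for Google's AI-training opt-out) are tokens those vendors have published, but check each vendor's current documentation before relying on them:

```text
# robots.txt — ask AI-training crawlers not to ingest this site.
# Compliance is voluntary: these directives have no legal force.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary crawling can continue for everyone else.
User-agent: *
Allow: /
```

Whether such opt-outs are honored, and whether opt-out is a fair default in the first place, is exactly the debate the episode describes.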

    • Discussing AI liability and copyright protections: AI companies argue they're just tools, but users are responsible for infringing material. Copyright protections may not effectively address new technologies, and economic structures need reconsideration for artist compensation.

      While AI companies argue they are just providing tools and are not responsible for how users deploy them, liability for creating and distributing infringing material, such as counterfeit money or porn, ultimately falls on the user. The conversation also touched on the idea that artists and writers have long relied on copyright protections, but that the rise of new technologies may not be effectively addressed by handing out more rights without reconsidering broader economic structures and how artists are compensated. The conversation ended with a light-hearted segment about Halloween candies and a game called HatGPT.

    • Exploring the unpredictable world of AI and technology: From HatGPT to AI-generated Seinfeld, technology's evolution and AI's capabilities continue to surprise us, raising important questions about their implications.

      Technology and AI are continuously evolving, with new developments and applications emerging frequently. This was highlighted in a discussion of HatGPT, a game in which news stories about technology are drawn from a hat and the hosts generate plausible language about them until one person gets tired. The group also talked about an AI-generated Seinfeld live stream that had been running for months, with one character endlessly walking into a refrigerator; they found it intriguing that the show, famously about nothing, had evolved into almost nothing and yet was more popular than ever. Another topic was a barge-based compute cluster called the Blue Sea Frontier, pitched as a way to avoid government reporting requirements on AI. The group also discussed Joe Biden's growing concerns about AI after seeing AI-generated images and learning about voice-cloning technology. Overall, the discussion underscored the importance of staying informed about the latest tech trends and the potential implications of AI.

    • AI in news, oversight needed to prevent insensitive results: A Microsoft news aggregator generated an insensitive poll, highlighting the need for ethical AI use in news.

      The use of AI to generate news content, including polls, requires careful oversight to prevent inappropriate or insensitive results. This was highlighted in a recent incident in which a Microsoft news aggregator generated an insensitive poll about the death of a young woman and published it next to The Guardian's article about her. The president's reaction to a fictional AI entity in the movie "Mission: Impossible" also serves as a reminder of how media portrayals of AI can shape public perception and policy-making. While the potential benefits of AI in news aggregation and content generation are significant, it is crucial that the technology be used responsibly and ethically to avoid causing harm or offense.

    • The role of humans in AI and technology: While AI and technology offer innovative solutions, human decisions and actions play a crucial role in their implementation and impact. Human oversight and accountability are essential.

      While AI and technology can offer innovative solutions, ultimately, human decisions and actions play a crucial role in their implementation and impact. The Microsoft example of using AI-generated polls in news articles highlights this, as it was a human decision to implement these polls, not an AI one. In the case of Cruise's driverless cars, a human error led to a controversial incident, resulting in regulatory action and the pause of operations. Regulators are enforcing stricter scrutiny on self-driving cars, and this incident may be a significant setback for their widespread adoption. Additionally, Waymo's new addition of barf bags in their rides could be a response to potential passenger behavior or a precautionary measure. These examples underscore the importance of human oversight and accountability in the age of AI and technology.

    • Unexpected twists and turns in everyday experiences: Be adaptable and prepared for the unexpected in all situations, even the seemingly mundane.

      Even in seemingly mundane situations, unexpected twists and turns can occur. During the discussion, the speaker expressed confusion about a smooth ride, unsure whether to expect turbulence, which highlights the importance of being adaptable and prepared for the unexpected. The team also discussed acquiring a hat for the show, with a budget set aside for it; the speaker joked about not wearing hats because of his spiky hair but ultimately agreed to wear one. The conversation also credited various production team members and their roles in creating the podcast.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT


    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming


    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter


    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT


    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.




    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out


    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     



    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check


    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic



    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs


    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.



    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT


    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google IO, including the launch of A.I. overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.



    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends


    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.



    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School


    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.



    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer


    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab



    Related Episodes

    Can You Copyright or Trademark a Logo Designed by AI?


    The legal world is buzzing about AI and its use for all kinds of things, including generating logos, text, and other material people would normally want to register for copyright or trademark protection. I'm particularly nerding out over these issues because my master's degree project involved the training of artificial intelligence systems. Rights to AI-generated content, and to content made on creative platforms, aren't always easy to understand, and they have a big impact on how you can use that content and whether and how you can protect it.

    There's no doubt AI is incredibly useful for generating content, though there is still no substitute for a real human author or artist. But what rights do you have to what it creates for you? Can you use it in the ways you want to?

    Keep in mind that generators are trained on existing material, including works protected by copyright, trademark, and patent law. There have been some court decisions on this precise topic, but the law is not completely settled. There are, however, some certainties and settled principles of law that can guide you.

    AI-assisted programs, like online logo generators, aren't pure AI tools like ChatGPT; instead, they provide templates you can tweak using AI. If you're using an online logo generator, such as the one in Canva (a very popular online program for creating all kinds of visual projects) or Logo.com, you need to read the license terms of the software. Canva and other logo generators license the use of their product, and of the logos generated in it, to you. You'll almost certainly see language saying that you cannot apply for copyright or trademark registration for those logos, and that Canva, and whoever licensed it the clip art, photos, and other elements used in those generated logos, retains ownership of that original art and does not give you an exclusive license to use it. Even when you make a "new" creation with those elements, they still belong to Canva and/or whoever licensed them to Canva.

    I made a logo for Bob's Burgers, for selling burgers, on Tailor Brands' logo-maker website. Their terms say I own full commercial (note they don't say "exclusive") rights to it and can apply for trademark registration for it (through them, naturally, even though they aren't lawyers and will simply copy whatever you provide into the application and submit it, appropriate or not). Well, they're right that I can apply, but registration surely won't be granted. For starters, Bob's Burgers is already a trademark belonging to someone else. Second, they had me pick one of 20 graphics for use as part of the logo, so that graphic element is in no way unique to my logo. The lack of exclusive rights here is fatal. These generators also don't address other issues that can lead to refusal of a trademark registration. Usually you won't be given the rights needed to claim ownership or apply for registration, but even if you are, your logo could still be refused copyright and trademark registration for other reasons.

    If you use another kind of AI tool to create a logo, like Canva's AI tools or DALL-E, the platform typically doesn't claim any ownership rights, including copyright, in the output. That doesn't mean you're in the clear for ownership and registration, however. Some elements of the output may be identical or similar enough to work made by others that using it without proper credit to, and licensing from, them would be infringement.

    Copyright

    I asked DALL-E to make some logos for use in this post. I've seen enough stock graphic elements when doing trademark and copyright searches to know that the crown and scales-of-justice elements are likely to be highly similar or identical to crown and scales designs owned by Getty Images or some other entity or artist. That means not only is it possible I don't own exclusive rights to those elements, it is also possible I would be infringing if I used them commercially (I'm using them educationally here, so that's OK).

    The US Copyright Office has issued some very helpful guidance about copyright ownership of AI-generated works in the US. The general gist is this: copyright only protects works made by humans. AI isn't human. The Copyright Office views the human prompts that generate AI output as akin to instructions to a commissioned artist where the AI determines how the instructions are carried out. In such cases, the output is ineligible for copyright ownership or registration.

    If, however, a human takes AI output and selects, arranges, or modifies it in a creative way, the work may qualify as a work of human authorship that can have copyright protection. There's a catch, though. Any parts that came from the AI are excluded from that ownership and protection. Only the human-authored parts can be protected.

    Trademarks

    The United States Patent and Trademark Office (USPTO) is also working on handling the influence of AI on trademarks and patents. There's no question AI has helped make trademark and patent searches much more efficient, but what happens when AI is involved in the creation of the trademark for which someone is requesting protection?

    As with copyright, AI-generated logos are subject to the generator's terms and conditions regarding whether the user can use them commercially or apply for trademark registration. This is why I ask trademark clients who generated the logo and whether they have all necessary rights to apply for registration. Human- or AI-created, if you don't have the proper rights to ownership or use, you don't have the proper rights to apply for trademark registration. Of course, some people apply anyway, whether they know this or not, and some applications get approved. That doesn't mean all is OK, however. If the AI service retained the rights and sees you using the mark commercially, it will likely take action to have your registration cancelled and to pursue you for infringement and violation of its terms of use. Just because you know someone who has gotten away with it so far doesn't mean it's legal or OK. Like speeding.

    AI generators don't care if they produce infringing material. They aren't yet sophisticated enough to guarantee that a logo they generate won't infringe on someone else's if you use it. That means you have to take whatever they give you and do your own research on it. Run the logo it creates, and distinct parts of it, through a reverse image search on Google and see what comes up. It's really no different from the searches you'd need for images you or an artist you hire create. Those logos I had it create for me could absolutely be infringing on something another law firm is using. The standards for non-infringing and registrable use of an AI-generated trademark are the same as for a human-generated trademark.
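    For the curious, reverse image search engines generally compare compact "perceptual hashes" of images rather than raw pixels, so near-duplicates match even after small edits. Here is a toy difference-hash sketch of that idea in Python; it is an illustration only, not a substitute for an actual search, and the pixel grids are made up for the example (real tools decode the full image first):

```python
def dhash_bits(grid):
    """Toy difference hash: for each row, record whether each pixel
    is brighter than its right-hand neighbor. Similar images yield
    similar bit strings, so a small Hamming distance suggests a match."""
    bits = []
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two hypothetical 4x4 grayscale thumbnails (values 0-255); the second is a
# slightly brightened copy of the first, standing in for a near-duplicate logo.
logo = [[200, 180, 120, 40],
        [190, 170, 110, 30],
        [ 60,  80, 140, 220],
        [ 50,  70, 130, 210]]
near_copy = [[v + 5 for v in row] for row in logo]

h1, h2 = dhash_bits(logo), dhash_bits(near_copy)
print(hamming(h1, h2))  # 0: uniform brightening doesn't change the hash
```

    This is why a lightly tweaked AI output can still turn up an existing logo in a search; changing a few pixels doesn't change what the hash captures.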

    Be Careful What You Ask For

    One more thing to consider is that your prompts to the AI generators could be used against you in infringement cases. If you asked for something that looks like the Starbucks logo, rather than just asking it to design a logo for a coffee shop, that would weigh on the side of what it created being infringement.

    AI isn't perfect. You can't trust it, at least not yet, to give you results that won't cause trouble for you. This isn't limited to logo design. It applies to anything it generates for you. You may also be surprised at what limits there are to the non-AI content you create on sites like Canva and Promo Republic.

    Beyond Logos

    Copyright

    Any creative work you have AI assist you in creating is subject to the same copyright issues as a logo AI helped create. You need to see what the terms and conditions of the generator permit. You also need to determine the extent to which AI was the creator and to which you were the creator. You may remember the case a few years ago about whether a monkey who took a selfie held the copyright to the photo. Because the monkey isn't human, the court held it couldn't own the copyright, so the poor little monkey couldn't make any money to buy treats by licensing the photo to calendars.

    If AI creates your image, music, or text, you don't have the copyright to that work. You'd have to do something to transform it, and you'd still only own the copyright to the parts to which you contributed any creativity.

    As with logos, any creative work you create using the work of others, even on a site like Canva that you might think gives you a license to use whatever you create however you like, is subject to specific licensing terms. Those terms depend on how you're going to use the content and will vary significantly from a flyer you create for a block party or garage sale, to a classroom worksheet, to an advertisement for your business. It's annoying to comb through the terms and conditions on those sites, but if you are going to use any of what you create for a business or other commercial purpose, you are running huge risks by not doing so. If you aren't sure, find out by contacting them or by consulting an intellectual property attorney.

    Trademarks

    If you have AI generate a business name, product name, slogan, etc., then, as with logos, your rights and ownership, and therefore your ability to use them commercially and receive trademark registration for them, depend on the terms and conditions of the generator. You will need proper search due diligence done to make sure you won't be infringing on someone else's trademark rights. Search the names and slogans it gives you to see if something identical or similar is already in use for similar goods or services.
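    Automated tooling can at most flag candidates for a human (ideally an attorney) to review, but a crude first pass at "is something confusingly similar already in use?" can be sketched with Python's standard-library difflib. All the mark names and the 0.8 threshold below are invented for the example:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Ratio in [0, 1] of how closely two mark names match,
    ignoring case and surrounding whitespace."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

# Hypothetical existing marks turned up by a search, and a
# hypothetical AI-generated candidate name to check against them.
existing_marks = ["SilkSoft Hand Cream", "GlowBalm", "HydraSilk Lotion"]
candidate = "Silk-Soft Hand Creme"

for mark in existing_marks:
    score = similarity(candidate, mark)
    if score > 0.8:  # arbitrary cutoff for this sketch
        print(f"Possible conflict: {mark!r} (score {score:.2f})")
```

    A high score here only means "have a professional look at this"; legal likelihood-of-confusion analysis also weighs sound, meaning, and the relatedness of the goods or services, which no string comparison captures.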

    Remember, the AI generator doesn't care if the slogan it generates will get a refusal from a trademark examiner for "failure to function" as a trademark because it's too common a phrase, or that 20 other companies are already using the slogan it generated for your hand cream to market their eye creams and lip balms. The AI is a tool, not a solution, and it certainly isn't a lawyer well-versed in the nuances of trademark law. Not yet, anyway.

    Patents

    In 2020, over 80,000 utility patent applications involved AI, and nearly 20% of all utility patent applications these days involve AI in some way. One of the biggest issues with AI and patents is whether AI can count as an inventor, and if so, to what extent, and how that affects patentability of the invention. A case in 2022 (Thaler v. Vidal) held that an inventor must be human, but this isn't 100% settled law.

    If the AI did help with the invention, can the parts of the invention it didn't help with still be patented, or does the AI involvement render the entire invention unpatentable? What if the part it helped with isn't essential to the invention? What if it is?

    There are other issues as well. For something to be patentable, it can't be something someone with general knowledge in the field of the invention would find obvious. Given that AI is trained on so much of the content on the internet, its knowledge can far surpass a human's in scope. Does that make many more things obvious and therefore unpatentable?

    In early 2023, the USPTO asked for public comment on AI assistance with inventions to help it advise government rulemakers. If you're using AI to help you with an invention, you need to work with a patent attorney well-versed in current law and thinking about AI and inventorship so you can receive good guidance on patentability, filing an application, and handling any issues the USPTO raises about the use of AI with the invention.

    There Is So Much More to AI and IP!

    There are a host of other AI-related issues with intellectual property, such as whether you can keep your work from being used as training for AI generators. If you'd like me to do some posts on those or go into more depth on things I've touched on here, please let me know! DM me on social media or email me at info@kingpatentlaw.com.

    I'm fascinated by AI, and I have a good understanding of the various ways it can be trained. The speed at which it is improving is impressive and sometimes a little scary. It's amazing what it can do. It's not perfect, though, and like any tool, it can be used poorly or intentionally misused. I hope this post has given you a better understanding of some of the limits and issues involved with using AI and other programs for generation of logos and other material.

    Elon Musk on 2024 Politics, Succession Plans and Whether AI Will Annihilate Humanity

    In an interview at WSJ's CEO Council Summit with editor Thorold Barker, Elon Musk talked about whether he regrets buying Twitter, who might eventually take the helm of the three companies he runs, and how AI will change our future.

    Further Reading:
    - Ron DeSantis to Launch 2024 Presidential Run in Twitter Talk With Elon Musk
    - Elon Musk Wants to Challenge Google and Microsoft in AI
    - The Elon Musk Doctrine: How the Billionaire Navigates the World Stage

    Further Listening:
    - Twitter's New CEO: The Velvet Hammer

    What Does ChatGPT Think?

    OpenAI's ChatGPT has sparked a new era in human-machine interaction. From medicine to creative works, AI's abilities seem boundless. However, concerns about AI's power and the need for regulation are growing. Can AI be effectively regulated? Who decides what's good or bad? Join Rebecca Finlay, CEO of the Partnership on AI, and host Alan Stoga as they explore these pressing questions in a New Thinking for a New World podcast.

    AI News v47 2023 - Pioneering AI Developments of the Week


    This edition of AI News showcases cutting-edge AI developments. Anthropic's Claude 2.1 stands out with its enhanced context window and accuracy, improving complex document comprehension. Meta Platforms strategically redistributes its Responsible AI team to emphasize generative AI in product development. Amazon's "AI Ready" initiative commits to training 2 million people in AI by 2025, offering free courses and scholarships. Additionally, a Spanish agency's AI model, Aitana López, emerges as a unique digital influencer, demonstrating AI's expanding role in advertising and social media. These advancements reflect the dynamic evolution and integration of AI in various sectors.

    Follow us on youtube: https://www.youtube.com/@aiawpodcast

    AI News v41 2023 - Pioneering AI Developments of the Week


    Welcome to AIAW News, a special segment of the Artificial Intelligence After Work (AIAW) Podcast, where we bring you the latest and most impactful developments in the world of AI. This week, we explore innovations from industry giants, revealing how AI is reshaping our digital experiences.

    Full Episode here: https://open.spotify.com/episode/5Q453WiXFzIOyddrkZqDYb?si=1bdad24375b24806

    In a landscape where tech giants like Microsoft, Google, and Adobe grapple with monetizing AI due to its high operational costs and customer dissatisfaction with pricing, various strategies and pricing models are being explored to make AI products more affordable and profitable. Meanwhile, OpenAI is projected to achieve a revenue of $1.3 billion in 2023, primarily from ChatGPT subscriptions, marking a significant leap from $28 million in 2022, despite observing a slowing growth rate. The company is also exploring fundraising opportunities at a valuation of $80-$90 billion.

    In the realm of e-commerce, Swedish fintech startup Klarna has launched an AI image recognition tool for shopping and other features like shoppable videos and a cashback rewards program, expanding its features to be more shopping-centric and utilizing AI to enhance user experiences. This comes amidst Klarna navigating through market challenges and a valuation drop, innovating in a domain where tech giants have established a strong presence.

    OpenAI, navigating through chip scarcity and high operational costs, is exploring the development of its own AI chips. This exploration includes considering acquisitions and collaborations and could potentially place OpenAI alongside tech giants like Google and Amazon in controlling chip design crucial to operations. On a national scale, Norway is allocating one billion NOK to AI research over the next five years, focusing on researching the societal impacts of technological development, fostering knowledge about new digital technologies, and spurring innovation in various sectors.

    Lastly, SMIC, China's top chip maker, has achieved a breakthrough in utilizing the 7-nm process for semiconductors, a significant development achieved in two years and faster than global leaders like TSMC and Samsung. This comes amidst US sanctions and global chip shortages and is crucial for China’s semiconductor self-sufficiency, with potential global market impacts.


    Don’t forget to subscribe, like, and share for more insightful discussions on AI advancements! Stay tuned for next week’s episode, where we continue to explore the fascinating world of Artificial Intelligence.

    Connect with Us:

    Website: www.aiawpodcast.com
    Email: contact@aiawpodcast.com
    Twitter: @aiawpodcast

    Disclaimer: All opinions presented in this segment are personal and not based on in-depth research. They should not be taken as true until verified personally by the reader.
