
    Podcast Summary

    • Protecting Human Creativity with Nightshade
      Researchers at the University of Chicago developed Nightshade, a set of tools to help artists protect their digital art from AI replication, ensuring human creativity continues to thrive.

      The name "Nightshade" is inspired by the plant Atropa belladonna, also known as deadly nightshade, which carries deep associations with both death and beauty: it is famously poisonous, yet was historically used as a cosmetic. In the context of the software, Nightshade is a set of tools created by researchers at the University of Chicago to help artists protect their digital art from being replicated, and potentially replaced, by AI models. By applying these tools to their work, artists can create art that appears normal to humans but is disruptive to the AI models that train on it, preserving the unique value of human creativity. The conversation between Scott and guest Shawn Shan highlights the potential implications of AI for human creativity and the need for measures to protect it. While AI models are impressive in their ability to generate images and copy styles, there is a concern that they could eventually replace human artists. Tools like Nightshade aim to ensure that human creativity continues to thrive and evolve.

    • AI models learn from images and text, associating names with specific styles, but can make errors when dealing with complex images
      AI models can mimic an artist's style, but may make errors or produce inaccurate results, requiring human oversight.

      AI models such as Midjourney or DALL·E learn from images and their corresponding text during training. They associate names with specific image styles and try to reproduce similar images when prompted. However, these associations can be "poisoned" into significant errors, especially with complex or nuanced images: a poisoned model might mistake an elephant for a cat, leading to inaccurate results in the future. The same training process enables style mimicry in generative AI, in which a model is trained on an artist's work in order to copy their style. While the quality may not be as good as the original, it can be sufficient for various commissions, potentially replacing the artist in some cases. It's important to remember that these models learn from the data they are given and may make errors or produce inaccurate results, making human oversight crucial.
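
      The poisoning idea described above can be sketched with a toy example. This is an illustration only, not the actual Nightshade technique: here the "model" is a simple nearest-neighbour lookup over (features, label) pairs, and the poison consists of cat-like feature vectors paired with the label "elephant".

```python
# Toy illustration of data poisoning (not the actual Nightshade method):
# a nearest-neighbour "model" over (features, label) training pairs.
# Poisoned samples pair cat-like features with the label "elephant",
# so the model later mislabels similar inputs.

def nearest_label(training, query):
    """Return the label of the training point closest to `query`."""
    return min(
        training,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], query)),
    )[1]

clean = [((0.9, 0.1), "elephant"), ((0.1, 0.9), "cat")]
# Poison: features that look like "cat" data, captioned "elephant".
poisoned = clean + [((0.12, 0.88), "elephant"), ((0.15, 0.85), "elephant")]

query = (0.13, 0.87)  # a cat-like input
print(nearest_label(clean, query))     # cat
print(nearest_label(poisoned, query))  # elephant
```

      Real generative models are vastly more complex, but the failure mode is the same: a model trusts its training pairs, so corrupted pairs corrupt its future outputs.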

    • Disrupting AI Art with Small Changes
      Researchers are developing methods to confuse AI models, preventing them from accurately mimicking specific artistic styles, by adding small changes that exploit differences in human and AI perception of images.

      As artificial intelligence (AI) models become more advanced, they are increasingly capable of generating art that closely resembles human creations. This has the potential to disrupt the traditional art market, allowing customers to create art in the style of specific artists using AI models instead of commissioning human artists. However, researchers are developing methods to disrupt this style mimicry, such as Glaze, which adds small changes to images that confuse AI models without altering how humans perceive them. Essentially, these changes exploit the differences in how humans and AI models perceive images. While AI models can identify subtle variations in pixel values that humans cannot, they also have a fundamentally different way of processing visual information. By carefully crafting these small changes, researchers can confuse AI models, preventing them from accurately mimicking specific artistic styles. The analogy given is that AI models can be thought of as a UV light system, revealing hidden details that humans cannot see. This opens up new possibilities for research and development in the field of AI-generated art and its impact on the traditional art market.
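
      A toy sketch of that perception gap, assuming nothing about the real Glaze algorithm: a model whose feature map is highly sensitive to pixel-level detail can be pushed a long way by per-pixel changes of 1/255, which is below the precision a human viewer notices in an 8-bit image.

```python
# Toy sketch of the perception gap Glaze exploits (illustration only,
# not the real algorithm): a perturbation invisible to a human viewer
# still moves an over-sensitive "feature extractor" a long way.

def features(pixels, weights):
    """Stand-in for a model's learned feature map: a weighted sum."""
    return sum(p * w for p, w in zip(pixels, weights))

pixels = [0.50, 0.40, 0.60, 0.55]
# Large alternating weights: the model amplifies pixel-level detail.
weights = [300.0, -300.0, 300.0, -300.0]

# Perturb each pixel by +/- 1/255, one step of 8-bit precision.
delta = [1 / 255, -1 / 255, 1 / 255, -1 / 255]
perturbed = [p + d for p, d in zip(pixels, delta)]

shift = abs(features(perturbed, weights) - features(pixels, weights))
print(round(shift, 3))  # 4.706 -- a large move from tiny pixel edits
```

      The crafted perturbation aligns with the model's weights, so tiny pixel changes accumulate into a large shift in feature space, while to a human the two images are indistinguishable.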

    • Manipulating AI images is challenging
      Despite attempts to make AI models robust against image manipulations, it remains difficult to do so without sacrificing performance, especially for generative models.

      While it may be possible to make subtle changes to images that go unnoticed by the human eye, these changes can significantly disrupt the function of AI models that process and understand those images. These models work by mapping raw pixel data into high-level features, which can be interpretable for certain types of images but not for the complex feature spaces used by more advanced models. Researchers have been working for years to make these models more robust against such changes, but it is generally difficult to do so without sacrificing performance. In the case of generative models, the sacrifice can be quite significant due to the need for precision in producing high-quality artwork. Additionally, developers have added randomness into the image processing to make it harder for malicious actors to manipulate the models effectively. While some attempts have been made to train AI to recognize and bypass these manipulations, they have only worked in specific cases. Overall, the complexity and sophistication of AI models make them difficult to manipulate without significant effort and potential performance sacrifices.
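
      The randomized-preprocessing defense mentioned above can be sketched as follows. This is an assumption-laden illustration, not any specific platform's pipeline: a perturbation crafted against one fixed pixel layout loses much of its effect once the pipeline applies a random circular shift before the model sees the image.

```python
# Toy sketch of randomized preprocessing as a defense (illustration only):
# a perturbation crafted for one fixed pixel layout is diluted when the
# pipeline randomly rotates the pixels before feature extraction.

def dot(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

weights = [300.0, 200.0, -250.0, -50.0]          # stand-in feature extractor
delta = [1 / 255, 1 / 255, -1 / 255, -1 / 255]   # crafted to align with `weights`

def shift_effect(k):
    """Effect of the perturbation after the pipeline rotates pixels by k."""
    rotated = delta[k:] + delta[:k]
    return abs(dot(rotated, weights))

aligned = shift_effect(0)  # the attacker's intended effect
# A pipeline that picks the shift uniformly at random achieves the average:
average = sum(shift_effect(k) for k in range(len(delta))) / len(delta)

print(aligned > average)  # True: randomness dilutes the crafted perturbation
```

      The attacker's perturbation only works when it lines up with what the model amplifies; injecting randomness breaks that alignment on average, at the cost of also blurring legitimate detail, which is the performance trade-off the episode describes.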

    • CIS: A Community-Driven Resource for Enhancing Cybersecurity
      The Center for Internet Security (CIS) provides valuable cybersecurity best practices through a community-driven consensus process, saving time, money, and effort for organizations.

      The Center for Internet Security (CIS) is a valuable resource for businesses and individuals looking to enhance their cybersecurity. With the increasing number of cyber threats and limited IT resources, CIS uses a community-driven consensus process to develop and maintain security best practices. These resources save time, money, and effort for organizations at various stages of their cybersecurity journey. CIS also collaborates with government organizations to share information and strengthen collective security. The conversation then returned to the two University of Chicago tools: Glaze makes data unreadable to AI, while Nightshade not only conceals data but also corrupts the base model, making it generate incorrect results. These tools can be thought of as poisoning the well to mislead AI. To be truly useful at scale in the real world, they need to be integrated into the large platforms whose data is being scraped to train AI models. Platforms are currently discussing implementation, with some citing concerns over artist copyright protection. As the conversation continues, these tools have the potential to move beyond individual use and significantly impact the security and accuracy of AI models.

    • The intersection of AI and intellectual property: Balancing cost savings and IP protection
      Companies in the entertainment industry seek to use AI for cost savings but must navigate the complex issue of intellectual property protection. Regulation around AI's use in this space is still evolving, leaving creators to explore technical solutions like Glaze to bridge the gap.

      The intersection of AI and intellectual property is a complex and contentious issue. Companies in the entertainment industry are interested in using AI to save costs but are wary of undermining the value of their IP. Entertainment companies, including Disney, have already taken steps to protect their characters from being replicated by AI. This creates a tension between the desire to use AI for cost savings and the need to protect valuable intellectual property, further complicated by the lack of clear regulation in the space. While some jurisdictions, such as the US and Europe, are exploring regulations around AI, others, like China and Japan, are more supportive of its use. The discussion around regulation has been slow, and it remains to be seen how court cases and potential new laws will shape the future of AI in this space. Technical solutions like Glaze can provide a stopgap for creators while they wait for regulation to catch up.

    • AI-generated content copyright law in Beijing: Implications for artists and technological solutions
      A new copyright law in Beijing for AI-generated content raises questions about protection and enforcement for artists, while technical solutions like preventing image scraping or compensating creators face challenges.

      The recent passing of a copyright law in Beijing for AI-generated content is a complex issue with implications for both artists and technological solutions. While the law may offer some protection for artists, it's unclear how it will be enforced and who will ultimately benefit. Technical research and solutions, such as preventing image scraping or compensating creators, are important but face challenges. The team working on this issue is a small but focused research group with a background in the security and privacy of machine learning systems; their expertise in studying the vulnerability of AI models led them to transition their focus to protecting artists. The implications of the new law extend beyond images, as AI is capable of generating text, music, and more. Decisions made now will shape future AI-generated content, so it's crucial that the creative class is considered in these discussions.

    • Protecting Human Creativity from AI
      The Glaze team aims to preserve human creativity by protecting images from AI-generated copyright infringement, reaching 1.6M downloads and 3k active users on their web-based version.

      The team behind Glaze, a tool used to protect images from AI-generated copyright infringement, is dedicated to preserving human creativity in the face of advanced AI models. With backgrounds in privacy and security of AI systems, the team has seen significant downloads of their app, reaching approximately 1.6 million as of last week, but they don't track active usage for privacy reasons. They also offer a web-based version for users without laptops or GPUs, which has 3,000 active users and a large waitlist. The team's motivation goes beyond helping individuals and includes a personal mission to protect human creativity from being overshadowed by AI. They believe that if the current trend continues, AI may eventually replace human artists, leading to a stagnant art world with no room for evolution. To support their research, they recently set up a donation platform through the University of Chicago.

    • AI's Impact on Human Creativity
      The advancement of AI in generative art raises questions about its impact on the economic viability of human creativity, necessitating regulation and a cultural shift towards understanding its consequences.

      The advancement of AI technology, specifically in the realm of generative art, raises significant questions about the future of human creativity as a viable career. While the technology may not completely replace human creativity, it could potentially make it less economically viable. The speakers expressed initial awe at the capabilities of the AI, but soon recognized the potential threats it poses. They anticipate regulation in the field, but also see a need for individuals and organizations to help guide AI development in a positive direction. The cultural shift towards AI use necessitates a better understanding of its impact on humanity and the development of tools to help steer its development. Overall, the conversation underscores the importance of balancing the benefits of AI with its potential consequences for human creativity.

    Recent Episodes from Hacked

    North Korean IT Scam + TikTok Zero Day + Consumer AI Gets Weird

    We discuss a bunch of stories, including the bizarre tale of how an anonymous business registration company let a massive IT scam unfold in the US, a TikTok zero day, Microsoft Recall and Apple Private Cloud Compute, and a home-brew cell tower hack in the UK. NOTE: I (JB) misspeak at about 18 minutes in. I say "US" when we're talking about the UK. Learn more about your ad choices. Visit podcastchoices.com/adchoices
    Hacked
    June 16, 2024

    Hotline Hacked Vol. 3

    It's our third call-in episode and we're cooking now. Share your strange tale of technology, true hack, or computer confession at hotlinehacked.com. We discuss accidentally causing internet outages, creating a botnet Pandora's box, and the proud tradition of hacking into stuff to play great songs the man doesn't want you to. Learn more about your ad choices. Visit podcastchoices.com/adchoices
    Hacked
    June 02, 2024

    Hotline Hacked Vol. 2

    It’s our second call-in episode. Share your strange tale of technology, true hack, or computer confession at hotlinehacked.com. We discuss hacking e-bike networks, an act of white hat kindness, a 1970s hack from the prairies, and how bots have turned everyone into a commodities trader. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The iSoon Leaks

    A data leak at a big Chinese security company reveals not just that they're engaged in state-sponsored hacking-for-hire, but just how weirdly corporate a job that actually is. Our conversation with Mei Danowski, security researcher, about her analysis of the iSoon leaks. Check out her excellent Substack, Natto Thoughts: https://nattothoughts.substack.com/ Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Gaming Chat Vol. 1

    Bonus Chat Episode. We both love (and make) video games. Thanks to our supporters, alongside our typical two episodes this month, we’re excited to drop this bonus episode where we chat about hacking games, making games, and playing games. If you want to support Hacked too, check out hackedpodcast.com to subscribe. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The Real World

    The story of an online business school and the ex-student warning that it might be a cult. Check out some of our guest Tim Hume’s excellent reporting at the links below: https://www.vice.com/en/article/pkaw7k/andrew-tate-the-real-world-cult https://www.vice.com/en/article/n7emvg/andrew-tate-channels-culled-by-youtube-after-revelations-about-get-rich-quick-cult https://www.vice.com/en/article/4a385g/youtube-profited-from-andrew-tate-recruitment-videos-despite-banning-them Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The Pokédex

    A lot of the tech we use today started out as a gizmo in a piece of science fiction. A conversation with Abe Haskins, creator of the DIY Pokédex, about how the sci-fi we love informs the tech we get, and how he hacked together an iconic piece of 90’s pop culture. Check out his excellent work at https://www.youtube.com/@abetoday Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Pretend: The Stalker - Part 1

    Two competing stories about a cyberstalking that all comes down to an IP address. Today's episode was a partnership with "Pretend," hosted by Javier Leiva. Pretend is a true crime podcast about con artists. Definitely check it out wherever you get your shows. Spotify: https://open.spotify.com/show/2vaCjR7UvlN9aTIzW6kNCo Apple: https://podcasts.apple.com/ca/podcast/pretend-a-true-crime-podcast-about-con-artists/id1245307962 RSS: Click here Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Related Episodes

    When AI Comes for Your Art

    AI-art generators let users create fantastical images with just a few text prompts. But some artists see a problem: They say AI is ripping them off. Artist Greg Rutkowski and WSJ tech columnist Christopher Mims explain what's at stake for the art world. Further Reading: - AI Tech Enables Industrial-Scale Intellectual-Property Theft, Say Critics  - Ask an AI Art Generator for Any Image. The Results Are Amazing—and Terrifying.  Further Listening: - The Company Behind ChatGPT  Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Valley Current®: 21st Century Entrepreneurship in an AI World


    What do current college students studying product design and entrepreneurship see for their future lifespans, healthspans, and workspans? Is it really that crazy to think that living to 100 years old, and staying healthy through it, is a realistic probability for society’s youngest members? Though none of us can predict the future, we can safely assume that our healthspans and lifespans will increase alongside modern medicine's advancements. As awesome as a longer healthspan and lifespan sound, a life that is both long and rich will require funding from a longer workspan. Today Jack Russo heads to USF to talk to our future knowledge workers about how to create intellectual property over a long, long, long workspan, healthspan, and lifespan.


    Jack Russo

    Managing Partner

    Jrusso@computerlaw.com

    www.computerlaw.com

    https://www.linkedin.com/in/jackrusso

    "Every Entrepreneur Imagines a Better World"®️

    Are Copyright Battles Against AI Destined to Fail?

    NLW looks at three arguments around AI and copyright and why the current contentious approach being taken by some authors, publishers, and policymakers may not hold up long term. Including excerpts from: Does the First Amendment Confer a ‘Right to Compute’? The Future of AI May Depend on It - https://www.scientificamerican.com/article/does-the-first-amendment-confer-a-right-to-compute-the-future-of-ai-may-depend-on-it/ The Copyright Office is making a mistake on AI-generated art https://arstechnica.com/tech-policy/2023/09/opinion-dont-exclude-ai-generated-art-from-copyright/ My Books Were Used to Train Meta’s Generative AI. Good. https://www.theatlantic.com/technology/archive/2023/09/books3-database-meta-training-ai/675461/ TAKE OUR SURVEY ON EDUCATIONAL AND LEARNING RESOURCE CONTENT: https://bit.ly/aibreakdownsurvey ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/