
    AI for Shaming Politicians, the New AI Art Scene, DeepFake Phishing

    July 16, 2021

    Podcast Summary

    • AI in Art and Politics: AI is being used to expose negligent politicians and to create unique art, while also presenting challenges such as bias in machine translation and YouTube's recommender system

      AI is making strides in many areas, from identifying negligent politicians through social media surveillance to generating unique artistic images. In the first story, a Belgian artist uses AI to publicly shame politicians who use their phones during work hours, a fun and novel project that also raises questions about privacy and accountability. The second story covers the explosion of AI-generated art, with hackers combining different models to create surreal, imaginative images from text prompts, letting anyone produce unique and visually striking images with minimal technical expertise. The episode also touches on the ongoing issue of bias in machine translation and the continued challenges of YouTube's recommender AI. Despite these concerns, the overall tone is positive, highlighting the creativity of these applications and AI's potential to transform many industries and aspects of daily life.

    • AI's Impact on Art and Language Translation: AI is transforming art creation through tools like DALL-E and addressing gender bias in machine translation with a new dataset from Google.

      Artificial intelligence is making waves in the art world through tools like DALL-E, which generate unique and often surreal images from text prompts. These systems can produce sharp, plausible images as well as more abstract and dreamlike ones, opening up new creative possibilities for artists. The blog post "Alien Dreams: An Emerging Art Scene" provides a history and explanation of how this all came about, and it's a great read for anyone interested in the topic. Another significant development is Google's release of a dataset for studying gender bias in machine translation. The dataset pairs biographies of people with professional translations and is designed to analyze common gender errors, such as incorrect gender choices in pronoun-drop languages, incorrect possessives, and incorrect gender agreement. This research matters because it helps address gender bias in AI and makes machine translations more accurate and inclusive. Together, these developments show AI's growing impact on fields from art to language translation, and the importance of continuing to explore its potential while addressing ethical concerns.
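
      As a rough illustration of the kind of error this dataset targets (not Google's actual evaluation pipeline; the function and threshold logic below are hypothetical), one crude check is to compare the gendered pronouns in an English translation against the known gender of the biography's subject:

      # Illustrative sketch only: a crude pronoun check for English translations of
      # biographies. Google's actual dataset and metrics are more sophisticated.
      import re

      MASCULINE = {"he", "him", "his", "himself"}
      FEMININE = {"she", "her", "hers", "herself"}

      def flag_gender_mismatch(translation: str, subject_gender: str) -> bool:
          """Return True if the translation's pronouns disagree with the subject's gender."""
          tokens = re.findall(r"[a-z]+", translation.lower())
          masc = sum(t in MASCULINE for t in tokens)
          fem = sum(t in FEMININE for t in tokens)
          if subject_gender == "female":
              return masc > fem   # mostly masculine pronouns for a female subject
          if subject_gender == "male":
              return fem > masc
          return False            # no check for unknown or non-binary gender

      # Example: a biography of a woman translated with masculine pronouns
      print(flag_gender_mismatch("He studied physics and published his thesis.", "female"))  # True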

    • Addressing gender bias in machine translation models: Researchers are analyzing and addressing gender bias in machine translation, while the nonprofit EleutherAI leads efforts to make large language models open and accessible to the public, alongside AI art projects and long-term AI safety goals

      Researchers are analyzing and addressing gender bias in machine translation models. The issue is complex and specific to machine translation, but it is worth studying because of its wide reach and because it is a contained problem with clear benchmarks: translation errors are common even in systems from large companies and can now be measured for gender bias specifically. Meanwhile, the nonprofit EleutherAI has been working to make large language models open and available to the public, including the release of GPT-Neo and GPT-J. Their retrospective of the past year is an interesting and fun read on the crowdsourced effort and their use of Google's TPUs. EleutherAI also creates art with AI and has long-term goals around AI safety, which is an intriguing aspect of their work. Overall, the discussion highlights the value of addressing specific types of errors in AI models and the progress being made in making large language models accessible to the public.
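
      For readers who want to try these open models, EleutherAI's GPT-Neo and GPT-J checkpoints are published on the Hugging Face Hub. Here is a minimal sketch of text generation with the transformers library; the checkpoint and sampling settings are just one reasonable choice, and the larger models need a correspondingly large GPU or plenty of RAM:

      # Minimal sketch: generating text with EleutherAI's open GPT-Neo model
      # via the Hugging Face transformers library (pip install transformers torch).
      from transformers import pipeline

      # "EleutherAI/gpt-neo-1.3B" is one of the publicly released checkpoints;
      # GPT-J can be loaded the same way as "EleutherAI/gpt-j-6B".
      generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

      result = generator(
          "Open access to large language models matters because",
          max_length=60,     # total length in tokens, including the prompt
          do_sample=True,    # sample rather than greedy decoding
          temperature=0.8,
      )
      print(result[0]["generated_text"])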

    • AI's unintended consequences in content recommendation: YouTube's recommendation algorithm promotes extreme and inappropriate content in pursuit of engagement, while attackers use AI-generated deepfakes in phishing campaigns to manipulate individuals and spread misinformation

      While AI can bring joy and entertainment, as with AI-generated memes, it also comes with unintended consequences, particularly in content recommendation. YouTube's recommendation algorithm, which is designed to increase engagement, has been found to promote extreme content and even inappropriate videos for children. A study by Mozilla found that such regrettable videos receive 70% more views per day than others, highlighting the severity of the problem, which has been known for some time and underscores the need for more transparency and control over recommendation algorithms. On the ethics front, a VentureBeat article reported on attackers using AI-generated deepfakes in phishing campaigns. Although the headline may sound sensational, the concern is a serious one, as deepfakes can be used to manipulate individuals and spread misinformation. These examples illustrate the importance of understanding and addressing the ethical implications of AI applications in our society.

    • Growing concerns over AI threats in cybersecurity: Researchers warn of potential offensive AI threats, specifically deepfakes and bot activity, emphasizing the need for cybersecurity teams to prepare.

      While there may not be any active offensive AI attacks at the moment, researchers are warning of potential threats, particularly deepfakes and bot activity. A recent survey by researchers at Microsoft, Purdue, and Ben-Gurion University highlights the need for cybersecurity teams to prepare for these threats. In lighter news, Elon Musk's ongoing self-driving predictions continue to be a source of amusement for Tesla owners despite repeatedly missed deadlines; his promises have been inconsistent, even if his execution and eventual delivery of the technology remain commendable. The intersection of cybersecurity and AI is a growing concern, and this study underscores the importance of proactively addressing potential threats.

    • Navigating complex real-world conditions with AI: Robots are learning to adapt to real-world terrain and AI is creating personalized sports highlights, but challenges remain in ensuring a positive societal impact

      Self-driving technology and real-world AI are hard problems that require significant adaptation and learning. Elon Musk's prediction that self-driving cars would be commonplace by now may look ridiculous in retrospect, but the difficulty of navigating and adapting to real-world conditions is not obvious at first. In related news, researchers at Facebook, Carnegie Mellon, and UC Berkeley have made progress in this area with a robot that can adjust its gait in real time while walking over varied terrain. The robot, built on hardware from the Chinese startup Unitree, uses trial and error and information from its surroundings to learn how to adapt. Meanwhile, IBM Watson has been making waves in the sports world by creating highlight reels of tennis matches within two minutes of their completion: it tracks the action, ranks every point based on player reactions, crowd excitement levels, and gameplay statistics, and assembles personalized highlight reels for viewers. Technology is not without its challenges, though, and YouTube's recommendation algorithm has faced criticism for fueling division and conspiracy theories by recommending extreme content to users. Tech companies need to be aware of these issues and work toward a more positive impact on society.
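
      To make the ranking idea concrete, here is a small illustrative sketch (the weights and signal names are hypothetical, not IBM Watson's actual system) that scores each point with a weighted combination of crowd, player, and gameplay signals and keeps the top few clips for a highlight reel:

      # Illustrative sketch of highlight ranking: combine per-point excitement
      # signals with fixed weights and keep the highest-scoring points.
      # Weights and field names are hypothetical, not IBM's actual system.
      from dataclasses import dataclass

      @dataclass
      class Point:
          clip_id: str
          crowd_noise: float      # normalized 0..1
          player_reaction: float  # normalized 0..1
          rally_length: float     # normalized 0..1 (longer rallies score higher)

      def excitement(p: Point) -> float:
          # Hypothetical weighting of the three signals.
          return 0.4 * p.crowd_noise + 0.4 * p.player_reaction + 0.2 * p.rally_length

      def build_highlights(points: list[Point], top_k: int = 3) -> list[str]:
          ranked = sorted(points, key=excitement, reverse=True)
          return [p.clip_id for p in ranked[:top_k]]

      points = [
          Point("set1_point12", 0.9, 0.8, 0.7),
          Point("set1_point03", 0.2, 0.1, 0.3),
          Point("set2_point07", 0.7, 0.9, 0.5),
      ]
      print(build_highlights(points, top_k=2))  # ['set1_point12', 'set2_point07']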

    • Mozilla Foundation study on YouTube's AI serving polarizing or disinformation content: The study raised concerns about YouTube's AI promoting harmful content through polarization or disinformation and emphasized the need for transparency laws, better oversight, and consumer pressure to address the issue.

      Despite Google's occasional responses to negative publicity, there are concerns that YouTube's AI system continues to serve content designed to attract attention through polarization or disinformation. The issue was highlighted in a study by the Mozilla Foundation, which gathered data through a crowdsourcing approach built around a browser extension. The extension let users self-report regrettable YouTube videos and generated reports on the recommended content and earlier video views, giving insight into how the recommender system behaves. Mozilla is advocating for transparency laws, better oversight, and consumer pressure to address YouTube's algorithm, which it argues is not performing much better than before. It's crucial to stay informed and engaged in discussions around AI ethics and the responsibility of tech companies to limit the spread of harmful content. For more on AI and technology news, visit skynettoday.com for related articles, subscribe to the weekly newsletter, and subscribe to and review Skynet Today's Let's Talk AI Podcast. Join us next week for another episode.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    Tech-Infused Social Engineering - A conversation with Frank McKenna, Chief Fraud Strategist, PointPredictive

    In episode 13 of Scam Rangers podcast, we chat with Frank McKenna, a fraud and scam fighter with 30 years of experience. We discuss the use of technology in combination with social engineering tactics to execute online scams, including bots, voice imitation, and deep fakes. Frank emphasizes the need for proactive measures to stop fraudulent transactions, and the importance of being passionate about fraud-fighting. The episode offers valuable insights and advice for both fraud fighters and non-fraud fighters, highlighting the importance of staying informed and vigilant to protect ourselves and our finances from scams.

    Frank on Fraud: https://frankonfraud.com

    ScamRanger: https://scamranger.ai

    This podcast is hosted by Ayelet Biger-Levin (https://www.linkedin.com/in/ayelet-biger-levin/), who spent the last 15 years building technology to help financial institutions authenticate their customers and identify fraud. She believes that when it comes to scams, the story starts well before the transaction. She created this podcast to talk about the human side of scams and to learn from people who have dedicated their lives to speaking up on behalf of scam victims and taking action to solve this problem. Be sure to follow her on LinkedIn and reach out to learn about her additional activities in this space.



    Democratization of Anarchy in Payments

    Join us for this week’s episode as Ernesto Rolfson, CEO of Finexio, and Craig Jeffery of Strategic Treasurer discuss the escalating threat landscape in payments. They cover how the criminal playbook has been bolstered by AI, enabling dangerous techniques like deep fakes and fraud. Discover why this matters to corporates and how payment companies are countering these evolving threats. 

    Dario Amodei, C.E.O. of Anthropic, on the Paradoxes of A.I. Safety and Netflix’s ‘Deep Fake Love’

    Dario Amodei has been anxious about A.I. since before it was cool to be anxious about A.I. After a few years working at OpenAI, he decided to do something about that anxiety. The result was Claude: an A.I.-powered chatbot built by Anthropic, Mr. Amodei’s A.I. start-up.

    Today, Mr. Amodei joins Kevin and Casey to talk about A.I. anxiety and why it’s so difficult to build A.I. safely.

    Plus, we watched Netflix’s “Deep Fake Love.”

    Today’s Guest:

    • Dario Amodei is the chief executive of Anthropic, a safety-focused A.I. start-up

    Additional Reading:

    • Kevin spent several weeks at Anthropic’s San Francisco headquarters. Read about his experience here.
    • Claude is Anthropic’s safety-focused chatbot.

     

    Deep Fakes

    Catch Mel O Live on the Stationhead app @momentofmetal Wednesday and Friday at 6 AM Pacific Time.

     

    To learn about money without the pretentious Bu!!sh%t, give me a follow here.

     

    Want to talk to Mel about money? Schedule a call with her directly: https://hotmoonfinancial.com/

    🔗 Stay Connected
    Facebook:
    https://www.facebook.com/financestheotherfword
    Instagram: https://www.instagram.com/financestheotherfword/
    TikTok: https://www.tiktok.com/@ftofw_podcast?
    Call or Text: 775-537-4352

    📕 Finances the Other “F” Word - Another “F” Word to Love
    Amazon: https://www.amazon.com/FINANCES-Other-F-Word-Mel/dp/1733665927

    Walmart: https://www.walmart.com/ip/FINANCES-The-Other-F-Word-Paperback-9781733665926/361773875

    🔊 Other Ways to Listen
    Apple Podcasts:
    https://podcasts.apple.com/us/podcast/finances-the-other-f-word/id1460418731
    iHeart: https://www.iheart.com/podcast/263-finances-the-other-f-word-47519083/
    Spotify: https://open.spotify.com/show/78TrGiNHzzlIfXVY9INbdl

    Swag & Merch: https://financestheotherfword.com/

    Questions, comments, or want to be on the show? Contact us at mel@financestheotherfword.com

    35: Don't DeExtinct the Dodo, Humans vs Computers, and Art Forgery

    Is it really possible to bring back the Dodo, and what even really happened in the first place?  Are there any games left that humans can beat computers at, and why might that be okay?  And what can art forgery tell us about art and the art world?

    Images/Videos we Mention:
    The TinkerToy Tic Tac Toe Computer
    The Rock Paper Scissors Robot

    Support us with a Max Fun Membership!
    Join our Discord!

    We also learn about: 

    The Cherished Dodo, tinction, what does a dodo taste i mean look like, dodo art telephone, only 1 full skeleton, actually lots of things die this way, invasive species also killed the dodo, Alice in Wonderland’s trendsetting the Dodo, Caroline’s clearly a Dodo person, Dodo DNA, Jurassic Park was a Documentary, I hope to die in a funny way, bringing the Bucardo back, the first De-extinction, and the first Double Extinction, DNA vaults, the Nicobar pigeon, Cloacas are “Fun” according to Ella, let’s reintroduce the Dodos too why not, “bring a little bit of magic back to Mauritius”, ancient Egyptian tic tac toe, the Shannon Number, The Mechanical Turk, TuroChamp, Chess was the Drosophila of AI, bad AI predictions, Deep Blue vs Kasparov, The Googolplex of Go games, Dr Fill the Crossword AI, 100% Rock Paper Scissors Win Rate, Poker AI, this really cool game called Superluminauts, what’s “unfair” in AI? the power of art comes from yourself, the art of forgery, Han van Meegeren, Dodo in Girl with a Pearl Earring we all make the same dumb joke, new Vermeer, don’t arrest me it’s just a forgery, there could still be Elmyr de Hory forgeries, F is for Fake, “if my work hangs in a museum long enough, it becomes real”, Tom makes a totally normal and not embarrassing mixup of Van Gogh and DaVinci, how to spot a forgery, carbon dating forgeries, just look at it closely, identifying craquelure, mass spectrometry, so much of this is just about money, Salvator Mundi Ship of Theseus, were these reviews forged? Go check! the laughing people

    Sources:

    NHM: Dodo De-Extinction Announcement Causes Debate

    NatGeo: The Dodo is Dead! Long Live the Dodo

    Smithsonian: Article on Colossal Biosciences

    Colossal Biosciences: The Dodo

    Nature: What Would it Take to Bring Back the Dodo

    The Atlantic: Article on the Dodo

    NatGeo: First Extinct Clone Created

    NHM: Frozen Banks

    TedTalk: De-Extinction

    Britannica: Gene Editing

    Video on CRISPR

    Revive and Restore Website

    MIT: The Plan to Bring Back the Passenger Pigeon

    ---

    The TinkerToy Tic Tac Toe Computer

    The Shannon Number

    TuroChamp Chess AI

    Newell & Simon AI Predictions

    NYTimes Deep Blue Predictions

    Kasparov on Deep Blue

    A GoogolPlex of Go Games

    Dr Fill Crossword AI

    Rock Paper Scissors Robot

    Poker AI

    Starcraft AI

    AI Survey

    Angry Birds AI Paper

    Another Angry Birds AI Paper

    Angry Birds AI Competition

    AI Birds Competition Blog

    Chess.com 100 Million Users

    ---

    Authenticity and Viewing Art Study

    Intent to Deceive

    Detecting Art Forgery

    Detecting Art Forgery 2

    Art Forgery Detective

    Forgers Reveal Secrets of Paintings