
    AI for COVID detection fails, GitHub's Copilot can code, GAN Theft Auto is fun

    July 01, 2021

    Podcast Summary

    • Balancing Efficiency and Quality in AI Model Development: Google survey highlights importance of optimizing AI models for efficiency and quality through compression techniques, learning techniques, automation, efficient architecture, and infrastructure. Clinical use of AI models for COVID-19 diagnosis from chest x-rays faces challenges due to limitations and need for rigorous evaluation and validation.

      There are two primary considerations in AI model development: efficiency and quality. The Google survey discussed in the podcast highlights the importance of optimizing models for both, with a focus on compression techniques, learning techniques, automation, efficient architectures, and infrastructure. The paper proposes strategies for achieving this balance, and its release signals a growing emphasis on optimizing models for real-world applications. Another key takeaway from the podcast is the limitation of AI models for clinical use, specifically in detecting COVID-19 from chest x-rays: despite the numerous models developed during the pandemic, none have proven suitable for clinical deployment. Together, these stories underscore the need for rigorous evaluation and validation of AI models before they are deployed in real-world settings, and they provide useful context for understanding current trends and challenges in AI research and development.
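
      To make "compression techniques" concrete, here is a minimal sketch of one such technique, post-training quantization, using PyTorch's dynamic quantization utility. The toy model and sizes are our own illustration, not an example taken from the paper.

      import torch
      import torch.nn as nn

      # A small stand-in network; the same idea applies to much larger models.
      model = nn.Sequential(
          nn.Linear(512, 256),
          nn.ReLU(),
          nn.Linear(256, 10),
      )

      # Post-training dynamic quantization: Linear weights are stored as 8-bit
      # integers instead of 32-bit floats, shrinking the model and often
      # speeding up CPU inference with little loss in quality.
      quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

      x = torch.randn(1, 512)
      print(quantized(x).shape)  # same interface as the original model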

    • Study reveals safety issues with COVID-19 machine learning models: A recent study exposed safety concerns with COVID-19 diagnosis models, emphasizing the need for transparency and accountability in AI research. New AI application, Copilot, shows promise with code generation capabilities, but continued research is crucial for explainable AI and effective model development.

      A recent study published in Nature Machine Intelligence revealed that many machine learning models developed within the past year for COVID-19 diagnosis and prediction have issues that make them unsafe for deployment. Two medical students from the University of Washington conducted the study and encountered challenges in recreating the published models. They discovered that some models relied on irrelevant features such as text markers, arrows, or image corners rather than the lungs themselves, which is not ideal for COVID-19 detection. This highlights the importance of transparency and accountability in AI research, especially for those outside the field. Copilot, a new AI tool announced by GitHub and OpenAI, offers a more promising development. It's an AI model that can generate complete code for popular programming languages directly in the Visual Studio Code editor. While it's still in its early stages, it has the potential to auto-complete entire functions based on a docstring, making it an attractive tool for developers. The study serves as a reminder of the need for explainable AI and best practices in data collection and model development. It also underscores the importance of continued research and innovation in AI to address challenges and improve outcomes.
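
      As an illustration of what auto-completing a function from a docstring looks like in practice (our own example, not actual Copilot output): the developer writes the signature and docstring, and the tool proposes a body like the one below, which still needs the usual review.

      def moving_average(values, window):
          """Return the simple moving average of `values` over a sliding `window`."""
          # The body below is the kind of completion a tool like Copilot might
          # suggest from the signature and docstring alone.
          if window <= 0:
              raise ValueError("window must be positive")
          return [
              sum(values[i:i + window]) / window
              for i in range(len(values) - window + 1)
          ]

      print(moving_average([1, 2, 3, 4, 5], window=2))  # [1.5, 2.5, 3.5, 4.5]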

    • New AI tool generates code completions based on prompts: OpenAI and GitHub's Codex tool generates code suggestions based on context, potentially saving time and effort, but it requires human review due to security and legality concerns

      OpenAI and GitHub have launched a new AI tool in private beta, which generates code completions based on given prompts. The underlying model, known as Codex, could potentially automate the process of finding solutions to coding problems by generating code snippets. However, it's important to note that the code Codex generates is not necessarily secure and may need human review before implementation. The model is trained on a large dataset of public code, much of it under licenses such as the GNU General Public License (GPL), which places conditions on how derived code can be reused commercially. Codex generates completions by analyzing the context of the given prompt and producing code that fits that context; it's not writing code from scratch so much as providing suggestions based on patterns in existing code. The tool could save time and effort by automating the search for solutions to coding problems, but it's not a replacement for human judgment and review. Some concerns have been raised about the potential for Codex to generate racist variable names or other problematic code, as well as the legal implications of using GPL-licensed code for commercial purposes. The tool is still in beta, and its full capabilities and limitations are yet to be determined. Overall, Codex represents an exciting development in AI-assisted coding, but it also raises important ethical and legal questions that need to be addressed.
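
      To see why human review before implementation matters in practice, here is a small Python illustration (our own example, not actual Codex output): a generated-looking snippet that works on the happy path but hides a security flaw, next to the reviewed version that should actually ship.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

      def find_user_generated(name):
          # Plausible-looking suggestion: building SQL with string formatting
          # works here but leaves the query open to SQL injection.
          return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

      def find_user_reviewed(name):
          # After human review: a parameterized query closes the injection hole.
          return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

      print(find_user_reviewed("alice"))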

    • Exploring the Potential and Limitations of AI in Code Generation and Warehouse Automation: AI technology offers significant benefits in code generation and warehouse automation, but comes with challenges and limitations requiring human oversight and critical perspective.

      While advancements in AI technology, such as the AI code-generating tool discussed above, show promising potential, they also come with significant challenges and limitations. The tool, still in its beta stage, requires human oversight and review due to potential security issues and inaccuracies. Additionally, it may not be as advanced or contextually aware as other similar tools on the market. Another topic covered was Amazon's new mobile robot, "Gut," designed to help employees by carrying items around the warehouse. While this technology is a step towards automation and employee safety, it was noted that it may be behind the state of the art in commercial mobile robotics; companies like Fetch Robotics have already deployed robots with arms, making them more versatile and efficient. Amazon's apparent lag could be due to its large scale and safety concerns, but the comparison still highlights the rapid advancements in this field. In summary, the discussion highlighted both the potential and the limitations of AI technology in code generation and warehouse automation: these advancements offer significant benefits, but they come with challenges that need to be addressed, and it's important to approach them with a critical and informed perspective.

    • Amazon and Twitter prioritize employee safety and ethical AI: Amazon aims to reduce incidents by 50% with tech, while Twitter hires critics to build ethical AI team

      Companies like Amazon are prioritizing employee safety and using technology, such as robotics, to achieve this goal. Amazon's recent blog post emphasizes its commitment to reducing recordable incidents by 50% by 2025 through technological advancements. However, it's important to note that this push towards safety is also a PR move in response to criticism of harsh working conditions. On a different note, Twitter is taking ethical AI seriously by hiring some of tech's biggest critics to build and guide its machine learning ethics and transparency team. This team has already made strides in addressing controversies, such as racial bias in Twitter's image-cropping algorithm. Overall, both Amazon and Twitter are showing dedication to using technology responsibly and ethically in their respective industries.

    • Companies prioritize ethical AI within product teams: Companies like Twitter integrate ethical AI research and implementation into their product teams, ensuring transparency and effectiveness in addressing unintended consequences.

      Companies like Twitter are starting to prioritize ethical AI research and implementation directly within their product teams, rather than just publishing research separately. This approach allows for more transparency and openness when issues arise, as seen with Twitter's recent work on responsible machine learning and algorithmic changes. By working closely with engineers and guiding them, companies can make significant changes and address unintended consequences more effectively. It's important for companies to be open about their processes and what went wrong, as machine learning systems will always have unintended side effects. An example of this is TikTok's text-to-speech issue, where the system produced strange results when certain characters were inputted in large quantities. By acknowledging and addressing these issues publicly, companies can build trust with their users and demonstrate their commitment to ethical AI practices.

    • Reducing bias in AI job recommendations, but creating more? LinkedIn discovered their AI job recommendations were biased towards men, so they created another AI to counteract it. However, these algorithms can still pick up on subtle patterns leading to potential bias.

      While companies like LinkedIn use AI to reduce bias in their job recommendation algorithms, there's a risk of creating more bias due to the complex and opaque nature of these systems. LinkedIn discovered that its algorithms were producing biased results, referring more men than women to certain jobs. To address this, it developed another AI program to counteract the bias. However, such algorithms can still pick up on subtle patterns that contribute to bias, such as behaviors that correlate with gender. The challenge lies in understanding how these algorithms work and what factors influence their recommendations. This issue is reminiscent of recommendation algorithms on platforms like YouTube, where creators must cater to the algorithm to increase views. The lack of transparency in these systems can lead to individuals trying to manipulate them, potentially exacerbating existing biases. While AI can be a powerful tool for reducing bias, it's essential to approach it with caution and transparency to ensure fair and equitable outcomes.
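
      LinkedIn has not published the internals of the system summarized above, but the general idea of a second program that post-processes recommendations to counteract skew can be sketched as a simple representation-aware re-ranker. The code below is a generic illustration under our own assumptions, not LinkedIn's method.

      def rerank_for_balance(candidates):
          """Greedily re-rank scored candidates so the two groups stay roughly
          balanced near the top of the list.

          `candidates` is a list of (score, group) pairs, already sorted by score.
          """
          remaining = list(candidates)
          counts = {"A": 0, "B": 0}
          output = []
          while remaining:
              # Prefer whichever group is currently under-represented so far.
              needy = min(counts, key=counts.get)
              pick = next((c for c in remaining if c[1] == needy), remaining[0])
              remaining.remove(pick)
              counts[pick[1]] += 1
              output.append(pick)
          return output

      ranked = [(0.9, "A"), (0.8, "A"), (0.7, "A"), (0.6, "B"), (0.5, "B")]
      print(rerank_for_balance(ranked))
      # [(0.9, 'A'), (0.6, 'B'), (0.8, 'A'), (0.5, 'B'), (0.7, 'A')]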

    • The viral nature of YouTube content and unexpected phenomena: YouTube's algorithm can create a feedback loop, leading to unintended consequences like violent or disturbing content for young children. YouTube is addressing this issue by requiring creators to label content as intended for children or not.

      The viral nature of content on platforms like YouTube can create a feedback loop, where broad appeal and algorithmic recommendations fuel each other in a self-reinforcing cycle. This can lead to unexpected and sometimes concerning phenomena, such as the rise of unlicensed, automatically generated videos for young children that can be violent or disturbing. These videos, often built from nursery rhyme music and popular IP, can amass large viewerships and even influence the algorithm in unexpected ways. YouTube's response was to implement a more deliberate upload process, requiring creators to explicitly state whether their content is intended for children. On a lighter note, there is the intriguing application of AI in content creation, as demonstrated by YouTuber Harrison Kinsley's recreation of a stretch of highway from GTA V using a Generative Adversarial Network (GAN). This example showcases the potential for AI to generate detailed and realistic content, although it still falls short of capturing the full complexity of the original game. Overall, these discussions highlight the importance of understanding the interplay between human creativity, algorithmic recommendations, and AI-generated content on platforms like YouTube.
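
      For readers curious about the mechanics behind the GAN experiment, the adversarial idea can be sketched in a few lines of PyTorch: a generator learns to produce frames that a discriminator cannot distinguish from real footage. This is a deliberately tiny sketch on random stand-in data; the actual project trained a much larger model on real gameplay footage.

      import torch
      import torch.nn as nn

      # Generator: noise -> fake 32x32 "frame"; Discriminator: frame -> real/fake score.
      G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32 * 32), nn.Tanh())
      D = nn.Sequential(nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

      opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
      loss_fn = nn.BCEWithLogitsLoss()

      # Random tensors stand in for real game frames in this sketch.
      real_frames = torch.rand(512, 32 * 32) * 2 - 1

      for step in range(200):
          real = real_frames[torch.randint(0, 512, (32,))]
          fake = G(torch.randn(32, 64))

          # Train D to score real frames as 1 and generated frames as 0.
          d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake.detach()), torch.zeros(32, 1))
          opt_d.zero_grad()
          d_loss.backward()
          opt_d.step()

          # Train G to make D score its frames as real.
          g_loss = loss_fn(D(fake), torch.ones(32, 1))
          opt_g.zero_grad()
          g_loss.backward()
          opt_g.step()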

    • Investments in machine learning and robotics continue, with a focus on ethical development: Google shifts to machine learning applications, Toyota's roboticists make strides, Hyundai acquires Boston Dynamics, ethical AI development emphasized

      Technology companies are continuing to invest heavily in machine learning and robotics, with Google shifting focus from AI research to machine learning applications, Toyota's roboticists making strides in understanding complex environments, and Hyundai Motor Group acquiring a controlling stake in Boston Dynamics. Additionally, the importance of ethical AI development is being emphasized, with DeepMind research scientist Raia Hadsell advocating for collective responsibility and Stanford University requiring ethics and society reviews for AI research proposals. These developments underscore the growing significance of AI and robotics across industries and the need for responsible innovation. Stay tuned for more updates on these and other stories on Skynet Today's Let's Talk AI Podcast, and don't forget to check out the articles and subscribe to our weekly newsletter at skynettoday.com.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apology for this one coming out a few days late, got delayed in editing it -Andrey

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    Mini Episode: Police Surveillance, Productivity, and Calls for Regulation and Cooperation

    Our third audio roundup of last week's noteworthy AI news!

    This week, we look at how the police are using surveillance against civil rights protesters, AI for productivity, and recent calls from AI researchers for regulation and cooperation.

    Check out all the stories discussed here and more at www.skynettoday.com

    Theme: Deliberate Thought by Kevin MacLeod (incompetech.com)

    Licensed under Creative Commons: By Attribution 3.0 License

    Bias in Twitter & Zoom, LAPD Facial Recognition, GPT-3 Exclusivity

    Our latest episode with a summary and discussion of last week's big AI news!

    This week: Twitter and Zoom's algorithmic bias issues; despite past denials, LAPD has used facial recognition software 30,000 times in the last decade, records show; we're not ready for AI, says the winner of a new $1m AI prize; how humane is the UK's plan to introduce robot companions in care homes?; and OpenAI is giving Microsoft exclusive access to its GPT-3 language model.

    0:00 - 0:40 Intro
    0:40 - 5:00 News Summary segment
    5:00 News Discussion segment

    Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-fourth

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

    Top AI Trends for 2024 | The AI Moment, Episode 7

    On this episode of The AI Moment, we will discuss the latest developments in enterprise AI, & the top 7 AI trends for 2024.

    After the year AI had in 2023, what could possibly be next? I will give you a hint: AI won't slow down much in 2024. There are seven key trends I think will impact the adoption of AI in 2024. I walk through what those trends are and why they are so important.

    Holger Hoos, Responsible AI

     


     

    Holger Hoos is the Alexander von Humboldt Professor of AI at RWTH Aachen University (Germany), Professor of Machine Learning at Universiteit Leiden (the Netherlands), and Adjunct Professor of Computer Science at the University of British Columbia (Canada), where he also holds an appointment as Faculty Associate at the Peter Wall Institute for Advanced Studies. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Artificial Intelligence (EurAI). He is board chairman of the Confederation of Laboratories of Artificial Intelligence Research in Europe (CLAIRE), vice-president of EurAI, past president of the Canadian Association for Artificial Intelligence (CAIAC), member of the Advisory Board, and former Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR). Prof. Hoos leads the VISION coordination mandate for the four European networks of centers of excellence in AI established in 2020.

    Holger’s research focuses on Human-Centered AI, AI for Good, and AI for All. His goal is to improve the efficiency of AI methods by automatically increasing performance and reducing resource needs; and to broaden access to and use of cutting-edge AI methods. Overall, Holger and his group develop and study AI methods that augment rather than replace human intelligence, and that help human experts to overcome their biases and limitations. Known for his work on machine learning and optimization methods for the automated design of high-performance algorithms and on stochastic local search, Holger has developed – and vigorously pursues – the paradigm of programming by optimization (PbO). He is also one of the originators of automated machine learning (AutoML). Holger works at the boundaries between computer science and other disciplines. Much of his work is inspired by and has broad impact on real-world applications.

    In 2018, Holger co-founded CLAIRE, an initiative to advance European AI research and innovation. Incorporated as an international nonprofit in 2019, CLAIRE promotes excellence across all of AI, for all of Europe, with a human-centered focus. In 2021, CLAIRE received, jointly with the European Laboratory for Learning and Intelligent Systems (ELLIS), the €100 000 German AI Innovation Prize (Deutscher KI-Innovations-Preis) in recognition of outstanding contributions to development and research in AI. This prize is the largest of its kind in Europe.

    In November 2021, Holger was selected for the Alexander von Humboldt Professorship in AI, Germany’s most highly endowed research award, which honors its recipients for their outstanding research record and aims to facilitate long-term and groundbreaking research contributions. Supported by this award and substantial additional resources made available by the university, he started building a new research group at Rhine-Westphalia Technical University of Aachen (RWTH Aachen, Germany) dedicated to the advance of Human-Centered AI, AI for Good, and AI for All, in January 2022. He also serves on the board of directors of the RWTH AI Center.


    Ousted OpenAI board member on AI safety concerns

    Sam Altman returns and OpenAI board members are given the boot; US authorities foil a plot to kill Sikh separatist leader on US soil; plus, the UK’s Autumn Statement increases the tax burden.


    Mentioned in this podcast:

    US thwarted plot to kill Sikh separatist on American soil

    Hunt cuts national insurance but taxes head to postwar high

    OpenAI says Sam Altman to return as chief executive under new board 


    The FT News Briefing is produced by Persis Love, Josh Gabert-Doyon and Edwin Lane. Additional help by Peter Barber, Michael Lello, David da Silva and Gavin Kallmann. Our engineer is Monica Lopez. Manuela Saragosa is the FT’s executive producer. The FT’s global head of audio is Cheryl Brumley. The show’s theme song is by Metaphor Music. 


    Read a transcript of this episode on FT.com


