
    GPT-Neo, Wav2Vec-U, Deepfake Dubs, Michelangelo AI, History of Ethical AI at Google

    May 28, 2021

    Podcast Summary

    • Apple's self-driving car ambitions and AI advancements: Apple is pushing into self-driving cars, but real-world complexities persist. The open-source AI community offers free alternatives, but they may not match commercial performance. Google focuses on NLP and search improvements.

      While technologies such as self-driving cars and advanced language models continue to advance, there are still challenges to overcome before they can be fully integrated into our daily lives. Apple, for instance, is reportedly increasing its efforts in the self-driving car industry, but a recent incident involving a Waymo One robot taxi demonstrates the complexities of navigating real-world road conditions. Meanwhile, the open-source community, represented by EleutherAI, is making strides in providing free alternatives to expensive and exclusive AI technologies like GPT-3, though these alternatives may not yet match the performance of their commercial counterparts. During its I/O developer conference, Google announced a focus on improving natural language processing and search, indicating that these areas remain a priority for the tech giant. Overall, the advancements in AI are promising, but there is still work to be done to make these technologies accessible and reliable for the masses.

    • New Open-Source AI Model GPT-Neo, Google's Conversational AI LaMDA, and Ethical Implications of AI: GPT-Neo, a new open-source AI model, offers capabilities similar to GPT-3 with lower computational requirements. Google introduces LaMDA for conversational AI and MUM for implicit comparisons in search queries.

      Recent AI research includes the release of GPT-Neo, a free alternative to the large language model GPT-3. The model, created by the team at EleutherAI, offers similar capabilities with billions of parameters and is now open source and accessible to everyone. Because the trained weights have been released, others do not need to spend the extensive compute required to train such a model from scratch, which also makes it a more eco-friendly option. There is also a Hugging Face API available for the model. Moving on to AI applications, Google is working on integrating its conversational AI technology, LaMDA, into its search engine, voice assistant, and Workspace. LaMDA makes it easier for AI systems to hold conversational dialogue, and Google aims to have AI systems take on more human tasks, allowing users to ask sophisticated questions instead of issuing multiple queries. Additionally, Google's Multitask Unified Model, or MUM, is designed to understand implicit comparisons in search queries and provide the most appropriate answer. In the realm of societal impacts, the discussion touched on the societal effects of AI and the importance of considering ethical implications as AI continues to advance. Stay tuned for more in-depth discussions on these topics and other AI news in the future.
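
      For readers who want to try GPT-Neo themselves, a minimal sketch of the Hugging Face route mentioned above is shown below. It assumes the publicly released EleutherAI/gpt-neo-2.7B checkpoint on the Hugging Face model hub and the transformers text-generation pipeline; the prompt and sampling settings are illustrative choices, not anything specified in the episode.

      # Minimal sketch (assumption): text generation with EleutherAI's GPT-Neo
      # through the Hugging Face transformers library. The checkpoint name
      # "EleutherAI/gpt-neo-2.7B" is the publicly released 2.7B-parameter model;
      # "EleutherAI/gpt-neo-1.3B" is a smaller alternative if the download is too large.
      from transformers import pipeline

      generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")

      prompt = "Scientists recently discovered a herd of unicorns living in"
      outputs = generator(prompt, max_length=60, do_sample=True, temperature=0.9)

      print(outputs[0]["generated_text"])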

    • EleutherAI's Open-Source GPT-Neo Challenges Proprietary Models: EleutherAI's GPT-Neo, an open-source language model with far fewer parameters, competes with proprietary models like GPT-3 on quantitative metrics and generates coherent completions. A larger-scale version, GPT-NeoX, is in development.

      GPT-Neo, a new open-source language model from EleutherAI, is making strides towards the capabilities of larger models like GPT-3. Although it has far fewer parameters (2.7 billion as opposed to GPT-3's 175 billion), it performs impressively on quantitative metrics and generates coherent, well-thought-out completions, as shown in an example of a unicorn-discovery story. This is a significant step towards making advanced language models more accessible to the public, since OpenAI, the developer of GPT-3, has kept that model proprietary. GPT-Neo is an open-source alternative, and EleutherAI is already working on a larger-scale version, GPT-NeoX. This development is exciting for those who hoped that large models would not remain limited to the companies that develop them. The unicorn story generated by GPT-Neo is grammatically correct and thematically consistent, though it lacks specificity about what makes unicorns unique. Nonetheless, it is a promising sign of the potential for open-source language models to compete with proprietary ones.

    • Facebook's AI model Wav2Vec-U makes unsupervised speech recognition possible: Facebook's Wav2Vec-U uses unsupervised learning and a GAN to recognize speech without transcriptions, potentially saving costs and improving performance in ASR.

      Facebook's AI model, Wav2Vec-U, has made a significant stride in the field of unsupervised speech recognition. The model can learn to recognize speech without being given transcriptions, opening up access to the vast amount of untranscribed speech data in the world. This is exciting because creating transcriptions of speech is expensive and time-consuming, making it difficult to scale. Unsupervised learning, demonstrated here through a generative adversarial network (GAN), could lead to cost savings and improved performance for both startups and large companies working on automatic speech recognition (ASR). Additionally, self-supervised learning, which has proven effective with language models, could also lead to better performance with unannotated speech and text data. Overall, this research offers a promising alternative to the traditional, costly method of creating large annotated datasets for speech recognition.
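
      As a rough point of reference only, the sketch below shows how models in the wav2vec family are commonly accessed through the Hugging Face transformers library. It uses the supervised, fine-tuned facebook/wav2vec2-base-960h checkpoint, an assumption made for illustration; it is a relative of, not the same as, the unsupervised Wav2Vec-U system discussed here, and the audio file path is a placeholder.

      # Illustrative sketch (assumption): speech recognition with a wav2vec 2.0
      # checkpoint via the Hugging Face transformers pipeline. This is the
      # supervised fine-tuned facebook/wav2vec2-base-960h model, not the
      # unsupervised Wav2Vec-U system described above.
      from transformers import pipeline

      asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

      # Transcribe a local 16 kHz mono WAV file (placeholder path).
      result = asr("sample_speech.wav")
      print(result["text"])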

    • AI's Impact on Entertainment, Deepfake Dubbing, and Chatbots: AI is transforming entertainment through deepfake dubbing for language localization and chatbots for information sharing.

      AI technology is making strides in various industries, including entertainment. A recent development showcases AI's ability to perform deepfake dubbing, adjusting lip movements to match a dubbed language while maintaining the original actor's performance. This technology, which uses a generative adversarial network (GAN), has the potential to revolutionize the entertainment industry, particularly for localizing content into different languages. Another application of AI, though less serious, is the Michelangelo AI chatbot created by Quelo for the Duomo Museum. This chatbot, while not especially advanced, offers visitors specific information about Michelangelo and the museum. These examples demonstrate the versatility and growing presence of AI in our daily lives, from entertainment to education.

    • Museums get interactive with AI chatbots: Museums use AI chatbots like Michelangelo to offer personalized, immersive experiences, making learning convenient and enjoyable and modernizing institutions.

      Technology is transforming traditional museum experiences into more interactive and engaging ones. The Duomo Museum in Italy, initially hesitant to embrace technology, had to adapt during the COVID-19 pandemic and integrated an AI chatbot named Michelangelo into its website. The chatbot answers queries and engages in conversation, giving visitors a more personalized, immersive, and fun museum experience. Integrating technology in this way adds an artistic flair and modernizes museums, making them more accessible and less intimidating. It also raises the possibility of implementing chatbots for querying Wikipedia articles or other information sources, making learning and exploration more convenient and enjoyable. Overall, the use of technology in museums not only adds a modern touch but also creates a more interactive and engaging experience for visitors.

    • Exploring the Role of AI in Art and Ethics: AI's impact on society continues to evolve, with potential for interactive art and ethical considerations. Google's handling of its ethical AI leaders has sparked controversy, highlighting the importance of recognizing contributions.

      The role of AI in our society continues to evolve, with both exciting possibilities and ethical considerations. The discussion touched on the idea of making art appreciation more interactive and fun using AI, as well as the societal implications of AI, particularly in relation to recent events at Google. A Medium post by Blake Lemoine provided insights into the history of ethical AI at Google, highlighting the significant contributions of Margaret Mitchell and Timnit Gebru. Their dismissals from the company have drawn criticism and generated ill feeling, and the post serves as a tribute to their impact and importance to the team. It underscores the significance of recognizing and acknowledging the contributions of key figures in the development of AI, even as companies may seek to move on from controversies. Overall, the conversation emphasized the importance of ethical considerations in the development and implementation of AI.

    • Struggles and Commitment in Ethical AI Teams: Recent events at Google and Twitter's image cropping algorithm highlight the importance of addressing ethical considerations and transparency in AI technology. Committed teams continue the work, building a foundation for future improvements.

      The state of ethical AI teams, specifically at Google, has been a topic of concern recently. A reflection from an anonymous source describes the struggle of making decisions without trusted leadership and the ethical dilemmas faced by researchers and engineers. However, those who remain are committed to upholding the legacy of their team and continuing the work they started, and the connections and expertise built over the past few years will serve as a foundation to weather the current storm. The issue of ethics in AI was further highlighted in a blog post from Twitter about its image cropping algorithm. The company found biases in the algorithm, with images of people from different demographic groups being cropped differently; it acknowledged these findings and introduced a new feature allowing users to control how their images are cropped. While the measured differences were small in percentage terms, at the platform's scale they still affected many users. The incident serves as a reminder of the importance of addressing bias and ethical considerations in AI technology. In conclusion, the ongoing challenges faced by ethical AI teams, as demonstrated by the recent events at Google and the controversy surrounding Twitter's image cropping algorithm, underscore the importance of ethical considerations and transparency in the development and deployment of AI. The commitment of those in the field to continue the work and strive for improvement is a positive sign for the future.

    • Twitter's Ethical Approach to AI Development: Companies should be open to feedback and willing to make changes when necessary, even if it means altering initial decisions for the better. In the case of Twitter's photo cropping feature, allowing users control led to improved fairness and user experience.

      Companies, such as Twitter, should be open to feedback and willing to make changes when necessary, even if it means going against initial decisions. In the case of Twitter's photo cropping feature, the original implementation used machine learning to automatically crop photos for quicker posting. However, concerns arose when it was discovered that under certain conditions, the algorithm was cropping images in a biased way, favoring white people over others. Instead of dismissing these concerns, Twitter conducted a proper study and found that allowing users to control their own cropping was a better solution. This ethical approach to AI development is a commendable example of how companies should handle controversies and make decisions that prioritize user experience and fairness. Overall, the positive outcome of this situation highlights the importance of transparency, accountability, and adaptability in the field of AI.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    An Open GPT-4 Level Model? Meet Falcon 180B
    Falcon 180B is a new open access foundation model that reportedly performs at a level between GPT-3.5 and GPT-4. NLW looks at the release and explores the implications for the broader discussion of whether advanced models should be released open source. Before that on the Brief, new Zoom AI tools plus a Pentagon speech on AI defense systems. Today's Sponsor: Superintelligent - Advanced 1-on-1 AI mentorship for creators ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    Tucker Carlson Says We Should Bomb the Data Centers and Kill AI Before It's Too Late
    Some dramatic discourse from the former Fox journalist, plus Drake uses AI Snoop and Tupac in a diss track, and people are really, really into Llama 3. ** Join NLW's May Cohort on Superintelligent. Use code nlwmay for 25% off your first month and to join the special learning group. https://besuper.ai/ ** Consensus 2024 is happening May 29-31 in Austin, Texas. This year marks the tenth annual Consensus, making it the largest and longest-running event dedicated to all sides of crypto, blockchain and Web3. Use code AIBREAKDOWN to get 15% off your pass at https://go.coindesk.com/43SWugo

    After Seeing OpenAI’s Sora, Tyler Perry Cancelled Building an $800m Studio
    Discover how OpenAI's groundbreaking Sora video model impacted Hollywood, leading Tyler Perry to reconsider a massive studio expansion. A testament to AI's immediate influence on the entertainment industry. INTERESTED IN THE AI EDUCATION BETA? Learn more and sign up https://bit.ly/aibeta Today's Sponsors: Notion - Notion AI. Knowledge, answers, ideas. One click away. - https://notion.com/aibreakdown

    The Intelligence: Kherson, one year later

    After a grinding and lethal eight-month battle, Ukraine’s forces retook the port city a year ago. Our correspondent visits, finding a populace both anxious and defiant. As with technological transformations that came before, the benefits of artificial intelligence will accrue disproportionately to the very stars who rail against it (10:22). And why New York is now safer—if you’re a bird (19:46).


    Sign up for a free trial of Economist Podcasts+.


    If you’re already a subscriber to The Economist, you’ll have full access to all our shows as part of your subscription.


    For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account.


