
    AI Gets More Efficient, Improves Taxation, and Looks Out For Masks

    May 17, 2020
    What is the main focus of the Let's Talk AI Podcast?
    How does engaging in news discussions impact researchers' perspectives?
    What initiative is OpenAI taking regarding model efficiency?
    What concerns are associated with Clearview AI's operations?
    Why is algorithmic efficiency important in AI research?

    Podcast Summary

    • Staying informed about AI news broadens research perspective: engaging in AI news discussions promotes deeper reflection on research and its potential implications, offers unique insights from diverse viewpoints, and encourages collaboration and transparency within the AI community.

      Staying informed about the latest AI news and trends, as researchers, not only broadens our perspective but also encourages us to reflect on the real-world impact of our research. This discussion on the Let's Talk AI Podcast, hosted by Andrey Kurenkov and Sharon Zhou, highlighted their experiences as PhD students at Stanford. They shared how engaging in news discussions has led them to think more deeply about their research and its potential implications. Moreover, the availability of news articles and media coverage of AI research is both a luxury and a challenge for the field: while there can be inaccuracies and hype, coverage also gives researchers the chance to consider diverse viewpoints and gain insights from outside perspectives. One recent news article covered by the podcast was from VentureBeat, titled "OpenAI begins publicly tracking model efficiency." The article reported on OpenAI's new initiative to share information about the efficiency of its models, a move that aims to promote transparency and encourage collaboration within the AI community, which could lead to advances in model optimization and resource use. As AI continues to evolve and make headlines, it is essential for researchers to stay informed and engage in discussions about the latest developments. This not only helps us grow as researchers but also helps ensure that our work contributes positively to the world.

    • OpenAI's new project focuses on energy-efficient machine learning models: OpenAI is driving a shift in AI research toward energy-efficient models, aiming to make efficiency evaluations standard and to reduce resource usage in both research and industry.

      OpenAI, a capped-profit company, has initiated a new project to track machine learning models that achieve top performance with minimal energy consumption and computation. The effort aims to provide a quantitative picture of algorithmic progress, informing policymaking by emphasizing AI's technical attributes and societal impact. Looking back over the last decade, OpenAI found that the amount of computation required to reach a given state-of-the-art level of performance has been decreasing consistently. For instance, Google's Transformer architecture surpassed a previous state-of-the-art model with 61 times less compute, and DeepMind's AlphaZero took eight times less compute to match its predecessor. OpenAI speculates that algorithmic efficiency gains might even outpace Moore's Law. So far, efficiency has been largely overlooked in AI research: papers report performance statistics without mentioning the computation required. In practice, however, industry prioritizes efficiency because of the cost and latency of inference. By making this push, OpenAI aims to make efficiency evaluations a standard part of research, enabling researchers and industry to compare the efficiency of different models. This shift in focus could lead to more resource-efficient AI systems, benefiting both research and industry.
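      The arithmetic behind claims like "outpacing Moore's Law" is simple to sketch: assuming a constant exponential rate, a compute reduction by some factor over a span of years implies an efficiency doubling time of span / log2(factor). The 44x-over-7-years figure below comes from OpenAI's published "AI and Efficiency" analysis, not from a number quoted in this summary.

```python
import math

def doubling_time_years(reduction_factor: float, span_years: float) -> float:
    """Implied efficiency doubling time, assuming the compute needed for a
    fixed level of performance shrank at a constant exponential rate by
    `reduction_factor` over `span_years` years."""
    return span_years / math.log2(reduction_factor)

# OpenAI's headline result: a ~44x compute reduction for AlexNet-level
# ImageNet performance over ~7 years.
print(round(doubling_time_years(44, 7) * 12, 1))  # ~15.4 months per doubling
```

      For comparison, Moore's Law corresponds to a roughly 24-month doubling, so a ~16-month doubling time would indeed outpace it.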

    • OpenAI's new benchmarking effort focuses on efficiency in AI research: the initiative bridges industry and academia, aligns with the views of leading institutions, and supports green AI and the fight against climate change.

      OpenAI is pushing for greater emphasis on efficiency in AI research, as evidenced by its new benchmarking effort focused on ImageNet and WMT14. The initiative, which bridges the gap between industry and academia, aligns with the views of researchers from institutions like the Allen Institute for AI, Carnegie Mellon University, and the University of Washington. The push for efficiency also supports the development of green AI and reduces AI's impact on the climate. Additionally, Salesforce's AI Economist project showcases the potential of AI for optimizing environments: even in a limited simulation of a city economy, it aimed to maximize productivity while minimizing inequality. These developments demonstrate the importance of treating efficiency as a key metric alongside accuracy in AI research.
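      The AI Economist's goal of maximizing productivity while minimizing inequality is typically formalized as an equality-weighted social welfare function. A minimal sketch, assuming the common choice of total income scaled by one minus the Gini coefficient (the project's actual objective may differ in detail):

```python
def gini(incomes):
    """Gini coefficient: 0 means perfect equality; values near 1 mean
    nearly all income goes to one agent."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # Mean-absolute-difference formulation over sorted incomes.
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

def social_welfare(incomes):
    """Equality-weighted productivity: total income scaled by (1 - Gini)."""
    return sum(incomes) * (1.0 - gini(incomes))

# Same total output, very different welfare under this objective:
print(social_welfare([1, 1, 1, 1]))  # equal incomes
print(social_welfare([0, 0, 0, 4]))  # all income to one agent
```

      Under this objective, a tax policy that raises total output but concentrates it in few hands can score worse than one with slightly lower output spread evenly, which is exactly the trade-off the simulation explores.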

    • AI performance in controlled settings vs. the real world: the ethics and challenges of AI use, including fair reward functions and objectives, were discussed in relation to an economics study and the facial recognition company Clearview AI.

      AI models can demonstrate impressive performance in controlled game settings or simulations. The challenge lies in understanding where these models break down and how they apply to the real world. This was discussed in relation to an economics study that used an AI model to optimize tax policies, with a comparison to AlphaZero, whose play professional Go players have studied for insight. The ethics of AI use also came into focus with the news that Clearview AI, a controversial facial recognition company, will no longer sell its app to private companies and will terminate its contracts in Illinois. The debate around what constitutes a fair reward function and objective in AI optimization was also highlighted. Overall, while there are promising developments in AI, there are also ethical considerations and challenges that need to be addressed.

    • Concerns over facial recognition companies like Clearview AI: the lack of oversight and scrutiny of facial recognition companies raises concerns about potential misuse and privacy violations. Clearview AI's reported ties to far-right organizations and its questionable leadership add to the unease, underscoring the need for active journalists and regulators.

      The lack of significant oversight and scrutiny of companies like Clearview AI, which specialize in facial recognition technology, raises concerns about potential misuse and privacy violations. Clearview's recent announcement that it will end contracts in Illinois due to legal issues is a step in the right direction, but the company's continued operation in other states and its reported connections to far-right organizations are troubling. The involvement of individuals with questionable backgrounds in the leadership of such companies adds to the unease. It is crucial that active journalists and regulators keep scrutinizing these companies to prevent misuse and protect individual privacy. The intersection of AI technology and far-right organizations is a complex issue that requires ongoing attention and investigation.

    • AI-powered mask detection in France's CCTV cameras: France uses AI at the edge of the network for mask detection, preserving privacy by not collecting individual data while still identifying hotspots of mask non-compliance.

      France, known for its privacy-focused stance, has implemented AI-powered mask detection in CCTV cameras in the Paris metro, but the system does not collect or store individual data. Instead, it generates aggregate statistics every 15 minutes, which are sent to authorities for monitoring mask-wearing trends and informing the distribution of masks. The system, developed by the French startup Datakalab, is an example of edge computing, which helps ensure data privacy. The technology is also being used in Cannes, with plans to distribute masks to residents based on the data. The system does not identify individuals and is not meant to serve as a pretext for mass surveillance. Mask-wearing is mandatory in France, with fines for non-compliance, but the software will not be used to identify or rebuke individuals. Overall, the system aims to identify hotspots of mask non-compliance for targeted interventions while respecting privacy.
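      The privacy property described here comes from where the computation happens: the detection model runs on the camera, and only windowed aggregates ever leave it. A minimal sketch of that edge-side aggregation, with the 15-minute window taken from the description above but the interface and names purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class EdgeMaskCounter:
    """Runs on the camera itself: keeps only aggregate counts and emits
    them once per reporting window, so no frames or identities ever
    leave the device. Interface and field names are illustrative."""
    window_seconds: int = 15 * 60   # the 15-minute window from the article
    masked: int = 0
    unmasked: int = 0
    _window_start: float = 0.0

    def observe(self, timestamp: float, wearing_mask: bool):
        """Record one detection; returns an aggregate report dict when a
        reporting window closes, otherwise None."""
        report = None
        if timestamp - self._window_start >= self.window_seconds:
            total = self.masked + self.unmasked
            if total:
                report = {"masked_pct": 100.0 * self.masked / total,
                          "count": total}
            self.masked = self.unmasked = 0
            self._window_start = timestamp
        if wearing_mask:
            self.masked += 1
        else:
            self.unmasked += 1
        return report
```

      Because only percentages and counts cross the network boundary, a compromise of the central server reveals trends, not faces, which is the core argument for edge deployments of this kind.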

    • London's use of edge AI for traffic and crime prevention: edge AI technology can be an acceptable form of surveillance if implemented with clear scope and transparency, limiting the potential for mass surveillance and addressing privacy concerns.

      The use of edge AI technology for traffic monitoring and crime prevention in London, as discussed in the podcast, could be an acceptable form of surveillance if implemented with clear scope and transparency. Edge AI, which performs computations on the device itself, limits the potential for mass surveillance and makes the system more acceptable to the public. The fact that the system is used not to identify individuals but to find hotspots could further reduce privacy concerns. This approach could serve as a model for other countries, especially democracies, as they shape their own privacy and AI regulations. Overall, the podcast highlights the importance of clear communication and thoughtful implementation for AI and surveillance technologies.

    Recent Episodes from Last Week in AI

    #181 - Google Chatbots, Cerebras vs Nvidia, AI Doom, ElevenLabs Controversy


    Our 181st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov and Jeremie Harris

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.

    - Microsoft's Inflection usage caps for Pi, and new AI inference services from Cerebras Systems competing with Nvidia.

    - Biases in AI, prompt leak attacks, transparency in models, and distributed training optimizations, including the DisTrO optimizer.

    - AI regulation discussions including California's SB 1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

    Timestamps + Links:

    Last Week in AI
    September 15, 2024

    #180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047


    Our 180th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    • Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
    • Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results. 
    • Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
    • Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

    Timestamps + Links:

    Last Week in AI
    September 03, 2024

    #179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist


    Our 179th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Episode Highlights:

    - Grok 2's beta release features new image generation using Black Forest Labs' tech.

    - Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

    - Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

    - Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

    Timestamps + Links:

    Last Week in AI
    August 20, 2024

    #178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk


    Our 178th episode with a summary and discussion of last week's big AI news!

    NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode:

    - Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.

    - OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.

    - Rapid advancements in humanoid robotics exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.

    - Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

    Timestamps + Links:

    Last Week in AI
    August 16, 2024

    #177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2


    Our 177th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

    If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

    In this episode, hosts Andrey Kurenkov and Jeremie Harris dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless car rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.ai CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 11, 2024

    #176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2


    Our 176th episode with a summary and discussion of last week's big AI news!

    NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

     

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    August 03, 2024

    #175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts


    Our 175th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements including OpenAI's release of GPT-4o Mini and Mistral's open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models like Haiper 1.5, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI's Prover-Verifier Games initiative.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 25, 2024

    #174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues


    Our 174th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for xAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.

    See full episode notes here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

     

    Timestamps + links:

    Last Week in AI
    July 17, 2024

    #173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court


    Our 173rd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    See full episode notes here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.  

    Timestamps + links:

    Last Week in AI
    July 07, 2024

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic


    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    Related Episodes

    #027 - Baksmalla Smack on the Arse


    Honk! Honk! Let MachineDaena and Salmon entertain you in the wonderful world of the Electric Honk Podcast! A world of humour, politics, sport and some genuinely interesting random facts!

    This week we cover:

    Tops off for the YanTans

    Salmon’s Gets Wood

    EasyJet Lisps & Disney

    Massive Orca Brains and The Real Squid Games

    The Five Regrets of the Dying

    National French Toast Day - Days of the Year

    TOP SHELF – “Captain Morgan’s Tiki”

    The History of Hangovers

    Sam Altman – OpenAI Hokey Cokey

    Make it more AI trend – Make the Goose Sillier!

    Garrett Scott - AI Goose Tweet  “For every 10 likes this gets, I will ask ChatGPT to make this goose a little sillier.”

    GOOSE NEWS – “AI Goose Facial Recognition”

    Net migration at 745,000 UK

    THE HEADLINER – “OAP Sells Scrumpy Cider to passers by .... but it's Really his Piss”

    PORK SCRATCHINGS – Region Beta Paradox

    Napoleon Released in Cinemas

    Man City – Player Scouting Presentations

    The Nine Most Painful Ways to Die

    The 5 Most Important AI News Stories Last Month

    OpenAI Superalignment? Claude 2? White House? Find out what NLW thinks were the most important AI news stories in July.

    ABOUT THE AI BREAKDOWN

    The AI Breakdown helps you understand the most important news and discussions in AI.

    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe

    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown

    Join the community: bit.ly/aibreakdown

    Learn more: http://breakdown.network/

    Mini Episode: More Facial Recognition, Racism in Academia, and the latest in Commercial AI


    Our fourth audio roundup of last week's noteworthy AI news!

    This week, we look at recent progress in curtailing the development of facial recognition technology, a recent call for attention to racism in academia, and recent news from OpenAI and Boston Dynamics. 

    Check out all the stories discussed here and more at www.skynettoday.com

    Theme: Deliberate Thought by Kevin MacLeod (incompetech.com)

    Licensed under Creative Commons: By Attribution 3.0 License

    What's In A Face

    We think our faces are our own. But technology can use them to identify, influence and mimic us. This week, TED speakers explore the promise and peril of turning the human face into a digital tool. Guests include super recognizer Yenny Seo, Bloomberg columnist Parmy Olson, visual researcher Mike Seymour and investigative journalist Alison Killing.

    Learn more about sponsor message choices: podcastchoices.com/adchoices

    NPR Privacy Policy

    Clearview AI in the Capitol, Medical AI Regulation, DeepFake Text


    This week:

    0:00 - 0:35 Intro

    0:35 - 4:30 News Summary segment

    4:30 News Discussion segment

    Find this and more in our text version of this news roundup:  https://lastweekin.ai/p/99

    Music: Deliberate Thought by Kevin MacLeod (incompetech.com)