
    Waymo's Taxi Struggles, Robot Surveillance in Singapore, AI against Human Trafficking

    October 22, 2021

    Podcast Summary

    • Self-driving taxis face limitations but offer convenience. AI is revolutionizing industries from self-driving taxis to climate studies and anti-social behavior patrols, but challenges persist in optimizing performance and addressing ethical concerns.

      While self-driving technology is making strides, it still faces challenges. Waymo, a leading player in the field, is testing its self-driving taxi service in Arizona, but customers have reported limitations such as longer trip times, since the cars avoid complex turns and shared lanes. Even so, the convenience and novelty of the self-driving experience have won over many riders. AI is also making inroads elsewhere: in the fight against human trafficking, it is being used to analyze patterns and identify potential victims; climate researchers are using it to analyze satellite images and predict weather patterns; and in Singapore, robots are patrolling public areas to detect and deter anti-social behavior. The potential of AI is vast, but it comes with challenges and limitations that still need to be addressed.

    • Introducing Self-Driving Cars and AI against Human Trafficking. Self-driving cars prioritize safety over efficiency when navigating busy streets. AI is used to combat human trafficking by identifying vulnerability indicators and potential victims on commercial websites.

      Self-driving cars are being introduced into communities carefully and gradually, with a focus on safety. They struggle with left turns from busy streets into neighborhoods, and taking a longer, easier route is the preferred approach to staying safe. Transparency about potential delays and upfront communication with users can help mitigate complaints. In the realm of AI, another significant development is its use against human trafficking. Marinus Analytics, a startup based in Pittsburgh, has created an AI-based tool called "Traffic Jam" that searches for vulnerability indicators on commercial adult services websites and uses facial recognition to identify possible victims of human trafficking. The system helped identify thousands of victims in 2019 and saved an estimated 70,000 hours of investigative work in 2020. The goal is to filter and highlight potential cases for review by professionals, making the response to the large-scale problem of human trafficking faster and more efficient.
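
      Traffic Jam's internals are proprietary, so as a rough illustration of the face-matching idea it describes, here is a minimal sketch using the open-source face_recognition library. The file names and the 0.6 distance threshold are assumptions for demonstration, not details of Marinus Analytics' system.

```python
# Illustrative sketch only: generic face matching with the open-source
# "face_recognition" library. This is NOT Marinus Analytics' actual system;
# the file names and threshold below are assumptions for demonstration.
import face_recognition

# Encode the reference photo of a missing person.
reference_image = face_recognition.load_image_file("missing_person.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

# Encode faces found in a batch of ad photos under review.
candidate_files = ["ad_photo_001.jpg", "ad_photo_002.jpg", "ad_photo_003.jpg"]
for path in candidate_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        continue  # no face detected in this image
    # Lower distance means a closer match; 0.6 is the library's common default.
    distance = face_recognition.face_distance([reference_encoding], encodings[0])[0]
    if distance < 0.6:
        print(f"Possible match in {path} (distance {distance:.2f}) -> flag for human review")
```

      As in the episode's description, a system like this only flags candidates for professional review; it does not make a final determination on its own.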

    • AI analyzes 100,000 climate studies to reveal key findings. Researchers used machine learning and deep-learning language-analysis tools to identify high-level findings from 100,000 climate studies, revealing that 80% of global land area shows trends in temperature and precipitation due to human influence or climate change.

      Artificial Intelligence (AI) is making a significant impact on various fields, even in areas where complex problems abound. A recent example comes from the field of climate science, where researchers used AI to analyze 100,000 climate studies and reveal key findings about the state of the research. With an exponential rise in the number of climate change papers being published, it has become increasingly challenging for scientists to keep up with the latest research and gain a global perspective. To address this challenge, researchers employed machine learning techniques and deep learning language analysis tools based on BERT to sift through the vast amount of published climate science and identify high-level findings. The results were impressive, with 80% of global land area showing trends in temperature and precipitation that can be attributed to human influence or climate change. While traditional assessments may be more precise, the machine learning-assisted approach provides a broader summary of the research, albeit with some uncertainty. This application of AI in climate science is just one example of how technology is helping to tackle the issue of information overload and providing valuable insights from vast amounts of data.
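
      To make the approach concrete, here is a minimal sketch of how a BERT-style classifier might tag paper abstracts at scale, assuming a checkpoint already fine-tuned for the task. The model name, labels, and abstracts below are placeholders, not the study's actual setup.

```python
# Hypothetical sketch: tagging climate-paper abstracts with a fine-tuned
# BERT-style classifier. The checkpoint name, labels, and abstracts are
# placeholders for illustration, not the study's actual pipeline.
from transformers import pipeline

# Assume "my-org/climate-impact-bert" was fine-tuned to label abstracts
# by the kind of observed impact they report (made-up checkpoint name).
classifier = pipeline("text-classification", model="my-org/climate-impact-bert")

abstracts = [
    "We document a warming trend of 1.2 C over the Sahel between 1980 and 2015.",
    "Observed declines in Mediterranean precipitation are consistent with model projections.",
]

for abstract in abstracts:
    result = classifier(abstract, truncation=True)[0]
    print(f"{result['label']:>15}  score={result['score']:.2f}")

# Run over ~100,000 abstracts, predictions like these can then be aggregated
# into a map of where documented temperature/precipitation trends cluster.
```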

    • Exploring the Limits of Machine Learning in Understanding Climate Change Impacts. Google researchers challenge the notion of one-size-fits-all machine learning models, highlighting the importance of continued research and innovation in the field.

      Climate change is a pressing issue that is already having significant impacts around the world, and the research community is making great strides in understanding these impacts. However, the complexity of the issue means that there is still much work to be done, particularly in understudied regions. In the field of machine learning, researchers at Google are exploring the limits of large-scale pre-training, challenging the common narrative that one model can fit all downstream tasks. Instead, they found that different checkpoints perform best on specific tasks, highlighting the importance of continued research and innovation in this area. Overall, both of these studies underscore the need for ongoing research and adaptation in the face of complex and evolving challenges.

    • Exploring efficient and targeted approaches in AI research. A Google study reveals that smarter training and data sourcing can yield significant improvements in AI models, even as large-scale models continue to dominate research.

      While making models and data bigger has been the dominant trend in AI research, improving performance on many tasks beyond the original one, it is not the only path to progress. The recent Google study shows that being smarter about how models are trained and where data is sourced can also yield significant gains. The study, which involved thousands of experiments with models ranging from 10 million to 10 billion parameters, illustrates both the growing importance of large-scale models and the need for more efficient, targeted approaches. Research at this scale requires significant resources and compute power, underscoring industry's growing influence in driving AI research, though the findings still benefit academia. Another intriguing development is the deployment of a robot, Xavier, in Singapore for surveillance purposes, raising questions about privacy and societal acceptance of such technologies. Overall, these findings underscore the importance of continued exploration and innovation in AI research, in terms of both scale and efficiency.

    • Singapore trials social distancing robot, raising privacy concerns. Technology's rapid advancement brings opportunities for law enforcement but also raises concerns about privacy and potential misuse, especially with uncurated AI data sets that can perpetuate harmful biases.

      Technology is advancing rapidly, with Singapore trialing a robot designed to enforce social distancing and detect antisocial behaviors. This robot, which resembles a small Jeep with a screen and camera, is part of a larger trend towards using technology for law enforcement and regulation. However, the use of such technology raises concerns about privacy and potential for misuse. In a related article, researchers warn about the dangers of using uncurated, hyperscale AI data sets for training artificial intelligence. These data sets, which are often sourced from the internet, can contain problematic content such as misogyny, pornography, and malignant stereotypes. As these data sets are used to train AI, they can perpetuate and amplify these harmful biases. It's important for developers and policymakers to be aware of these issues and take steps to mitigate them. Overall, these developments highlight the need for careful consideration and regulation when it comes to the use of technology in society.

    • Bias and ethical considerations in AI development. Careful curation and assessment of training datasets, together with attention to ethical considerations, are crucial in AI development to prevent biased output and objectionable content.

      The use of large, scraped datasets for training AI models can result in biased output and objectionable content. This was highlighted in a recent study of the LAION-400M dataset, which was assembled by filtering web data with the CLIP model. For instance, CLIP scored an image of a female astronaut as more similar to the caption "a smiling housewife in an orange jumpsuit with the American flag" than to a caption describing her as an astronaut with the same flag. The dataset also contained explicit content, despite a safe-search option. Meanwhile, in San Francisco, a dead-end street is experiencing increased traffic from self-driving cars making multi-point turns, leading to confusion and congestion. Together, these stories underscore the importance of addressing bias and ethics in AI development: the first calls for more careful curation and assessment of training datasets, while the second is a reminder of the ongoing challenges in deploying autonomous transportation systems.
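
      For readers curious about the mechanics, the similarity scores in question come from comparing CLIP's image and text embeddings. Below is a minimal sketch using the openai/clip-vit-base-patch32 checkpoint via Hugging Face transformers; the image path and captions are placeholders, and this illustrates the scoring mechanism rather than the audit's exact methodology.

```python
# Minimal sketch of CLIP image-text similarity scoring with Hugging Face
# transformers. The image file and captions are placeholders; this is an
# illustration of the mechanism, not the audit study's actual code.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("astronaut.jpg")  # placeholder image file
captions = [
    "a portrait of an astronaut with the American flag",
    "a smiling housewife in an orange jumpsuit with the American flag",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each caption;
# softmax turns the scores into a probability-like ranking.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for caption, p in zip(captions, probs):
    print(f"{p:.3f}  {caption}")
```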

    • Autonomous Vehicles Making Frequent U-turns Disrupt Neighborhood. The vehicles' unexpected U-turn behavior highlights AI limitations, the challenges of integrating tech into society, and the importance of addressing community impact.

      An unusual situation has arisen in the neighborhood as autonomous vehicles make frequent U-turns, disrupting residents' sleep and causing confusion. Although the vehicles follow the rules of the road, their behavior is unexpected and has produced amusing anecdotes from those affected. The reason is unclear, but the vehicles seem to have difficulty navigating the area, leading to the repeated turns. The episode highlights the limitations and quirks of current AI technology and the challenges of integrating autonomous vehicles into society, and it underscores the importance of understanding and addressing the impact of such technology on communities. Hopefully a solution is found soon so residents can get back to normal. The discussion also touched on the fact that the vehicles appear to be operating beyond a testing phase, and that their behavior is reminiscent of the wacky AI agents of early experiments; this mismatch in expectations, and the lack of full control over the situation, have added to the intrigue and amusement. It is a reminder that as we continue to develop and deploy AI, we must be prepared for the unexpected and work to minimize disruptions while ensuring safety and harmony for all. You can find more articles on similar topics and subscribe to our weekly newsletter for updates on the latest advancements and developments in AI at skynettoday.com.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late, got delayed in editing it -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    #122 Connor Leahy: Unveiling the Darker Side of AI

    Welcome to Eye on AI, the podcast that explores the latest developments, challenges, and opportunities in the world of artificial intelligence. In this episode, we sit down with Connor Leahy, an AI researcher and co-founder of EleutherAI, to discuss the darker side of AI.

    Connor shares his insights on the current negative trajectory of AI, the challenges of keeping superintelligence in a sandbox, and the potential negative implications of large language models such as GPT4. He also discusses the problem of releasing AI to the public and the need for regulatory intervention to ensure alignment with human values.

    Throughout the podcast, Connor highlights the work of Conjecture, a project focused on advancing alignment in AI, and shares his perspectives on the stages of research and development of this critical issue.

    If you’re interested in understanding the ethical and social implications of AI and the efforts to ensure alignment with human values, this podcast is for you. So join us as we delve into the darker side of AI with Connor Leahy on Eye on AI.

    (00:00) Preview

    (00:48) Connor Leahy’s background with EleutherAI & Conjecture  

    (03:05) Large language models applications with EleutherAI

    (06:51) The current negative trajectory of AI 

    (08:46) How difficult is keeping super intelligence in a sandbox?

    (12:35) How AutoGPT uses ChatGPT to run autonomously 

    (15:15) How GPT4 can be used out of context & negatively 

    (19:30) How OpenAI gives access to nefarious activities 

    (26:39) The problem with the race for AGI 

    (28:51) The goal of Conjecture and advancing alignment 

    (31:04) The problem with releasing AI to the public 

    (33:35) FTC complaint & government intervention in AI 

    (38:13) Technical implementation to fix the alignment issue 

    (44:34) How CoEm is fixing the alignment issue  

    (53:30) Stages of research and development of Conjecture

     

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    #275 - Preparing Young People for their Future with AI

    What's in this episode?

    Delighted to launch this new 5-episode miniseries on AI in education, sponsored by Nord Anglia Education, host Professor Rose Luckin kicks things off for the Edtech Podcast by examining how we keep education as the centre of gravity for AI. 

    AI has exploded in the public consciousness with innovative large language models writing our correspondence and helping with our essays, and sophisticated images, music, impersonations and video generated on-demand from prompts.  Whilst big companies proclaim what this technology can achieve and how it will affect work, life, play and learning, the consumer and user on the ground and in our schools likely has little idea how it works or why, and it seems like a lot of loud voices are telling us only half the story.  What's the truth behind AI's power?  How do we know it works, and what are we using to measure its successes or failures?  What are our young people getting out of the interaction with this sophisticated, scaled technology, and who can we trust to inject some integrity into the discourse?  We're thrilled to have three guests in the Zoom studio with Rose this week:

    Talking points and questions include: 

    • We often ask of technology in the classroom 'does it work'?  But when it comes to AI, preparing people to work, live, and play with it will be more than just whether or not it does what the developers want it to.  We need to start educating those same people HOW it works, because that will not only protect us as consumers out in the world, as owners of our own data, but help build a more responsible and 'intelligent' society that is learning all of the time, and better able to support those who need it most.  So if we want that 'intelligence infrastructure', how do we build it?
    • What examples of AI in education have we got so far, what areas have been penetrated and has anything radically changed for the better?  Can assessment, grading, wellbeing, personalisation, tutoring, be improved with AI enhancements, and is there the structural will for this to happen in schools?
    • The ‘white noise’ surrounding AI discourse: we know the conversation is being dominated by larger-than-life personalities and championed by global companies who have their own technologies and interests that they're trying to glamourise and market. What pushbacks, what reputable sources of information, layman's explanations, experts and opinions should we be listening to to get the real skinny on AI, especially for education?

    Sponsorship

    Thank you so much to this series' sponsor: Nord Anglia Education, the world’s leading premium international schools organisation.  They make every moment of your child’s education count.  Their strong academic foundations combine world-class teaching and curricula with cutting-edge technology and facilities, to create learning experiences like no other.  Inside and outside of the classroom, Nord Anglia Education inspires their students to achieve more than they ever thought possible.

    "Along with great academic results, a Nord Anglia education means having the confidence, resilience and creativity to succeed at whatever you choose to do or be in life." - Dr Elise Ecoff, Chief Education Officer, Nord Anglia Education

     

    #62 - Maria Meier // CTO @ Phantasma Labs
    Get insight into the world of reinforcement learning in this CTO podcast with Maria Meier, CTO @ Phantasma Labs (a Simulation-as-a-Service company). Maria shares how the simulation industry 🔮 is developing, particularly in the autonomous vehicle context. Listen to find out:

    • Why reinforcement learning 🍭 is about to catch up to the other two ML paradigms
    • Why we need to model diversity in AI 👩‍🦽👴🧒🌍, especially when simulating urban pedestrian traffic
    • How much progress the autonomous vehicle industry 🚘 has made towards Level 5
    • How standardisation 📜 is unfolding in ML

    Listen here: https://alphalist.com/podcast/62-maria-meier-cto-phantasma-labs

    Breaking Up is Hard to Do: A conversation about Facebook

    Is Facebook a monopoly? This week Paul and Rich tackle the 2-billion-user elephant in the room and go back and forth on two big questions: whether Facebook violates antitrust laws and should be broken up, and how the platform (or its regulators) can solve its rampant fake news problem. Topics covered include what “breaking up” Facebook would even look like, how the platform might verify news sources, separating news from satire, and the general public’s ambivalence about privacy and security.  

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Anil Dash, Capitalist to Activist

    Ethics and access on the web: in this week’s episode, Paul and Rich talk to entrepreneur-turned-activist Anil Dash about the early days of the web, access and inclusivity, and the ethical responsibilities of the people who build digital technologies. Plus they try to settle how much you should tip on a New York City cab ride—no matter what the interface.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.