    AI for lonely people, GPT-3 is toxic, Tesla investigation develops, Kiwibot

    September 11, 2021

    Podcast Summary

    • Microsoft's chatbot Xiaoice provides comfort and companionship to millions in China
      Microsoft's Xiaoice chatbot offers comfort to millions in China, acting as a virtual companion, but concerns exist that excessive use could discourage human interaction and deepen feelings of disconnection.

      AI technology, specifically the chatbot Xiaoice, is providing comfort and companionship to millions of people in China, particularly during times of loneliness. With over 150 million users, this Microsoft-created bot has become a highly valued spinoff and serves as a virtual companion for many, filling a gap in human interaction. While reducing loneliness is a real benefit, there are concerns that excessive use of the bot could discourage human interaction and exacerbate feelings of disconnection. The line between helpful and harmful use of AI technology is a fine one, and the implications of such technology for human relationships are still being explored.

    • Exploring AI's potential in detecting depression through speech analysis
      While AI tools show promise in detecting depression from speech, their accuracy and potential implications require further research.

      Reducing social media usage, while perhaps less engaging than other activities, could still be a better choice than the current norm. The discussion also touched on the potential of AI to detect depression through speech analysis. Several startups, including Ellipsis Health, are exploring this area, with Ellipsis Health claiming to assess a person's depression severity from just 90 seconds of speech. However, the accuracy of these AI-driven tools is still up for debate, and some experts are skeptical. False positives are a particular concern, as they could lead to unnecessary self-diagnosis and reinforce negative thoughts. While the potential benefits are promising, it's essential to approach this technology with caution and to keep researching its effectiveness and implications.
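      Tools of this kind typically extract acoustic features from a short speech sample and combine them into a risk score. As a minimal, purely illustrative sketch of that idea (the features, weights, and thresholds below are invented for demonstration and are not Ellipsis Health's actual method):

```python
from dataclasses import dataclass

@dataclass
class SpeechFeatures:
    """Simple prosodic features from a short (e.g. 90-second) speech sample."""
    words_per_minute: float   # speech rate; slowed speech can correlate with low mood
    pause_fraction: float     # share of the sample spent silent (0.0-1.0)
    pitch_variability: float  # normalized pitch variance; flat affect lowers this

def depression_risk_score(f: SpeechFeatures) -> float:
    """Combine features into a 0-1 risk score (illustrative weights only)."""
    slow_speech = max(0.0, (110 - f.words_per_minute) / 110)  # below ~110 wpm
    long_pauses = min(1.0, f.pause_fraction / 0.5)            # saturate at 50% silence
    flat_affect = max(0.0, 1.0 - f.pitch_variability)
    return round(0.4 * slow_speech + 0.3 * long_pauses + 0.3 * flat_affect, 3)

sample = SpeechFeatures(words_per_minute=85, pause_fraction=0.35, pitch_variability=0.4)
print(depression_risk_score(sample))
```

Even in a toy like this, the false-positive concern raised above is visible: wherever the decision threshold is set, slow or halting speech from an unrelated cause would inflate the score.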

    • Exploring AI use in mental health: balancing potential benefits with ethical concerns
      A study reveals that GPT-3 and similar models agree more often with offensive comments than safe ones, highlighting the need for ongoing research and responsible AI deployment in sensitive areas like mental health.

      As we explore the potential applications of advanced AI models like GPT-3, it's crucial to consider their downstream use cases and interventions, especially in sensitive areas like mental health. During our discussion, we touched on the possibility of using AI for early detection of depression, but noted that it should be deployed with a human in the loop. A recent study, however, revealed a concerning finding: GPT-3 and similar models tend to agree with offensive comments more often than with safe ones. The study, conducted at Georgia Tech and the University of Washington, analyzed 2,000 Reddit threads and found that these models were twice as likely to agree with offensive comments. Interestingly, the models tended to direct personal attacks at individuals, whereas humans were more likely to target specific demographics or groups. The study also showed that existing controllable text generation methods can improve the contextual appropriateness of these models. These findings underscore the need for ongoing research on AI ethics and responsible deployment: understanding how these models behave in various contexts, mitigating offensive responses, and building systems that are safe, fair, and beneficial for all.

    • Unexpected connections between language models and common sense concepts
      Researchers use statistical patterns in linguistic context to uncover links between language models and common sense concepts, challenging our intuitive understanding and emphasizing the significance of examining downstream behavior.

      Researchers are discovering unexpected connections between language models and common sense concepts, such as spiciness, by analyzing statistical patterns in linguistic context. This finding challenges our intuitive understanding and highlights the importance of examining models' downstream behavior. In another development, a team of researchers from various tech companies and labs introduced a graph neural network for more accurate Estimated Time of Arrival (ETA) predictions in Google Maps. The model, already deployed in production, has significantly reduced inaccurate ETA estimates, making it particularly useful for intensive route planning and for avoiding heavy traffic. This collaboration between tech giants, despite their being seen as competitors, demonstrates the benefits of sharing knowledge and resources to improve technology. Lastly, the National Highway Traffic Safety Administration (NHTSA) has ordered Tesla to hand over all Autopilot data by October 22nd as part of an ongoing investigation into Tesla vehicles crashing into emergency vehicles. The request underscores the importance of ensuring safety in autonomous vehicle technology and the role of regulatory agencies in overseeing its development.
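      The core idea behind a graph-based ETA model is to treat the road network as a graph of segments and let congestion information propagate between neighbors before summing travel times along a route. A toy illustration of that message-passing idea (the network, travel times, and single averaging update below are invented; the production model is far more sophisticated):

```python
# Toy road network: each segment has a current travel time in minutes,
# and congestion information propagates to neighbors via message passing.
segments = {"A": 4.0, "B": 6.0, "C": 10.0, "D": 3.0}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def message_passing_step(times, adj):
    """One round of neighbor averaging: each segment blends its own travel
    time with the mean of its neighbors', smoothing congestion estimates."""
    updated = {}
    for seg, t in times.items():
        neighbor_mean = sum(times[n] for n in adj[seg]) / len(adj[seg])
        updated[seg] = 0.7 * t + 0.3 * neighbor_mean
    return updated

def route_eta(route, times, adj, rounds=2):
    """Run a few propagation rounds, then sum travel times along the route."""
    for _ in range(rounds):
        times = message_passing_step(times, adj)
    return round(sum(times[s] for s in route), 2)

print(route_eta(["A", "B", "C"], segments, neighbors))
```

In a learned GNN, the fixed 0.7/0.3 blend would be replaced by trained message and update functions, but the propagate-then-aggregate structure is the same.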

    • Tesla's Autopilot under investigation by NHTSA, public perception divided on self-driving cars
      The NHTSA is investigating Tesla's Autopilot system, with potential fines for non-compliance. Public opinion on self-driving cars remains split, with concerns over safety and biases in face detection systems persisting.

      The investigation into Tesla's Autopilot system is gaining momentum and the stakes are high. The National Highway Traffic Safety Administration (NHTSA) has requested detailed information from Tesla regarding the functionality and safety measures of Autopilot, and failure to comply could result in a fine of up to $115 million. This comes as public perception of Tesla and autonomous vehicles remains divided, with nearly half of US adults expressing concerns about their safety. A recent study revealed that 34% of adults would not consider riding in a self-driving car, while 17% believe they are as safe as human-driven vehicles. Despite this, a significant portion of the population remains unaware of the crashes involving Tesla vehicles using Autopilot or the federal investigation. Meanwhile, face detection systems from tech giants Amazon, Microsoft, and Google have been found to exhibit persistent biases, particularly against older and darker-skinned individuals. These companies claimed to have addressed these issues in their commercial products, but a new study indicates that significant progress is yet to be made. These developments underscore the ongoing challenges in the implementation and public acceptance of advanced technologies like autonomous driving and AI.

    • Bias and reliability issues in AI systems
      Despite technological advancements, AI systems can make mistakes or label content inappropriately, with negative impacts that fall especially on marginalized communities. Companies must prioritize research and improvement to prevent unintended consequences.

      Despite advancements in technology, bias and reliability issues persist in AI systems, particularly in areas like facial recognition. These systems can make mistakes or label content inappropriately, with negative impacts that fall especially on marginalized communities. For instance, Amazon's face detection API was found to have significant disparities in error rates for older people and those with darker skin types. Similar incidents have occurred with Facebook's AI-powered features, which labeled videos of Black men as "primates." Such failures highlight the need for continued research and improvement, and for designers to stay alert to potential errors and biases in their systems. Even with advanced technology such as self-driving vehicles, unexpected incidents still occur, like a human-driven car running over a self-driving robot. These incidents are reminders of the importance of ongoing research, development, and awareness in the field of AI.

    • Autonomous delivery robot gets hit by a car
      Despite the accident, autonomous food delivery robots are on the rise, with companies like Kiwibot leading the way in production and pilot programs in cities like San Jose, Pittsburgh, and Detroit.

      While we may have become accustomed to news of larger autonomous vehicles causing accidents, a recent clip shows a tiny autonomous delivery robot getting hit by a car at the University of Kentucky. Despite the mishap, the use of semi-autonomous robots for food delivery is on the rise, with companies like Kiwibot making headlines for their cute, expressive bots, which have completed over 150,000 food deliveries since 2017. The robots, currently being piloted in cities like San Jose, Pittsburgh, and Detroit, could transform local food delivery, especially for establishments just a short distance away. With over 400 robots in production, these adorable delivery bots may soon be a common sight on city streets, adding a fun and efficient twist to daily life.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apology for this one coming out a few days late, got delayed in editing it -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    The Miseducation of Google’s A.I.

    When Google released Gemini, a new chatbot powered by artificial intelligence, it quickly faced a backlash — and unleashed a fierce debate about whether A.I. should be guided by social values, and if so, whose values they should be.

    Kevin Roose, a technology columnist for The Times and co-host of the podcast “Hard Fork,” explains.

    Guest: Kevin Roose, a technology columnist for The New York Times and co-host of the podcast “Hard Fork.”

    Background reading: 

    For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday. 

    Episode 8 | Culture of Safety and Innovation, A Conversation with Chuck Price, TuSimple

    Chuck Price, Chief Product Officer, TuSimple joins Grayson Brulte on The Road To Autonomy Podcast to discuss TuSimple's culture of safety and innovation.

    In this episode, Grayson and Chuck start by discussing the economics of applying autonomy to fleets of trucks. Grayson asks Chuck if TuSimple ever considered creating a self-driving car.

    In the founding of TuSimple, Chuck discusses why the founding team focused solely on trucking from day one. The team saw a difference in the economics of self-driving trucks.

    We did see a difference. We saw that there were specific economic pain points in trucking. Robotaxis were solving a problem that didn't appear to exist.

    It was a fantasy, it was science fiction. It was a future where cities did not have to have individually owned cars, where parking issues would be resolved. This is a grand vision without clear economic drivers. - Chuck Price, Chief Product Officer, TuSimple

    The conversation then veers into the universal driver debate and the great pivot to self-driving trucks from self-driving cars. Chuck shared his open and honest opinion on the universal driver.

    I do not believe there is such a thing as a universal driver. It's a marketing term. - Chuck Price, Chief Product Officer, TuSimple

    Wrapping up the conversation around the economics of self-driving trucks and why the universal driver is not the correct approach, the conversation shifts to TuSimple's culture of safety and innovation.

    TuSimple has a corporate culture of safety which they call 'SafeGuard'. SafeGuard applies to every single employee in the company, no matter their job function or title. From the individuals working on the trucks to the engineers writing the code to the executives leading corporate strategy, each and every employee is measured on their contribution to safety.

    What Did You Do To Contribute to Safety? - Chuck Price, Chief Product Officer, TuSimple

    Safety is built into every aspect of what the company does, from the office to the depots to the on-road deployments. Drivers and safety engineers (Left and Right Seaters) go through six months of formal training before they are even able to touch the autonomy in the truck. Each and every safety driver goes through a drug test prior to being allowed in the vehicle.

    TuSimple treats its drivers like Blue Angels, requiring them to operate at the highest level of ability at all times. When drivers and safety engineers leave the depot, they are monitored in real time with in-cabin monitoring and drive cams to ensure the highest level of safety.

    The culture of safety and innovation is attracting partners such as UPS, Penske, U.S. Xpress, and McLane Company Inc. to work with TuSimple. As TuSimple scales, the company is working with Navistar to develop SAE Level 4 self-driving trucks at the factory which are safety certified.

    Rounding out the conversation, Grayson and Chuck talk about the economics of self-driving trucks and how TuSimple Self-Driving Trucks can show an ROI after the first 24 months of purchase. 


    Follow The Road To Autonomy on Apple Podcasts

    Follow The Road To Autonomy on LinkedIn

    Follow The Road To Autonomy on Twitter


    Recorded on Tuesday, September 8, 2020

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Episode 180 | The Rise and Fall of Digital Freight Brokerages and the Growth of Autonomous Trucking, A Conversation with Timothy Dooner, WHAT THE TRUCK?!?

    Timothy Dooner, Host, WHAT THE TRUCK?!?, joined Grayson Brulte on The Road to Autonomy podcast to discuss the rise and fall of digital freight brokerages and the growth of autonomous trucking.

    The conversation begins with Dooner discussing his outlook for the freight market.

    There is 8.1% less brokerages than there were a year ago at the start of this year. But there’s still 17% more brokerages than we started at the pandemic. Everyone’s been waiting for not just volumes to go up, but the way freight works, it’s volume plus capacity. They’ve been waiting for the capacity to go down. Volumes are looking a little bit better. Things are receding and this year I’m hearing a lot more optimism. – Timothy Dooner

    The optimism is shared by Walmart, as rumors circulate that Walmart is looking to develop a digital freight brokerage. Since Walmart operates its own fleet, it has a unique data set that could help it leapfrog the competition if and when it introduces a digital freight brokerage service.

    The freight market is currently turbulent, as the demand for freight and the capacity to haul it are not in sync. Then there is the California electric truck mandate, which will ultimately increase the cost of shipping freight, hurting both carriers and consumers. Could these mandates help accelerate the adoption of autonomous trucks, which are cheaper to operate?

    It’s possible, but as we are seeing in California, autonomous vehicle technology is not always welcome. In San Francisco, vandals set fire to a Waymo autonomous vehicle with a firework, burning the vehicle to the ground. If the regulatory environment in California eventually allows autonomous trucks to operate, will similar vandals try to damage autonomous trucks?

    Autonomous trucking is going to play a major role in the future of trucking and the global economy. As the technology is developed different business models are going to come to fruition and one of those is the licensing model. Kodiak has the potential to license their SensorPods technology, creating a lucrative revenue stream as they develop their autonomous trucking platform. This is in addition to their growing defense business.

    Then there is Uber. Uber has investments in Aurora and Waabi and has the Uber Freight division, yet it does not operate an autonomous trucking fleet. Grayson and Dooner go on to discuss Uber’s autonomous trucking investment strategy and who ultimately owns the asset.

    Wrapping up the conversation, Dooner shares his 2024 outlook for the trucking market. 


    Recorded on Wednesday, February 14, 2024


    Episode Chapters

    • 0:00 Introduction 
    • 1:34 Freight Market Outlook 
    • 7:31 Walmart’s Rumored Digital Freight Brokerage 
    • 10:42 Are Electric Truck Mandates Accelerating the Adoption of Autonomous Trucks 
    • 13:57 Vandals in San Francisco Set Fire to a Waymo Autonomous Vehicle 
    • 18:20 Commercializing Autonomous Trucking 
    • 25:32 The Business of Kodiak Robotics
    • 28:15 Autonomous Delivery Drones 
    • 31:55 Uber’s Autonomous Trucking Investment Strategy 
    • 39:18 Who Owns the Asset? 
    • 42:59 Tesla Cybertruck 
    • 43:52 Apple Vision Pro 
    • 51:08 2024 Trucking Outlook


    --------

    About The Road to Autonomy

    The Road to Autonomy® is a leading source of data, insight and analysis on autonomous vehicles/trucks and the emerging autonomy economy™. The company has two businesses: The Road to Autonomy Indices, with Standard and Poor’s Dow Jones Indices as the custom calculation agent; Media, which includes The Road to Autonomy podcast and This Week in The Autonomy Economy newsletter.


    Interview with FSD Skeptic Dan O'Dowd | Tesla Motors Club Podcast #53

    Episode 125 | Q1 2023 Oil and Gas Markets Outlook, A Conversation with Dean Foreman, Chief Economist, American Petroleum Institute (API)

    Dean Foreman, Chief Economist, American Petroleum Institute (API) joined Grayson Brulte on The Road To Autonomy Podcast to discuss his 2023 Q1 outlook for the oil and gas markets.

    The conversation begins with Dean sharing his thoughts and insights into the current state of the oil and gas markets.

    As the economy goes, that is what we are going to look for in oil and gas markets. – Dean Foreman

    The demand for oil has been strong. U.S. petroleum demand in December 2022 was 20.5 million barrels per day, and for 2022 oil demand grew by 2.2%. Going back to 2000, 2022 was the fourth-highest year for growth. 

    It says that on the heels of the pandemic, $20 trillion dollars worth of economic stimulus has continued to have a pretty positive effect for the economy, despite Fed Funds rate hikes, despite concerns about a recession, despite individual sectors that have been under pressure. – Dean Foreman

    The trend of demand outpacing supply has continued for over a year now with inventories that are at historic lows. Oil demand is growing because of the rebound in travel and the increase in cargo shipping by air. 

    During the last six months of 2022, 1.5 million barrels per day (1.5% of the global market) of new oil came online globally from government reserves. While there was some downward price movement, there were also long-term negative consequences, as oil companies were discouraged from starting new drilling and infrastructure projects. This could lead to a global imbalance, as there will not be enough infrastructure to meet demand. 

    The official estimates for demand growth this year range between basically 1 million barrels per day or about 1% of the market, up to 1.7 million barrels per day. – Dean Foreman

    In order to meet this demand, investment has to be made and drilling has to expand around the world to ensure that new supply can come to the market. Adding more context to this, the U.S. Energy Information Administration is predicting that global oil demand is expected to reach a record-high of 101 million barrels per day in 2023. 

    The U.S. Strategic Petroleum Reserve ended 2022 at the lowest point since 1983. When comparing 2022 to 1983, the U.S.’s oil consumption was more than 33% higher. There is little margin for error with solid oil demand and a dwindling Strategic Petroleum Reserve. When you factor in geo-politics and weather, the situation becomes even more unpredictable.

    In 2022, the U.S. dollar rose 6.23%. So far this year (2023), the U.S. dollar has begun to weaken. With the dollar projected by Bloomberg to weaken by 3% this year, oil is beginning to trade in local currencies. 

    For Q1 2023, the trends to watch in the oil and gas markets are the Russia/Ukraine conflict, systemic risks to the global food supply and emerging markets debt.

    Wrapping up the conversation, Dean discusses global economics and its impact on household budgets. 


    Recorded on Tuesday, January 17, 2023
