
    Don't Scrape Me, Bro + The Activists Sabotaging Self-Driving Cars + How Reddit Beat a Rebellion

    August 11, 2023

    Podcast Summary

    • Backlash against AI use in creative industries, with a focus on crawling and scraping
      Artists and writers are pushing back against companies' secretive practices regarding data collection for AI models, leading to a heated debate over data ownership and usage in the creative industry.

      There's a growing backlash against the use of AI in creative industries, particularly over how data is gathered and used to train these models. This week saw several incidents that illustrate this trend, all involving crawling and scraping. For instance, a counter-movement is forming among artists and writers who want more control over how their data is used. In the past, companies were more transparent about how they obtained data, but they have become more secretive, raising concerns and questions. Crawling and scraping are not new practices; internet companies have used them for years. Crawling refers to bots that traverse the web by following links, while scraping involves downloading information for one's own use. While these practices have been controversial due to potential misuse, they're essential for indexing and organizing web content. But as AI becomes more prevalent, the stakes are higher, and the debate over data ownership and usage is heating up. This is just the beginning of a long and significant fight.
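
      The crawling/scraping distinction above can be sketched in a few lines of code. This is a minimal, hypothetical illustration using only Python's standard library (the class name and sample page are invented for this example): a crawler collects links to visit next, while a scraper keeps the page's content.

```python
from html.parser import HTMLParser

class LinkAndTextExtractor(HTMLParser):
    """Toy parser: `links` is what a crawler would queue up to visit
    next; `text` is what a scraper would download and store."""

    def __init__(self):
        super().__init__()
        self.links = []  # crawling: URLs discovered on the page
        self.text = []   # scraping: content extracted from the page

    def handle_starttag(self, tag, attrs):
        # Collect the href of every anchor tag encountered
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        # Keep any non-whitespace text content
        if data.strip():
            self.text.append(data.strip())

page = "<html><body><p>Hello</p><a href='https://example.com/next'>next</a></body></html>"
parser = LinkAndTextExtractor()
parser.feed(page)
print(parser.links)  # ['https://example.com/next']
print(parser.text)   # ['Hello', 'next']
```

      A real crawler would then fetch each discovered URL in turn (and, politely, honor robots.txt); the controversy discussed in the episode is about what happens to the scraped text afterward.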

    • Data collection and AI models: Transparency and control are key
      Companies must be transparent about their data practices to build trust and avoid backlash. Individuals should be aware of the potential risks and benefits of sharing their data for AI models.

      Data collection and usage by companies, particularly for AI models, can be a contentious issue, as highlighted by the earlier controversy surrounding LinkedIn data scraping. More recently, Zoom faced backlash over concerns that its terms of service allowed it to collect and use customer data from video calls to train its AI models without consent. After a public outcry, Zoom quickly clarified that it would only use such data with user consent. The incident serves as a reminder of the importance of transparency and control over personal data in the digital age. The case of ProseCraft, a website created by a computational linguist to answer literary questions using data analysis, illustrates the potential for using data and AI in creative and innovative ways, but it also raises its own questions about data collection and usage. Together, the Zoom and ProseCraft stories show why companies must be clear about their data practices to build trust and avoid backlash, and why individuals should weigh the risks and benefits of sharing their data.

    • Literary works used for statistical analysis spark controversy
      Authors discovered their copyrighted books were being used without consent on ProseCraft, sparking concerns over data privacy and intellectual property rights. The website was taken down and an apology issued, but the incident underscores the need for clearer guidelines around AI use of copyrighted creative works.

      The use of literary works for statistical analysis through a website called ProseCraft, run by a small company, sparked controversy this week when authors discovered their copyrighted books were being used without their permission. The website's creator, Benji Smith, aimed to provide users with insights into the linguistic patterns of famous works of literature. However, the reaction from authors was strong, with some expressing concerns about data privacy and potential misuse of their intellectual property. Smith eventually took down the website and issued an apology, acknowledging the need for author consent in the future. While some argue that the analysis provided by ProseCraft was harmless, others see it as a potential precursor to more invasive uses of literary data, particularly as the value of large language models continues to grow. The incident highlights the need for clearer guidelines around the use of copyrighted material in AI applications, particularly in the context of creative works.

    • Technology and Intellectual Property Rights: Balancing Creators' Desires and AI Development
      The tension between creators' desire to protect their work and the potential benefits of using large datasets for AI development is ongoing. Some argue that recent actions, such as website scraping bans and AI web crawler blockers, provide a false sense of security, while others believe ongoing access to new material is crucial for accurate language models.

      The ongoing debate around technology and intellectual property rights, as exemplified by the controversy over a website scraping literary works and OpenAI's new feature allowing website owners to block its web crawler, highlights the tension between creators' desire to protect their work and the potential benefits of using large datasets for AI development. While some argue that these actions come too late and provide a false sense of security, others believe that ongoing access to new material is crucial for creating language models that accurately reflect current language usage and trends. Ultimately, these issues require thoughtful consideration and balanced solutions that respect both creators' rights and the potential benefits of AI technology.
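
      The crawler-blocking feature mentioned above relies on the long-standing robots.txt convention: OpenAI announced that site owners can opt out of having their pages crawled by adding a GPTBot rule to their site's robots.txt file, along these lines:

```
User-agent: GPTBot
Disallow: /
```

      Note that this only stops compliant crawlers going forward; it does nothing about data that has already been collected, which is part of why some see it as too little, too late.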

    • Growing anti-scraping movement
      Users are becoming more skeptical about data collection by AI companies and demanding clearer value propositions and greater control over their personal data.

      The increasing awareness and concern around data collection and usage by AI companies has led to a growing anti-scraping movement. People are becoming more skeptical about the value exchange in this context, as they feel their data is being collected without clear benefits and potentially used to automate jobs or sell back to them for a fee. This shift in sentiment is causing websites and corporations to take action against scraping, making it a much higher stakes proposition for AI companies looking to gather large data sets. The previous generation of services, such as email or social media, had a clearer value proposition for users, but with AI tools, the benefits are less apparent, leading to a sense of unfairness and a desire for greater control over personal data.

    • AI Ethics and Public Backlash
      AI's access to vast amounts of data raises ethical concerns; potential plagiarism and a lack of fair compensation for content creators may lead to public backlash, affecting technologies like self-driving cars.

      The use of AI and its access to vast amounts of data raises ethical concerns and potential backlash from the public. The discussion highlights the issue of AI plagiarism and the lack of fair compensation for users whose content is being used without consent. Additionally, the self-serving attitude of companies that believe they have already downloaded the Internet and do not need ongoing access to new material was criticized. The potential impact of these backlashes, such as the one against self-driving cars, was also emphasized. Overall, the conversation underscores the importance of addressing these ethical concerns and engaging in transparent dialogue with the public to build trust and ensure the responsible use of AI technology.

    • Activists use unconventional methods to oppose self-driving cars in San Francisco
      Grassroots group Safe Street Rebel employs tactics like placing traffic cones on self-driving cars to voice opposition as these vehicles become more common in cities, while the companies view it as vandalism.

      In San Francisco, a grassroots activist group called Safe Street Rebel has been using unconventional methods, such as placing traffic cones on the hoods of self-driving cars, to express their opposition to the testing of these vehicles in the city. The group argues that self-driving cars obstruct buses, emergency vehicles, and regular traffic, and they believe that these vehicles are not safer than human drivers. The self-driving car companies view this activism as vandalism and a misguided movement. The stakes are high, as self-driving cars are expected to become more common in cities in the coming years. Safe Street Rebel is an extreme part of the opposition, similar to how Greenpeace is to fossil fuel companies. They employ various tactics, including direct actions and street theater, to raise awareness about their concerns. The group's organizer, Adam Eggleman, came up with the idea of using traffic cones to disable the self-driving cars, and it turns out that the cars stop when a cone is placed on their hoods. This is just one example of the civic battles that are likely to arise as self-driving cars become more prevalent.

    • Protesters against self-driving cars take disruptive actions
      Protesters argue that self-driving cars lack democratic accountability and contribute to more cars on the road, raising concerns over their reliability and safety. They prefer alternative solutions and acknowledge potential inconvenience or danger in their protests.

      The protest against self-driving cars in San Francisco was driven by feelings of powerlessness towards the regulatory agencies and the influx of robo-taxis, leading the protesters to take disruptive actions like disabling cars or making them stall in the middle of the road. They argue that these companies are unregulated and have no democratic accountability, and their actions are meant to raise awareness and start a conversation about their concerns, which include the unreliability of AVs and their contribution to increasing the number of cars on the road. The protesters believe that human-driven cars are unsafe and that AVs do not solve the problems they claim to, such as reducing car ownership or improving safety. They prefer alternative solutions like self-driving buses or vans that can carry multiple people, reducing the overall number of vehicles on the road. However, they acknowledge that their tactics may be inconvenient or even dangerous, but see it as an acceptable part of a protest.

    • Labor and Safety Concerns with Self-Driving Buses and Trains
      Self-driving buses and trains offer efficiency but raise labor concerns and safety risks, including job loss and new hazards for cyclists.

      While self-driving vehicles, such as buses or trains, could offer efficient transportation solutions, the labor implications and safety concerns cannot be ignored. The idea of a self-driving bus or train might seem appealing from a transit perspective, but the labor perspective raises valid concerns about exploitation and job loss. Additionally, while self-driving vehicles may reduce some accidents, they also introduce new risks, such as stopping randomly on the road, which can create dangerous situations for other road users, particularly cyclists. The list of AV failures, although contested by companies and proponents, highlights the need for continued scrutiny and improvement in self-driving technology. Ultimately, it's crucial to balance the benefits of self-driving vehicles with the potential risks and labor implications.

    • The Distraction of Autonomous Vehicles from Prioritizing Public Transit
      Autonomous vehicles create unnecessary congestion, hinder emergency response, and contribute to plastic pollution. Transit agencies struggle due to competition with cars and underfunding. To improve transit, treat it as a public service and prioritize its allocation of streets.

      The proliferation of autonomous vehicles (AVs) on our roads may not be the solution to our transportation woes, but rather a distraction from the need to prioritize public transit and reduce our reliance on cars altogether. The speaker argues that AVs create unnecessary congestion, hinder emergency response, and contribute to plastic pollution. Moreover, the speaker believes that transit agencies are struggling due to competition with cars and underfunding. To improve transit, the speaker suggests treating it as a public service rather than a product, and prioritizing its allocation of streets. The speaker's vision is a world where cars, including AVs, are banned in cities, leading to quieter, cleaner, safer, and more convenient urban environments. Despite the challenges, the speaker remains optimistic, drawing inspiration from past opposition to the invention of automobiles and recognizing the potential benefits of AVs, but emphasizing the importance of prioritizing people and public transit over cars.

    • Impacts of Autonomous Vehicles Beyond Technology
      Autonomous vehicles (AVs) raise concerns beyond technology, including labor exploitation, privacy, and second-order effects, with critics arguing that opponents are not anti-technology but rather acknowledging these issues.

      The introduction of autonomous vehicles (AVs) raises significant concerns beyond just their technological capabilities. The history of cars shows that their expansion has had negative impacts on communities, and the arrival of AVs is leading to a shift in power dynamics, with the burden of adjustment falling on those outside of the vehicles. The constant recording by AVs, which is technically legal but raises privacy concerns, adds to this unease. Critics labeling opponents as Luddites misunderstand the issue, as it's not about being anti-technology but rather acknowledging the potential labor exploitation, privacy concerns, and second-order effects. AVs may not necessarily improve over time, and their impact on reducing cars on the road is not guaranteed. Specific examples include the potential for increased accidents due to AVs' inability to handle certain situations, such as encountering cones or pedestrians. The shift to AVs may create more problems than it solves.

    • AVs in Urban Environments: Challenges and Opportunities
      AVs present unique challenges in urban environments, but also offer potential benefits in certain contexts. Debate surrounds their safety and impact on reducing car usage, with a focus on finding ways to mitigate risks and integrate them into a larger vision for safer, more sustainable transportation.

      While Autonomous Vehicles (AVs) have the potential to significantly reduce harm on the roads, they also present unique challenges and dangers, particularly in urban environments. AVs lack human intuition and can create new hazards, such as stopping in crosswalks or obstructing emergency scenes. However, there are no documented instances of a fatal or serious collision caused by an AV in San Francisco, where they have been running for years. The debate over AVs lies not only in their safety but also in their impact on efforts to reduce car usage and prioritize other modes of transportation. While AVs require a car-centric built environment, initiatives like protected bike lanes and transit prioritization can make streets safer for all users. The argument against AVs is not solely based on opposition to the technology, but rather on the belief that they distract from more effective solutions. AVs have value in certain contexts, such as long-haul trucking, but their implementation in busy urban cores remains a challenge. The conversation around AVs should focus on finding ways to mitigate their risks and integrate them into a larger vision for safer, more sustainable transportation.

    • Reddit's data scraping controversy and user backlash
      Reddit's heavy-handed response to a data scraping controversy led to a disruptive period and the temporary shutdown of smaller subreddits, highlighting the power struggle between tech companies and their users.

      During a major controversy over data scraping, Reddit faced a significant backlash from its user base. In response, the company threatened to remove moderators and installed new ones, leading to a disruptive period on the platform. The controversy reached a peak when users collaboratively created a digital art mural with the message "fuck Spez," referring to Reddit CEO Steve Huffman. Despite the protests, Reddit did not make significant concessions and instead replaced the problematic moderators. This heavy-handed approach led to the temporary shutdown of around 1800 smaller subreddits, but the furor seems to be subsiding. The incident highlights the power struggle between tech companies and their users, and the potential consequences when companies take a hardline stance against their user base.

    • Reddit's power struggle with local moderators and centralized control
      Decentralized platforms like Lemmy provide user autonomy and control, contrasting with Reddit's centralized authority and external interference.

      While Reddit users had significant power through local moderators, the platform ultimately retained control and made unprecedented moves to regain authority. This highlights the importance of decentralization for some Internet users, who value ownership and control over their online spaces. Decentralized alternatives, like Lemmy, offer an opportunity for users to build their own communities without fear of external interference. The ongoing tension between centralized and decentralized platforms underscores the importance of user autonomy and governance in the digital world.

    • Decentralization vs. Better Product
      Decentralization offers censorship resistance and accountability, but the appeal may not resonate with average users. A better product is likely the main driver of success.

      While the idea of decentralized social media platforms with ownership and portability rights sounds appealing, it may not be the primary reason people choose to use one platform over another. The proponents of decentralization argue that it offers censorship resistance and accountability to no centralized entity. However, the appeal of decentralization may not resonate with the average user, and a better product is likely to be the main driver of success. Decentralization can enhance a product, as seen in Mastodon's interoperability with other services, but it also comes with challenges, such as the potential for chaos and the lack of a centralized force to manage content moderation. Ultimately, a balance between decentralization and centralization may be the most effective approach for social media platforms.

    • Decentralization comes with challenges
      Decentralized platforms like Reddit and Wikipedia have benefits but also face challenges like maintaining order and reliability. Meanwhile, LK-99, a supposed room-temperature superconductor, has underscored the importance of scientific rigor in the face of false hopes.

      While decentralization may have its benefits, such as community control and potential financial independence, it also comes with its own set of challenges, like maintaining order and ensuring reliability. The Reddit rebellion serves as an example of this, as the platform's centralized structure allowed for a resolution to the conflict between the company and the users, despite the ongoing debates about the merits of decentralization. Wikipedia, another decentralized platform, has been successful in many ways, but it also faces internal conflicts and dramas. Ultimately, the balance of power on the internet is a topic of ongoing debate, with some advocating for more decentralization and others for a more regulated approach. As for LK-99, the alleged room-temperature superconductor, it appears that initial excitement was premature, as several labs have been unable to replicate the findings. While there is evidence that LK-99 may be ferromagnetic, this does not make it a superconductor, and the betting markets now predict that it is not one. The hype around LK-99 serves as a reminder of the importance of scientific rigor and the potential for false hopes in the realm of scientific discoveries.

    • The complexities of training advanced AI models
      Training advanced AI models requires more training parameters and additional GPUs, and involves a collaborative team effort.

      This episode of Hard Fork also touched on the complexities and requirements involved in training advanced AI models. The hosts discussed the importance of having more training parameters and the need for additional GPUs to handle such tasks. They also acknowledged the team behind the production of the podcast, including Davis Land and Rachel Cohn as producers, Jen Poyant as editor, Caitlin Love as fact checker, Sophia Lanman as engineer, and Dan Powell, Elisheba Ittoop, Marion Lozano, and Diane Wong as original music contributors. Special thanks were given to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. The hosts also mentioned their email address, hardfork@nytimes.com, and assured listeners that they do not use emails to train their AI models, though one host joked that he was training a language model as they spoke. Overall, the episode highlighted the intricacies of AI development and the collaborative effort required to produce a podcast.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT


    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming


    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter


    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT


    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.




    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out


    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     


     


    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check


    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic



    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs


    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.



    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT


    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google IO, including the launch of A.I. overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.



    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends


    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.



    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School


    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.



    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer


    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab



    Related Episodes

    Episode 125 | Q1 2023 Oil and Gas Markets Outlook, A Conversation with Dean Foreman, Chief Economist, American Petroleum Institute (API)


    Dean Foreman, Chief Economist, American Petroleum Institute (API) joined Grayson Brulte on The Road To Autonomy Podcast to discuss his 2023 Q1 outlook for the oil and gas markets.

    The conversation begins with Dean sharing his thoughts and insights into the current state of the oil and gas markets.

    As the economy goes, that is what we are going to look for in oil and gas markets. – Dean Foreman

    The demand for oil has been strong. U.S. petroleum demand in December 2022 was 20.5 million barrels per day. For 2022, oil demand grew by 2.2%. Going back to 2000, 2022 was the fourth-highest year for growth.

    It says that on the heels of the pandemic, $20 trillion dollars worth of economic stimulus has continued to have a pretty positive effect for the economy, despite Fed Funds rate hikes, despite concerns about a recession, despite individual sectors that have been under pressure. – Dean Foreman

    The trend of demand outpacing supply has continued for over a year now with inventories that are at historic lows. Oil demand is growing because of the rebound in travel and the increase in cargo shipping by air. 

    During the last six months of 2022, 1.5 million barrels per day of new oil (1.5% of the global market) came online globally from government reserves. While this caused some downward price movement, it also had long-term negative consequences, as oil companies were discouraged from starting new drilling and infrastructure projects. This could lead to a global imbalance, as there will not be enough infrastructure to meet demand.

    The official estimates for demand growth this year range between basically 1 million barrels per day or about 1% of the market, up to 1.7 million barrels per day. – Dean Foreman

    To meet this demand, investment must be made and drilling must expand around the world to ensure that new supply can come to market. Adding context, the U.S. Energy Information Administration predicts that global oil demand will reach a record high of 101 million barrels per day in 2023.

    The U.S. Strategic Petroleum Reserve ended 2022 at its lowest point since 1983, even though U.S. oil consumption in 2022 was more than 33% higher than in 1983. There is little margin for error with solid oil demand and a dwindling Strategic Petroleum Reserve. Factor in geopolitics and weather, and the situation becomes even more unpredictable.

    In 2022, the U.S. dollar rose 6.23%. So far this year (2023), the dollar has begun to weaken. With Bloomberg projecting a 3% decline this year, oil is beginning to trade in local currencies.

    For Q1 2023, the trends to watch in the oil and gas markets are the Russia/Ukraine conflict, systemic risks to the global food supply, and emerging-markets debt.

    Wrapping up the conversation, Dean discusses global economics and the impact it has on household budgets.


    Recorded on Tuesday, January 17, 2023

    --------

    About The Road to Autonomy

    The Road to Autonomy® is a leading source of data, insight and commentary on autonomous vehicles/trucks and the emerging autonomy economy™. The company has two businesses: The Road to Autonomy Indices, with Standard and Poor’s Dow Jones Indices as the custom calculation agent; Media, which includes The Road to Autonomy podcast and This Week in The Autonomy Economy newsletter.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Episode 8 | Culture of Safety and Innovation, A Conversation with Chuck Price, TuSimple

    Chuck Price, Chief Product Officer, TuSimple joins Grayson Brulte on The Road To Autonomy Podcast to discuss TuSimple's culture of safety and innovation.

    In this episode, Grayson and Chuck start by discussing the economics of applying autonomy to fleets of trucks. Grayson asks Chuck if TuSimple ever considered creating a self-driving car.

    Chuck discusses why TuSimple's founding team focused solely on trucking from day one: they saw a difference in the economics of self-driving trucks.

    We did see a difference. We saw that there were specific economic pain points in trucking. Robotaxis were solving a problem that didn't appear to exist.

    It was a fantasy, it was science fiction. It was a future where cities did not have to have individually owned cars. Where parking issues would be resolved. This is a grand vision without clear economic drivers. - Chuck Price, Chief Product Officer, TuSimple

    The conversation then veers into the universal driver debate and the great pivot to self-driving trucks from self-driving cars. Chuck shared his open and honest opinion on the universal driver.

    I do not believe there is such a thing as a universal driver. It's a marketing term. - Chuck Price, Chief Product Officer, TuSimple

    Wrapping up the conversation around the economics of self-driving trucks and why the universal driver is not the correct approach, the conversation shifts to TuSimple's culture of safety and innovation.

    TuSimple has a corporate culture of safety which they call "SafeGuard". SafeGuard applies to every single employee in the company, no matter their job function or title. From the individuals working on the trucks to the engineers writing the code to the executives leading corporate strategy, each and every employee is measured on their contribution to safety.

    What Did You Do To Contribute to Safety? - Chuck Price, Chief Product Officer, TuSimple

    Safety is built into every aspect of what the company does, from the office to the depots to the on-road deployments. Drivers and safety engineers (Left and Right Seaters) go through six months of formal training before they are even able to touch the autonomy in the truck. Each and every safety driver goes through a drug test prior to being allowed in the vehicle.

    TuSimple treats its drivers like Blue Angels, requiring them to operate at the highest ability at all times. When drivers and safety engineers leave the depot, they are monitored in real time with in-cabin monitoring and drive cams to ensure the highest level of safety.

    The culture of safety and innovation is attracting partners such as UPS, Penske, U.S. Xpress, and McLane Company Inc. to work with TuSimple. As TuSimple scales, the company is working with Navistar to develop SAE Level 4 self-driving trucks at the factory which are safety certified.

    Rounding out the conversation, Grayson and Chuck talk about the economics of self-driving trucks and how TuSimple's self-driving trucks can show an ROI within the first 24 months after purchase.


    Follow The Road To Autonomy on Apple Podcasts

    Follow The Road To Autonomy on LinkedIn

    Follow The Road To Autonomy on Twitter


    Recorded on Tuesday, September 8, 2020


    Episode 180 | The Rise and Fall of Digital Freight Brokerages and the Growth of Autonomous Trucking, A Conversation with Timothy Dooner, WHAT THE TRUCK?!?

    Timothy Dooner, Host, WHAT THE TRUCK?!?, joined Grayson Brulte on The Road to Autonomy podcast to discuss the rise and fall of digital freight brokerages and the growth of autonomous trucking.

    The conversation begins with Dooner discussing his outlook for the freight market.

    There is 8.1% less brokerages than there were a year ago at the start of this year. But there’s still 17% more brokerages than we started at the pandemic. Everyone’s been waiting for not just volumes to go up, but the way freight works, it’s volume plus capacity. They’ve been waiting for the capacity to go down. Volumes are looking a little bit better. Things are receding and this year I’m hearing a lot more optimism. – Timothy Dooner

    The optimism is shared by Walmart, as rumors circulate that the company is looking to develop a digital freight brokerage. Since Walmart operates its own fleet, it has a unique data set that could help it leapfrog the competition if and when it introduces a digital freight brokerage service.

    The freight market is currently turbulent, as the demand for freight and the capacity to haul it are not in sync. Then there is the California electric truck mandate, which will ultimately increase the cost of shipping freight, hurting both carriers and consumers. Could these mandates help accelerate the adoption of autonomous trucks, as they are cheaper to operate?

    It’s possible, yet as we are seeing in California, autonomous vehicle technology is not always welcome. In San Francisco, vandals set fire to a Waymo autonomous vehicle with a firework, burning it to the ground. If the regulatory environment in California eventually allows autonomous trucks to operate, will similar vandals try to damage autonomous trucks as well?

    Autonomous trucking is going to play a major role in the future of trucking and the global economy. As the technology develops, different business models are going to come to fruition, and one of those is the licensing model. Kodiak has the potential to license its SensorPods technology, creating a lucrative revenue stream as it develops its autonomous trucking platform. This is in addition to its growing defense business.

    Then there is Uber. Uber has investments in Aurora and Waabi, and has the Uber Freight division, yet it does not operate an autonomous trucking fleet. Grayson and Dooner go on to discuss Uber’s autonomous trucking investment strategy and who ultimately owns the asset.

    Wrapping up the conversation, Dooner shares his 2024 outlook for the trucking market. 


    Recorded on Wednesday, February 14, 2024


    Episode Chapters

    • 0:00 Introduction 
    • 1:34 Freight Market Outlook 
    • 7:31 Walmart’s Rumored Digital Freight Brokerage 
    • 10:42 Are Electric Truck Mandates Accelerating the Adoption of Autonomous Trucks 
    • 13:57 Vandals in San Francisco Set Fire to a Waymo Autonomous Vehicle 
    • 18:20 Commercializing Autonomous Trucking 
    • 25:32 The Business of Kodiak Robotics
    • 28:15 Autonomous Delivery Drones 
    • 31:55 Uber’s Autonomous Trucking Investment Strategy 
    • 39:18 Who Owns the Asset? 
    • 42:59 Tesla Cybertruck 
    • 43:52 Apple Vision Pro 
    • 51:08 2024 Trucking Outlook



    The Rise of Self-Driving Cars

    Neil deGrasse Tyson investigates the fast-rising world of self-driving cars with former VP of R&D at GM and Mobility Consultant for Google, Inc. Larry Burns, Wired magazine transportation editor Alex Davies, and comic co-host Chuck Nice.
    NOTE: StarTalk All-Access subscribers can watch or listen to this entire episode commercial-free. Find out more at https://www.startalkradio.net/startalk-all-access/