
    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    May 24, 2024

    Podcast Summary

    • AI voice controversy and cultural appropriation: Creating AI voices that mimic real people can lead to controversy and accusations of cultural appropriation, highlighting the need for ethical considerations in AI technology development.

      The use of a celebrity-like voice for AI technology can lead to controversy and accusations of cultural appropriation. This was recently highlighted when OpenAI's new ChatGPT voice assistant was compared to Scarlett Johansson's character in the movie "Her." Despite OpenAI's clarification that the voice was not an imitation, the comparison sparked discussions about the ethics of creating AI voices that mimic real people. The incident serves as a reminder of the potential implications and reactions when using realistic voices in AI technology. The hosts also touched on the benefits of taking a bath instead of using a float tank for relaxation, since a bath allows for more freedom and personal music choice.

    • AI voice similar to Scarlett Johansson leads to controversy: Use of a voice actor's likeness in AI technology without consent can lead to controversy and legal issues.

      The use of a voice actor's likeness without consent in AI technology can lead to controversy and legal issues. In the case of Scarlett Johansson and OpenAI, the voice of the AI assistant named Sky was found to be strikingly similar to Johansson's, leading to accusations of unauthorized use and to public confusion. Johansson released a statement expressing her shock, anger, and disbelief, saying that OpenAI had approached her to voice ChatGPT and that she had declined. OpenAI later clarified that it had cast the voice actor for Sky before reaching out to Johansson and had paused the use of Sky's voice out of respect for her. The incident highlights the need for transparency and clear guidelines around the use of likenesses in AI technology, and the potential legal and ethical implications of such use.

    • OpenAI's Unexpected Pursuit of Scarlett Johansson: OpenAI says it initially planned to record five voices for its ChatGPT model and only later sought Johansson's involvement. The subsequent marketing and connection to her likeness raise questions about OpenAI's policy against synthetic voices mimicking public figures.

      OpenAI's pursuit of Scarlett Johansson's voice for its ChatGPT model was not part of the initial plan but a later addition. According to OpenAI, the voice team set out to record five voices for the model and only afterward decided to pursue Johansson's involvement, with Sam Altman reaching out to her in September. A job posting for voice actors from May 2022 did not mention Johansson. OpenAI showed evidence of Sky's audition and a recording session, but the video clip was heavily pixelated and did not provide clear confirmation. While OpenAI maintains that any resemblance to Johansson was unintended, the subsequent marketing that played up the connection to her likeness raises questions. OpenAI has previously stated that it does not want its synthetic voices to mimic public figures, and the potential discovery of internal conversations about the similarity between Johansson's voice and the model could create legal exposure. Even granting the plausibility of OpenAI's explanation, the extensive promotion of the connection to Johansson's voice, set against the company's own policy, raises concerns.

    • AI-generated voices raise legal and public trust concerns: The use of AI-generated voices by entertainment companies could lead to legal disputes and damage public trust due to potential misuse and a lack of clear guidelines.

      The use of AI-generated voices, as seen in the case of Scarlett Johansson and ChatGPT, raises valid concerns about legality and public trust. The entertainment industry has a long history of impersonators being used without consent, leading to lawsuits over false endorsement; the cases of Tom Waits and Bette Midler serve as precedents. Johansson, after refusing to allow her voice to be used, found her likeness seemingly replicated by OpenAI for its AI model. The legality of this action is unclear, and Johansson has lawyered up, potentially leading to litigation. Beyond the legal implications, this incident could negatively impact public trust in OpenAI. The company, which is working on developing artificial general intelligence, has previously enjoyed a positive public image due to its innovative and useful AI applications. This incident could shift that perception, raising concerns about the potential misuse of AI technology and the need for clearer guidelines and regulations.

    • Tech companies like OpenAI face trust issues: Recent controversies, including leadership changes and restrictive employee agreements, have damaged OpenAI's reputation, highlighting the need for tech companies to prioritize transparency and ethical practices to regain public trust.

      The recent actions of tech companies, such as OpenAI, have contributed to a significant erosion of public trust. The perception that these companies prioritize profits over people, as evidenced by the controversy surrounding Scarlett Johansson and OpenAI's use of her voice, has tarnished their reputation. Additionally, the handling of leadership changes within OpenAI, such as the board coup against Sam Altman last year, has raised concerns about transparency and accountability. However, a more recent revelation about employee agreements at OpenAI may be an even bigger concern for the company's future. The departure of key employees and the subsequent scrutiny of their exit paperwork have shed light on potentially problematic clauses, adding to the growing unease around the company's practices. Overall, tech companies need to reframe their narratives and prioritize transparency and ethical business practices to regain public trust.

    • OpenAI's Controversial Exit Policy: OpenAI's unusual exit policy sparked criticism and concerns about trust, transparency, and talent loss in the AI community. Despite the controversy, OpenAI's business continues to thrive.

      OpenAI, a leading AI research lab, has been under scrutiny this week due to an unusual provision in their exit paperwork. This provision allows the company to claw back vested equity from employees who publicly disparage or disclose confidential information after leaving the company. This is highly unusual in the tech industry, and many in the AI community have criticized it as an attempt to silence former employees. The potential loss of trust and talent due to this policy could be detrimental to OpenAI, as they are constantly competing for the best researchers in the field. Despite the controversy, OpenAI's business is still thriving, with reports of partnerships with Apple and Microsoft and significant revenue growth. However, the vibe around OpenAI and the wider AI community has shifted, with some expressing concerns about the company's trustworthiness and transparency. This incident highlights the importance of clear communication and ethical practices, especially in a talent-heavy and rapidly evolving industry like AI.

    • AI and Ethics: Companies Must Seek Consent. The OpenAI voice controversy highlights the importance of seeking public consent in the development and deployment of AI technologies. Neuralink's brain-computer interface raises ethical concerns and the need for careful consideration in its implementation.

      While companies like Uber and Facebook were able to ask for forgiveness instead of permission in the past, the emerging field of AI may not afford the same luxury. The OpenAI voice controversy is a reminder of the potential consequences of pushing technological boundaries without proper consent. Public perception of AI is shifting, and companies must be cautious in their decision-making. Another intriguing development is the brain-computer interface (BCI) being pioneered by Elon Musk's company Neuralink. While noninvasive BCIs have been around for some time, Neuralink's implant goes a step further by directly connecting the brain to a computer: threads studded with electrodes penetrate the brain, and the device translates electrical activity into commands that control external devices. Although BCIs have mostly been explored for medical purposes, some in Silicon Valley envision them as the next step in computing. As we go deeper into this technology, it's crucial to consider the ethical implications and potential risks, especially with invasive procedures.

    • Experimental Brain-Computer Interface Helps Quadriplegic Man Regain Control: Neuralink's implant has helped a quadriplegic man move a cursor using only his thoughts, potentially revolutionizing life for disabled individuals and advancing neurotechnology.

      Neuralink's brain-computer interface is still experimental but has shown promising results in helping a quadriplegic man named Noland Arbaugh regain some control over his life. Noland, who suffered a severe spinal cord injury eight years ago, volunteered to be the first human patient to receive the Neuralink implant. The device, about the size of a coin, with threads and electrodes connecting to his brain, enables him to move a cursor on a computer using only his thoughts. This technology, still in its infancy and facing numerous challenges, has the potential to revolutionize the lives of people with disabilities and could pave the way for further advances in neurotechnology. Despite the excitement surrounding it, it's important to remember that this is still an unproven technology whose long-term effects and potential risks are not yet fully understood.

    • A man's decision to undergo a risky brain implant procedure: Motivated by personal growth and a desire to contribute, a man made the difficult choice to undergo a risky brain implant procedure, despite concerns from loved ones, for the potential to regain lost abilities and advance the technology.

      The decision to undergo a groundbreaking and risky brain implant procedure, such as the Neuralink, requires deep introspection and consideration. The individual involved, a quadriplegic man, weighed the potential risks and rewards, his personal experiences, and the impact on his loved ones. He ultimately decided to go through with it due to his desire to contribute to the advancement of technology and improve his own quality of life, despite the unknowns. His conversations with his parents were difficult, as they were understandably concerned about the potential risks to his mental and physical wellbeing. The man's determination to move forward was fueled by his desire to regain the ability to read and play games, experiences he had missed out on due to his disability. Despite the nerves, the night before the surgery he was filled with excitement and made light of the situation with jokes about being a cyborg.

    • Neuralink surgery: Quicker and less painful than anticipated. The patient underwent a two-hour Neuralink surgery instead of the anticipated four to six hours, with minimal pain and a smooth recovery process. After the procedure, the implant was connected to a tablet and charged using a coil, allowing the patient to control a cursor on the screen with their thoughts.

      The Neuralink surgery experience was much easier than expected, with a quick procedure and a surprisingly smooth recovery process. The patient was scheduled for a 4-6 hour surgery, but it only took under 2 hours. There were no complications, and pain was minimal. After the surgery, the implant was connected to a tablet via Bluetooth and charged using a coil, similar to charging a phone. The patient was able to see neurons firing in real-time on the screen, and after a week or two of practice, was able to control a cursor on the screen just by thinking about it. The initial attempts at cursor control required focused thought, but as the algorithm learned the patient's intentions, it became more intuitive. Overall, the patient was impressed by the ease and effectiveness of the Neuralink technology.
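
      How "just thinking about it" becomes cursor motion comes down to a decoder: software trained to map patterns of neural activity to intended movement. As a purely illustrative sketch of that idea (not Neuralink's actual method), the Python below fits a linear decoder from synthetic spike counts to 2-D cursor velocity with least squares; every number and name in it is invented.

```python
# Purely illustrative linear BCI decoder: synthetic spike counts in,
# 2-D cursor velocity out. Real decoders are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_bins = 64, 5000   # electrode channels, calibration time bins

# Synthetic calibration data: an unknown mapping from neural activity to
# intended velocity, observed through noisy spike counts.
true_map = rng.normal(size=(n_channels, 2))
spikes = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
velocity = spikes @ true_map + rng.normal(scale=2.0, size=(n_bins, 2))

# "Practice" phase: least-squares fit of a decoding matrix, the simplest
# analogue of the algorithm learning the patient's intentions.
decoder, *_ = np.linalg.lstsq(spikes, velocity, rcond=None)

# "Online" phase: each new bin of spike counts becomes a velocity command.
new_bin = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
vx, vy = (new_bin @ decoder)[0]
print(f"decoded cursor velocity: vx={vx:.2f}, vy={vy:.2f}")
```

      The practice period described above corresponds to collecting calibration data like this and refitting the decoder as the system learns the user's intentions.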

    • Neuralink user shares his experience with improved control and enjoyment of games: Neuralink allows users to control computers intuitively, enhancing leisure activities like gaming, and is a significant improvement over other assistive technologies for those with mobility issues.

      The Neuralink technology, which allows users to control a computer cursor with their brain, is becoming increasingly intuitive and natural to use. The user in this discussion, who has had the implant for a few months, shared that he's not yet as fast as other users but believes that with more practice and tweaking on the software side, he will improve. He also mentioned that he was able to play video games like Civilization VI and Mario Kart after getting the implant, which shows the technology's potential for enhancing leisure activities. Compared to other assistive technologies like eye trackers, Neuralink is a significant improvement for users with mobility issues, as it doesn't require users to stay centered on the screen and is less affected by body spasms. Overall, Neuralink offers a more seamless and effective way to control computers, allowing users to do the things they enjoy and improving their lives.

    • Neuralink user experiences thread retraction issue: Engineers traced the thread retraction to the human brain moving more than anticipated and solved the problem with a software update.

      The Neuralink device connects to the brain through fine threads, and those threads can retract, potentially affecting the user's ability to use the device. Noland, the first Neuralink patient, experienced this issue about three weeks after surgery, losing control over the cursor. About 85 percent of the threads had retracted, and brain scans couldn't detect them, so engineers instead analyzed the electrodes' signals to determine which threads were still active. The cause turned out to be that the human brain moves more than anticipated: about 3 millimeters rather than the expected 1 millimeter. Fixing the issue required a software solution rather than another surgery: engineers changed the way they recorded neuron spikes and found a more effective method. Noland's performance with the device is now better than before, and he continues to improve. The potential future applications of BCIs like Neuralink extend beyond assisting people with disabilities, with the possibility of mainstream use for anyone. Noland believes in this future, citing the technology's safety and its endless possibilities, including curing paralysis, motor diseases, and even blindness.
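
      The summary only says engineers "changed the way they recorded neuron spikes"; public reporting around the fix described leaning on aggregate, population-level signals instead of counting spikes on individual threads. As a hedged illustration of that contrast (not Neuralink's actual code), the sketch below compares per-channel threshold spike counting with a single aggregate spike-band-power measure on synthetic voltage traces; the sample rate, thresholds, and waveforms are all invented.

```python
# Two ways to turn raw electrode voltage into a control signal, sketched
# with synthetic data. Per-channel spike detection fails if a thread moves
# away from its neuron; an aggregate power measure degrades more gracefully.
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                              # sample rate in Hz (assumed)
n_channels = 8

# Synthetic traces: background noise plus occasional spike-like deflections.
traces = rng.normal(scale=10.0, size=(n_channels, fs))   # microvolts
for ch in range(n_channels):
    spike_starts = rng.choice(fs - 20, size=50, replace=False)
    for s in spike_starts:
        traces[ch, s:s + 20] -= 60.0     # crude negative spike waveform

# Method 1: threshold crossings per channel (classic single-unit counting).
threshold = -45.0
below = (traces < threshold).astype(int)
spike_counts = (np.diff(below, axis=1) == 1).sum(axis=1)

# Method 2: aggregate spike-band power across the whole array. A simple
# first-difference acts as a crude high-pass filter before squaring.
band = np.diff(traces, axis=1)
band_power = np.mean(band ** 2)          # one robust population-level number

print("per-channel spike counts:", spike_counts)
print(f"aggregate spike-band power: {band_power:.1f} uV^2")
```

      The point of the aggregate measure is robustness: it can still produce a usable control signal even when many individual threads stop picking up clean spikes.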

    • A man's emotional journey with Neuralink: Neuralink has the potential to revolutionize the lives of individuals with disabilities, enabling simple pleasures and promising exponential growth for future applications.

      The development of brain-computer interfaces (BCIs) like Neuralink's holds immense potential to revolutionize the lives of individuals with disabilities and push the boundaries of human capabilities. Noland, a recipient of a Neuralink implant, shares his emotional journey of regaining independence and the impact it has had on his life. He expresses gratitude for the simple pleasures, such as being able to read a book or have a drink of water in the middle of the night, that many take for granted. The future of BCIs promises exponential growth, with the potential to help blind people see or give paralyzed individuals the ability to move their bodies. Noland's conversations with Elon Musk have been inspiring, with a shared vision of using this technology to make a positive impact on humanity. The promises made by visionaries like Musk, while ambitious, have the potential to transform lives and open up new possibilities for individuals with disabilities.

    • Individuals' Excitement for Emerging Technologies like Neuralink: People are eager to try new technologies, even if they're not fully realized, highlighting the potential impact and excitement surrounding AI and brain-computer interfaces.

      Despite skepticism from health experts, individuals like Noland remain excited about the potential of emerging technologies like Neuralink. Noland's willingness to volunteer for the procedure underscores the hope and determination to push boundaries, even if the technology is not yet fully realized. Meanwhile, at Microsoft's Build conference, the company showcased new hardware focused on AI integration. Microsoft's significant investment in AI, including its partnership with OpenAI, positions it as a major player in the AI industry. While often overshadowed by more consumer-focused tech companies, Microsoft's impact on the working world through its PC dominance makes its AI developments worth watching closely.

    • Microsoft Introduces New AI-Powered PCs with Neural Processing Units: Microsoft unveiled new Copilot Plus PCs, featuring NPUs for local AI processing and pre-installed AI models, with the potential for faster AI applications. Microsoft aims to make these PCs a developer platform.

      Microsoft announced new Copilot Plus PCs, Windows machines designed to run AI locally using neural processing units (NPUs). These PCs come with multiple AI models pre-installed and have the potential to significantly speed up AI applications. Microsoft aims to make them a developer platform by providing access to local data and faster processing. While there are echoes of past AI announcements, the difference this time is Microsoft's vast market share and its partnerships with major laptop manufacturers, which make this a more promising development.
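
      The episode doesn't go into developer APIs, but one concrete way code can target an NPU today is through ONNX Runtime's execution providers; on Qualcomm-based Copilot Plus machines, the QNN provider routes inference to the NPU. Below is a minimal sketch assuming a placeholder model file (model.onnx) and an onnxruntime build with QNN support; treat the provider choice as an assumption rather than Microsoft's prescribed path.

```python
# Minimal sketch: run a local ONNX model on an NPU via ONNX Runtime,
# falling back to the CPU when no NPU backend is present. The model file
# name and input handling are placeholders, not Microsoft's actual stack.
import numpy as np
import onnxruntime as ort

# Prefer the Qualcomm NPU backend (QNN) when this onnxruntime build has it.
preferred = ["QNNExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)

# Build a dummy input matching the model's first declared input,
# substituting 1 for any dynamic dimensions.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {inp.name: x})
print("providers:", session.get_providers(), "| output shape:", outputs[0].shape)
```

      If the QNN provider isn't present, the same session simply runs on the CPU, which is the kind of graceful fallback a developer platform needs.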

    • Microsoft's new local AI features: Affordable solutions for real-time assistance. Microsoft introduces local AI features, like running language models locally, and a new Recall feature for real-time assistance in applications where latency matters. The Recall feature, which builds a history of digital encounters using screenshots and generative AI, can act as a helpful digital version of photographic memory.

      Microsoft's new local AI features, including running language models locally and the introduction of a new Recall feature, aim to offer "good enough" solutions that are more affordable for developers and can provide real-time assistance in applications where latency matters, such as gaming. The Recall feature, which builds a history of everything you've looked at on your laptop by taking constant screenshots and allowing you to search through them using generative AI, can be seen as a digital version of photographic memory. The target audience for this feature includes individuals who frequently search for information or shop online and want to easily access their past digital encounters. Microsoft provided examples of personal uses, such as a woman searching for a dress for her grandma or someone looking for jeans, to demonstrate the potential usefulness of the Recall feature. While these features may not be as advanced as cloud-based solutions, they offer a more cost-effective alternative for developers and can provide valuable assistance in specific use cases.
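
      To make the Recall idea concrete, here is a toy sketch of a screenshot-history pipeline: capture the screen, OCR it, and index the text for full-text search. This illustrates the general concept only; it assumes the Pillow and pytesseract packages plus a local Tesseract install, and Microsoft's actual Recall reportedly relies on on-device AI models rather than a pipeline like this.

```python
# Toy Recall-style pipeline: screenshot -> OCR -> local searchable index.
# Illustrative only; Microsoft's Recall uses its own on-device models.
# Assumes: pip install pillow pytesseract, plus a Tesseract OCR install.
import sqlite3
import time

import pytesseract
from PIL import ImageGrab

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(ts, text)")

def capture_once() -> None:
    """Grab the screen, OCR it, and index the extracted text."""
    image = ImageGrab.grab()                    # full-screen screenshot
    text = pytesseract.image_to_string(image)   # OCR to plain text
    db.execute("INSERT INTO shots VALUES (?, ?)",
               (time.strftime("%Y-%m-%d %H:%M:%S"), text))
    db.commit()

def search(query: str):
    """Full-text search over everything captured so far."""
    return db.execute(
        "SELECT ts, snippet(shots, 1, '[', ']', '...', 8) "
        "FROM shots WHERE shots MATCH ?", (query,)).fetchall()

if __name__ == "__main__":
    capture_once()          # a real tool would run this every few seconds
    for ts, hit in search("jeans"):
        print(ts, hit)
```

      Even this toy version makes the privacy stakes obvious: the index is a plain local file containing text from everything that appeared on screen, which is exactly the concern raised below.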

    • Microsoft's Recall feature raises privacy concerns with constant screenshots: The new AI-powered feature takes screenshots of users' activities to provide search and suggestions, but raises privacy concerns due to the potential for unwanted surveillance and misuse of data.

      Microsoft's new AI-powered Recall feature raises significant privacy concerns, especially for individuals and businesses dealing with sensitive information. The feature, which takes screenshots of users' activities and uses them to provide search and suggestions, could lead to unwanted surveillance and potential misuse of data. While Microsoft assures that the screenshots remain on the device and are not sent to the company, the ease with which employers or other authorized users could access them could lead to a dystopian workplace environment. The feature, aimed at both businesses and consumers, could normalize this level of surveillance, and users' limited awareness of the technology and its capabilities could further exacerbate these concerns. Overall, implementing this feature requires careful consideration and clear communication from Microsoft to address privacy concerns and build trust with its users.

    • Microsoft's new AI laptops challenge Apple's dominance: Microsoft's new AI-powered laptops offer improved performance and AI capabilities, but potentially intrusive ads may deter some users from switching from Apple's ecosystem.

      Microsoft's new AI-powered laptops, with their advanced chips and NPUs, are challenging Apple's dominance in the tech industry. These laptops, which offer improved performance and AI capabilities, are now in the same class as Apple's offerings. However, the competition between the two tech giants isn't just about hardware. Microsoft's approach to implementing AI and potential intrusive advertising on its devices may deter some users from making the switch from Apple's ecosystem. Microsoft's recent decision to test ads in the Windows 11 start menu adds to these concerns. While the new AI laptops show promise, Microsoft must be careful not to annoy users with excessive nudges and intrusive ads. The success of Microsoft's new laptops will depend on how well the company balances utility and user experience. The upcoming Apple developer conference is expected to bring more AI-focused announcements, making the competition between these tech giants even more intense. Ultimately, the decision for users to switch ecosystems will depend on their individual needs and preferences.

    • Discussing the concept of a search engine that returns 'dummy content': While the idea of a search engine that returns extensive information might seem helpful, it could lead to information overload and the challenge of effectively filtering and making sense of the data.

      While the idea of a search engine that returns an extensive amount of information in one go might seem appealing, it could easily overwhelm users. This was a topic of discussion on a recent episode of the Hard Fork podcast, where the team explored the concept of a search engine that returns "dummy content" as a solution to the problem of having too many tabs open. While the intention was to help users manage their online research more effectively, the hosts expressed doubts about its practicality, given the sheer volume of information it would provide, and highlighted the challenge of effectively filtering and making sense of that data. Overall, the discussion raised interesting questions about the role and design of search engines in an age of information overload.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guests:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War


    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter


    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.



    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic


    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.


    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.


    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.


    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.


    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab
