
    Reddit Revolts + MrBeast’s YouTube Empire + Peak Trust and Safety?

    June 16, 2023

    Podcast Summary

    • Hotel rooms get tech upgrades, but controversy arises for Reddit: Hotels modernize with USB-C charging ports, while Reddit faces backlash over API pricing changes, highlighting the double-edged sword of technological advancements

      Technological progress keeps evolving, even in unexpected places like hotel rooms. The speaker, a tech columnist, was surprised to find a USB-C charging port in his hotel room, a significant improvement over the outdated 30-pin connectors that were once standard. The upgrade is a small sign of the hospitality industry keeping pace with modern technology. But technological change can also spark controversy, as in the ongoing dispute between Reddit and its users over changes to the site's API pricing. Thousands of subreddits have shut down in protest, causing disruptions and raising concerns about the future of the open internet. It's important to recognize the benefits of technological progress while also addressing the challenges it brings.

    • Reddit's API fee sparks controversy over data privacy and app devs: Reddit's decision to charge for API access raises concerns over data exploitation, user privacy, and the impact on third-party app developers.

      Reddit's decision to charge for API access has sparked controversy over its impact on third-party app developers and on users' data privacy. Reddit's need to generate revenue ahead of its IPO is one driver; defending its data against exploitation by large language model companies is another. The change has prompted a backlash from users concerned that their data is being monetized without their consent. Twitter's similar move a decade ago provides context, but Reddit's execution has met with more resistance. Ultimately, the situation highlights the complex relationship between data ownership, monetization, and user privacy in the digital age.
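      To put the pricing dispute in perspective, here is a back-of-envelope calculation using figures from contemporaneous June 2023 press coverage rather than from this episode summary: Reddit's widely reported rate of roughly $0.24 per 1,000 API calls, and the roughly 7 billion monthly requests the developer of the third-party app Apollo said his app made.

```python
# Back-of-envelope estimate of what Reddit's reported API pricing
# (~$0.24 per 1,000 calls) would cost a third-party app operating
# at the scale Apollo's developer reported (~7 billion requests a
# month). Both figures come from June 2023 press coverage, not
# from this episode summary.

PRICE_PER_1000_CALLS = 0.24          # USD, reported rate
monthly_requests = 7_000_000_000     # Apollo's reported monthly volume

monthly_cost = monthly_requests / 1_000 * PRICE_PER_1000_CALLS
annual_cost = monthly_cost * 12      # roughly the ~$20M/year figure
                                     # Apollo's developer cited

print(f"~${monthly_cost:,.0f}/month, ~${annual_cost:,.0f}/year")
```

      The result, on the order of $20 million a year for a single popular client, is why third-party developers described the pricing as untenable.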

    • Reddit's Data Sale Sparks Controversy: Reddit's decision to sell user data to language model companies sparked controversy among users, some threatening to go dark in protest. Despite the backlash, Reddit shows no signs of reversing the decision, highlighting tension between platforms, users, and the AI industry over user data.

      Reddit's decision to sell its user data to large language model companies has sparked controversy among its user base. While Reddit initially positioned itself as protecting its platform from these tech giants, users felt betrayed as they were not being compensated for their data. The situation escalated with some subreddits threatening to go dark indefinitely in protest. Despite the backlash, Reddit has shown no signs of reversing its decision, leading to a standoff between the company and its users. This incident highlights the tension between social media platforms, their users, and the AI industry over the use of user data. It remains to be seen how other forums will approach this issue in the future.

    • The Future of the Open Internet and User-Generated Content: The open internet's future is uncertain as companies lock down their platforms and monetize user-generated content, but the value of this content is increasingly recognized.

      The current controversy surrounding Reddit's API policy change and the potential demise of third-party apps is indicative of a larger issue: the shrinking open internet. While some argue that the open internet is dying due to companies locking down their platforms and killing off third-party apps, others believe that the average internet user is not affected and that the open internet is merely in a state of transition. Regardless, it's clear that the data generated by users on social media platforms and websites is becoming increasingly valuable, and companies are starting to recognize this. Media organizations, in particular, are grappling with how to prevent language models from scraping their archives and ensure they are compensated for their valuable content. Ultimately, the future of the open internet remains uncertain, but one thing is clear: the value of user-generated content is increasingly recognized, and companies are taking steps to monetize it.

    • The future of AI data collection and Mr. Beast's influence: Companies might sponsor journalism for high-quality data, while Mr. Beast's generosity and elaborate videos resonate deeply with audiences, making him a cultural icon.

      The future of data collection for AI language models might involve sponsoring journalism organizations to produce fact-checked, reliable text, essentially reinventing journalism as a pipeline for high-quality training data. Meanwhile, Mr. Beast, who runs the second-biggest channel on YouTube after T-Series, has become a cultural icon and a source of fascination, especially among younger audiences. His popularity rests on elaborate, expensive, well-produced videos that often involve contests and giving away large sums of money. Mr. Beast, whose real name is Jimmy Donaldson, can be seen as the Willy Wonka of YouTube, appealing to the inner child in all of us with his seemingly limitless wealth and generosity. Despite his success, it's not immediately clear what it is about Mr. Beast that resonates so deeply with his audience, making him an intriguing figure to study in the ever-evolving world of online content creation.

    • From YouTube stardom to philanthropy: Mr. Beast's journey. Mr. Beast, a YouTube sensation, rose to fame by giving away large sums of money and creating feel-good videos, inspired by TV formats. Controversy arose from a misleading thumbnail, but his focus on philanthropy and unexpected twists continues to inspire.

      Mr. Beast, whose real name is Jimmy Donaldson, gained popularity on YouTube by giving away large sums of money and creating feel-good videos. He was inspired by the success of similar formats in television history and built on this concept, experimenting extensively to grow his audience. A pivotal moment was when he gave away $10,000 to a homeless man, which resonated with viewers and led him to focus more on philanthropy. Mr. Beast's unique approach to giving and creating entertaining content has made him a standout among other YouTubers. The controversy surrounding one of his videos, "1,000 Blind People See for the First Time," stemmed from a misleading thumbnail that did not accurately represent the content of the video. Despite the controversy, Mr. Beast's videos continue to generate excitement and inspire viewers with their unexpected twists and generous gestures.

    • Mr. Beast's Unique Approach to YouTube: Mr. Beast's success on YouTube is due to his unique approach, including attention-grabbing thumbnails, fast-paced intros, and emotional reactions, despite controversy over the large sums of money given away.

      Mr. Beast's success on YouTube is not an accident. He deliberately grabs viewers' attention with unique thumbnails and fast-paced intros. In the specific video discussed, he showcases the impact of curing blindness through surgery, giving each recipient a "Beast bonus" of $10,000. Mr. Beast's approach focuses on the extreme emotional reactions rather than lengthy backstories. This video was significant in understanding Mr. Beast's relationship with his audience as it generated controversy due to the large sums of money given away. Despite the controversy, Mr. Beast's unique approach to content creation and genuine desire to help people continue to resonate with his audience.

    • Mr. Beast's Expansion Beyond YouTube and Ethical Implications: Mr. Beast's philanthropy and media expansion raise ethical questions, as transparency about recipients' struggles and long-term outcomes is lacking, while he aims to master various algorithms to expand his brand.

      Mr. Beast, a popular YouTube personality, has expanded his reach beyond the platform by entering other media outlets and engaging in philanthropic activities. However, the ethical implications of his actions, such as paying for surgeries in exchange for audience growth, remain debatable. Jeremiah Howard's story, a recipient of Mr. Beast's charity, highlights the complexities of this situation. While Mr. Beast's kindness is appreciated, the lack of transparency about the struggles and long-term implications of the recipients' lives in his videos raises questions about exploitation. Mr. Beast's success extends beyond YouTube, as he explores other platform marketplaces, such as DoorDash, and aims to master their algorithms to expand his brand. Ultimately, Mr. Beast's impact on society and the ethical implications of his actions continue to be topics of discussion.

    • The divide between generations in YouTube content creation: Mr. Beast's philanthropic stunts showcase the evolving norms of YouTube content creation, where sincerity and cynicism blur, and the potential for positive impact remains.

      For the younger generation of YouTube creators like Mr. Beast, the line between sincerity and cynicism in content creation may not be as clear-cut as it seems to older generations. Mr. Beast, who has gained fame for his philanthropic stunts on YouTube, may genuinely want to help people while also recognizing the need for attention-grabbing thumbnails and controversial content to succeed on the platform. This divide between generations highlights the unique nature of YouTube as a medium and the evolving norms of content creation. Additionally, the success of Mr. Beast and creators like him provides a refreshing contrast to the negative aspects of YouTube, such as radicalization and offensive content, and serves as a reminder of the platform's potential for positive impact.

    • YouTube's Algorithm Shifts Towards Rewarding More Wholesome Content: YouTube's algorithm now prioritizes positive and entertaining content, as seen in the rise of creators like MrBeast, who gives away money and originated the "junklord" genre.

      YouTube's algorithm has shifted towards rewarding more wholesome and substantive content, as evidenced by the rise of creators like MrBeast. This change may be driven by a growing audience preference for niceness and higher production value content, but it's also likely that YouTube is actively promoting these types of videos. MrBeast, for instance, is not just giving away money, but also originated a new genre on YouTube called "junklord," which involves spending large sums on junk and showing it off. While there are imitators on both YouTube and TikTok, MrBeast's philanthropic aspect sets him apart. Overall, this shift towards more positive and entertaining content could be a response to audience fatigue with drama and edginess, or a deliberate strategy by YouTube to attract and retain viewers.

    • Empowering viewers to make a tangible impact: Mr. Beast's unique business model engages fans as active participants in philanthropic projects, challenging the traditional notion of the audience as a commodity for advertising.

      Mr. Beast's unique business model and transparency with his audience sets him apart from traditional media. His fans are not just viewers, but active participants in his philanthropic projects, as they contribute to the ad revenue that funds these initiatives. This relationship challenges the traditional notion of the audience as a commodity for advertising, and instead, empowers viewers to feel they are making a tangible impact through their engagement with his content. This trend towards audience involvement and creators using their platforms for social good may be a significant shift in the creator economy.

    • Understanding the Power of YouTube and Philanthropy: Adolescents feel empowered by contributing to Mr. Beast's philanthropy through watching his videos, and his generosity has proved authentic despite skepticism.

      The popularity of Mr. Beast's YouTube channel goes beyond the feel-good stories in his videos. The viewers, particularly adolescents, have a sophisticated understanding of the platform and feel empowered by their contribution to his philanthropic efforts, even if it's just by watching his videos. Despite concerns about potential exploitation or cynicism, Mr. Beast consistently stays on the ethical line and genuinely donates large sums of money to individuals and causes. While there may be skepticism and conspiracy theories surrounding his generosity, the evidence supports the authenticity of his actions. If you need $10,000 from Mr. Beast, the best advice is to hang out in Greenville, North Carolina, and be ready for an unexpected offer.

    • The tech industry's investment in platform trust and safety may be declining: Despite removals, controversial figures can still spread misinformation and amass large followings, raising concerns that trust and safety measures on tech platforms are in decline.

      We may have reached the peak of the tech industry's efforts to maintain trust and safety on its platforms. Kevin and Casey joked about starting a YouTube show, possibly focused on "old guys reacting to young people stuff." Then Kevin raised an alarm about a potential decline in trust and safety efforts, citing Robert F. Kennedy Jr.'s ability to run for president and continue spreading conspiracy theories, amassing a large following despite having been banned from Instagram. The industry's investment in trust and safety teams surged after 2016, but Kevin believes we may look back on this period as the moment these platforms paid the most attention to the issue. If attention and resources continue to decline, the consequences for the integrity and safety of online spaces could be significant.

    • Inconsistent enforcement of rules against misinformation on social media platforms: Social media platforms have been inconsistent in enforcing their rules against misinformation, allowing potentially harmful lies to spread during election seasons, increasing the risk of real-world harm.

      Social media platforms, including Meta (Facebook), Twitter, and YouTube, have been inconsistent in enforcing their rules against misinformation and lies, particularly regarding the 2020 election. This inconsistency raises concerns about harmful misinformation spreading and influencing public opinion, especially during election seasons. For instance, Robert F. Kennedy Jr.'s Instagram account was restored despite past violations, and Donald Trump was allowed back on Meta after being banned. Twitter and YouTube, meanwhile, have stopped enforcing rules against lying about the 2020 election. Platforms face a genuine trade-off: aggressive enforcement risks curtailing political speech, while lax enforcement raises the risk of real-world harm. As the 2024 election approaches, it is crucial to monitor the spread of misinformation on these platforms and weigh the potential consequences for democracy.

    • Factors influencing social media platforms' trust and safety policies: Political pressure, economic realities, and decreased employee leverage contribute to platforms' relaxed approach to trust and safety measures, including the issue of child sexual abuse materials.

      The recent shift in social media platforms' trust and safety policies could be attributed to a combination of factors. These include political pressure from regulatory bodies, economic realities, and a decrease in employee leverage. The platforms may be less inclined to invest heavily in trust and safety measures due to financial constraints and the current economic climate. Additionally, employees, who previously had significant influence on these decisions, now have less leverage due to job market changes and fear of losing their positions. Furthermore, the issue of child sexual abuse materials (CSAM) on these platforms has emerged as a serious concern, and the platforms' apparent relaxation of enforcement in this area is a cause for concern. The Stanford report on CSAM networks highlights the severity of this issue and underscores the need for robust content moderation policies. Overall, the current state of trust and safety on social media platforms is complex and multifaceted, with various factors at play.

    • The lack of investment in trust and safety teams by big tech companies: Despite concerns about harmful content online, some tech companies have dismantled their trust and safety teams. This raises questions about who ensures platform safety and integrity, and the risks of harmful content spreading from smaller platforms to larger ones still exist. Journalists and regulators must stay vigilant and hold companies accountable.

      The lack of investment in trust and safety teams by big tech companies has contributed to the proliferation of harmful content online. The dismantling of these teams at companies like Twitter has raised concerns about who is ensuring the safety and integrity of these platforms. Some argue that as social media becomes less centralized and more fragmented, the need for large content moderation teams may decrease. Yet the risk of harmful content spreading from smaller platforms to larger ones remains. It is therefore crucial for journalists and regulators to stay vigilant and hold these companies accountable for their trust and safety practices. The recent disinvestment may have produced a sense of "outrage fatigue" in the tech press, but the importance of addressing online harm cannot be overstated.

    • The urgency of content moderation and trust and safety on social media has decreased: Despite the importance of trust and safety on social media, some companies may prioritize revenue over these efforts, leading to a complex tension.

      The urgency around content moderation and trust and safety on social media platforms has decreased since the end of the Trump presidency. This is due in part to the shift in the pro-Trump media ecosystem, with some of its key figures now on alternative platforms. However, the connection between a platform's moderation policies and its advertising revenue cannot be ignored. Despite this, some executives at these companies may feel that their past efforts to combat misinformation and hate speech have not been effective, leading them to question the necessity of continued investment in these areas. This leaves us with a complex issue, as the need for trust and safety on social media remains crucial, but the motivations and priorities of these companies may not align with this goal.

    • The power of internal shame and positive legacy in regulating social media: Social media platforms' internal desire for a positive legacy and shame over misinformation and harmful content may be the most effective regulation. Platforms invest in addressing these issues, but the impact is uncertain. AI tools can help detect and prevent harmful content, but ongoing dialogue and innovation are needed.

      While social media platforms contribute to the spread of misinformation and harmful content, the most effective regulation might come from internal shame and the desire for a positive legacy. The platforms have made significant investments in addressing these issues, but the impact is uncertain. In five years, if trust and safety are further de-emphasized and AI tools are more prevalent, the platforms could become even more challenging to navigate for discerning what is true and false. However, these same AI tools can also be used by platforms to detect and prevent the dissemination of harmful content. Ultimately, it's a complex issue that requires ongoing dialogue and innovation.

    • Using AI for combating misinformation on social media: AI systems can help reduce misinformation on social media, but ethical concerns and legislation are necessary to ensure trust and safety.

      Technology, specifically AI systems, can play a significant role in combating misinformation and content moderation on social media platforms. However, there are concerns about the ethics and implications of relying on AI to police content, especially when it comes to sensitive and potentially harmful material. Legislation, such as the Platform Accountability and Transparency Act, could potentially push platforms to prioritize trust and safety by requiring them to be more transparent about their content and measurements. Additionally, platform design choices also have a significant impact on the spread of misinformation. For instance, Facebook's decision to de-emphasize political news in the newsfeed has helped reduce the spread of political misinformation. However, the introduction of new social networks for discussing news and politics, like the one Matt Hackett recently announced, could potentially introduce new challenges. Ultimately, a multi-faceted approach, including both technological solutions and regulatory measures, is necessary to effectively address the issue of misinformation on social media.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT


    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guests:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming


    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter


    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT


    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.




    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out


    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     


     


    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check


    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic



    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs


    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.



    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT


    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google IO, including the launch of A.I. overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.



    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends


    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.



    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School


    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.



    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer


    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab


