
    The People vs. Meta + Marques Brownlee on YouTube and Future Tech + DALL-E 3 Arrives

    October 27, 2023

    Podcast Summary

    • Meta Sued by Attorneys General Over Alleged Harm to Teens: Meta, the parent company of Facebook, is being sued by several attorneys general for allegedly keeping kids engaged with addictive products, despite internal knowledge of risks, and failing to disclose this information to the public.

      Meta, the parent company of Facebook, is facing a major lawsuit led by several state attorneys general alleging that its social media platforms are addictive and harmful to teens. The suit, which is being compared to those against Big Tobacco and Big Pharma, claims that Meta ran a long-running scheme to keep kids engaged with its products despite internal knowledge of the risks. It comes as part of a wider trend of state attempts to regulate tech companies, particularly when it comes to children and teenagers. The complaint alleges that Meta's products are addictive, that they pose health risks, and that the company failed to share its internal knowledge of those risks with the public. This is Meta's first major legal challenge since the company changed its name from Facebook, and its outcome could have significant implications for the tech industry as a whole.

    • Lawsuit alleges Facebook's addictive features harm teenagers: The lawsuit accuses Facebook of using features like counts, alerts, rewards, and infinite scroll to trigger dopamine responses and potentially harm teenagers. However, the lack of regulation in app development may make it challenging for states to hold the company accountable.

      The allegations in the lawsuit against Meta (formerly Facebook) suggest that the company's use of addictive features, such as counts, persistent alerts, variable rewards, filters, disappearing stories, infinite scroll, and getting rid of chronological feeds, are designed to produce dopamine responses and are particularly harmful to teenagers. However, the lack of a regulatory framework for app development in the US may make it difficult for the states to successfully argue that Meta should be held liable for having these features. Additionally, many popular social media apps use similar features, so if Meta is targeted, other apps may also face scrutiny. Social networks are constantly evolving and adding new features to keep users engaged, and it remains unclear what, if any, restrictions companies face in pursuing young users.

    • Lawsuit against Meta alleges targeting young users with potential mental health harms: The lawsuit against Meta hinges on the AGs' ability to provide concrete evidence of Meta's knowledge and intent to cause harm to young users, potentially impacting their mental health.

      The ongoing legal complaint against Meta, formerly Facebook, centers on allegations that the company targeted young users, who are particularly vulnerable to mental health harms, and marketed Instagram to them despite the potential risks. The success of this lawsuit may hinge on the Attorneys General's ability to provide concrete evidence of Meta's knowledge and intent to cause harm. The lawsuit, as it stands, presents several controversial claims against Meta, but the impact of the redacted parts remains uncertain. The case brings to mind instances where companies have faced backlash for marketing harmful products to children, such as the Juul vaping case. The perception of harm, while significant, may not be enough to prove that Meta is the primary driver of the mental health crisis among teenagers in the US. The AGs will need to present substantial evidence to support their claims.

    • Meta's handling of underage users on Instagram and Facebook under scrutiny: Meta faces a significant lawsuit over collecting data from underage users without proper consent, raising concerns about the presence of minors on its platforms and the need for stricter regulations.

      Meta's handling of underage users on its platforms, particularly Instagram and Facebook, has been a significant issue, and the company's attempts to downplay the problem may not be effective. The ongoing lawsuit against Meta, led by multiple attorneys general, alleges that the company has violated the Children's Online Privacy Protection Act (COPPA) by collecting data from users under 13 without proper consent. The presence of underage users on these platforms is a major concern for parents and regulators, and Meta's past attempts to create apps specifically for younger users have raised ethical questions. The stronger part of the lawsuit appears to be the data privacy concerns, and it is unlikely that Meta will escape without paying a significant fine. The company's argument that it is one of the only companies trying to address the issue may not hold water, as other companies have also faced scrutiny for similar issues. Overall, Meta's handling of underage users on its platforms has been a persistent problem, and the ongoing lawsuit underscores the need for stricter regulations and more effective enforcement of existing laws.

    • Lawsuit against Meta could lead to more than a fine: The ongoing lawsuit against Meta over teen mental health concerns could result in more regulations to protect users, potentially including restrictions on features and screen time limits.

      The ongoing lawsuit against Meta (Facebook) for its impact on teen mental health could potentially lead to more than just a fine if there's compelling evidence of direct harms. However, without seeing the full complaint and its unredacted portions, it's uncertain if such evidence exists. The speaker expresses hope that it does, as mental health issues among young people are a pressing concern. They also believe that more regulation is needed to protect users, citing Europe's Digital Services Act as an example. The speaker suggests that the US could establish its own regulatory framework, potentially including restrictions on features like likes for minors and screen time limits. They believe that such regulations could significantly improve the social media industry.

    • Exercising Caution in Designing Products for Young Users: Tech companies should be mindful of potential negative consequences for young users and take on additional responsibility for their well-being. Marques Brownlee's success story highlights the importance of dedication, hard work, and adaptability in the tech world.

      Tech companies, including Meta and others, should exercise greater caution when designing products for young users. The speaker suggests that these companies should consider the potential negative consequences of their actions and take on additional responsibility to ensure the well-being of their younger audience. This idea is drawn from the context of the discussion about the role of the Federal Communications Commission (FCC) in regulating content on traditional media, and the desirability of having a similar oversight body for social media. Another key takeaway is the inspiring story of Marques Brownlee, a successful tech creator on YouTube, whose channel, MKBHD, has grown from humble beginnings to a massive following of 17.7 million subscribers. Brownlee's journey offers valuable insights for anyone looking to start a YouTube channel, as he has demonstrated the importance of dedication, hard work, and adaptability in the ever-evolving world of technology. In essence, the discussion emphasizes the importance of responsibility and care when it comes to designing products for young users in the digital age, and the potential for individuals to achieve great success through hard work and dedication on YouTube.

    • YouTube's evolution from a hobby to a career: Creators like Kevin saw YouTube grow from a hobby to a potential career path, with the introduction of ad revenue sharing opening up new opportunities.

      YouTube's evolution from a hobbyist platform to a potential career path for creators was a gradual process. When the partner program started sharing ad revenue with more creators, it opened up the possibility for individuals to make a living from their videos. However, for some creators like Kevin, the growth was steady and organic, allowing them to explore various topics within their niche without being swayed by the allure of viral videos. During YouTube's early days, which Kevin described as the Wild West, anyone could create content without the expectation of monetary gain. It was a time of exploration and discovery, where creators like Kevin made videos out of curiosity and a desire to share information. The introduction of monetization opportunities did not drastically change the approach for everyone, but it did add a new dimension to the platform.

    • Optimizing YouTube content for success: Balance engaging content with optimization for high engagement and interaction, adapt to YouTube algorithm changes, and be attentive to meta elements.

      Creating successful content on YouTube involves a balance between creating engaging and informative videos, and optimizing various elements such as titles, thumbnails, and retention strategies. While the primary focus should be on delivering valuable content, ignoring optimization can lead to missed opportunities. The YouTube algorithm continues to evolve, rewarding videos that receive high engagement and interaction from viewers, and successful creators keep up with these changes to stay ahead. The approach to optimization can vary from extensive testing and analysis to a more intuitive, creative process. Regardless of the approach, it's essential to be attentive to the YouTube meta and adapt to the changing landscape of what performs well on the platform.

    • Focus on making high-quality videos: Creators should prioritize making good content over worrying about algorithms or trends. Defining what 'good' means, such as providing value, entertainment, and truth, is crucial for attracting and retaining an audience.

      Creating good content on YouTube is the key to growing a successful channel. Ralph, a renowned video games critic, emphasized this point by suggesting creators focus on making high-quality videos rather than worrying too much about the algorithm or the latest trends. Marques Brownlee, the speaker in the conversation, agreed and added that defining what a good video means to him, which includes providing value, entertainment, and delivering the truth, should be the priority. He also acknowledged that YouTube may have its own definition of a good video, but as long as creators keep making what they believe is good content, they can attract and retain an audience. Regarding the tech industry, Marques shared his view that smartphones, despite being mature, still offer excitement due to the ongoing innovation, such as folding phones. He also noted that every technology follows an adoption curve, and understanding where each technology stands on that curve can help us anticipate future developments.

    • Predicting the Future Form Factor of AR, VR, and AI Hardware: The future of AR, VR, and AI hardware is uncertain, with both smart glasses and VR headsets aiming for inconspicuous designs, and AI hardware predicted to be discreet and unobtrusive, possibly as earbuds or glasses.

      We're in the early stages of new technologies like AR, VR, and AI hardware, and it's uncertain which form factor will become the most widely adopted. The speaker expresses interest in both smart glasses and VR headsets, predicting that they'll both aim to create inconspicuous devices that augment reality. Smart glasses may have an edge due to their resemblance to regular glasses, but it's hard to predict which technology will win out. As for AI hardware, the speaker believes it will be most successful if it's discreet and unobtrusive, possibly taking the form of earbuds or glasses. The future of technology is constantly evolving, and it's essential to stay open-minded about new innovations.

    • Advancements in Electric Cars and VR/AR Technology in the Next Two Years: Battery technology improvements will make current electric cars obsolete, Apple's entry into the VR/AR market will accelerate mass adoption, and AI image generators like DALL-E 3 are making significant progress.

      The next two years are expected to bring significant advancements in both electric cars and Virtual Reality/Augmented Reality (VR/AR) technology. Regarding electric cars, the rapid improvement in battery technology will make current models obsolete. As for VR/AR, the entry of tech giants like Apple into the market is predicted to accelerate mass adoption, with smart glasses also becoming available. In the realm of AI, image generators like DALL-E 3 are making remarkable progress. As an example, DALL-E 2's output for a prompt about monkey firefighters looked somewhat melted and cartoonish, while the same prompt in DALL-E 3 produced convincing photorealistic and 2D cartoon images. This demonstrates the impressive strides being made in this area. Overall, the next two years are shaping up to be an exciting time for these emerging technologies.

    • DALL-E expands on user prompts for more creative images: DALL-E's ability to generate more detailed images based on user prompts saves time and effort, and it offers a range of styles to choose from, demonstrating the rapid evolution of AI language models.

      DALL-E 3, OpenAI's text-to-image model, can significantly enhance and expand on user prompts, resulting in more creative and detailed images than what the user might initially input. This feature can save users time and effort in crafting elaborate prompts, as the AI can rewrite and expand on short and simple ones. Additionally, DALL-E can generate images in various styles, from photorealistic to illustrative, providing a range of options for users. This design decision not only simplifies the user experience but also serves as a teaching tool, demonstrating the capabilities of the model. While this feature might not have immediate practical applications for everyone, it offers valuable insights into the rapid evolution of AI language models and their increasing ability to generate high-quality, detailed content. By using DALL-E or similar text-to-image generators, users can gain a better understanding of the advancements in AI technology and appreciate the improvements in language models over time.

    • DALL-E 3's Content Policies: Unclear and Strict. DALL-E 3's content policies are unclear and strict, leading to inconsistent results and making them challenging for users to navigate.

      While DALL-E 3, a text-to-image AI model from OpenAI, shows impressive capabilities, it's not yet ready for professional media creation due to its unclear and strict content policies. Users have reported receiving flags for reasons that aren't always clear, making the rules challenging to understand and navigate. These rules, put in place to prevent misuse, include restrictions on public figures and nudity. However, even seemingly innocuous prompts, like a teddy bear detective meeting a client, have triggered content policy violations without clear explanations. Additionally, the model's inconsistency in handling corporate logos adds to the confusion. Although these restrictions may be necessary to avoid copyright issues and potential misuse, they create a more restrictive environment than expected for a new technology.

    • AI image generators raise concerns over representation and diversity: While AI image generators offer numerous benefits, they also raise concerns over the representation and diversity of the images produced. Users need to be educated about the rules and guidelines for generating images, and creators should consider the ethical implications of using these tools.

      While AI image generators like DALL-E 3 offer numerous benefits, they also raise concerns regarding the representation and diversity of the images produced. The discussion revealed that the images generated by DALL-E 3 often feature very symmetrical and conventionally attractive faces, which can be a reversion to the mean rather than an accurate representation. This issue was highlighted in a recent Atlantic article. Furthermore, the creators of these AI image generators need to do a better job of educating users about the rules and guidelines for generating images. There is a lack of transparency regarding what is considered acceptable, leading to unexpected results. Despite these concerns, the use of AI image generators in creative processes can be a source of inspiration and enjoyment for those who may not have the artistic abilities they desire. For creators like the speaker, who have always wanted to be artists but never quite reached their potential, these tools offer a new way to explore their creativity and generate impressive visuals. However, it's essential to consider the ethical implications of using AI image generators, particularly in regards to representation and the potential impact on the creative industry. As the use of these tools becomes more widespread, it's crucial to have ongoing discussions about their benefits and limitations.

    • AI image generators raise ethical concerns: AI image generators offer convenience but raise ethical concerns over copyrighted images and the imitation of artists' styles, with ongoing debate around artists' rights and responses from the industry.

      While AI image generators like DALL-E offer convenience and accessibility, they raise ethical concerns, particularly regarding the use of copyrighted images in their training and the potential for imitating living artists' styles. OpenAI has implemented measures such as opt-outs for artists and refusals for prompts referencing living artists, but these steps may not fully address the issue. The debate around artists' rights and the use of their work in AI models is ongoing, and it remains to be seen how artists will respond to these measures. Ultimately, the use of AI image generators in creative industries requires careful consideration and ongoing dialogue to ensure fairness and respect for artists' rights.

    • Artists manipulate AI image generators with data poisoning: Artists use data poisoning to alter AI outputs, but effectiveness is uncertain due to advancements in detection systems, raising questions about ownership, authenticity, and creative expression.

      As the use of AI image generators becomes more prevalent, artists are exploring unconventional methods to assert their creative control. One such method is data poisoning, where artists manipulate pixels in their art before uploading it online to alter the AI's outputs. This was demonstrated in a study using the tool Nightshade, which can make an image of a handbag appear as a toaster to an AI model. However, the effectiveness of this method is questionable as companies like OpenAI are developing sophisticated systems to detect AI-generated images with high accuracy. This cat-and-mouse game between creators and platforms highlights the ongoing tension between AI-generated content and human creativity. The implications of this development extend beyond the art world, raising important questions about ownership, authenticity, and the role of technology in creative expression.

    • Exploring ethical ways to compensate artists in AI art generation: Adobe's Firefly model pays artists bonuses based on the number and commercial value of their images in the training dataset, addressing artists' rights and compensation in AI art generation.

      Companies like Adobe are exploring ethical ways to compensate artists in the AI art generation space. Adobe's Firefly model, which uses licensed images, plans to pay artists bonuses based on the number of their images in the training dataset and the commercial value of those images. This system seems fair and ethical, and could help alleviate concerns around the use of AI art generators. It's important for companies to consider artists' rights and compensation in the development and implementation of these tools. By doing so, they can create a more positive and creative environment for users. This is just one example of how companies can approach this issue, and it's likely that more will follow suit. Overall, the conversation around AI art generation is an important one, and it's crucial that we continue to explore ethical solutions.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT


    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming


    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT


    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:


    Additional Reading:



    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out


    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.


    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division


    Additional Reading:



    Hard Fork
    June 7, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check


    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 


    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs


    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    Additional Reading: 


    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT


    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google IO, including the launch of A.I. overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 


    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends


    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 


    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School


    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Guests:

    Additional Reading:


    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer


    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:


    Related Episodes

    Ghost Notes and Friends: Maggie Mae Fish


    Cory and Noah are joined by Maggie Mae Fish to discuss how artists grow and change as they move into their late careers, along with what a "late career" even means.


    Check out Ghost Notes on Nebula, where you can hear the new episodes a month early: https://nebula.tv/ghost-notes


    12tone

    https://twitter.com/12tonevideos

    https://nebula.app/12tone

    https://www.youtube.com/c/12tonevideos

    https://www.patreon.com/12tonevideos


    Polyphonic

    https://twitter.com/WatchPolyphonic

    https://nebula.app/polyphonic

    https://www.youtube.com/c/Polyphonic

    https://www.patreon.com/polyphonic


    Maggie Mae Fish:

    https://www.youtube.com/@MaggieMaeFish

    https://nebula.tv/maggiemaefish

    https://nebula.tv/unrated

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Artistic Intent


    Cory and Noah talk about artistic intent, when and why it matters, and what it's actually useful for.


    Subscribe to Curiosity Stream and get access to Nebula where you can listen to Ghost Notes episodes one month early: https://curiositystream.com/ghostnotes




    Does Genre Matter?

    Ghost Notes and Friends: Aimee Nolte


    Cory and Noah are joined by Aimee Nolte to discuss the difficulties of making and marketing music on a platform like YouTube, even as an established music analyst.


    Subscribe to Curiosity Stream and get access to Nebula where you can listen to Ghost Notes episodes one month early: https://curiositystream.com/ghostnotes


    Aimee Nolte

    https://twitter.com/AimN

    https://nebula.tv/aimeenolte

    https://www.youtube.com/c/AimeeNolte


