
    Generative AI is Here. Who Should Control It?

    October 21, 2022

    Podcast Summary

    • Exploring the Impact of Generative AI on Art, Creativity, and Ideas: Generative AI is revolutionizing industries by generating new content such as images, video, and audio, with significant consequences for art, creativity, and ideas.

      We're witnessing a new wave of artificial intelligence (AI) that goes beyond analyzing existing data to generating new content, such as images, videos, and even audio. This technology, known as generative AI, is transforming industries and making headlines in Silicon Valley. In this episode of Hard Fork, we explore the impact of generative AI on art, creativity, and ideas, and discuss its implications with Emad Mostaque, the founder and CEO of Stability AI. AI itself is not a new concept, but recent advances in generative AI have made it far more prominent and consequential in daily life. There is plenty of hype around the technology, but there have also been genuine breakthroughs in what AI can do over the past few years. For years, CEOs promised to solve problems with AI without much to show for it in everyday applications; with the launch of DALL-E, a generative image model, the potential finally became apparent and tangible.

    • A Shift in AI from Analytical to Generative: Generative AI models such as GPT-3 can now create human-like text, images, audio, and even code, exhibiting unexpected emergent properties; the future of AI is about enabling machines to create and generate content.

      We are witnessing a significant shift in the field of artificial intelligence, specifically in generative AI. This type of AI, only a few years old in its current form, uses techniques like deep learning to learn from data and generate new content. Large language models such as GPT-3 pioneered the field, starting with text and later expanding to images, audio, and even code. The recent trend in AI development is the increasing size of these models, which gives rise to unexpected, emergent properties that make them creative rather than merely analytical tools. Large models can now generate human-like text, complete code snippets, create images, and even produce short videos. Companies like OpenAI, Google, and Meta are leading this innovation, and the applications are vast, from creating illustrations for newsletters to generating code for millions of programmers. The future of AI is not just about making computers smarter but about enabling them to create, making them more integral to our daily lives and industries.
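
      As a concrete illustration of what "generating content" means in practice, here is a minimal text-to-image sketch using the open-source Hugging Face diffusers library with a public Stable Diffusion checkpoint. The checkpoint id and prompt are illustrative assumptions, not details from the episode, and a CUDA GPU is assumed:

      # Minimal text-to-image sketch using Hugging Face's diffusers library.
      # Assumes `pip install torch diffusers transformers` and a CUDA GPU;
      # the checkpoint id and prompt are illustrative, not from the episode.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",  # a widely used public checkpoint
          torch_dtype=torch.float16,
      ).to("cuda")

      # One sentence in, a brand-new image out: generation, not retrieval.
      image = pipe("an illustration of a robot painting a mural").images[0]
      image.save("robot_mural.png")

      The same few lines, pointed at a different checkpoint, cover many of the use cases described above, which is part of why these models have spread so quickly.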

    • Generative AI's Impact on Creative Industries: Generative AI is reshaping creative fields like architecture, filmmaking, interior design, and advertising, with tools like DALL-E, Midjourney, and Stable Diffusion leading the way. Despite concerns about displacement, organic user growth and industry adoption point to a lasting role.

      Generative AI is making a significant impact across industries well beyond the tech sphere. Professionals in creative fields like architecture, filmmaking, interior design, and advertising are adopting it as a tool to enhance their work. With user bases growing rapidly, tools like DALL-E, Midjourney, and Stable Diffusion are gaining traction and unlocking economic potential that Sequoia estimates at over a trillion dollars. The rise of generative AI also raises concerns about the displacement of artists and illustrators. Despite these concerns, organic user growth and industry adoption suggest that generative AI is here to stay and will continue to reshape the creative landscape.

    • AI Copyright Concerns and Ethical Implications: AI models trained on vast data may generate copyrighted content, leading to ethical concerns and potential misuse. Regulators are calling for federal regulations to address these issues.

      The advancement of generative AI technology, such as Stable Diffusion by Stability AI, raises significant concerns about copyright infringement, potential misuse, and ethical implications. These models are trained on vast amounts of data, some of it copyrighted, and can end up generating copyrighted code or content. The technology has also been used to create nonconsensual nude imagery and violent content. While some models ship with safety filters, open-source models like Stable Diffusion can be modified and used without restrictions, increasing the potential for abuse. Regulators are starting to express concerns and call for federal regulation. Stability AI's CEO, Emad Mostaque, recently raised $101 million to expand the company's supercomputer capacity and continue research in this field. It is crucial for the industry to address these concerns and ensure the technology is used ethically and responsibly.

    • Open-Source AI Model Stable Diffusion Leads to Diverse Applications: Stable Diffusion's open-source nature fosters creativity, diversity, and freedom of expression, resulting in applications from image generation to 3D modeling, and contrasts with centralized AI models' potential ethical dilemmas and limited creativity.

      Stable Diffusion, an open-source AI model, has led to a multitude of creative applications, from image generation to 3D modeling and architectural drawings. The open-source approach allows for a global community to contribute, resulting in diverse outputs that cater to various contexts. This contrasts with centralized AI models, which may restrict creativity and limit diversity due to their centralized nature and potential ethical dilemmas. The Stable Diffusion team trusts the community to use the technology responsibly and has even announced a prize for the best open-source deepfake detector. The open-source model's flexibility enables the creation of contextually relevant applications, making it a powerful tool for freedom of expression. The team is working on bringing the technology to various regions and industries, ensuring its accessibility to the average person. Overall, the open-source approach to AI development fosters creativity, diversity, and freedom of expression, setting it apart from centralized models.

    • Revolutionizing Communication and Content Creation with DreamStudio AI: DreamStudio AI is transforming the way we create and communicate with accessible, personalized AI tools for developers and users, offering features like animation, 3D, and video. It is being integrated into other software and is expected to significantly impact human communication, potentially reaching holodeck or Ready Player One levels of immersion.

      DreamStudio AI is revolutionizing communication and content creation by providing accessible and personalized AI tools for developers and users. Currently, DreamStudio offers an API and its own platform, DreamStudio.ai, which includes features like animation, 3D, and video. These tools are being integrated into other software, such as Canva and Photoshop, allowing users to easily create and edit visual content. DreamStudio AI's goal is to "let people poop rainbows" by lowering the barriers to communication and content creation, making it accessible to everyone. The technology is still evolving, but it's expected to significantly impact human communication in the next few years, potentially reaching a level of realism and ease comparable to the holodeck or Ready Player One experience. The combination of search capabilities and concept understanding in AI is a game-changer for developers, and the increasing investment in augmented reality and AI chips will only make these tools more powerful and advanced.
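
      To give a rough sense of what using such an API involves, the sketch below posts a prompt to a DreamStudio-style text-to-image REST endpoint using Python's requests library. The host, path, payload fields, and response handling are all assumptions made for illustration; the real API may differ, so treat this as a shape rather than a reference and consult Stability's documentation:

      # Hedged sketch of calling a DreamStudio-style text-to-image REST API.
      # The endpoint path, payload fields, and response format are ASSUMPTIONS
      # for illustration only; check the actual API documentation before use.
      import os
      import requests

      API_HOST = "https://api.stability.ai"   # assumed host
      ENGINE = "stable-diffusion-v1-5"        # assumed engine id

      resp = requests.post(
          f"{API_HOST}/v1/generation/{ENGINE}/text-to-image",  # assumed path
          headers={
              "Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
              "Accept": "image/png",
          },
          json={"text_prompts": [{"text": "a watercolor city skyline at dusk"}]},
          timeout=120,
      )
      resp.raise_for_status()

      # Under the assumed Accept header, the response body is raw PNG bytes.
      with open("skyline.png", "wb") as f:
          f.write(resp.content)

      Wrapping generation behind an HTTP call like this is what lets tools such as Canva or Photoshop embed the capability without shipping the model themselves.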

    • Generative AI's Role in the Future of Content Creation: Stability AI plans to monetize its generative models through APIs and embedded teams building custom models; the future of content will be generative and assisted, human consciousness and intention will remain crucial, and uniqueness and originality stand to grow in value.

      As the world moves toward the metaverse and the generation of vast amounts of content in every format, companies will need generative AI strategies to keep up. Stability AI, which recently raised $101 million, plans to make money by providing access to its generative models through APIs and by embedding teams with content providers to build custom models. The future of content is expected to be generative and assisted, enhancing the media space rather than replacing it. Human consciousness and intention remain crucial in the creative process, making these tools an extension of us rather than a replacement. Much as the printing press opened up access to the written word, the accessibility of these tools opens up opportunities for individuals to create and monetize visual work. The value of uniqueness and originality is expected to increase as the sector grows.

    • Regulating AI, Balancing Creativity and Safety: Congresswoman Anna Eshoo's call for AI regulation reflects concerns over potential misuse, while some argue regulation could limit creativity. The history of social media offers a cautionary tale. Ultimately, creators and users bear responsibility for AI output.

      The debate surrounding the regulation of AI, specifically in the context of Stable Diffusion and its potential misuse, raises important questions about intentionality, responsibility, and the balance between creativity and safety. Congresswoman Anna Eshoo's letter to the US National Security Advisor and the Office of Science and Technology Policy highlights concerns over the creation of violent and pornographic content using AI. This call for regulation echoes the ongoing debate in the European Union, where the focus is on holding AI makers accountable for the end use of their technology. However, some argue that this could limit access to the technology and stifle creativity. The history of social media regulation offers a cautionary tale, as well-intentioned technology can have unintended negative consequences. Ultimately, the responsibility for AI and its output lies with its creators and users, and open dialogue and collaboration are essential to navigating the potential risks and benefits of this technology.

    • The Positive and Negative Consequences of Technology Development on Decentralized Platforms: Decentralized platforms offer opportunities for creation and community building but require trust, safety, and transparency. The profit-driven incentives of centralized companies can be detrimental, and a decentralized approach with open access to code and models can lead to better outcomes. Content policies can help prevent the spread of harmful content.

      The accelerated development and use of technology, specifically in the context of decentralized platforms like Discord, can have both positive and negative consequences. The responsibility to guide and coordinate this development falls on those involved, as the potential for nefarious use is a concern. However, the benefits of creation and community building outweigh the risks. The speakers emphasized the importance of trust, safety, and transparency in technology development, and criticized the incentives of centralized companies to prioritize profits over public good. They suggested that a decentralized approach, with open access to code and models, can lead to better outcomes. Additionally, content policies of major social networks can help prevent the spread of harmful content. While there are risks, the potential for positive change and creativity makes the acceleration of technology development worthwhile.

    • Exploring the Impact of Stable Diffusion on Art Creation and Copyright Infringement: While Stable Diffusion has led to impressive art and projects, concerns around copyright infringement and artists being replaced by AI are valid. The AI learns from relationships between words and images; while it can't recreate specific images, it understands concepts, and the user's intention determines the ethical use of the tool.

      Stable Diffusion, an AI model used on platforms like 4chan for creating art, has been available for eight weeks; while it has produced some bad content, it hasn't caused significant harm, and it has also yielded numerous projects and remarkable art. Still, concerns around copyright infringement and artists being replaced by AI are valid. The AI learns relationships between words and images; it can't recreate specific images from its dataset, but it does understand concepts. Whether artists are "replaced" depends on how one defines an artist. The model can learn any style from just a few images, and it is the user's intention that determines whether it is used ethically. An interview with Greg Rutkowski, an artist whose name appears frequently in prompts, highlights the copyright issue. Ultimately, Stable Diffusion is a tool, and its output depends on the user's intention.
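
      The "relationships between words and images" mentioned above can be made concrete with CLIP, the kind of text-image embedding model used to connect prompts to pictures in systems like Stable Diffusion. Here is a minimal sketch, assuming a local image file and some made-up captions, that scores how well each caption matches the image:

      # Sketch: scoring caption-image matches with CLIP, a text-image
      # embedding model of the kind that links prompts to pictures.
      # The image file and captions are illustrative assumptions.
      from PIL import Image
      from transformers import CLIPModel, CLIPProcessor

      model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
      processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

      image = Image.open("artwork.png")
      captions = ["a dragon circling a burning castle", "a bowl of fruit on a table"]

      inputs = processor(text=captions, images=image,
                         return_tensors="pt", padding=True)
      outputs = model(**inputs)

      # Higher probability means the caption sits closer to the image in the
      # shared embedding space -- a learned concept match, not a stored copy.
      probs = outputs.logits_per_image.softmax(dim=1)
      for caption, p in zip(captions, probs[0].tolist()):
          print(f"{p:.2f}  {caption}")

      This is why the model can apply a named artist's style without containing any of that artist's actual images: it has learned an association between words and visual concepts, not a pixel archive.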

    • Stability AI's Human-Like Learning Algorithm for AI Images: Stability AI uses a learning algorithm for AI images, prioritizes open, interrogable systems, engages with artists for feedback, implements opt-in/opt-out mechanisms, supports community initiatives, maintains a populist-alternative stance, and collaborates with big tech.

      Stability AI uses a learning algorithm, not a compression algorithm, to create AI images, making the system more akin to a human brain. The company is committed to building open, interrogable systems and actively engages the artist community for feedback. It plans to implement opt-in and opt-out mechanisms for artists and is developing tools to support community initiatives. Despite its recent large investment, it aims to maintain its "populist alternative" stance toward big tech by giving artists revenue share and involving them in decision-making. It is also working with big tech companies to give them an outlet to participate in this open infrastructure while ensuring regulation and trust and safety.

    • Balancing Freedom and Regulation in AI Development: Successful AI development requires a balanced approach involving community collaboration, stakeholder engagement, and consideration of values and liabilities.

      Creating and managing advanced AI technologies means navigating a complex balance between freedom and regulation. The CEO of Stability AI acknowledges the need for a middle ground between unrestricted release and complete control. The company has raised funds on its own terms, preserving its independence, while understanding the challenges of misaligned incentives. The CEO emphasizes community involvement and collaboration with stakeholders of every kind: big tech, little tech, regulators, and the public. He acknowledges that Stability's position, with fewer public eyes on it, offers more latitude but carries increased responsibility; that technology is not neutral but reflects the values of its creators, who have a responsibility to ensure those values are reflected in their creations; and that figuring out who is liable for the use of a model is a question that must be worked through separately. Overall, the successful development and deployment of advanced AI requires a thoughtful, collaborative, and balanced approach.

    • The Future of Generative AI, Regulation vs. Decentralization: Generative AI's potential to revolutionize industries and become accessible to the public is driving the push for decentralization, while regulation seeks to ensure responsible use and prevent misuse.

      While social media platforms are moving towards more regulation and centralization, the market for generative AI is pushing for openness and decentralization. The speaker believes that this difference is due to the personal nature of AI-generated content and its potential to revolutionize various industries, including law, medicine, science, and journalism, in the next 10 to 20 years. The speaker also emphasizes the importance of making this powerful technology accessible to the public, rather than keeping it in the hands of a few. The conversation highlights the transformative potential of generative AI and the need for responsible use and regulation.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guests:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War


    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter


    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.



    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

    Hard Fork
    June 7, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic


    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.


    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.


    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.


    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.


    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab


    Related Episodes

    Saving Democracy from & with AI ft. Nathan Sanders

    In this episode, Nathan Sanders joins us to discuss how artificial intelligence technologies are affecting political processes in complex ways, increasing disruptive risks to legislative processes but also providing new enforcement mechanisms. Sanders also addresses what regulatory frameworks and codes of ethics should include.

    Nathan Sanders is a data scientist and an Affiliate at the Berkman Klein Center at Harvard University where he is focused on creating open technology to help vulnerable communities and all stakeholders participate in the analysis and development of public policy.

    Cover art for this episode was generated by DALL-E.


    Can you build a company like Uber without being a jerk?

    On this week’s interview episode of The Vergecast, Nilay Patel, editor-in-chief of The Verge, sits down with New York Times reporter Mike Isaac. Isaac has been reporting on the ride-sharing company Uber for over five years and has just released a book about Uber and the stories surrounding it, Super Pumped: The Battle for Uber. Nilay and Mike talk about how Uber got to where it is today, Uber’s interactions with companies like Apple and Google, and whether or not you have to be a “jerk” to start a company that changes the world.

    Ilya Sutskever (OpenAI) - Inside OpenAI

    Ilya Sutskever is the co-founder and chief scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models. In this conversation with Stanford adjunct lecturer Ravi Belani, Sutskever explains his approach to making complex decisions at OpenAI and for AI companies in general, and makes predictions about the future of deep learning.



    Why ConsenSys is Suing the SEC | Joseph Lubin & Matt Corva

    “The U.S. is trying to disconnect from Ethereum” is what Joe Lubin, the CEO of Consensys, said in today’s conversation. He was talking about those in power trying to cut citizens off from Ethereum.

    The SEC is going after Kraken, Coinbase, Uniswap and Metamask. They’re trying to turn every non-custodial wallet into a broker-dealer.

    We brought on Joe Lubin, a crypto OG and the CEO of Consensys, the company behind a number of massive crypto projects including the popular MetaMask wallet, and Matt Corva, the General Counsel at Consensys, who is leading the charge against the SEC.

    Joe and Matt are producing evidence that the SEC is coming after Ethereum itself: sending discovery requests to Ethereum core developers, threatening their employers, and pushing a coordinated effort to claim Ether is a security so the agency can control it.

    So Consensys is taking the SEC to court to settle the issue. If they’re successful, it will be the first time we get a clear court ruling that Ether is a commodity and not a security.

    ------
    TIMESTAMPS

    0:00 Intro
    6:35 SEC vs. Ethereum 
    9:45 The Uniqueness of This Case
    11:15 88,000 Pages to the SEC
    12:11 SEC Going After Devs? 
    14:47 The Wells Notice
    16:27 ETH ETF
    17:58 Outcome of Consensys Winning  
    21:29 U.S. Law Process & Timeline
    25:55 Ether Isn’t a Security
    34:50 Gary’s Confidence Conspiracy  
    41:17 MetaMask Isn’t a Broker Dealer
    47:10 What is Prometheum? 
    51:10 How Can the SEC Win? What Happens to Crypto? 
    55:49 What the Crypto Community Can Do
    59:50 What Happens Next?
    1:03:15 Closing & Disclaimers

    ------
    RESOURCES

    Consensys Complaint 
    https://consensys.io/crypto-regulations/defend-ethereum 
    https://assets.ctfassets.net/gjyjx7gst9lo/Bu1bK7DF3tSig9Atde0lM/2fcaadea2b111a8c3f3ebce4a6a2386c/Consensys_sues_the_SEC_in_defense_of_the_Ethereum_ecosystem.pdf  

    Joe Lubin
    https://twitter.com/ethereumjoseph  

    Matt Corva
    https://twitter.com/MattCorva  


    Stuart Russell: Long-Term Future of AI

    Stuart Russell is a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to AI, Artificial Intelligence: A Modern Approach. A video version is available on YouTube. For more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch video versions of these conversations.