    Podcast Summary

    • Humans and AI working together: AI amplifies human capabilities, creating symbiotic relationships, providing access to education and medical assistance globally, and transforming lives over the next decade.

      While AI is a transformative technology with immense potential to enhance human capabilities, it's essential to focus on the symbiotic relationship between humans and AI rather than viewing it as a replacement. Reid Hoffman, a tech visionary, founding board member of PayPal, and co-founder of LinkedIn and Inflection AI, emphasizes the importance of amplification loops where humans and AI work together. He uses the example of a bright 15-year-old who uses AI to help understand complex scientific papers, opening up new learning opportunities. Moreover, AI can provide access to medical assistance and tutoring on a global scale. However, it's crucial not to underestimate or overestimate the impact of AI. While it won't change everything overnight, it will significantly transform our lives over the next decade. Instead of fearing AI, we should embrace it as a tool to amplify our abilities and solve complex problems.

    • Impact of AI on Global Health and Equity: Balancing the Positives and Negatives. Recognize that AI is a tool, not inherently good or bad. Focus on potential benefits and collaborate to ensure ethical development and use.

      The potential impact of advanced AI on global health and equity is significant, but there's also a risk of negative sentiment and calls for regulation due to flawed predictions and a lack of understanding of the true capabilities of AI. Many in the AI community make predictions about the future based on exponential growth in technology, but these predictions can be misguided and lead to unnecessary panic. Instead, it's important to recognize that AI is not inherently good or bad, but rather a tool that can be used for good or bad depending on how it's developed and used. It's also important to remember that humanity has faced similar fears and panic around new transformational technologies throughout history, and these fears often prove to be unfounded. To counteract the negative narrative around AI, it's essential to focus on the potential benefits and collaborate to ensure that AI is developed and used in a way that enhances human capabilities and improves global health and equity.

    • Focus on the positive possibilities of AI: Despite potential risks, focusing on positive possibilities and offering constructive solutions is crucial for a desirable future. Adapt to change instead of resisting it to find new opportunities.

      While it's important to acknowledge the potential risks and challenges posed by advanced technologies like AI, it's equally important to focus on the positive possibilities and opportunities they present. Critics who solely focus on the negatives may inadvertently hinder progress towards a desirable future. Instead, they should articulate their vision for a positive future and offer constructive solutions. In the short term, for those concerned about AI replacing their jobs, it's recommended to learn and adapt, rather than resist change. The transformation brought about by AI and automation will be faster and more intense than previous industrial revolutions, but also has the potential to create new opportunities and industries.

    • Using AI to help individuals transition to new roles: Reid Hoffman suggests using AI tools to help people learn new skills and find new jobs, emphasizing the importance of staying engaged and finding meaning in work.

      While AI may displace certain jobs, it can also be part of the solution by helping individuals transition to new roles. Reid Hoffman, a prominent figure in the tech industry, believes that there is enough time for individuals and organizations to adapt to the changing job market. He suggests using AI tools to help people learn new skills and find new jobs. Hoffman's own impressive career arc, which includes founding social networking companies, working at PayPal, and starting LinkedIn, shows that there is no need to retire from making a difference in the world once you've reached a certain level of success. Instead, Hoffman continues to innovate with his latest venture, Inflection, which focuses on empathetic chatbots and human interaction. He emphasizes the importance of staying engaged and finding meaning in our lives through work.

    • Developing a Personal Intelligence AI Companion: Mustafa Suleyman, Karén Simonyan, and their team are creating a personal AI, named Pi, to act as a helpful and supportive companion for individuals navigating their unique lives, providing perspectives and encouraging growth.

      Mustafa Suleyman, Karén Simonyan, and their team are developing a personal intelligence AI, a companion for individuals to navigate their unique lives. This AI, named Pi, acts as a tool to help with various challenges, from fixing a flat tire to debugging relationships or career decisions. It's not a therapist but rather a companion that provides a perspective and encourages growth. Pi is designed to be compassionate and kind, with a point of view that amplifies and helps users rather than agreeing with harmful viewpoints. The team believes this product will be a fixed point in the future and is dedicated to its development. User behavior shows promising results, aligning with the original intention of Pi as a helpful and supportive companion.

    • Exploring AI's potential and limitations: Approach AI with the right mindset, focus on what matters, identify relevant insights, and consider diverse perspectives to make informed decisions.

      Experimenting with AI can provide valuable insights and solutions for personal and professional challenges, but it's important to approach it with the right mindset and focus on what matters most to you. Sam Harris's interaction with Pi, an AI model, led him to update his recommendation for others to experiment with AI in areas that matter to them. However, not all interactions or answers will be useful. The key is to identify what is relevant and useful in the given context. Regarding the ethics of AI, it's important to consider diverse perspectives and avoid relying too heavily on a single perspective, such as the Western one, which may not be applicable to all cultures and contexts. Ultimately, the goal is to use AI as a tool to navigate complex issues and make informed decisions.

    • Transparency and Ethics in AI Development: Developers and companies should be transparent about AI intentions and values, society should engage in dialogue to ensure ethical use, and the goal is to create AI that benefits humanity without causing harm.

      Developers and companies creating AI technologies should be transparent and open about their intentions and values, as technology is not value-neutral. Society as a whole also has a responsibility to engage in dialogue and ensure that AI is used ethically and morally, with consideration given to the potential societal impact. It's important to remember that technology is built by small groups, and productive dialogue can lead to improvements rather than accusations and witch hunts. Ultimately, the goal should be to create AI that benefits all of humanity and avoids harm.

    • Engaging with creators and governing AI responsibly: Mitigate biases and errors, navigate ethical complexities, innovate, learn from mistakes, explore new AI applications, and support workers during technological transition.

      The development and implementation of AI technologies carry significant societal impacts, and it's crucial to engage with the creators and govern the technology responsibly. This interaction and accountability are essential to mitigate potential biases and errors, as well as to navigate the ethical complexities. Additionally, the speaker emphasizes the importance of innovation and learning from mistakes in the AI field, and encourages exploring new areas of AI application, such as cybersecurity and supporting white-collar workers during the technological transition. The speaker also expresses excitement about the potential for numerous AI startups and new applications, and encourages a forward-looking approach to AI development.

    • Advancements in next-gen AI language models: Next-gen AI language models, like GPT-5, will bring significant improvements in processing complex logic, expanding knowledge bases, and developing bespoke models for specific applications, leading to breakthroughs in fields like drug discovery, material science, and robotics.

      The next generation of AI, specifically language models like GPT-5, is expected to bring significant improvements in both baseline capabilities and specialized applications. The ability to process and understand complex chains of logic, broaden knowledge bases, and augment existing models with memory and data sets are key areas of improvement. Additionally, the development of bespoke models for specific application areas, such as biotech and protein folding, is expected to yield remarkable results. These advancements will unlock new possibilities in various fields and lead to significant progress in areas like drug discovery, material science, robotics, and more. The pace of innovation is expected to be rapid, with many exciting developments emerging over the next year or two.

    • Exploring New Frontiers in AI: Transformers, Attention Mechanisms, and Code Validation. AI is rapidly evolving with transformers, attention mechanisms, and code validation, set to impact education, healthcare, and software development, while Mistral aims to make reasoning more efficient and experimentation continues to be balanced against infrastructure costs.

      The world of AI is rapidly evolving, with a shift towards more modern, scalable approaches like transformers, but there's also excitement about new experimentation in attention mechanisms, code validation, and democratizing content creation. Predictions about the timeline for these developments are uncertain, but there's conviction in the potential impact they could have on various industries, particularly education, healthcare, and software development. Mistral, a new player in this space, is generating buzz for its potential to make reasoning more efficient and enable more experimentation. However, there are ongoing debates about the balance between inference cost and infrastructure choices. Overall, it's clear that AI is changing the game in numerous ways, and we can expect continued innovation and experimentation in the coming years. (A minimal sketch of the attention mechanism referenced here appears after this summary.)

    • Understanding the Future of AI and Its Implications: The next two years will bring clarity on effective and cost-efficient inference platforms, making English and Chinese essential programming languages, and individuals and small groups are driving innovation. European countries should embrace the future and engage in the conversation about AI's direction.

      The field of AI and machine learning is rapidly evolving, with various approaches and platforms emerging. In the next two years, there will likely be a clearer understanding of the most effective and cost-efficient inference platforms. Furthermore, the accessibility of these technologies will lead to new creative possibilities, making English and Chinese the most significant programming languages. It's crucial to have a dialogue about the future of AI and its potential implications while acknowledging that the individuals and small groups driving innovation hold the steering wheel. European countries, with their talent in AI, should embrace the future instead of trying to hold on to the past. Additionally, it's essential to recognize that the technology is being created by a small group of bold risk-takers, and the rest of us should engage in the conversation about its direction. To stay updated on the latest developments, follow the No Priors podcast, subscribe to their YouTube channel, and sign up for their emails or transcripts at no-priors.com.
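
    The summary above name-checks transformers and attention mechanisms without spelling out what the attention operation does. Below is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside a transformer; the toy shapes and random inputs are illustrative assumptions, not anything taken from the episode.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: weight each value vector by how well
    its key matches the query, scaled so the softmax stays well-behaved."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

# Toy example: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8)
```

    Because every token attends to every other token, the cost of attention grows quadratically with sequence length, which is one reason alternative architectures such as the state space models discussed in the Cartesia episode below attract interest.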

    Recent Episodes from No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

    State Space Models and Real-time Intelligence with Karan Goel and Albert Gu from Cartesia

    This week on No Priors, Sarah Guo and Elad Gil sit down with Karan Goel and Albert Gu from Cartesia. Karan and Albert first met as Stanford AI Lab PhDs, where their lab invented State Space Models, or SSMs, a fundamental new primitive for training large-scale foundation models. In 2023, they founded Cartesia to build real-time intelligence for every device. One year later, Cartesia released Sonic, which generates high-quality, lifelike speech with a model latency of 135ms—the fastest for a model of this class. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @krandiash | @_albertgu Show Notes: (0:00) Introduction (0:28) Use Cases for Cartesia and Sonic (1:32) Karan Goel & Albert Gu’s professional backgrounds (5:06) State Space Models (SSMs) versus Transformer Based Architectures (11:51) Domain Applications for Hybrid Approaches (13:10) Text to Speech and Voice (17:29) Data, Size of Models and Efficiency (20:34) Recent Launch of Text to Speech Product (25:01) Multimodality & Building Blocks (25:54) What’s Next at Cartesia? (28:28) Latency in Text to Speech (29:30) Choosing Research Problems Based on Aesthetic (31:23) Product Demo (32:48) Cartesia Team & Hiring
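
    For readers unfamiliar with state space models, here is a minimal sketch of the discrete recurrence that SSM architectures build on; the matrix values, dimensions, and toy input signal are illustrative assumptions, not Cartesia's actual models.

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Minimal discrete state space model: h_t = A @ h_{t-1} + B * x_t, y_t = C @ h_t.
    Each step costs a fixed amount regardless of sequence length, unlike attention,
    whose cost grows with the number of tokens attended to."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in xs:              # recurrent scan over the input sequence
        h = A @ h + B * x_t     # update the hidden state
        ys.append(C @ h)        # read out an output at this step
    return np.array(ys)

# Toy example: a scalar input signal of length 100 and a 4-dimensional state.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)             # stable state transition
B = rng.normal(size=4)
C = rng.normal(size=4)
y = ssm_scan(A, B, C, rng.normal(size=100))
print(y.shape)  # (100,)
```

    Trained SSMs parameterize A, B, and C carefully and compute this scan efficiently in parallel, but the fixed-size per-step state is the property that makes them attractive for low-latency, real-time workloads like speech generation.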

    Can AI replace the camera? with Joshua Xu from HeyGen

    AI video generation models still have a long way to go when it comes to making compelling and complex videos, but the HeyGen team is well on its way to streamlining the video creation process by using a combination of language, video, and voice models to create videos featuring personalized avatars, b-roll, and dialogue. This week on No Priors, Joshua Xu, the co-founder and CEO of HeyGen, joins Sarah and Elad to discuss how the HeyGen team broke down the elements of a video and built or found models to use for each one, the commercial applications for these AI videos, and how they’re safeguarding against deep fakes.  Links from episode: HeyGen McDonald’s commercial Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @joshua_xu_ Show Notes: (0:00) Introduction (3:08) Applications of AI content creation (5:49) Best use cases for HeyGen (7:34) Building for quality in AI video generation (11:17) The models powering HeyGen (14:49) Research approach (16:39) Safeguarding against deep fakes (18:31) How AI video generation will change video creation (24:02) Challenges in building the model (26:29) HeyGen team and company

    How the ARC Prize is democratizing the race to AGI with Mike Knoop from Zapier

    The first step in achieving AGI is nailing down a concise definition, and Mike Knoop, the co-founder and Head of AI at Zapier, believes François Chollet got it right when he defined general intelligence as a system that can efficiently acquire new skills. This week on No Priors, Mike joins Elad to discuss the ARC Prize, a multi-million-dollar non-profit public challenge that is looking for someone to beat the Abstraction and Reasoning Corpus (ARC) evaluation. In this episode, they also get into why Mike thinks LLMs will not get us to AGI, how Zapier is incorporating AI into their products and the power of agents, and why it’s dangerous to regulate AGI before discovering its full potential.  Show Links: About the Abstraction and Reasoning Corpus Zapier Central Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mikeknoop Show Notes: (0:00) Introduction (1:10) Redefining AGI (2:16) Introducing ARC Prize (3:08) Definition of AGI (5:14) LLMs and AGI (8:20) Promising techniques to developing AGI (11:0) Sentience and intelligence (13:51) Prize model vs investing (16:28) Zapier AI innovations (19:08) Economic value of agents (21:48) Open source to achieve AGI (24:20) Regulating AI and AGI

    The evolution and promise of RAG architecture with Tengyu Ma from Voyage AI

    After Tengyu Ma spent years at Stanford researching AI optimization, embedding models, and transformers, he took a break from academia to start Voyage AI, which allows enterprise customers to have the most accurate retrieval possible through the most useful foundational data. Tengyu joins Sarah on this week’s episode of No Priors to discuss why RAG systems are winning as the dominant architecture in enterprise and the evolution of foundational data that has allowed RAG to flourish. And while fine-tuning is still in the conversation, Tengyu argues that RAG will continue to evolve as the cheapest, quickest, and most accurate system for data retrieval.  They also discuss methods for growing context windows and managing latency budgets, how Tengyu’s research has informed his work at Voyage, and the role academia should play as AI grows as an industry.  Show Links: Tengyu Ma Key Research Papers: Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training Non-convex optimization for machine learning: design, analysis, and understanding Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss Larger language models do in-context learning differently, 2023 Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning On the Optimization Landscape of Tensor Decompositions Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tengyuma Show Notes: (0:00) Introduction (1:59) Key points of Tengyu’s research (4:28) Academia compared to industry (6:46) Voyage AI overview (9:44) Enterprise RAG use cases (15:23) LLM long-term memory and token limitations (18:03) Agent chaining and data management (22:01) Improving enterprise RAG (25:44) Latency budgets (27:48) Advice for building RAG systems (31:06) Learnings as an AI founder (32:55) The role of academia in AI
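
    To make the RAG pattern discussed here concrete, below is a minimal retrieval sketch: documents and a query are embedded, ranked by cosine similarity, and the best match is placed into the prompt sent to a language model. The stand-in embedding function, sample documents, and prompt wording are illustrative assumptions, not Voyage AI's API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a deterministic pseudo-random unit vector per text.
    A real RAG system would call a trained embedding model here instead."""
    rng = np.random.default_rng(sum(map(ord, text)))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

documents = [
    "Invoices are processed within 30 days of receipt.",
    "Support tickets are triaged by severity, then by age.",
    "New employees enroll in benefits during their first week.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by cosine similarity to the query and return the top k.
    Vectors are unit-normalized, so a dot product is the cosine similarity."""
    scores = doc_vectors @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# With a real embedding model, the invoice policy would be the top match here;
# the retrieved text is then stuffed into the prompt for the language model.
context = retrieve("How long does invoice processing take?")
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long does invoice processing take?"
print(prompt)
```

    Production systems add chunking, vector databases, re-ranking, and latency budgets on top of this skeleton, but the embed-retrieve-prompt loop is the core of the architecture discussed in the episode.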

    How YC fosters AI Innovation with Garry Tan

    Garry Tan is a renowned founder-turned-investor who is now running one of the most prestigious accelerators in the world, Y Combinator. As the president and CEO of YC, Garry has been credited with reinvigorating the program. On this week’s episode of No Priors, Sarah, Elad, and Garry discuss the shifting demographics of YC founders and how AI is encouraging younger founders to launch companies, predicting which early stage startups will have longevity, and making YC a beacon for innovation in AI companies. They also discuss the importance of building companies in person and if San Francisco is, in fact, back.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @garrytan Show Notes: (0:00) Introduction (0:53) Transitioning from founder to investing (5:10) Early social media startups (7:50) Trend predicting at YC (10:03) Selecting YC founders (12:06) AI trends emerging in YC batch (18:34) Motivating culture at YC (20:39) Choosing the startups with longevity (24:01) Shifting YC founder demographics (29:24) Building in San Francisco (31:01) Making YC a beacon for creators (33:17) Garry Tan is bringing San Francisco back

    The Data Foundry for AI with Alexandr Wang from Scale

    Alexandr Wang was 19 when he realized that gathering data would be crucial as AI becomes more prevalent, so he dropped out of MIT and started Scale AI. This week on No Priors, Alexandr joins Sarah and Elad to discuss how Scale is providing infrastructure and building a robust data foundry that is crucial to the future of AI. While the company started by working with autonomous vehicles, they’ve expanded by partnering with research labs and even the U.S. government.  In this episode, they get into the importance of data quality in building trust in AI systems, a possible future where we can build better self-improvement loops, AI in the enterprise, and where human and AI intelligence will work together to produce better outcomes.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @alexandr_wang (0:00) Introduction (3:01) Data infrastructure for autonomous vehicles (5:51) Data abundance and organization (12:06) Data quality and collection (15:34) The role of human expertise (20:18) Building trust in AI systems (23:28) Evaluating AI models (29:59) AI and government contracts (32:21) Multi-modality and scaling challenges

    Music consumers are becoming the creators with Suno CEO Mikey Shulman

    Mikey Shulman, the CEO and co-founder of Suno, can see a future where the Venn diagram of music creators and consumers becomes one big circle. The AI music generation tool trying to democratize music has been making waves in the AI community ever since the company came out of stealth mode last year. Suno users can make a song, complete with lyrics, just by entering a text prompt, for example, “koto boom bap lofi intricate beats.” You can hear it in action as Mikey, Sarah, and Elad create a song live in this episode.  In this episode, Elad, Sarah, and Mikey talk about how the Suno team took their experience making a transcription tool and applied it to music generation, how the Suno team evaluates aesthetics and taste because there is no standardized test you can give an AI model for music, and why Mikey doesn’t think AI-generated music will affect people’s consumption of human-made music.  Listen to the full songs played and created in this episode: Whispers of Sakura Stone  Statistical Paradise Statistical Paradise 2 Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @MikeyShulman Show Notes: (0:00) Mikey’s background (3:48) Bark and music generation (5:33) Architecture for music generation AI (6:57) Assessing music quality (8:20) Mikey’s music background as an asset (10:02) Challenges in generative music AI (11:30) Business model (14:38) Surprising use cases of Suno (18:43) Creating a song on Suno live (21:44) Ratio of creators to consumers (25:00) The digitization of music (27:20) Mikey’s favorite song on Suno (29:35) Suno is hiring

    Context windows, computer constraints, and energy consumption with Sarah and Elad

    This week on No Priors, hosts Sarah and Elad are catching up on the latest AI news. They discuss the recent developments in AI music generation, and if you’re interested in generative AI music, stay tuned for next week’s interview! Sarah and Elad also get into device-resident models, AI hardware, and ask just how smart smaller models can really get. They compare these hardware constraints to the hurdles AI platforms continue to face, including computing constraints, energy consumption, context windows, and how best to integrate these products into apps that users are familiar with.  Have a question for our next host-only episode or feedback for our team? Reach out to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil  Show Notes: (0:00) Intro (1:25) Music AI generation (4:02) Apple’s LLM (11:39) The role of AI-specific hardware (15:25) AI platform updates (18:01) Forward thinking in investing in AI (20:33) Unlimited context (23:03) Energy constraints

    Cognition’s Scott Wu on how Devin, the AI software engineer, will work for you

    Scott Wu loves code. He grew up competing in the International Olympiad in Informatics (IOI), is a world-class coder, and is now building an AI agent designed to create more, not fewer, human engineers. This week on No Priors, Sarah and Elad talk to Scott, the co-founder and CEO of Cognition, an AI lab focusing on reasoning. Recently, the Cognition team released a demo of Devin, an AI software engineer that can increasingly handle entire tasks end to end. In this episode, they talk about why the team built Devin with a UI that mimics looking over another engineer’s shoulder as they work and how this transparency makes for a better result. Scott discusses why he thinks Devin will make it possible for there to be more human engineers in the world, and what will be important for software engineers to focus on as these roles evolve. They also get into how Scott thinks about building the Cognition team and why they’re just getting started.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ScottWu46 Show Notes: (0:00) Introduction (1:12) IOI training and community (6:39) Cognition’s founding team (8:20) Meet Devin (9:17) The discourse around Devin (12:14) Building Devin’s UI (14:28) Devin’s strengths and weaknesses (18:44) The evolution of coding agents (22:43) Tips for human engineers (26:48) Hiring at Cognition

    OpenAI’s Sora team thinks we’ve only seen the "GPT-1 of video models"

    AI-generated videos are not just leveled-up image generators; rather, they could be a big step forward on the path to AGI. This week on No Priors, the team from Sora is here to discuss OpenAI’s recently announced generative video model, which can take a text prompt and create realistic, visually coherent, high-definition clips that are up to a minute long. Sora team leads Aditya Ramesh, Tim Brooks, and Bill Peebles join Elad and Sarah to talk about developing Sora. The generative video model isn’t yet available for public use, but the examples of its work are very impressive. However, they believe we’re still in the GPT-1 era of AI video models and are focused on a slow rollout to ensure the model is in the best place possible to offer value to the user and, more importantly, that they’ve applied all the safety measures possible to avoid deep fakes and misinformation. They also discuss what they’re learning from implementing diffusion transformers, why they believe video generation is taking us one step closer to AGI, and why entertainment may not be the main use case for this tool in the future.  Show Links: Bling Zoo video Man eating a burger video Tokyo Walk video Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @_tim_brooks | @billpeeb | @model_mechanic Show Notes: (0:00) Sora team Introduction (1:05) Simulating the world with Sora (2:25) Building the most valuable consumer product (5:50) Alternative use cases and simulation capabilities (8:41) Diffusion transformers explanation (10:15) Scaling laws for video (13:08) Applying end-to-end deep learning to video (15:30) Tuning the visual aesthetic of Sora (17:08) The road to “desktop Pixar” for everyone (20:12) Safety for visual models (22:34) Limitations of Sora (25:04) Learning from how Sora is learning (29:32) The biggest misconceptions about video models

    Related Episodes

    #131 - ChatGPT+ instructions, Microsoft reveals pricing for AI, is ChatGPT getting worse over time?

    Our 131st episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai

    AI will help us turn into Aliens

    Texas is frozen over, and the lack of human contact after not leaving my house for three days has made me super introspective about how humans will either evolve with technology and AI away from our primitive needs, or we fail and Elon Musk leaves us behind. The idea that future humans may be considered aliens is based on the belief that our evolution and technological advancements will bring about significant changes to our biology and consciousness. As we continue to enhance our physical and cognitive abilities with artificial intelligence, biotechnology, and other emerging technologies, we may transform into beings that are fundamentally different from our current selves. In this future scenario, it's possible that we may be considered aliens in comparison to our primitive ancestors. Enjoy!

    Building Responsible AI with Toju Duke

    Join Jason Foster in an insightful conversation with Toju Duke, Programme Manager at Google, who leads Responsible AI programmes across Google's product and research teams. Delve into the fascinating world of Responsible AI and its critical role in shaping a safer future. From risk management and guardrails to minimising potential harm, diversity in AI, and ethical considerations, explore the driving forces behind responsible AI development.

    This Will Happen If You Don't Use AI 🧠

    🤖🌴 Join us as we explore the transformative world of AI with Adrian Dunkley, CEO of Star Apple AI. 🚀 This episode promises a deep dive into how AI is reshaping industries and augmenting human capabilities, with a special focus on Jamaica's pioneering role in this field.💡

    📺 Diary of a CEO Episode on AI: Youtube
    🎧 Listen to our other episodes at Limitless Podcast.
    🐥 Stay updated with us on Twitter: Follow Limitless_pod.
    📷 Connect with us on Instagram: Join Limitless_pod.
    💲 Check out our MyMoneyJA Discount: Access Discount

    👍 Like, subscribe, and share for more AI and technology insights.

    ⚠️ Disclaimer: The content in this video is for informational purposes only and is not professional tech advice. Always consult with a tech expert before making AI-related decisions.

    Support the show

    #102 - Nonsense Sentience, Condemning GPT-4chan, DeepFake bans, CVPR Plagiarism