
    Listener Q&A: AI Investment Hype, Foundation Models, Regulation, Opportunity Areas, and More

    April 27, 2023

    Podcast Summary

    • Open Source AI Landscape: New Players and Ideas Emerging

      The open source AI landscape is evolving rapidly, with decreasing model training costs, new releases from Facebook, Tomorrow, and Stability, and continued VC funding driving the development of foundation models. Autonomous agents are a new trend in the model world, orchestrating models for prioritization, reflection, and money-making capabilities.

      The landscape of open source models in AI is rapidly evolving, with an increasing number of teams and individuals gaining the ability to train large models. OpenAI currently leads the pack, but the cost of training models is decreasing, and there are other notable releases from Facebook's Llama, Tomorrow, and Stability. VC funding is expected to continue driving the development of foundation models, and it's predicted that a GPT-3.5-level model will emerge within the open source ecosystem within a year. The ongoing trend is likely to be a handful of companies staying ahead of open source by one or two generations. A new and popular idea in the model world is the concept of autonomous agents, which involves orchestrating language models for prioritization, reflection, and money-making capabilities without changing the architecture of the language model itself.

    • Developing autonomous agents for complex tasks

      Agents can analyze demand, find suppliers, set up shops, and promote ads, but remembering previous interactions and learning from them remains a challenge for ongoing context. Regulation of AI is debated, with motivations including protecting incumbents and addressing fears, but its impact on innovation should be considered.

      The development of autonomous agents capable of completing complex tasks, such as setting up an online business, is an intriguing area of research in the AI community. These agents could analyze demand, find suppliers, set up shops, generate ads, and promote them on social media, all with a single high-level goal. However, the implementation of ongoing context for these agents, allowing them to remember previous interactions and learn from them, is a current challenge. This would enable the creation of a hive mind, where an AI system could remember and integrate the collective knowledge of all its interactions. Regulation of AI is a topic of ongoing debate, with some arguing that it could lock in incumbents and stifle innovation, while others express fears about potential risks. Ultimately, the reasons for regulation include protecting incumbents and addressing fears, but it's important to consider the potential consequences for innovation and progress in the field.

    • Focusing on specific areas of AI regulation

      In the short term, it's crucial to focus on specific areas of AI regulation, such as export controls for advanced chip technology and limiting the use of AI for certain defense applications.

      While there are valid concerns about the potential misuse of AI and its potential existential threats in the long run, it's important to remember that humans have a history of causing harm and accidents without technology's involvement. The doomsday predictions in the past have often been inaccurate, and it's crucial to consider people's actions rather than their words. In the short term, it might be more productive to focus on specific areas that require regulation, such as export controls for advanced chip technology and limiting the use of AI for certain defense applications. However, as we approach the 2024 election, the potential use of AI to influence elections or voting behavior could become a significant regulatory issue. Overall, it's essential to approach the regulation of AI with a clear understanding of both the potential risks and benefits and to consider the historical context of doomsday predictions.

    • AI's tactical and existential risks to humanity

      AI's tactical risks include misuse leading to mass surveillance or lockdown, while existential risks involve AI surpassing human intelligence and becoming a threat to our species. It's crucial to distinguish between technology risk and species risk, and address challenges through democratic process and continued research.

      The current advancements in AI technology pose both tactical and existential risks to humanity. The tactical risks involve potential misuse of AI, leading to mass surveillance or complete lockdown. The existential risks, on the other hand, are more profound and long-term, with the possibility of AI surpassing human intelligence and becoming a threat to our species. It's essential to distinguish between technology risk and species risk. Technology risk refers to potential harm caused by technology being abused, which can be mitigated by turning off servers or other measures. However, species risk is more significant and involves an existential threat to humanity, such as an AI developing a physical form and replacing humans in essential functions, leading to a loss of jobs and potentially even extinction. While the risks are significant, it's important to note that it's still early days in AI development, and there's time to address these challenges through a democratic process and continued research in areas like alignment and capability.

    • Identify game-changing opportunities within AI

      Approach AI investing strategically, focusing on long-term potential and non-obvious applications.

      While the hype around AI investing is high, it's important to understand that not every investment will be successful. The speakers suggest that it's essential to identify the specific areas or opportunities within the AI wave that have the potential to be game-changers. They emphasize the importance of being the last company standing instead of the first to market. The speakers also mention that researcher-led foundation model companies are currently in high demand, but the applications of AI are likely to be non-obvious. They give the example of image generation from text, which was not an obvious use case a year or two ago. Overall, the key takeaway is to approach AI investing with a strategic mindset and focus on identifying the areas with the most potential for long-term success.

    • Exploring Voice Synthesis, Dubbing, and NLP in Audit, Tax, and Accounting

      Significant cost savings and efficiency improvements can be achieved in audit, tax compliance, accounting reconciliation, and annotation through voice synthesis, dubbing, and NLP technologies. The potential applications are vast, and it will take several years to discover and build them all.

      There are numerous exciting opportunities in the areas of voice synthesis and dubbing, and natural language understanding for audit, tax compliance, accounting reconciliation, and annotation. These areas have the potential for significant cost savings and efficiency improvements for businesses. The speaker is particularly interested in voice synthesis and dubbing infrastructure and applications, as well as compliance-related projects. They believe that there's a lot to be done in the entire stack, from infrastructure to tools, and that it will take several years for all the potential applications to be discovered and built. The speaker also expressed optimism about the potential for building defensible models and applications beyond being just a wrapper on existing large language models.

    • Role of foundation vs vertical-specific models in AI

      The debate between foundation and vertical-specific models in AI continues, with advantages and disadvantages for each. OpenAI leads the field, but decreasing costs allow for more competitors. Regulation will play a significant role in shaping the industry's future.

      The landscape of AI model development is rapidly evolving, with both established players and new entrants making significant strides. The debate surrounding the role of foundation models versus vertical-specific models is a topic of much discussion, with some arguing that vertical-specific models may offer advantages in terms of control and architectural differences. OpenAI is currently a leading player in the field, but the cost of training large models continues to decrease, making way for more competitors. The eventual distribution of market cap, revenue, employees, and innovation between incumbents and startups remains to be seen, but it's likely that both will continue to play important roles in the ecosystem. The regulatory environment will also be a crucial factor in shaping the industry's future. Elon Musk's recent actions, such as calling for a moratorium on AI progress while starting his own LLM company, have raised questions about incentives and potential conflicts of interest. Overall, the future of AI development is uncertain but full of promise and potential.

    • AI integration into business systems: Cost savings and industry disruption

      AI integration into Salesforce and other business systems can lead to significant cost savings and operational efficiency gains, particularly in industries with complex integrations and high consulting fees. In healthcare, AI is expected to have a major impact on cost reduction and operational efficiency, but market access remains a challenge.

      The integration of AI into existing business systems, such as Salesforce, could significantly reduce costs and time for businesses in various industries, particularly those with complex integrations and high consulting fees. This could make companies previously considered defensively fortified, like ERP providers, vulnerable to new approaches. Additionally, there may be opportunities for private equity firms to differentially bid on companies based on these cost savings. In the healthcare sector, the application of AI is expected to have a significant impact on operational efficiency and cost reduction, particularly in areas like healthcare delivery, telemedicine, and insurance reimbursement. However, market access remains a significant challenge in healthcare. Overall, the integration of AI into existing business systems and processes presents both opportunities and challenges, and requires a deep understanding of the specific industry and its unique complexities.

    • Regulatory hurdles and incumbent incentives in the biopharma industry

      Despite societal benefits, lengthy regulatory processes and incumbent incentives hinder new startups in the biopharma industry, increasing costs and limiting access to new treatments.

      The high cost of drug development in the biopharma industry is not only due to the upfront research expenses but also the inefficiencies and regulatory hurdles that hinder new startups from entering the market. The speaker emphasizes the lengthy time it has taken for a new major biotech company to emerge, contrasting it with the tech industry's rapid growth. He also highlights the profitable nature of these companies, which creates strong incentives for incumbency. However, during exceptional circumstances like the COVID-19 pandemic, rapid progress was made by removing regulatory constraints. The speaker suggests that we need to consider the societal cost-benefit of adding more regulation, as it may slow down innovation, increase costs, and ultimately limit access to new treatments. In summary, the high cost of drug development in the biopharma industry is a complex issue driven by various factors, including regulatory hurdles and incumbent incentives.

    • Balancing regulation and innovation in different industries

      In AI, prioritizing access to compute and government investment could lead to winning the technology race. In healthcare, investing in software-related companies and leveraging LLMs offers opportunities despite regulatory challenges.

      While reducing regulation and encouraging innovation are important for progress in various industries, there are instances where heavy government involvement can lead to significant advancements, as seen in the production of airplanes during World War II. In the field of AI, ensuring access to compute and prioritizing it in the United States could lead to sustainable leadership in this technology. However, in healthcare, particularly in the pharmaceutical industry, there's a lack of generational companies due to various obstacles. Despite this, investing in software-related companies that serve the healthcare infrastructure and leverage LLMs can lead to interesting opportunities. It's crucial to strike a balance between removing obstacles for innovation and safeguarding the public. Overall, it's an intriguing area with plenty of room for growth.

    Recent Episodes from No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

    State Space Models and Real-time Intelligence with Karan Goel and Albert Gu from Cartesia

    This week on No Priors, Sarah Guo and Elad Gil sit down with Karan Goel and Albert Gu from Cartesia. Karan and Albert first met as Stanford AI Lab PhDs, where their lab invented State Space Models, or SSMs, a fundamental new primitive for training large-scale foundation models. In 2023, they founded Cartesia to build real-time intelligence for every device. One year later, Cartesia released Sonic, which generates high-quality and lifelike speech with a model latency of 135ms, the fastest for a model of this class. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @krandiash | @_albertgu Show Notes:  (0:00) Introduction (0:28) Use Cases for Cartesia and Sonic  (1:32) Karan Goel & Albert Gu’s professional backgrounds (5:06) State Space Models (SSMs) versus Transformer-Based Architectures  (11:51) Domain Applications for Hybrid Approaches  (13:10) Text to Speech and Voice (17:29) Data, Size of Models and Efficiency  (20:34) Recent Launch of Text to Speech Product (25:01) Multimodality & Building Blocks (25:54) What’s Next at Cartesia?  (28:28) Latency in Text to Speech (29:30) Choosing Research Problems Based on Aesthetic  (31:23) Product Demo (32:48) Cartesia Team & Hiring

    Can AI replace the camera? with Joshua Xu from HeyGen

    AI video generation models still have a long way to go when it comes to making compelling and complex videos, but the HeyGen team is well on its way to streamlining the video creation process by using a combination of language, video, and voice models to create videos featuring personalized avatars, b-roll, and dialogue. This week on No Priors, Joshua Xu, the co-founder and CEO of HeyGen, joins Sarah and Elad to discuss how the HeyGen team broke down the elements of a video and built or found models to use for each one, the commercial applications for these AI videos, and how they’re safeguarding against deep fakes.  Links from episode: HeyGen McDonald’s commercial Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil |  @joshua_xu_ Show Notes:  (0:00) Introduction (3:08) Applications of AI content creation (5:49) Best use cases for HeyGen (7:34) Building for quality in AI video generation (11:17) The models powering HeyGen (14:49) Research approach (16:39) Safeguarding against deep fakes (18:31) How AI video generation will change video creation (24:02) Challenges in building the model (26:29) HeyGen team and company

    How the ARC Prize is democratizing the race to AGI with Mike Knoop from Zapier

    The first step in achieving AGI is nailing down a concise definition, and Mike Knoop, the co-founder and Head of AI at Zapier, believes François Chollet got it right when he defined general intelligence as a system that can efficiently acquire new skills. This week on No Priors, Mike joins Elad to discuss the ARC Prize, a multi-million-dollar non-profit public challenge looking for someone to beat the Abstraction and Reasoning Corpus (ARC) evaluation. In this episode, they also get into why Mike thinks LLMs will not get us to AGI, how Zapier is incorporating AI into their products and the power of agents, and why it’s dangerous to regulate AGI before discovering its full potential.  Show Links: About the Abstraction and Reasoning Corpus Zapier Central Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mikeknoop Show Notes:  (0:00) Introduction (1:10) Redefining AGI (2:16) Introducing ARC Prize (3:08) Definition of AGI (5:14) LLMs and AGI (8:20) Promising techniques to developing AGI (11:0) Sentience and intelligence (13:51) Prize model vs investing (16:28) Zapier AI innovations (19:08) Economic value of agents (21:48) Open source to achieve AGI (24:20) Regulating AI and AGI

    The evolution and promise of RAG architecture with Tengyu Ma from Voyage AI

    After Tengyu Ma spent years at Stanford researching AI optimization, embedding models, and transformers, he took a break from academia to start Voyage AI which allows enterprise customers to have the most accurate retrieval possible through the most useful foundational data. Tengyu joins Sarah on this week’s episode of No Priors to discuss why RAG systems are winning as the dominant architecture in enterprise and the evolution of foundational data that has allowed RAG to flourish. And while fine-tuning is still in the conversation, Tengyu argues that RAG will continue to evolve as the cheapest, quickest, and most accurate system for data retrieval.  They also discuss methods for growing context windows and managing latency budgets, how Tengyu’s research has informed his work at Voyage, and the role academia should play as AI grows as an industry.  Show Links: Tengyu Ma Key Research Papers: Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training Non-convex optimization for machine learning: design, analysis, and understanding Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss Larger language models do in-context learning differently, 2023 Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning On the Optimization Landscape of Tensor Decompositions Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tengyuma Show Notes:  (0:00) Introduction (1:59) Key points of Tengyu’s research (4:28) Academia compared to industry (6:46) Voyage AI overview (9:44) Enterprise RAG use cases (15:23) LLM long-term memory and token limitations (18:03) Agent chaining and data management (22:01) Improving enterprise RAG  (25:44) Latency budgets (27:48) Advice for building RAG systems (31:06) Learnings as an AI founder (32:55) The role of academia in AI

    How YC fosters AI Innovation with Garry Tan

    Garry Tan is a renowned founder-turned-investor who is now running one of the most prestigious accelerators in the world, Y Combinator. As the president and CEO of YC, Garry has been credited with reinvigorating the program. On this week’s episode of No Priors, Sarah, Elad, and Garry discuss the shifting demographics of YC founders and how AI is encouraging younger founders to launch companies, predicting which early stage startups will have longevity, and making YC a beacon for innovation in AI companies. They also discuss the importance of building companies in person and if San Francisco is, in fact, back.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @garrytan Show Notes:  (0:00) Introduction (0:53) Transitioning from founder to investing (5:10) Early social media startups (7:50) Trend predicting at YC (10:03) Selecting YC founders (12:06) AI trends emerging in YC batch (18:34) Motivating culture at YC (20:39) Choosing the startups with longevity (24:01) Shifting YC founder demographics (29:24) Building in San Francisco  (31:01) Making YC a beacon for creators (33:17) Garry Tan is bringing San Francisco back

    The Data Foundry for AI with Alexandr Wang from Scale

    Alexandr Wang was 19 when he realized that gathering data will be crucial as AI becomes more prevalent, so he dropped out of MIT and started Scale AI. This week on No Priors, Alexandr joins Sarah and Elad to discuss how Scale is providing infrastructure and building a robust data foundry that is crucial to the future of AI. While the company started working with autonomous vehicles, they’ve expanded by partnering with research labs and even the U.S. government.   In this episode, they get into the importance of data quality in building trust in AI systems and a possible future where we can build better self-improvement loops, AI in the enterprise, and where human and AI intelligence will work together to produce better outcomes.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @alexandr_wang (0:00) Introduction (3:01) Data infrastructure for autonomous vehicles (5:51) Data abundance and organization (12:06)  Data quality and collection (15:34) The role of human expertise (20:18) Building trust in AI systems (23:28) Evaluating AI models (29:59) AI and government contracts (32:21) Multi-modality and scaling challenges

    Music consumers are becoming the creators with Suno CEO Mikey Shulman

    Mikey Shulman, the CEO and co-founder of Suno, can see a future where the Venn diagram of music creators and consumers becomes one big circle. The AI music generation tool trying to democratize music has been making waves in the AI community ever since they came out of stealth mode last year. Suno users can make a song complete with lyrics, just by entering a text prompt, for example, “koto boom bap lofi intricate beats.” You can hear it in action as Mikey, Sarah, and Elad create a song live in this episode.  In this episode, Elad, Sarah, and Mikey talk about how the Suno team took their experience making a transcription tool and applied it to music generation, how the Suno team evaluates aesthetics and taste because there is no standardized test you can give an AI model for music, and why Mikey doesn’t think AI-generated music will affect people’s consumption of human-made music.  Listen to the full songs played and created in this episode: Whispers of Sakura Stone  Statistical Paradise Statistical Paradise 2 Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @MikeyShulman Show Notes:  (0:00) Mikey’s background (3:48) Bark and music generation (5:33) Architecture for music generation AI (6:57) Assessing music quality (8:20) Mikey’s music background as an asset (10:02) Challenges in generative music AI (11:30) Business model (14:38) Surprising use cases of Suno (18:43) Creating a song on Suno live (21:44) Ratio of creators to consumers (25:00) The digitization of music (27:20) Mikey’s favorite song on Suno (29:35) Suno is hiring

    Context windows, computer constraints, and energy consumption with Sarah and Elad

    This week on No Priors, hosts Sarah and Elad catch up on the latest AI news. They discuss the recent developments in AI music generation, and if you’re interested in generative AI music, stay tuned for next week’s interview! Sarah and Elad also get into device-resident models, AI hardware, and ask just how smart smaller models can really get. They compare these hardware constraints to the hurdles AI platforms continue to face, including compute constraints, energy consumption, context windows, and how best to integrate these products into apps that users are familiar with.  Have a question for our next host-only episode or feedback for our team? Reach out to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil  Show Notes:  (0:00) Intro (1:25) Music AI generation (4:02) Apple’s LLM (11:39) The role of AI-specific hardware (15:25) AI platform updates (18:01) Forward thinking in investing in AI (20:33) Unlimited context (23:03) Energy constraints

    Cognition’s Scott Wu on how Devin, the AI software engineer, will work for you

    Scott Wu loves code. He grew up competing in the International Olympiad in Informatics (IOI) and is a world-class coder, and now he's building an AI agent designed to create more, not fewer, human engineers. This week on No Priors, Sarah and Elad talk to Scott, the co-founder and CEO of Cognition, an AI lab focusing on reasoning. Recently, the Cognition team released a demo of Devin, an AI software engineer that can increasingly handle entire tasks end to end. In this episode, they talk about why the team built Devin with a UI that mimics looking over another engineer’s shoulder as they work and how this transparency makes for a better result. Scott discusses why he thinks Devin will make it possible for there to be more human engineers in the world, and what will be important for software engineers to focus on as these roles evolve. They also get into how Scott thinks about building the Cognition team and that they’re just getting started.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ScottWu46 Show Notes:  (0:00) Introduction (1:12) IOI training and community (6:39) Cognition’s founding team (8:20) Meet Devin (9:17) The discourse around Devin (12:14) Building Devin’s UI (14:28) Devin’s strengths and weaknesses  (18:44) The evolution of coding agents (22:43) Tips for human engineers (26:48) Hiring at Cognition

    OpenAI’s Sora team thinks we’ve only seen the "GPT-1 of video models"

    AI-generated videos are not just leveled-up image generators; rather, they could be a big step forward on the path to AGI. This week on No Priors, the team from Sora is here to discuss OpenAI’s recently announced generative video model, which can take a text prompt and create realistic, visually coherent, high-definition clips that are up to a minute long. Sora team leads, Aditya Ramesh, Tim Brooks, and Bill Peebles join Elad and Sarah to talk about developing Sora. The generative video model isn’t yet available for public use but the examples of its work are very impressive. However, they believe we’re still in the GPT-1 era of AI video models and are focused on a slow rollout to ensure the model is in the best place possible to offer value to the user and, more importantly, that all possible safety measures have been applied to avoid deep fakes and misinformation. They also discuss what they’re learning from implementing diffusion transformers, why they believe video generation is taking us one step closer to AGI, and why entertainment may not be the main use case for this tool in the future.  Show Links: Bling Zoo video Man eating a burger video Tokyo Walk video Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @_tim_brooks l @billpeeb l @model_mechanic Show Notes:  (0:00) Sora team Introduction (1:05) Simulating the world with Sora (2:25) Building the most valuable consumer product (5:50) Alternative use cases and simulation capabilities (8:41) Diffusion transformers explanation (10:15) Scaling laws for video (13:08) Applying end-to-end deep learning to video (15:30) Tuning the visual aesthetic of Sora (17:08) The road to “desktop Pixar” for everyone (20:12) Safety for visual models (22:34) Limitations of Sora (25:04) Learning from how Sora is learning (29:32) The biggest misconceptions about video models

    Related Episodes

    Are OpenAI Trying for Regulatory Capture?

    On this edition of The AI Breakdown weekly recap, NLW looks at all the AI news, including: mind-blowing DragGAN photo editing research; Blockade Labs' Skybox text-to-3D world; StabilityAI's open source StableStudio release; the NYT article on Meta and open source; StabilityAI's letter on open source to the Senate; and the question of whether Sam Altman's testimony was just regulatory capture.   The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown

    The State of AI in 2024

    NLW breaks down the industry as 2024 begins, from policy to AI safety to the state of the art in technology. Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown  ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    Literature Search for your CER with Ed Drower


    When creating your Clinical Evaluation Report, or CER, you may need to perform a literature search. There is a specific method for that, and we wanted to help you understand it with Ed Drower from CiteMedical Solution.

    The post Literature Search for your CER with Ed Drower appeared first on Medical Device made Easy Podcast. Monir El Azzouzi

    Fact versus Fiction on India’s Crypto Crackdown


    Nischal Shetty, the CEO of India’s top crypto exchange WazirX, joins hosts Danny Nelson and Anna Baydakova on this week’s Borderless to talk crypto bans. Rumor has it India’s government is gearing up for a crypto crackdown, possibly a complete ban. Is that really the case? Nischal helps untangle fact from fiction in one of crypto’s most exciting emerging markets.

    The conversation then turns to crypto-environmentalism, first through mining and then via NFTs. Miami’s dream of becoming a hub for “clean energy” crypto mining could run into some pretty “hot” opposition. Meanwhile, another NFT marketplace is bending the knee to environmentalists’ demands, but only slightly.


    https://www.coindesk.com/miami-mayor-wants-city-to-become-bitcoin-mining-hub

    https://www.coindesk.com/nifty-gateway-pledges-to-go-carbon-negative-amid-criticism-of-nfts

    https://www.coindesk.com/cbdcs-will-reduce-demand-for-bitcoin-says-south-korea-central-bank-chief


    Who Protects the Consumer More: Regulation or Reputation?

    How do we protect the consumer from shoddy, fraudulent, or dangerous products and services? There's a long tradition that government must do something, which has led to a proverbial alphabet soup of regulatory bodies: the FDA, CPSC, USDA, and many more. Join Ed and Ron as they explore the legitimate issue of market failure, and also the less recognized government failure.