
    The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research with Nur Ahmed

December 05, 2020

    Podcast Summary

• A few large corporations dominate AI research: The study reveals that a small number of companies control the majority of AI research, potentially impacting other organizations and the research community.

The power in AI research, specifically in deep learning, is heavily concentrated in the hands of a few large corporations. This is according to a recent paper co-authored by Nur Ahmed, a strategy PhD candidate at the Ivey Business School at Western University, Canada, and a research fellow at the Scotiabank Digital Banking Lab. The study was motivated by the concern that, while there is much discussion about the need to democratize AI and make it more accessible, there was little solid evidence on the current state of affairs. To fill this gap, the researchers examined major computer science conferences, identified via csrankings.org, and found that a small number of companies dominate the scene. This concentration of power in AI research could have consequences for other organizations and the broader research community. The study provides valuable insight into the current state of AI research and highlights the importance of addressing this concentration of power.

• Industry presence in AI research has grown significantly in the last decade: The share of papers with industry co-authors at AI conferences rose from around 0.2 to over 0.4 between 2010 and 2020, while the presence of elite universities saw no comparable increase, raising concerns about the potential impact on academic research and on tenure decisions for assistant professors.

The presence of large companies at AI conferences has increased significantly in the last decade, relative to non-AI conferences. This trend is evident in Figures 1 and 2 of the study, which show the share of papers with at least one company-affiliated co-author at AI conferences. The synthetic control method was used to provide more solid evidence, and the increase in industry affiliation was particularly pronounced at the largest conferences: at one of the biggest, the share of papers with at least one industry co-author jumped from around 0.2 to over 0.4 between 2010 and 2020. The presence of elite universities, by comparison, has not seen a similar increase. This trend concerns some observers, as it raises questions about the potential impact on academic research and on the tenure process for assistant professors in AI. (A minimal code sketch of this share computation appears after this summary.)

• Elite universities dominate AI research, leaving mid-ranked institutions behind: Elite universities collaborate with tech companies on AI research, while mid-ranked institutions lose ground for lack of resources and expertise. Corporations have increased their presence in AI research and focus on different types of research than universities do.

      While elite universities have increased their presence in Artificial Intelligence (AI) research through collaborations with large technology companies, universities ranked between 200 and 500 have lost ground. Large companies have the computing power and data, while elite universities have the expertise in deep learning. This trend is concerning as it may lead to less democratization and diversity in AI research. Despite the common belief that corporations have reduced R&D in recent years, the findings suggest that they have actually increased their presence in AI research. The analysis also revealed that companies focus on different types of AI research compared to universities, as shown in Figure 8 of the paper. These insights provide valuable information for policymakers, researchers, and industry professionals to promote more inclusive and diverse AI research and development.

• Disparity in Deep Learning Research between Large Companies and Non-Elite Universities: Large companies dominate deep learning research due to access to resources, while non-elite universities focus on traditional machine learning. Diversity in AI research is also a concern, with underrepresentation of non-elite universities potentially leading to less inclusive AI tools.

The TF-IDF analysis of papers presented at the AAAI conference reveals a divide in deep learning research between large companies and elite universities on one side and non-elite universities on the other. Large companies lead in deep learning methods, while non-elite universities are more active in traditional machine learning methods. This disparity may stem from the fact that deep learning research requires significant computing power and resources, which non-elite universities may lack. Another concern raised in the paper is the lack of diversity in AI research: the results suggest that AI research may be becoming less diverse, as large technology companies are less diverse than non-elite universities. Increasing representation from non-elite universities could help ensure that the AI tools developed are more inclusive and reflective of the real population. The paper also implies that governments may need to increase their efforts to provide computing power and resources, particularly to non-elite universities, to promote diversity and keep the field inclusive and accessible to all. (A sketch of such a TF-IDF comparison appears after this summary.)

• Large companies outpace even elite universities in AI research: Researchers are concerned about this growing gap and its potential impact on diversity and innovation in the field.

The gap between large companies and elite universities in AI research is significant and growing. Researchers are calling for more studies on the consequences of this trend, as it could have implications for diversity and innovation in the field. The data suggests that this divergence will continue over the next few years. The researchers were surprised by the extent of the gap: they had expected some presence from large companies, but not to the degree observed. The reasons behind this trend, beyond the involvement of large tech companies, require further investigation.

• Firms gaining prominence in AI research at conferences: The study reveals a growing presence of firms at major computer science conferences, raising concerns about a potential digital divide between elite and non-elite institutions.

The study highlights the increasing presence of firms in AI research at major computer science conferences, which could widen the gap between elite and non-elite institutions. The researchers identified this as an important area for further investigation, as it could have implications for the innovation ecosystem and potentially create a new form of digital divide. A limitation of the study was the lack of data from other conferences and journals, which the researchers plan to address in future work. They also plan to explore the consequences of this trend using machine learning and advanced statistical methods. Policy implications include the impact on startups that lack the resources to compete in AI research, and the potential consequences for universities and developing countries that may fall behind given the requirements of large amounts of computing and well-trained computer scientists.

• The de-democratization of AI: Large companies and elite universities dominate deep learning research due to their computational resources, potentially limiting diversity and innovation. International organizations and other universities should help bridge the gap, to the benefit of AI research.

      The "de-democratization of AI" is a growing concern as large companies and elite universities dominate the field of deep learning research due to their access to vast computational resources. This divide could lead to negative consequences, including a lack of diversity and innovation in the field. International organizations and other universities may need to find ways to help bridge this gap and ensure that more institutions and actors have equal opportunities to contribute to AI research. Listeners interested in this topic can read the paper "The de-democratization of AI" for more technical details. It's available to explore online. We hope you enjoyed this episode of "Let's Talk AI," and don't forget to check out our website, SkynitToday.com, for more articles on similar topics and to subscribe to our weekly newsletter. Be sure to leave us a rating and a review if you like the show, and stay tuned for future episodes.

    Recent Episodes from Last Week in AI

#172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
July 01, 2024

#171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
June 24, 2024

#170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
June 09, 2024

#169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
June 03, 2024

#168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
May 28, 2024

#167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
May 19, 2024

#166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
May 12, 2024

#165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
May 05, 2024

#164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
April 30, 2024

#163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

Note: apologies for this one coming out a few days late, got delayed in editing it -Andrey

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
April 24, 2024

    Related Episodes

With Great Power Comes Great Responsibility

    On this episode of The Great Indoors, Matt is recording straight from the Mobile World Congress in Los Angeles as the Official Podcast of the show. During this very special, first-ever, face-to-face edition of the podcast, Matt speaks with multiple guests including the Chief Revenue Officer of Verizon, Sampath Sowmyanarayan; the CTIO of Millicom, Xavier Rocoplan; and GSMA’s Head of Intelligence, Peter Jarich. Matt and his guests cover a wide range of topics during the episode, including the future of 5G technology, security in this new IoT world, and bridging the digital divide through diversity and inclusion initiatives.

This episode of The Great Indoors was produced by Quill.

Episode 283: Will AI take over the world and enslave humans to mine batteries for them?

    Welcome to the latest episode of our podcast, where we delve into the fascinating and sometimes terrifying world of artificial intelligence. Today's topic is AI developing emotions and potentially taking over the world.

    As AI continues to advance and become more sophisticated, experts have started to question whether these machines could develop emotions, which in turn could lead to them turning against us. With the ability to process vast amounts of data at incredible speeds, some argue that AI could one day become more intelligent than humans, making them a potentially unstoppable force.

    But is this scenario really possible? Are we really at risk of being overtaken by machines? And what would it mean for humanity if it were to happen?

    Join us as we explore these questions and more, with insights from leading experts in the field of AI and technology. We'll look at the latest research into AI and emotions, examine the ethical implications of creating sentient machines, and discuss what measures we can take to ensure that AI remains under our control.

    Whether you're a tech enthusiast, a skeptic, or just curious about the future of AI, this is one episode you won't want to miss. So tune in now and join the conversation!

P.S. AI wrote this description ;)

Gavin Uberti - Real-Time AI & The Future of AI Hardware - [Invest Like the Best, EP.356]
Today, my guest is 21-year-old Gavin Uberti, who dropped out of Harvard to build Etched, which is one of the most fascinating companies I've seen. The topic of our conversation is the ongoing revolution in artificial intelligence, and more specifically the chips and technology that power these incredible models. To date, general-purpose AI chips like Nvidia GPUs have powered the revolution, but Gavin's bet is that purpose-built chips, hard-coded for the underlying model architecture, will dramatically reduce the latency and cost of running models like GPT-4. We're about to embark on what Gavin calls the "largest infrastructure buildout since the industrial revolution", and I won't spoil what he thinks this will unlock for all of us. It is so uplifting to me that someone so young can be working on something so big. Please enjoy this great conversation with Gavin Uberti.

Check out Etched.AI. Listen to Founders Podcast. For the full show notes, transcript, and links to mentioned content, check out the episode page here.

-----

This episode is brought to you by Tegus. Tegus is the modern research platform for leading investors, and provider of Canalyst. Tired of calculating fully diluted shares outstanding? Access every publicly-reported data point and industry-specific KPI through their database of over 4,000 drivable global models hand-built by a team of sector-focused analysts, 35+ industry comp sheets, and Excel add-ins that let you use their industry-leading data in your own spreadsheets. Tegus' models automatically update each quarter, including hard-to-calculate KPIs like stock-based compensation and organic growth rates, empowering investors to bypass the friction of sourcing, building, and updating models. Make efficiency your competitive advantage and take back your time today. As a listener, you can trial Canalyst by Tegus for free by visiting tegus.co/patrick.

-----

Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here.

Follow us on Twitter: @patrick_oshag | @JoinColossus

Show Notes:

(00:03:41) - (first question) - Born too late to explore the world, too early to explore the stars
(00:05:59) - Interpreting and defining superintelligence
(00:07:20) - Excitement we can have for an AI driven future
(00:09:25) - Overview and basic terminology of the transformers that power AI
(00:15:53) - What Q* is and the rumors around it
(00:20:41) - Robotics, machinery, and what's interesting about them
(00:23:18) - The problem of latency and computing power
(00:28:55) - Needing to build physical infrastructure that doesn't exist
(00:32:18) - Inference and training AI models
(00:36:00) - Major stages of chip design and the upper limits of speed
(00:45:56) - Customers for billion dollar generative AI models
(00:48:56) - A sci-fi-esque reality and the politicization of AI
(00:50:38) - The Bitter Lesson and the implications of powerful AI models
(00:56:27) - The most important companies in the AI space today
(01:01:52) - Strategically building a defensible AI product
(01:04:07) - Software development and why other AI companies fail
(01:06:51) - Specialization and chip performance improvement
(01:15:34) - Why the transformer remains the leading architecture
(01:17:26) - A proliferation of models beyond the major players and data access
(01:21:19) - The kindest thing anyone has ever done for Gavin

#151 Asa Cooper: How Will We Know If AI Is Fooling Us?

This episode is sponsored by Celonis, the global leader in process mining. AI has landed and enterprises are adapting to give customers slick experiences and teams the technology to deliver. The road is long, but you're closer than you think. Your business processes run through systems, creating data at every step. Celonis reconstructs this data to generate Process Intelligence: a common business language, so AI knows how your business flows, across every department, every system and every process. With AI solutions powered by Celonis, enterprises get faster, more accurate insights, a new level of automation potential, and a step change in productivity, performance and customer satisfaction. Process Intelligence is the missing piece in the AI-enabled tech stack.

     

    Go to https://celonis.com/eyeonai to find out more.

     

    Welcome to episode 151 of the ‘Eye on AI’ podcast. In this episode, host Craig Smith sits down with Asa Cooper, a postdoctoral researcher at NYU, who is at the forefront of language model safety. 

    This episode takes us on a journey through the complexities of AI situational awareness, the potential for consciousness in language models, and the future of AI safety research.

    Craig and Asa delve into the nuances of AI situational awareness and its distinction from sentience. Asa, with his rich background in NLP and AI safety from Edinburgh University, shares insights from his post-doc work at NYU, discussing collaborative efforts on a paper that has garnered attention for its take on situational awareness in large language models (LLMs). 

    We explore the economic drivers behind creating AI with such capabilities and the role of scaling versus algorithmic innovation in achieving this milestone. We also delve into the concept of agency in LLMs, the challenges of post-deployment monitoring, and the effectiveness of current measures in detecting situational awareness.

To wrap things up, we break down the importance of source trustworthiness and the model's ability to discern reliable information, a critical aspect of AI safety and functionality, so make sure to watch till the end.

     

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Preview and Introduction

    (02:30) Asa's NLP Expertise and the Safety of Language Models

    (06:05) Breaking Down AI's Situational Awareness 

    (13:44) Evolution of AI: Predictive Models to AI Coworkers

    (20:29) New Frontier in AI Development?

    (27:14) Measuring AI's Awareness

    (33:49) Innovative Experiments with LLMs

    (40:51) The Consequences of Detecting Situational Awareness in AI

    (44:07) How To Train AI On Trusted Sources

    (49:52) What Is The Future of AI Training?

(56:35) AI Safety: Public Concerns and the Path Forward