
    academic research

    Explore "academic research" with insightful episodes like "Does factor investing still work?", "EP 121: Faster and More Accurate Results From ChatGPT with ScholarAI", "EP 89: AI's Role in Responsible Research", "Things Could Be Better" and "5. What Do Tom Sawyer and the Founder of Duolingo Have in Common?" from podcasts like "Unhedged", "Everyday AI Podcast – An AI and ChatGPT Podcast", "Short Wave" and "No Stupid Questions", and more!

    Episodes (6)

    Does factor investing still work?

    Factor investing came out of academic work in the 1990s and offered a way to pick stocks without relying on judgments about stories or sectors. It’s had good years and bad years, but has recently struggled to do more than match the market. Today on the show, Ethan Wu describes his visit to see AQR’s Cliff Asness, who has been using this style of investing for decades. Also, we go long Big Tech AI spending and short gold.
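
    As a toy illustration of the "rules, not stories" idea (not AQR's methodology, and with made-up tickers and numbers), the Python sketch below scores a hypothetical four-stock universe on a classic value signal and lets the ranking pick the longs mechanically:

    ```python
    # Toy factor screen: rank stocks on book-to-market ("value") and buy
    # the cheapest half. All tickers and figures here are invented.
    import pandas as pd

    stocks = pd.DataFrame({
        "ticker": ["AAA", "BBB", "CCC", "DDD"],
        "price": [50.0, 120.0, 8.0, 300.0],
        "book_value_per_share": [40.0, 30.0, 10.0, 60.0],
    })

    # Higher book-to-market = statistically "cheap"; no story required.
    stocks["book_to_market"] = stocks["book_value_per_share"] / stocks["price"]

    # The rule, not a narrative about the company, selects the longs.
    longs = stocks.nlargest(2, "book_to_market")
    print(longs[["ticker", "book_to_market"]])
    ```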


    Link: Cliff Asness: AI is ‘still just statistics’


    For a free 30-day trial to the Unhedged newsletter go to: https://www.ft.com/unhedgedoffer


    Follow Ethan Wu (@ethanywu) and Katie Martin (@katie_martin_fx) on X. You can email Ethan at ethan.wu@ft.com.


    Read a transcript of this episode on FT.com




    Hosted on Acast. See acast.com/privacy for more information.


    EP 121: Faster and More Accurate Results From ChatGPT with ScholarAI

    ChatGPT plugins are a crucial way to get more reliable and accurate information out of ChatGPT. Hallucinations are common when prompting, and plugins help reduce them by tethering output to real sources; ScholarAI is one plugin we recommend for exactly that. Damon Burrow, Co-Founder & CSO of ScholarAI, joins us to talk about the ScholarAI plugin and how to get better information out of ChatGPT.
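
    As a rough sketch of the grounding idea behind plugins like ScholarAI, the Python below retrieves paper abstracts first and then asks the model to answer only from those sources, with citations. The search_papers helper is a hypothetical stand-in, not ScholarAI's actual API:

    ```python
    # Minimal retrieval-grounding sketch: fetch abstracts, then constrain
    # the model to them. search_papers() is hypothetical; wire it to a
    # real literature-search service (e.g., ScholarAI) in practice.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def search_papers(query: str, limit: int = 3) -> list[dict]:
        """Hypothetical stand-in for a scholarly search API returning
        dicts with "title", "url", and "abstract" keys."""
        raise NotImplementedError("plug in a real literature-search API")

    def grounded_answer(question: str) -> str:
        papers = search_papers(question)
        # Pack the retrieved abstracts into the prompt so the model cites
        # verifiable sources instead of free-associating.
        sources = "\n\n".join(
            f"[{i + 1}] {p['title']} ({p['url']})\n{p['abstract']}"
            for i, p in enumerate(papers)
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer using ONLY the numbered sources below, "
                            "citing them like [1]. If they do not cover the "
                            "question, say so.\n\n" + sources},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content
    ```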

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Damon and Jordan questions about ChatGPT
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    [00:01:30] Daily AI news
    [00:04:40] About Damon and ScholarAI
    [00:08:10] ScholarAI allows for up-to-date info
    [00:13:30] ScholarAI plugin walkthrough
    [00:18:05] Other use cases for ScholarAI
    [00:21:05] Plugins to pair with ScholarAI
    [00:22:00] How ScholarAI came to be
    [00:25:00] Audience questions
    [00:28:25] Final takeaway on ScholarAI

    Topics Covered in This Episode:
    1. Importance of Accuracy in AI Systems
    2. ScholarAI Plugin for Reliable Information Retrieval
    3. ScholarAI Plugin Demonstration
    4. Addressing Hallucinations and Lack of Transparency in AI Models
    5. Infusing Trust and Accuracy in AI Systems with ScholarAI

    Keywords:
    generative AI systems, electricity consumption, water for cooling, energy consumption, Argentina's energy usage, electricity usage in AI systems, power usage in generative AI, NVIDIA, AI and ML research, radiation therapy, cancer tumors, large language models, research and commercial settings, accuracy in AI, ScholarAI systems, creativity in generative AI, grounding in truth, domain expertise, peer-reviewed literature, semantic search, synthesizing information, multiple sources, BARD assistant, drafting emails, negotiating job offers, integration into smartphones, data centers, "show me diagrams", visual learning, simplifying information, ChatGPT, ScholarAI plugin, CheckCVT, citing papers, real-time information access, cutoff dates, context window, providing necessary information for ChatGPT, accessing abstracts, paper summaries, user demographics, due diligence, assessing technologies, mergers and acquisitions, journalists, COVID-19 pandemic, misinformation, hallucinations in language models, trust in AI-generated responses, transparency, tethering AI output to reliable sources, hyperlinks to sources, professional knowledge work

    EP 89: AI's Role in Responsible Research

    How can we use AI for research without getting false information, and how can it help us find exactly what we need without taking so long? Today Avi Staiman, Founder of SciWriter.ai, joins us to discuss what the future of research with AI will look like.

    Newsletter: Sign up for our free daily newsletter
    More on this: Episode Page
    Join the discussion: Ask Avi and Jordan questions about AI and research
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    [00:01:25] Daily AI news
    [00:05:03] About Avi and SciWriter
    [00:08:59] Traditional researching takes too long
    [00:11:31] Using AI in researching
    [00:14:21] Researching with AI after publishers block access
    [00:18:01] Issues when you don't research with AI properly
    [00:21:48] How to responsibly use AI
    [00:25:15] Free resource for better researching with AI

    Topics Covered in This Episode:
    - Challenges faced by researchers in the current research publication process:
      - Lengthy duration of research studies before entering the writing stage
      - Numerous rounds of back and forth with publishers
      - Need for a balance between quick publication and thorough review
    - Negative implications of using generative AI in academia:
      - Example 1: A professor fails students after using ChatGPT to confirm that their papers were written by AI, leading to a student rebellion
      - Example 2: Researchers copy and paste from ChatGPT without proper review, raising concerns about the peer-review process
    - Torn feelings about the positive aspects and potential problems of technology in research
    - Importance of education and open dialogue on responsible use of generative AI
    - Benefits of access to information for individuals outside of the publishing realm
    - Emphasis on accuracy in social science research and the negative impact of mistakes
    - Discussion on publishers blocking large language models from accessing their information and its impact on model development
    - Challenges of limited access to information due to paywalls and licensing restrictions
    - The opportunity to use generative AI for verified and important information
    - The potential negative effects of regurgitating content from platforms like Reddit
    - Advantages of collaboration between academic publishers and AI companies to turn research into life-saving applications
    - Acknowledgement of inaccuracies and hallucinations in generative AI tools
    - Caution against substituting generative AI for accurate information in scientific research and publication

    Keywords:
    researchers, study, writing stage, time constraints, students, frustration, professors, initial draft, rewrite, delays, publisher, back and forth, acceptance, formatting, clunky process, quick publication, generative AI, academic purposes, ChatGPT, peer-reviewed process, errors, technology, responsible use, access to information, accuracy, social science research, publishers, large language models, licensing restrictions, original research, hallucinations, language tools, scientific research, responsible research, academic workflow, research veracity, integrity

    Things Could Be Better

    Are humans ever satisfied? Two social psychologists, Ethan Ludwin-Peery and Adam Mastroianni, fell down a research rabbit hole and accidentally answered a version of this very question. After conducting several studies, the pair found that when asked how things could be different, people tend to give one kind of answer, regardless of how the question is asked or how good life felt when they were asked. Short Wave's Scientist in Residence Regina G. Barber digs into the research, and how it might reveal a fundamental law of psychology about human satisfaction.

    Learn more about sponsor message choices: podcastchoices.com/adchoices

    NPR Privacy Policy