
    EP 266: Stop making these 7 Large Language Model mistakes. Best practices for ChatGPT, Gemini, Claude and others

    May 07, 2024

    Podcast Summary

    • New developments in the AI industry: Stay informed about the latest news and best practices in the AI industry to maximize the potential of large language models.

      There are common mistakes being made when it comes to using large language models, and it's essential to be aware of them to maximize their potential. Apple is reportedly developing new chips for AI software and data centers, which could lead to more efficient and powerful AI processing. OpenAI, the company behind ChatGPT, is moving closer to launching a search engine to compete with Google, and they have recently moved all chat data to chatgpt.com to prepare for this launch. Additionally, the mysterious GPT-2 chatbot model has been re-released. These developments demonstrate the growing importance and competition in the AI industry. To make the most of these tools, it's crucial to stay informed about the latest news and best practices. Sign up for Everyday AI's free daily newsletter to stay updated and learn practical advice for using large language models to boost your career, business, and everyday life.

    • OpenAI's new chatbots and Microsoft's MAI-1: OpenAI releases smaller but powerful chatbot versions, and Microsoft unveils a large language model of its own, both signaling continued advances in AI technology.

      There are new developments in the world of AI models, with OpenAI releasing two new versions of their GPT-2 chatbot and Microsoft announcing the creation of their own large language model, MAI-1. OpenAI's new models, named "I'm a good GPT-2 chatbot" and "I'm also a good GPT-2 chatbot," are likely smaller, powerful versions that could be used for future free versions of ChatGPT and search. Microsoft's MAI-1, with around 500 billion parameters, is significantly larger than Microsoft's previous models and puts it in direct competition with OpenAI, Google, Anthropic, and Meta's state-of-the-art AI models. Microsoft's investment in MAI-1 could be seen as an attempt to prove its independence from OpenAI and reduce reliance on the company for future AI developments. The development is being overseen by Mustafa Suleyman, the DeepMind co-founder and former CEO and co-founder of Inflection AI. The future implications for popular tools like Copilot are yet to be seen, as users may have the option to choose between different models. It's an exciting time in the world of AI, with companies continuously pushing the boundaries of what's possible.

    • Understanding the knowledge cutoff in large language models: Being aware of a model's knowledge cutoff is crucial to avoid inaccuracies and ensure the information it provides is relevant and accurate.

      It's crucial to understand the knowledge cutoff when working with large language models. These models gather data from the internet, which can be good or bad, and humans fine-tune them. However, there's a point where the models stop gathering new data, and this date is important to know. Older data can lead to inaccurate or outdated outputs. With the increasing use of large language models in daily work, failing to recognize the knowledge cutoff can result in publishing or sharing incorrect information. It's essential to be aware of this expiration date and the model's functioning to avoid inaccuracies. Before large language models, we relied on manual research and checking the date of sources. Similarly, when using large language models, it's necessary to consider the data's age and relevancy.
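      A minimal sketch of how to work around this in practice (not from the episode; the OpenAI Python SDK usage is standard, but the model name is an assumption): state today's date in the prompt and ask the model to flag anything that may fall after its training cutoff instead of guessing.

```python
# Sketch only: make the knowledge cutoff explicit so the model flags stale answers.
# Assumes the OpenAI Python SDK (pip install openai) and an illustrative model name.
from datetime import date

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_with_cutoff_guard(question: str, model: str = "gpt-4o") -> str:
    system = (
        f"Today's date is {date.today().isoformat()}. "
        "Your training data has a cutoff date. If the answer depends on events "
        "after that cutoff, say so explicitly instead of guessing."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask_with_cutoff_guard("What were the biggest AI announcements this month?"))
```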

    • Understanding knowledge cutoff dates in large language models: Ensure language models have up-to-date information by checking their knowledge cutoff dates. Outdated models can lead to inaccurate or irrelevant information.

      When working with large language models like chatbots, it's essential to consider the models' knowledge cutoff dates to ensure the information you receive is up-to-date. Different models have varying cutoff dates, with some being more recent than others. For instance, OpenAI's GPT 4 has a cutoff date of December 2023, while Google's Gemini reportedly has a November 2023 cutoff. Meta's Llama has a December 2023 cutoff for its newer versions. However, it's important to note that some free versions of these models may have older cutoff dates, such as the free version of ChatGPT, which has a January 2022 cutoff. Using outdated models can lead to inaccurate or irrelevant information, making it crucial to understand and investigate the knowledge cutoff dates before using a language model. Additionally, models with unclear or undisclosed cutoff dates can negatively impact trust and transparency.
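      As a toy illustration, the staleness check below encodes the cutoff dates cited in the episode (accurate as of May 2024 and certain to drift; always verify against current vendor documentation):

```python
# Toy staleness check. The cutoff dates below are the ones cited in the episode
# (May 2024); vendors update them regularly, so treat these values as examples.
from datetime import date

KNOWLEDGE_CUTOFFS = {
    "gpt-4 (paid ChatGPT)": date(2023, 12, 31),
    "gpt-3.5 (free ChatGPT)": date(2022, 1, 31),
    "gemini": date(2023, 11, 30),
    "llama (newer versions)": date(2023, 12, 31),
}


def is_stale(model: str, topic_date: date) -> bool:
    """True if the topic postdates the model's reported cutoff (or the cutoff is unknown)."""
    cutoff = KNOWLEDGE_CUTOFFS.get(model)
    return cutoff is None or topic_date > cutoff


# Asking the free tier about anything from mid-2023 is already past its cutoff.
print(is_stale("gpt-3.5 (free ChatGPT)", date(2023, 6, 1)))  # True
```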

    • Impact of Internet connectivity on large language models: Large language models without real-time Internet connectivity may provide inaccurate or outdated information due to lack of access to up-to-date data.

      While large language models like ChatGPT, Google's Gemini, and Microsoft Copilot can provide answers to queries, their level of Internet connectivity and access to real-time information significantly impact their accuracy and usefulness. During the discussion, it was highlighted that while models like ChatGPT and Microsoft Copilot have some level of Internet connectivity through Bing and Microsoft, respectively, Anthropic's Claude does not. Perplexity, on the other hand, is an answer engine that uses models like GPT or Opus, but it's not a model itself. The examples given during the discussion demonstrated this point. When asked to list the largest companies in the US by market cap, the default version of ChatGPT gave an inaccurate and outdated answer. However, when an Internet-connected version of ChatGPT was used, the answer was more accurate. Google Gemini, another Internet-connected large language model, essentially told the user to use Google instead of providing an answer. The lack of real-time Internet connectivity and access to up-to-date information can result in a model giving incorrect or outdated answers. Therefore, it's crucial to consider a model's level of Internet connectivity when evaluating its usefulness in providing accurate and up-to-date information.
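      One common workaround when a model lacks live internet access, sketched below under the assumption of the OpenAI Python SDK and an illustrative model name: retrieve the fresh facts yourself and pass them into the prompt, mirroring the market-cap example from the discussion.

```python
# Sketch of the workaround: fetch current facts yourself, then ask the model to
# answer using only those facts. The SDK call is real; the model name and the
# hard-coded "facts" string are placeholders for an actual retrieval step.
from openai import OpenAI

client = OpenAI()


def answer_with_fresh_context(question: str, fresh_facts: str) -> str:
    prompt = (
        "Answer using ONLY the facts below, which were retrieved today.\n\n"
        f"Facts:\n{fresh_facts}\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


facts = "Largest US companies by market cap as of today: <paste data from a live source here>"
print(answer_with_fresh_context("List the largest US companies by market cap.", facts))
```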

    • Understanding context window limitations: Large language models can provide accurate responses initially but forget important details as the conversation progresses because of their limited context window.

      While large language models like Copilot, ChatGPT, and others can provide accurate and useful information, they have limitations, specifically in terms of memory or context window. These models can only remember and process a certain amount of information before they start forgetting. For instance, ChatGPT's context window is currently limited to 32,000 tokens or approximately 28,000 words. As users interact with these models, they might experience a "love affair" due to the initial accurate responses, but as the conversation progresses and more information is shared, the models may start forgetting important details. This can lead to inaccurate or irrelevant responses. It's crucial for users to understand the context window limitations of different models and manage the flow of information accordingly. Additionally, as the cost of compute continues to decrease and models become more powerful, the context window limitation may become less of a concern.
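      To manage that flow of information programmatically, here is a rough sketch using tiktoken (OpenAI's open-source tokenizer) to count tokens and drop the oldest turns once a conversation approaches the roughly 32,000-token window cited above; the exact limit and tokenizer vary by model.

```python
# Sketch: keep a running conversation inside the context window by counting
# tokens with tiktoken and dropping the oldest turns first. The 32,000-token
# limit is the figure quoted in the episode; real limits vary by model.
import tiktoken

CONTEXT_LIMIT = 32_000
enc = tiktoken.get_encoding("cl100k_base")


def trim_history(messages: list[dict], limit: int = CONTEXT_LIMIT) -> list[dict]:
    """Drop the oldest messages until the total token count fits the window."""

    def total_tokens(msgs: list[dict]) -> int:
        return sum(len(enc.encode(m["content"])) for m in msgs)

    trimmed = list(messages)
    while len(trimmed) > 1 and total_tokens(trimmed) > limit:
        trimmed.pop(0)  # the oldest turn is "forgotten" first, just like in the chat UI
    return trimmed
```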

    • Misleading information from language models through screenshots: Avoid sharing or making decisions based solely on screenshots from language models; share the URL or link to the original interaction or output for transparency and accountability.

      Relying solely on screenshots from large language models like ChatGPT for information or making decisions is a common yet significant mistake. Sharing or consulting based on screenshots alone can lead to inaccurate or misleading results, as anyone can manipulate a model to produce a desired output and share a screenshot of it. The New York Times made this mistake in a high-profile lawsuit, failing to provide the public URL to verify the authenticity of the screenshots they shared. To avoid this pitfall, it's essential to share the URL or link to the original interaction or output from the language model. This ensures transparency and accountability, allowing others to verify the results for themselves.

    • Large language models are generative, not deterministic: Even with the same prompt, large language models can generate different responses due to their generative nature and contextual understanding, which can be beneficial in fields such as drug discovery.

      Large language models are generative, not deterministic. This means that even with the same prompt, you could receive different responses every time you use a large language model. The models don't understand words in the way humans do, but rather use context and vast amounts of training data to generate responses. The generative nature of these models is a feature, not a bug, and is intended to provide unique and different responses each time. So, if you're expecting consistent, identical results from your prompts, that might not be the case. The generative nature depends on the specific prompt and its context. Additionally, there are settings like top p and temperature that can influence the next token prediction. Eli Lilly, a major company, even sees the "hallucinations" or unexpected responses as a feature in drug discovery. So, remember, large language models are designed to generate new and potentially surprising responses, not just repeat the same ones.
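      The sampling settings mentioned above can be set explicitly when calling a model's API. A hedged sketch using the OpenAI Python SDK (model name illustrative): lowering temperature makes outputs more repeatable, but the model remains generative, not deterministic.

```python
# Sketch: the same prompt sampled at two temperatures. Lower temperature (with
# top_p at 1.0) narrows the randomness of next-token prediction but does not
# make the model deterministic. Model name is illustrative.
from openai import OpenAI

client = OpenAI()

for temperature in (0.0, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=temperature,  # 0.0 = least random, higher = more varied
        top_p=1.0,                # nucleus sampling: share of probability mass considered
        messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
    )
    print(temperature, response.choices[0].message.content)
```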

    • Understanding Few-Shot Learning for Better ChatGPT Results: Few-shot learning involves providing multiple input-output pairings or engaging in back-and-forth conversations with a model to improve its responses.

      Large language models like ChatGPT are generative models, not deterministic like search engines. They're designed to predict the next token with a level of randomness and generate unique responses based on given prompts. Copy-pasted prompts don't work effectively with these models. Instead, providing a few examples of input and output during prompting can lead to better results. This concept is known as few-shot learning. Research consistently shows that the more input-output pairings or back-and-forth conversations you have with a model, the better the outputs will be. So, be cautious of individuals who claim to have magical solutions or sell prompt books, as they may not have a solid understanding of large language models. Instead, consider joining communities or workshops where you can learn prompt engineering techniques firsthand. Remember, the more interaction and examples you provide, the better the model's responses will be.
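      A minimal few-shot sketch (the task, examples, and model name are invented for illustration): the input-output pairings are supplied as prior conversation turns so the model infers the pattern before seeing the real input.

```python
# Few-shot prompting sketch: two worked input/output examples precede the real
# input. The task, examples, and model name are invented for illustration.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Rewrite customer feedback as one short, neutral sentence."},
    # Example 1: input followed by the desired output
    {"role": "user", "content": "The app crashes every time I open the settings page!!!"},
    {"role": "assistant", "content": "The app crashes when the settings page is opened."},
    # Example 2
    {"role": "user", "content": "Love the new dark mode, but the font is tiny."},
    {"role": "assistant", "content": "Dark mode is appreciated, but the font size is too small."},
    # Real input: the model now follows the demonstrated pattern
    {"role": "user", "content": "Checkout keeps timing out on my phone."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```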

    • Leveraging Large Language Models for Business: Understand the importance of prompting techniques for optimal results with large language models. Companies like Microsoft, Amazon, IBM, OpenAI, and others are leading the way in generative AI, and businesses should integrate these tools into their strategies to stay competitive.

      When working with large language models like ChatGPT, copy-and-paste prompts will not yield optimal results. Instead, it's crucial to understand that these models are the future of work and invest time in teaching them through proper prompting. Prime prompt polishing and prompt engineering are essential techniques to elicit the best outputs. Additionally, businesses of all sizes, from startups to Fortune 500 companies, are already implementing these technologies on a massive scale to save costs and stay competitive. Microsoft Copilot, Amazon Q, IBM Watson, OpenAI, Anthropic's Claude, Google, and Meta are just a few examples of companies leading the way in generative AI and large language models. Therefore, it's essential to start integrating these tools into your business strategy to remain competitive and adapt to the changing business landscape.

    • Misunderstanding large language models: Avoid common mistakes like misjudging their knowledge cutoff, neglecting internet connectivity, mismanaging memory, and forgetting they're generative systems when working with large language models to maximize productivity and collaboration.

      The future of work involves integrating large language models into our daily tasks as knowledge workers. These models will become our constant collaborators, and we'll be prompting them every day, hour, and minute. However, there are common mistakes people make when working with large language models. These include misunderstanding their knowledge cutoff, neglecting their internet connectivity, mismanaging their memory, and not realizing they're generative systems, among others. It's crucial to understand these aspects to effectively work with these models. In essence, the future of work is about humans and large language models collaborating closely, with the latter acting as intelligent assistants.

    Recent Episodes from Everyday AI Podcast – An AI and ChatGPT Podcast

    EP 284: Building A Human-Led, AI-Enhanced Justice System

    Send Everyday AI and Jordan a text message

    When we talk about AI, it's always about efficiency, more tasks, more growth. But when it comes to the legal system, can AI help law firms with impact and not just efficiency? Evyatar Ben Artzi, CEO and Co-Founder of Darrow,  joins us to discuss how AI can enhance the legal landscape.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan and Evyatar questions on AI in the justice system

    Related Episode: Ep 140: How AI Will Transform The Business of Law

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Use of generative AI in the legal system
    2. Use of LLMs in the legal system
    3. AI's impact on efficiency in the legal industry

    Timestamps:
    01:20 Daily AI news
    04:35 About Evyatar and Darrow
    06:16 Challenges accessing information for lawyers
    09:27 AI efficiency movement is responsible for workload.
    10:24 AI revolutionizing legal world for success.
    16:51 Efficiency affects law firm's ability to bill.
    21:30 Improving mental health in legal profession with agility.
    26:55 Loss of online communities raises concerns about memory.
    27:58 Law firms pursuing right to be remembered.

    Keywords:
    Generative AI, Legal System, Impact, Efficiency, AI News, Perplexity AI, Ultra Accelerator Link, NVIDIA, Apple, OpenAI, Siri, GPT Technology, Evyatar Ben Artzi, Darrow, Justice Intelligence Platform, Case Analysis, Herbicides, Pesticides, Cancer Rates, Large Language Models, ChatGPT, Right to be Remembered, Internet Deletion Practices, Fortune 500 Companies, Legal Industry, Billable Hours, Strategic Thinking, Legal Development, Mental Health, Multiverse in Law.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 283: WWT's Jim Kavanaugh GenAI Roadmap for Business Success

    Send Everyday AI and Jordan a text message

    Businesses are working out how to use GenAI in the best way. One company that's acing it? World Wide Technology. WWT's CEO, Jim Kavanaugh, is sharing their plan for implementing GenAI into business smoothly.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan and Jim questions on GenAI

    Related Episodes:
    Ep 197: 5 Simple Steps to Start Using GenAI at Your Business Today
    Ep 146: IBM Leader Talks Infusing GenAI in Enterprise Workflows for Big Wins

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Impact of Generative AI
    2. Role of GenAI in World Wide Technology
    3. AI Adoption for Business Leaders
    4. Large Language Models and AI Impact
    5. Challenges in the Generative AI Space
    6. Organization Culture and AI Implementation

    Timestamps:
    01:30 About WWT and Jim Kavanaugh
    06:59 Connecting with users for effective AI.
    10:06 Advantage of working with NVIDIA for digital transformation.
    13:10 Discussing techniques and client example.
    18:35 CEOs implementing AI, seeking solutions.
    20:46 Creating awareness, training, and leveraging technology efficiently.
    25:27 AI increasingly important, impacts all industries' outcomes.
    27:18 Use secure, personalized language models for efficiency.
    32:32 Streamlining data access for engineers and sales.
    35:42 CEOs need to prioritize technology and innovation.
    37:02 NVIDIA is the game-changing leader.

    Keywords:
    generative AI, challenges of AI implementation, Jim Kavanaugh, CEO, Worldwide Technology, digital transformation, value-added reseller, professional services, comprehensive solution, AI strategies, NVIDIA, OpenAI's ChatGPT, large language models, GenAI, Advanced Technology Center, data aggregation, real-time data access, intelligent prompts, business leaders, AI technologies, data science, Jensen and NVIDIA, multimodal languages, AI-first organization, financial performance, go-to-market strategies, software development efficiency, RFP process

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 282: AI’s Role in Scam Detection and Prevention

    Send Everyday AI and Jordan a text message

    If you think you know scammers, just wait.
     
    ↳ Voice cloning will fool the best of us.
    ↳ Deepfakes are getting sophisticated.
    ↳ Once-scammy emails now sound real.
     
    How can AI help? In a lot of ways. Yuri Dvoinos, Chief Innovation Officer at Aura, joins us to discuss AI's role in scam detection and prevention.
     
    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode page
    Join the discussion: Ask Jordan and Yuri questions on AI and scam detection

    Related Episodes: Ep 182: AI Efficiencies in Cyber – A Double-Edged Sword
    Ep 202: The Holy Grail of AI Mass Adoption – Governance

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Sophistication of AI in Scams
    2. Countermeasures to Combat AI Scams
    3. Deepfakes and Their Increasing Prevalence

    Timestamps:
    01:20 Daily AI news
    04:45 About Yuri and Aura
    07:32 Growing impact of impersonation and trust hacking.
    12:35 Consumer app with state-of-the-art protection.
    13:48 New technology scans emails to protect users.
    19:36 Need for awareness of sophisticated multi-platform scams.
    20:33 Be cautious of potential multichannel scams
    26:44 Scams are getting sophisticated, AI may worsen.
    30:05 Different organizations need varying levels of security.
    31:25 Deepfakes raise concerns about truth and trust.
    34:47 It's hard to detect scam communication online.

    Keywords:
    AI Scams, Jordan Wilson, Yuri Dvoinos, Deepfakes, AI Technology, Verification System, Online Interactions, Cyberattacks, Business Security, Scam Detection, Communication Channel Verification, Language Models, AI impersonation, Small Business Scams, Scammer Automation, Aura, Message Protection Technology, Call Analysis, Email Scanning, Voice Synthesizer Technology, Multichannel Scams, 2FA, Cybersecurity Training, Digital Trust, Cybersecurity, Sophisticated Corporate Scams, OpenAI, NVIDIA, Aura Cybersecurity Company, Online Safety.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 281: Elon Musk says AI will make jobs 'optional' – Crazy or correct?

    Send Everyday AI and Jordan a text message

    Will AI make jobs... optional? Elon Musk seems to think so. His comments struck a chord with some. And rightfully so. As polarizing as Elon Musk can be, does he have a point? Let's break it down.
     
    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode page
    Join the discussion: Ask Jordan questions on AI and jobs

    Related Episodes: Ep 258: Will AI Take Our Jobs? Our answer might surprise you.
    Ep 222: The Dispersion of AI Jobs Across the U.S. – Why it matters

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Elon Musk's statement and its implications
    2. Future of work with AI advancements
    3. AI's impact on human purpose and employment
    4. Job displacement and AI investment over human employment

    Timestamps:
    01:40 Daily AI news
    07:47 Exploring the implications of generative AI.
    10:45 Concerns about AI impact on future jobs
    13:20 Elon Musk's track record: genius or random?
    17:57 Twitter's value drops 72% to $12.5B.
    22:34 Elon Musk predicts 80% chance of job automation.
    25:54 AI advancements may require universal basic income.
    29:03 AI systems rapidly advancing, surpassing previous capabilities.
    32:11 Bill Gates worries about AGI's misuse.
    35:04 AI advancements foreshadowing future efficiency and capabilities.
    39:42 Ultra-wealthy and disconnected elite shaping AI future.
    43:32 AI will dominate future work, requiring adaptation.
    46:19 US government may not understand future of work.

    Keywords:
    Elon Musk, AI, XAI, funding, chatbot, Grok, OpenAI, legal battle, Apple, Siri, integration, core apps, model, capabilities, safety, speculation, future, work, Tesla, market cap, value, investor sentiment, vision, promises, performance, Viva Tech, conference, robots, job market, society, purpose.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 280: GenAI for Business - A 5-Step Beginner's Guide

    Send Everyday AI and Jordan a text message

    Everyone is trying to wrap their heads around how to get GenAI into their business. We've had chats with over 120 experts and leaders from around the globe, including big companies, startups, and entrepreneurs. We're here to give you the lowdown on how you can start using GenAI in your business today.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode page
    Join the discussion: Ask Jordan questions on AI

    Related Episodes: Ep 189: The One Biggest ROI of GenAI
    Ep 238: WWT’s Jim Kavanaugh Gives GenAI Blueprint for Businesses

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. AI in Business
    2. Implementing AI
    3. AI Guidelines and Guardrails
    4. Practical Application of AI

    Timestamps:
    02:00 Daily AI news
    06:20 Experienced in growing companies of all sizes
    11:45 AI not fully implemented yet
    19:13 Generative AI changing workforce dynamics, impact discussion.
    21:32 Rapidly adapt to online business, seek guidance.
    31:19 AI guardrails and guidelines
    34:25 Companies overcomplicating generative AI, driven by peer pressure.
    37:45 Focus on measurable impact in AI projects.
    45:17 Leverage vendors and experts for AI education.
    51:48 AI may replace jobs - plan for future.
    54:48 Ethical AI implementation involves human and AI cooperation.
    01:00:42 Culmination of extensive work to simplify generative AI.

    Keywords:
    AI training, Employee education, Generative AI tools, Communication skills, Job displacement, AI implementation, Business ethics, AI in business, Guidelines for AI, Data Privacy, AI statistics, Transparency in AI, Bottom-up approach,  AI impact on work, Everyday AI Show

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 279: Google’s New AI Updates from I/O: the good, the bad, and the WTF

    Send Everyday AI and Jordan a text message

    Did Google say 'AI' too many times at their I/O conference? But real talk – it's hard to make sense of all of Google's announcements. With so many new products, updated functionality, and new LLM capabilities, how can you make sense of it all?  Oh.... that's what we're for.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan questions on Google AI

    Related Episode:  Ep 204: Google Gemini Advanced – 7 things you need to know

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Google's Updates and Announcements
    2. Google AI Evaluations
    3. Concerns Over Google's AI Development and Marketing

    Timestamps:
    01:30 Daily AI news
    05:30 What was announced at Google's I/O
    09:31 Microsoft and Google introduce AI for teams.
    12:38 AI features not available for paid accounts.
    16:12 Doubt Google's claims about their Gemini model.
    19:26 Speaker live-drew with Pixel phone, discussed code.
    22:24 Exciting city scene, impressive Veo and Astra.
    24:44 Gems and GPTs changing interactions with language models.
    29:26 Accessing advanced features requires technical know-how.
    33:45 Concerns about availability and timing of Google's features.
    36:14 Google CEO makes joke about overusing buzzwords.
    41:06 Google Gems: A needed improvement for Google Gemini.
    41:50 GPT 4 ranks 4th behind Windows Copilot.

    Keywords:
    Google IO conference, AI updates, NVIDIA revenue growth, Meta acquisition, Adapt AI startup, OpenAI deal, News Corp, Project Astra, Gemini AI agent, Gemini 1.5 pro, Ask photos powered by Gemini, Gemini Nano, Android 15, GEMS, Google AI teammate, Microsoft team copilot, Google Workspace, Google search, Veo, Imagen 3, Lyria, Google's AI music generator, Wyclef Jean, Large language models, GPT 4.0, Google Gemini, AI marketing tactics, Deceptive marketing, Discoverability issues, Branding issues.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 278: Microsoft Build AI Recap - 5 things you need to know

    Send Everyday AI and Jordan a text message

    To end a week-ish full of AI happenings, Microsoft has thrown all kinds of monkey wrenches into the GenAI race. What did they announce at their Microsoft Build conference? And how might it impact you? Our last takeaway may surprise you.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan questions on Microsoft AI

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Microsoft Build Conference Key AI Features
    2. Microsoft Copilot Updates
    3. On-device AI and its future

    Timestamps:
    01:50 Startup Humane seeks sale amid product criticism.
    09:00 Using Copilot increases latency and potential errors.
    11:15 Copilot changing work with edge AI technology.
    13:51 Cloud may be more secure than personal devices.
    19:26 Recall technology may change required worker skills.
    20:24 Semantic search understands context, improving productivity.
    28:41 Impressive integration of GPT-4 in Copilot demo.
    31:41 New Copilot technology changes how we work.
    36:13 Customize and deploy AI agent to automate tasks.
    38:08 Uncertainty ahead for enterprise companies, especially Apple.
    46:09 Recap of 5 key announcements from build conference.

    Keywords:
    Microsoft CEO, Satya Nadella, Copilot stack, personal Copilot, team's Copilot, Copilot agents, Copilot Studio, Apple ecosystem, enterprise companies, Microsoft Teams, OpenAI, Jordan Wilson, Microsoft Build Conference, edge AI, Copilot Plus PC, recall feature, GPT-4o capabilities, iPhone users, AI technology, data privacy and security, GPT-4o desktop app, AI systems, recall, mainstream AI agents, Humane AI, Scarlett Johansson, ChatGPT, Anthropic Claude, Copilot Studio Agent, Microsoft product.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 277: How Nonprofits Can Benefit From Responsible AI

    Send Everyday AI and Jordan a text message

    Generative AI offers significant benefits to nonprofits. What obstacles do they encounter, and how can they utilize this innovative technology while safeguarding donor information and upholding trust with stakeholders? Nathan Chappell, Chief AI Officer at DonorSearch AI, joins us to explore the responsible use of AI in the nonprofit sector.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode page
    Join the discussion: Ask Jordan and Nathan questions on AI and nonprofits

    Related Episodes:
    Ep 105: AI in Fundraising – Building Trust with Stakeholders
    Ep 148: Safer AI – Why we all need ethical AI tools we can trust

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    01:50 About Nathan and DonorSearch AI
    05:52 Decreased charity giving, AI aids nonprofit efficiency.
    09:39 AI enhances nonprofit efficiency, prioritizes human connections.
    13:35 Nonprofits need to embrace AI for advancement.
    16:22 Use AI to create engagement stories, scalable.
    18:59 Internet equalized access to computing power.
    25:02 Nonprofits rely on trust, need responsible AI.
    29:52 Ensuring trust and accountability in generative AI.
    33:35 AI is about people leveling up work.
    34:16 Daily exposure to new tech terms essential.

    Topics Covered in This Episode:
    1. Impact of Generative AI for Nonprofits
    2. Digital Divide in Nonprofit Sector
    3. Role of Trust in Nonprofits and responsible AI usage
    4. Traditional Fundraising vs. generative AI
    5. Future of AI in Nonprofits

    Keywords:
    Nonprofits, generative AI, ethical use of AI, Jordan Wilson, Nathan Chappell, DonorSearch AI, algorithm, gratitude, machine learning, digital divide, AI employment impact, inequality, LinkedIn growth, Taplio, trust, Fundraising AI, responsible AI, AI explainability, AI accountability, AI transparency, future of nonprofits, AI adaptation, predictive AI, personalization, data for donors, generosity indicator, precision and personalization, AI efficiency, human-to-human interaction, AI tools.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 276: AI News That Matters - May 20th, 2024

    Send Everyday AI and Jordan a text message

    OpenAI and Reddit’s data partnership, will Google’s AI plays help them catch ChatGPT, and what’s next for Microsoft?  Here's this week's AI News That Matters!

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan questions on AI

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Key Partnerships and Deals in AI
    2. Google's New AI Developments
    3. Microsoft's Upcoming Developer Conference
    4. Apple's Future AI Implementation

    Timestamps:
    02:00 Reddit partners with OpenAI for AI training, content.
    04:28 Large companies lack transparency in model training.
    06:58 Reddit becoming preferred search over Google, value in partnerships.
    12:08 OpenAI announced GPT 4 o and new feature.
    14:48 Google announced live smart assistance, leveraging AI.
    18:19 Customize data/files, tap into APIs, virtual teammate.
    21:06 Impressed by Google's new products and features.
    26:33 Apple to use OpenAI for generative AI.
    29:08 Speculation around AI safety, resignation raises questions.
    32:22 Concerns about OpenAI employees leaving are significant.
    34:20 Google and Microsoft announce AI developments, drama at OpenAI.

    Keywords:
    Jan Leike, smarter than human machines, Reddit, OpenAI, data deal, model training, Google, AI project Astra, Microsoft's Build developer conference, AI developments, Apple partnership, safety concerns, everydayai.com, Ask Photos, Gemini Nano, Android 15, AI powered search, Gemini AI assistant, Google AI teammate, Microsoft developer conference, Copilot AI, AI PCs, Intel, Qualcomm, AMD, Seattle, Jordan Wilson, personal data, Reddit partnership, Google IO conference.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    EP 275: Be prepared to ChatGPT your competition before they ChatGPT you

    Send Everyday AI and Jordan a text message

    If you're not gonna use AI, your competition is. And they might crush you. Or, they might ChatGPT you. Barak Turovsky, VP of AI at Cisco, gives us the best ways to think about Generative AI and how to implement it. 

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan and Barak questions on ChatGPT

    Related Episodes: Ep 197: 5 Simple Steps to Start Using GenAI at Your Business Today
    Ep 246: No that’s not how ChatGPT works. A guide on who to trust around LLMs

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Large Language Models (LLMs) and Business Competitiveness
    2. Understanding LLMs for Small to Medium-Sized Businesses
    3. Use Cases and Misconceptions of AI
    4. Data Security and Privacy

    Timestamps:
    01:35 About Barak and Cisco
    05:44 AI innovation concentrated in big tech companies.
    07:14 Large language models can revolutionize customer interactions.
    12:01 ChatGPT fluency doesn't guarantee accurate information.
    13:41 Considering use cases over two dimensions
    18:16 OLM is good fit for specific industries.
    21:17 Emphasizing the importance of large language models.
    23:20 Maintaining control over unique AI model elements.
    28:50 Questioning the data use in large models.
    31:27 Barak discusses leveraging AI for various use cases.
    33:50 Industry leader shared great insights on AI.

    Keywords:
    AI, Large Language Models, Jordan Wilson, Barak Turovsky, Cisco, Google Translate, Transformer Technology, Generative AI, Democratization of Access, Customer Satisfaction, Business Productivity, Business Disruption, Internet Search, Sales Decks, Scalable Businesses, Fluency-Accuracy Misconception, AI Use Cases, Data Privacy, Data Security, Model Distillation, Domain-Specific AI Models, Small AI Models, Gargantuan AI Models, Data Leverage, AI for Enterprises, Data Selling, Entertainment Use Case, Business Growth, Professional Upskilling, AI Newsletter.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

    Related Episodes

    EP 153: Knowledge Cutoff - What it is and why it matters for large language models

    Why do AI chats lie? It probably starts with understanding the model's knowledge cutoff. Why does an AI's knowledge have an expiration date, and how does this impact our interaction with technology?  We're cutting through the tech jargon to give you a clear view of how AI thinks and learns.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan questions about AI and LLMs
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    [00:01:50] Daily AI news
    [00:05:50] Importance of knowledge cutoff in LLMs
    [00:07:55] How LLMs are trained
    [00:10:00] Knowledge cutoff is like a text book
    [00:14:30] ChatGPT modes and knowledge cutoff dates
    [00:21:50] Anthropic Claude knowledge cutoff date
    [00:27:35] Microsoft Bing Chat modes and knowledge cutoff dates
    [00:31:30] Google Bard knowledge cutoff date
    [00:33:40] Recap of LLM knowledge cutoff dates
    [00:35:30] Final thoughts

    Topics Covered in This Episode:
    1. Understanding the Knowledge Cutoff in Large Language Models
    2. Understanding Learning Models and Knowledge Cutoffs
    3. Knowledge Cutoff Dates in Different Generative AI Models

    Keywords:
    AI, generative AI, Sports Illustrated, investigation, fake author names, AI-generated profile images, Symphony, Google, voice analytics, financial firms, natural language processing, Amazon, reInvent conference, Bedrock service, knowledge cutoff, large language models, web scraping, training, transparency, Anthropic Claude, Microsoft Bing Chat, human confirmation, GPT 4, Bing Chat modes, Google Bard, Palm 2, learning models, textbook, GPT 3.5, prompting, ChatGPT.

    The State of AI Report: Research, Industry, Politics and Safety

    On today's episode, NLW reviews Air Street's epic 6th annual State of AI report - https://www.stateof.ai/
    Before that on the Brief: Character AI launches group chats; an AI startup is doing layoffs; and the RIAA wants voice cloning sites to be considered piracy.

    Today's Sponsors:
    Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown
    Netsuite | The leading business management software | Get no interest and no payments for 6 months: https://netsuite.com/breakdown

    TAKE OUR SURVEY ON EDUCATIONAL AND LEARNING RESOURCE CONTENT: https://bit.ly/aibreakdownsurvey

    ABOUT THE AI BREAKDOWN
    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/

    The Main Scoop, Episode 11: When Generative AI Takes Over Healthcare and the World

    It’s been said that 40% of all businesses will die in the next 10 years… if they don’t figure out how to change their entire company to accommodate new technologies. Large language models are driving the next generation of AI with predictions of massive industry disruption. But like any new technology, this new class of generative AI is raising issues of trust along with the need for more data types–especially in highly regulated industries like healthcare. Join Greg and Daniel as they host Dr. Andreea Bodnari from UnitedHealth Group to explore the pervasive use of generative AI across the healthcare ecosystem, and possible impacts across other industries.

    It was a great conversation and one you don’t want to miss. Like what you’ve heard? Check out Episode One, Episode Two, Episode Three, Episode Four, Episode Five, Episode Six, Episode Seven, Episode Eight, Episode Nine, and Episode Ten of The Main Scoop, and be sure to subscribe to never miss an episode of The Main Scoop series.

    Large Language Models

    Large language models are fundamentally changing how we approach natural language processing. They open up new possibilities for companies across all industries and represent a once-in-a-decade technological breakthrough. Alongside convenient API access, numerous open-source models are also available that can be run locally and on-premise. Does training your own model no longer pay off? What opportunities and risks arise from this breakthrough? Stefan and Marcel (neunetz.com) discuss this in this episode.