
    #110 - We’re back! ChatGPT, ChatGPT, ChatGPT, and some other stuff

    February 05, 2023

    Podcast Summary

    • BuzzFeed uses AI for content personalization: BuzzFeed integrates OpenAI's tools for personalized quizzes and brainstorming, marking a trend for AI in content creation, despite initial controversy over errors in AI-generated articles.

      AI is increasingly being integrated into various industries and businesses, with BuzzFeed being the latest example. The media company announced it will use OpenAI's tools to personalize content, focusing on quizzes and informing brainstorming. This move is significant as it sets a precedent for the use of AI in content creation, although BuzzFeed clarified it won't be used in its newsroom. Previously, CNET faced controversy for publishing AI-generated articles with errors; even so, BuzzFeed's stock jumped 50% upon the announcement. These developments underscore the growing importance of AI and its potential impact on various sectors, making it essential for everyone to stay informed.

    • Exploring new ways for AI in the media industry: AI-generated content raises questions about transparency, ethics, and the future of journalism, with potential for personalized, dynamic news blurring the lines between news and gaming, and challenges for staff morale and ownership of AI platforms.

      The media industry is exploring new ways to utilize generative AI for content creation, raising questions about the future of journalism and the ethics of hiding AI-generated content from audiences. The BuzzFeed example showcases the potential for personalized, dynamic content that could blur the lines between news and gaming. However, transparency and honesty about AI involvement are crucial to maintain trust with audiences and employees. The CNET case illustrates the challenges of implementing AI-generated content in a news organization and the potential impact on staff morale. The ongoing debate around ownership and control of generative AI platforms adds another layer of complexity to this evolving story. The implications of these developments extend beyond the media industry, affecting various sectors and industries as generative AI continues to advance.

    • Generative AI Ecosystem: Apps, Model Providers, and Infrastructure Providers. The generative AI ecosystem is composed of apps, model providers, and infrastructure providers, with each segment facing unique challenges and revenue streams.

      The generative AI ecosystem is still evolving, and it's unclear who the big winners will be as the revenue streams are not yet defined. The discussion breaks down the ecosystem into three parts: apps, model providers, and infrastructure providers. App developers, such as those creating user-facing generative AI apps, may not accrue all the value as models and infrastructure can be easily replicated. Model providers, like OpenAI, currently hold an advantage due to their early market presence and user-facing apps. However, competition is emerging at the model provider level, and margins may erode as prices drop. Infrastructure providers, who build the hardware and processing power, face challenges due to the tough business landscape and new entrants. The future of massive companies and trillion-dollar valuations in this space remains uncertain.

    • Generative AI Market: Growth, Challenges, and Future Perspectives. The generative AI market is growing rapidly, with multiple companies investing, but safety concerns and competition may impact growth and valuation.

      The landscape of generative AI is rapidly evolving, with multiple companies entering the space and offering various benefits and costs. Google, Microsoft, and others are investing in this technology, but the process of getting it up to speed is complex and costly. This trend raises concerns for those focused on AI safety, as investments in safety measures can be seen as marginal costs that may be compromised in a race to the bottom for access and alignment. The generative AI market is expected to become increasingly crowded, with no clear winner, and safety standards may vary between players. Another question that arises is whether startups offering new generative AI products will be properly valued given the rapid pace of innovation and the potential for competitors to quickly overtake them. This trend mirrors the experience of Web 2.0 startups, where massive growth in user base could be short-lived due to intense competition and rapidly improving underlying models. Overall, the generative AI market is poised for significant growth, but the challenges of safety, competition, and valuation will need to be addressed as the industry matures.

    • Race to dominate generative AI market: Microsoft invests $10B in OpenAI, indicating significant potential in generative AI. $1B+ invested annually, market expected to grow significantly.

      The race to develop and dominate the generative AI market is heating up, with significant investments being made by tech giants like Microsoft and a surge in funding for related startups. Microsoft's $10 billion investment in OpenAI, the creator of ChatGPT, is a strong indication that they believe they're on the brink of something significant and plan to maintain control, suggesting a potentially game-changing impact on the industry. This trend of investment in generative AI has seen a massive increase in recent years, with over $1 billion invested in 2021 and 2022 alone, and this year is expected to bring even more funding. However, it's important to note that these systems are not yet reliable or accurate enough to fully automate industries, but they do offer the potential for humans to work more efficiently. The alignment of AI with human values and safety remains a critical concern, but the market's investment in short-term alignment solutions could be a promising sign. Overall, the generative AI market is showing massive potential and is expected to continue to grow significantly in the coming years.

    • Rapid advancements in AI technology for text, image, and music generation: New AI tools like Anthropic's Claude, Shutterstock's generative AI, and Google's text-to-music technology are pushing boundaries in content generation, potentially disrupting industries and creating new opportunities.

      We are witnessing a rapid advancement in AI technology, particularly in the areas of text, image, and music generation. The most recent example comes from Claude, Anthropic's chatbot, which differentiates itself from OpenAI's offerings with its alignment approach, resulting in improved output. Shutterstock's new generative AI toolkit is another example of this trend, allowing users to generate images based on text prompts, potentially disrupting the stock image market. In the research realm, Google has made strides in generating music from text descriptions, creating long, coherent melodies and compositions. This development, along with advancements in text and image generation, could lead to significant impacts on various industries and jobs within the next few years. The ability to generate high-quality, long-form content in multiple media types is a significant leap forward for AI, and we can expect to see more innovations in this space.

    • Advancements in AI music generation: Separate models for music components: New AI music generation systems use separate models for music analysis, decision making, and sound synthesis, reflecting the complexity of music and potential for more advanced AI applications.

      The latest advancements in AI music generation involve a more complex system of models compared to the traditional end-to-end transformer models. This new approach includes separate models for breaking down music components, deciding what to include, and converting back to sound. These models are built upon recent research and may be indicative of the need for more complex systems for longer text and video generation. The debate between those favoring large-scale transformer models and those advocating for custom brain-like architectures continues, and it remains to be seen which approach will prevail in handling more complex data. OpenAI's newly released classifier for indicating AI-written text may help alleviate concerns about the increasing prevalence of AI-generated content. However, the future of AI-generated music and other complex data types is still uncertain and will likely involve ongoing research and development. The use of pre-trained language models for music generation is also a possibility, but further exploration is needed. Overall, the advancements in AI music generation demonstrate the ongoing evolution of AI technology and the potential for new applications and innovations.

    • The Debate Over AI in Classrooms: To Ban or Embrace? While the debate over AI use in classrooms continues, educators should focus on teaching students responsible use. Detection tools like OpenAI's classifier can help but have false positives. Ongoing research will evolve, requiring a balanced approach of teaching ethics and developing effective detection methods.

      As educators and institutions grapple with the increasing use of AI tools like ChatGPT in classrooms, the debate rages on about whether to ban or embrace these technologies. OpenAI's new classifier, while not perfect, can help detect AI-generated text but has a significant false positive rate. The battle between AI generation and detection is likely lost, and educators should focus on teaching students how to use these tools responsibly. The ongoing research in this area, including Stanford's "DetectGPT," will continue to evolve, making it a highly active and complex issue. Inevitably, the future will involve a combination of detection and generation technologies, and it remains to be seen how effective these will be in distinguishing between human and AI-generated text. The conversation around this topic also brings to mind past debates about deep fakes and the potential for watermarking or other methods to establish the origin of content. The challenge of implementing such solutions on a large scale, particularly for APIs like ChatGPT, adds another layer of complexity to this ongoing discussion. Ultimately, it seems that the best approach may be a balanced one, where we teach students to use these tools ethically while continuing to research and develop more effective detection methods.
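      To see why that false positive rate matters so much in a classroom, it helps to run the numbers. OpenAI's announcement reported the classifier correctly flags only about 26% of AI-written text while mislabeling about 9% of human-written text as AI-written; the sketch below uses those reported rates, but the essay counts are invented purely for illustration.

```python
# Illustration of what the reported classifier rates imply in practice.
# Rates (~26% true positives, ~9% false positives) are from OpenAI's
# announcement; the essay counts are made up for this sketch.
ai_essays, human_essays = 1000, 1000
true_positive_rate = 0.26   # AI-written text correctly flagged
false_positive_rate = 0.09  # human-written text wrongly flagged

flagged_ai = ai_essays * true_positive_rate        # 260 caught
flagged_human = human_essays * false_positive_rate # 90 wrongly accused

# Of everything the classifier flags, what fraction is actually AI?
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"{precision:.2f}")
```

      With these numbers, roughly one in four flagged essays would be a false accusation against a human author, which is why the hosts suggest leaning on responsible-use teaching rather than detection alone.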

    • New advancements in robotics and AI: Robots like KAIST's quadrupedal robot can now keep up with humans on sandy terrain, while AI applications like GraphGPT convert text into JSON-formatted graphs for advanced analytics and fact-checking. Medical AI startups are designing bacteria-killing proteins, marking a decade of progress in these fields.

      Technology is making significant strides in various fields, from robotics to artificial intelligence. A recent development in robotics is a quadrupedal, dog-like robot developed at KAIST that can keep up with a person running on sandy terrain at three meters per second. This demonstrates the growing robustness and usability of such robots. In the realm of artificial intelligence, a new application called GraphGPT has emerged, which extracts relationships between characters and entities from text and presents them in a graph format. This technology, which converts text into a JSON-formatted graph, opens up possibilities for advanced analytics and fact-checking. Another area of development is medical AI, where startups like Medani are designing bacteria-killing proteins from scratch and testing their effectiveness. These advancements, which have been in the works for about a decade, are starting to bear fruit and could lead to significant improvements in various industries. While there are potential risks and ethical considerations associated with these technologies, they represent exciting progress in the realm of technology and innovation.
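      The "JSON-formatted graph" idea is easier to grasp with a concrete shape in front of you. The snippet below is a hypothetical sketch of the kind of structure such a tool might return (the node/edge schema and the helper function are our own illustration, not GraphGPT's actual output format): entities become nodes, and relationships between them become labeled, directed edges that are straightforward to query afterwards.

```python
import json

# Input a real tool would parse with a language-model prompt;
# here the resulting graph is hard-coded to show the shape.
text = "BuzzFeed announced it will use OpenAI's tools to personalize quizzes."

graph = {
    "nodes": [
        {"id": "BuzzFeed", "type": "organization"},
        {"id": "OpenAI", "type": "organization"},
        {"id": "quizzes", "type": "product"},
    ],
    "edges": [
        {"source": "BuzzFeed", "target": "OpenAI", "relation": "uses tools from"},
        {"source": "BuzzFeed", "target": "quizzes", "relation": "personalizes"},
    ],
}

# Once text is in this form, analytics are simple graph queries,
# e.g. listing every relationship involving a given entity.
def relations_for(graph, entity):
    return [
        (e["source"], e["relation"], e["target"])
        for e in graph["edges"]
        if entity in (e["source"], e["target"])
    ]

print(json.dumps(relations_for(graph, "BuzzFeed")))
```

      This is what makes the fact-checking angle plausible: claims extracted from two documents can be compared edge by edge instead of string by string.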

    • Navigating the Intersection of Technology and Medicine: Advancements in AI and proteomics offer exciting possibilities for healthier proteins and new drugs, but also come with risks like misuse and ethical concerns. Proper regulations and oversight are necessary to harness these advances responsibly.

      Advancements in technology, particularly in the fields of AI and proteomics, are leading to exciting discoveries and potential solutions for various challenges, from designing healthier proteins for humans to identifying drugs that could help people quit smoking. However, these advances also come with potential risks, such as the misuse of technology for creating harmful bioweapons or generating hate speech. It's crucial for society to navigate these advancements responsibly, with proper regulations and ethical considerations in place. For instance, the AI startup ElevenLabs has had to add restrictions on their text-to-speech tool after it was used to generate hate speech on 4chan. Overall, the intersection of technology and medicine holds great promise for improving lives and solving complex health issues, but it also requires careful attention and oversight.

    • Growing concerns about misuse of AI technology in deepfakes and voice cloning: Companies developing AI language models and text-to-speech tools must consider ethical implications and prevent malicious uses. Policymakers need to stay informed and take action to address potential risks before they become more widespread.

      As AI technologies, specifically language models like ChatGPT and text-to-speech tools, become more accessible and easier to use, there is a growing concern about their potential misuse, particularly in the realms of deepfakes and voice cloning. These technologies can be used to create convincing fake speech and voices, leading to the spread of misinformation and privacy violations. Companies developing these technologies need to consider these ethical implications and work to prevent malicious uses. Additionally, policymakers must stay informed and take action to address these issues before they become more widespread. The recent public availability of these tools serves as a warning of the potential risks and highlights the need for proactive measures.

    • Exploring the Opportunities and Challenges of AI in Creativity and Education: AI integration in creativity and education offers benefits like personalized teaching and richer projects, but also raises concerns over potential misuse, job displacement, and academic integrity.

      The integration of AI in various forms, such as image generation and chatbots like Chat GPT, presents both opportunities and challenges. For small creators, AI can make their projects richer by enabling character speech without the need for expensive voice actors. However, it also raises concerns, such as potential malicious applications and job displacement for voice actors. In education, Chat GPT offers benefits like personalized teaching and streamlining the learning process, but it also poses challenges related to academic dishonesty, plagiarism, and determining the truthfulness of AI-generated content. Educational institutions are encouraged to establish clear policies regarding the use of AI in education and ensure adequate teacher oversight. Additionally, the reliance on AI for jobs, including teaching, is uncertain, and humility and adaptability are essential in navigating these changes.

    • AI in Academia: The New Plagiarism Standard? AI use in academic work is increasing, with ChatGPT being a popular tool. While some argue it's similar to previous plagiarism methods, others stress the need for adjustments. A survey shows most students use it, but not for direct submissions. Educators must focus on non-AI assignments and find ways to detect misuse.

      The use of AI in academic work is becoming more prevalent and easier to conceal, leading to concerns about academic integrity. OpenAI, the company behind ChatGPT, has acknowledged this issue and released guidelines for disclosing its use. The conversation around this topic is active, with some arguing that it's not much different from previous changes that made plagiarism easier, while others emphasize the need for adjustments as AI becomes more advanced. A recent survey from Stanford suggests that a majority of students have used ChatGPT for academic work, but most have not submitted directly generated content without edits. The economic incentive for finding solutions to help teachers navigate this issue is significant. Despite some concerns, there's also a strong honor code and dedication to academic integrity among students. The impact of AI on the quality of submissions is still unclear, but it's important for educators to focus on assignments that cannot be easily completed by AI and to find ways to detect and address potential misuse.

    • Ethical concerns in education and military applications of AI: AI tools in education can make learning easier but less meaningful, while military use raises ethical concerns for lethal autonomous weapons. Clear guidelines and regulations are necessary to ensure safe and ethical use.

      The use of AI and automation, whether it's in education or military applications, raises ethical concerns that need clear guidelines and regulations. In the education sector, while the use of AI tools like ChatGPT and GitHub Copilot can make life easier for students, they also have the potential to make the learning process less meaningful if students rely too heavily on these tools. In the military sector, the use of lethal autonomous weapons, which was previously ambiguous, now has clearer guidelines with the need for approval and review processes. However, the deployment of such systems still raises ethical concerns and the need for international agreements. In a less serious but still relevant context, the deployment of robot cars in San Francisco has led to numerous false 911 calls, highlighting the need for clearer communication and understanding between humans and AI systems. Overall, while the use of AI and automation offers many benefits, it's essential to consider the ethical implications and establish clear guidelines to ensure their safe and ethical use.

    • Balancing safety concerns and commercial interests with self-driving cars and advanced technologies: Regulators face challenges in balancing commercial interests and public safety with the deployment of self-driving cars and other advanced technologies, while ongoing innovation is needed to address evolving safety concerns and regulatory challenges.

      The deployment of self-driving cars and other advanced technologies brings new and evolving safety concerns that require constant attention and adaptation. The discussion highlighted issues with self-driving cars obstructing emergency responders and the potential impact of new technologies, such as scooters or robots, on their operation. Regulators face a challenging task in balancing commercial interests and public safety, while writers and AI systems are exploring new ways to collaborate on creative projects. Despite the ongoing challenges, the potential benefits of these technologies are significant. The use of AI in writing, for example, offers opportunities for collaboration and experimentation, but also requires careful consideration of tone and style. Ultimately, the successful deployment of self-driving cars and other advanced technologies will require ongoing effort and innovation to address the evolving landscape of safety concerns and regulatory challenges.

    • AI tools can collaborate with humans in creative processes but can't replace human imagination and ideation: AI tools enhance creativity by generating ideas, but humans refine and shape them to align with their vision. Effective collaboration depends on clear direction and well-defined ideas, as well as user-friendly tools.

      While AI tools like ChatGPT can assist and augment creative processes, they cannot fully replace human imagination and ideation. These tools can act as collaborators, especially when there's a general direction or idea in mind. However, they might not be as effective when there's a lack of clear direction or a well-defined idea. The value lies in the generation side, the ideation piece, where humans can refine and shape the output to align with their vision. The interaction between humans and AI can lead to interesting and effective collaborations. As we explore more use cases for these tools, it's important to remember that the human touch is still essential. Additionally, the UI and tools available for these AI models can greatly impact their usefulness. For instance, a tool called Sudowrite can help facilitate more effective collaboration between humans and AI. Boston Dynamics' latest demo with their robot Atlas showcases impressive capabilities, but the extent to which these demos are scripted remains an intriguing question. As we continue to experiment with these technologies, we'll discover new ways to harness their potential while respecting the importance of human creativity.

    • Robots' advanced capabilities are not fully autonomous: While Boston Dynamics robots can walk stably and manipulate objects, they are not yet fully autonomous systems. They are programmed to perform specific tasks and rely on human intervention.

      While the demonstrations of advanced robotics from companies like Boston Dynamics may appear impressive, they are largely staged and not fully autonomous. The robots are programmed to perform specific tasks and manipulate objects, but they do not make decisions on their own. The focus is on showcasing their capabilities, such as stable walking and object manipulation, which have been a challenge in robotics for decades. The use of machine learning is becoming more prevalent in areas like perception and manipulation, but the walking ability of these robots is not primarily based on machine learning. Overall, these advancements are fascinating and a significant step forward, but it's important to keep in mind that they are not fully autonomous systems. For more details, check out the links to the full stories in our newsletter and podcast description.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apology for this one coming out a few days late, got delayed in editing it -Andrey

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    EP 126: Real Business Use Cases for AI

    One of the most common phrases we hear is how can I actually use AI? What use cases exist for me to put AI to work for my business? Isar Meitis, CEO of Multiplai, joins us to discuss real business use cases for AI. From data analysis to marketing tasks, we cover a wide variety of topics. 

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Isar and Jordan questions about AI
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    [00:01:30] Daily AI news
    [00:04:00] About Isar and Multiplai
    [00:06:00] How to implement AI in a business
    [00:10:00] Roadblocks for companies adopting AI
    [00:15:25] Using AI for data analysis
    [00:19:30] Data security guidelines for AI
    [00:23:30] Using Gen AI for content creation
    [00:30:12] Ideation with LLMs
    [00:35:05] Isar's advice to start using AI

    Topics Covered in This Episode:
    1. Integrating AI into business organizations
    2. Use cases of AI in data analysis
    3. Role of the committee in exploring and implementing AI
    4. Content creation and repurposing with AI tools
    5. Consulting with experts and utilizing their content

    Keywords:
    financial data, session, accessibility, Google, database, sensitive information, sharing, continuous education, exploration, AI, podcast, consulting, teaching courses, staying updated, business owners, limited time, learning, experimenting, new tools, advancements, Microsoft, Google, Salesforce, HubSpot, integrating AI features, business leaders, committee, framework, boundaries, misuse of AI, company values, salesperson, compensation, guidelines, boundaries, education, training sessions, third-party tools, ChatGPT, advanced data analysis, insights, marketing, scattered data, automation tools, Zapier, Maker, data security issues, Gen AI tools, pain points, video recording, content repurposing, transcription, SEO purposes, books, advisory board, committee, bouncing ideas, multiple perspectives, CEO, experienced professionals, tech startups, AI impact, education, data security, commercial version, immediate results, low hanging fruits, data analysis, finance, marketing, sales, HR, fear of publishing private data, large language models, MidJourney, Niji app, AI image generation, anime style, version 5, image generations, DALL E, ideation, content creation process

    The AI Revolution Could Be Bigger and Weirder Than We Can Imagine
    Derek unpacks his thoughts about GPT-4 and what it means to be, possibly, at the dawn of a sea change in technology. Then, he talks to Charlie Warzel, staff writer at The Atlantic, about what GPT-4 is capable of, the most interesting ways people are using it, how it could change the way we work, and why some people think it will bring about the apocalypse. If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com. You can find us on TikTok at www.tiktok.com/@plainenglish_ Host: Derek Thompson Guest: Charlie Warzel Producer: Devon Manze Learn more about your ad choices. Visit podcastchoices.com/adchoices

    EP 14: Infusing Some AI Into the HVAC Space

    Infusing some AI into the HVAC space: An Everyday AI conversation with Rob West.

    Make sure to tune in for the latest AI news and trends. The Everyday AI show helps make your job easier, get your work done faster, and grow your career.

    Time Stamps:

    [00:00:55] Generative AI for Personalized Ads

    [00:03:15] Organizing Your Work with AI 

    [00:03:45] Musicians are Using AI More Than We Think

    [00:05:08] At Home AI Market Will Grow to $500 Billion

    [00:07:45] Armstrong Fluid Technology - Innovative tech in an old-school industry

    [00:09:50] How To Infuse AI into the HVAC Industry

    [00:13:19] The Future of AI and Energy Efficiency in Homes

    [00:16:10] Using AI to Boost Slow Moving Industries

    For full show notes, head to YourEverydayAI.com

    Open The Pod Bay Doors, Sydney
    What does the advent of artificial intelligence portend for the future of humanity? Is it a tool, or a human replacement system? Today we dive deep into the philosophical queries centered on the implications of A.I. through a brand new format—an experiment in documentary-style storytelling in which we ask a big question, investigate that query with several experts, attempt to arrive at a reasoned conclusion, and hopefully entertain you along the way. My co-host for this adventure is Adam Skolnick, a veteran journalist, author of One Breath, and David Goggins’ Can’t Hurt Me and Never Finished co-author. Adam writes about adventure sports, environmental issues, and civil rights for outlets such as The New York Times, Outside, ESPN, BBC, and Men’s Health. Show notes + MORE Watch on YouTube Newsletter Sign-Up Today’s Sponsors: House of Macadamias: https://www.houseofmacadamias.com/richroll Athletic Greens: athleticgreens.com/richroll  ROKA:  http://www.roka.com/ Salomon: https://www.salomon.com/richroll Plant Power Meal Planner: https://meals.richroll.com Peace + Plants, Rich

    Generative AI is gross

    Generative AI is gross to me.

    I have a visceral reaction to it. It makes my skin crawl. It causes my blood pressure to rise.

    Some days I wish I could “agree to disagree” about it, but, honestly, it feels like a real moral challenge of our age.

    I don’t necessarily see AI becoming sentient and taking over. What I do see is a lot of people using it for deeply cynical purposes.

    You can head over to JohnLacey.com for more information about today’s show.