
    Podcast Summary

    • Understanding Generative AI with Google's Head. Google's generative AI team identifies patterns and creates scalable packages to make AI integration easier and more effective for businesses.

      Viator is a valuable tool for travelers looking to book guided tours, excursions, and more in one place. With over 300,000 travel experiences to choose from, Viator offers free cancellation and 24/7 customer support for worry-free travel planning. Meanwhile, in the world of technology, AI and generative AI continue to be buzzwords, but what exactly do they mean? In this episode, David interviewed Danube Banga, Google's head of generative AI, who explained that their team works on incubating generative AI solutions into production-grade applications for companies. In simpler terms, they identify patterns and create scalable packages to make AI integration easier and more effective for businesses. Comparing it to the early days of programming, Danube explained that they aim to understand the design patterns of AI and generative AI and package them into technology and educational resources for consistent use. Overall, Viator simplifies travel planning, while Google's generative AI team focuses on making AI integration more accessible and efficient for businesses.

    • Bringing human cognitive capabilities to computers. AI enables machines to learn from data, understand and respond to the world, plan, schedule, make decisions, and recognize patterns using techniques like machine learning and deep learning.

      Artificial Intelligence (AI) is a system that brings human cognitive capabilities to computers, enabling them to accelerate various processes in technology. It is a collection of tools and techniques, including machine learning and deep learning, borrowed from mathematical fields like statistics and probability. Machine learning is a subset of AI that uses statistical methods to enable machines to learn from data, while deep learning is a subset of machine learning that uses neural networks to model and process data, inspired by the structure and function of the human brain. AI enables machines to understand and respond to the world around them, plan, schedule, and make decisions, while machine learning and deep learning focus on enabling machines to learn and process data. Deep learning, in particular, allows machines to recognize patterns and make decisions based on that recognition. So, in summary, AI is the overarching concept of enabling machines to mimic human intelligence, while machine learning and deep learning are specific techniques used to achieve that goal.

    • Learn about the differences between AI, ML, and DL. ML is a subset of AI that uses math and stats, DL is a subset of ML that uses neural networks, and the Transformer architecture has boosted DL's performance in processing text and sequences of data.

      Artificial Intelligence (AI) is a broad field that includes various techniques, with Machine Learning (ML) being a subset that focuses on mathematical and statistical methods. Deep Learning (DL), on the other hand, is a subset of ML that uses neural networks to process data and identify or classify objects. ML techniques include Nearest Neighbors, Support Vector Machines, and regression and classification methods, among others. Nearest Neighbors, for instance, classifies a data point by looking at its closest neighbors (for example, the four closest points). DL became popular due to its ability to process large amounts of data and its scalability, unlike traditional ML techniques that plateau when given too much data. The advent of the Transformer architecture in 2017 was a significant capability addition that has boosted the performance of DL models, especially in processing text and sequences of data. Before the Transformer architecture, sequence data processing required putting the entire data into the model, which was inefficient. In the last year, the application of these techniques has seen significant advancements due to the availability of powerful hardware like GPUs and TPUs, enabling parallel processing and increased performance. These techniques have been around for years but have become increasingly effective due to the additional capabilities that support their performance.
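
      As a rough illustration of the nearest-neighbors idea described above, here is a minimal sketch in Python; the toy data and the choice of k=4 are invented for illustration and are not taken from the episode.

```python
# Minimal k-nearest-neighbors sketch (plain NumPy, hypothetical toy data).
import numpy as np

def knn_predict(query, points, labels, k=4):
    """Classify `query` by majority vote among its k closest data points."""
    distances = np.linalg.norm(points - query, axis=1)  # Euclidean distance to every point
    nearest = np.argsort(distances)[:k]                 # indices of the k closest points
    votes = labels[nearest]
    return np.bincount(votes).argmax()                  # most common label wins

# Toy 2-D dataset: two loose clusters labeled 0 and 1.
points = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.3],
                   [2.0, 2.1], [2.2, 1.9], [1.9, 2.2]])
labels = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(np.array([0.15, 0.2]), points, labels, k=4))  # expected: 0
```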

    • Transformer Architecture: Understanding Text Data at Scale. The transformer architecture revolutionized text analysis by enabling context maintenance and scalability through the attention mechanism, leading to the development of large, multimodal models capable of understanding various tasks and modalities.

      The transformer architecture, introduced in 2017, revolutionized the way large amounts of text data are analyzed while maintaining the contextual relationships between words. Prior to this, synthesizing and analyzing extensive text was computationally expensive and challenging due to the need to maintain grammatical structure and keep track of word relationships. The attention mechanism, a key component of transformer architecture, enables maintaining context by allowing the neural network to understand how specific words are related within a text. This breakthrough made it possible to process vast amounts of data in a scalable way, paving the way for the development of extremely large, internet-scale models capable of understanding multiple tasks and modalities, such as text, images, audio, and video. This multimodal approach allows models to learn from various types of data and interact with users through text, providing benefits like understanding context and generating insights from a large corpus of crawled web data. Additionally, as these models continue to improve, they are able to exhibit emerging abilities beyond their initial design, offering new and unexpected capabilities. Overall, the transformer architecture and its attention mechanism have significantly advanced the field of natural language processing and AI, enabling more sophisticated and contextually aware models.
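
      To make the attention mechanism concrete, here is a minimal sketch of scaled dot-product attention in plain NumPy; the toy token vectors are random stand-ins, and a real transformer adds multiple heads, learned projections, and positional information on top of this core step.

```python
# Minimal scaled dot-product attention sketch (NumPy, toy vectors).
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each output row is a context-aware mix of the value vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                             # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                                        # weighted blend of the value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))                              # three toy "word" vectors of dimension 4
print(attention(tokens, tokens, tokens))                      # self-attention over the toy sequence
```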

    • Emergent Properties in Modern AI: Multitasking, Attention, In-Context Learning, and Chain of Thought. Modern AI exhibits emergent properties like multitasking, attention, in-context learning, and chain of thought, enabling it to handle various tasks, learn from context, and explain its reasoning.

      Modern models, which can now perform tasks with general-purpose intelligence, exhibit emergent properties when they are trained to multitask. This means they can handle various tasks such as mathematical derivations, SAT exams, text summarization, coding, and optimization all at once. These capabilities are considered emergent because they go beyond the initial programming and are not present when the model is trained on a single task. The attention mechanism plays a significant role in enabling these emergent properties. It allows specific elements fed to the model to learn about each other and interact, leading to higher-level skills and behaviors that were not anticipated. Another emergent property is in-context learning, where models learn from demonstrations and remember the context to provide answers in a specific manner. This capability allows systems like ChatGPT and Bard to adopt different roles and personalities based on the context of the conversation. Lastly, the ability to provide a step-by-step breakdown of its reasoning, known as chain of thought, is another fascinating emergent property. This capability enables the model to explain how it arrived at its answers, providing transparency and trustworthiness. These emergent properties are not necessarily physical artifacts but rather an emergence of the underlying complex interactions within the model. They make modern AI systems more versatile, adaptable, and capable of providing human-like responses.
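
      To show what in-context learning and chain of thought look like in practice, here are two hypothetical prompt snippets; the wording, task, and numbers are illustrative placeholders, not quotes from the episode.

```python
# Hypothetical prompts illustrating two emergent behaviors described above.

# In-context (few-shot) learning: the demonstrations teach the task inside the
# prompt itself; no model weights are updated.
few_shot_prompt = """Translate English to French.
sea otter -> loutre de mer
cheese -> fromage
plush giraffe ->"""

# Chain of thought: asking the model to spell out intermediate reasoning steps
# before giving the final answer.
chain_of_thought_prompt = """Q: A cafeteria had 23 apples. It used 20 and bought 6 more.
How many apples does it have now?
A: Let's think step by step."""

print(few_shot_prompt)
print(chain_of_thought_prompt)
```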

    • Transformer models revolutionize industries beyond chatbots. Transformer models lead to innovations and improvements in various industries through generating images, text, and more, enabling personalized experiences and simplified operations for businesses.

      AI, specifically transformer models, have revolutionized industries beyond just chatbots and language models. Since the transformer's introduction in 2017, there has been significant evolution and creativity in applying this technology to various industries and applications, leading to emergent properties and transformative impacts. Old school AI still exists and is useful for large companies with the resources to build and scale highly tuned systems. However, for smaller businesses, new opportunities have emerged with the availability of generative AI systems. These systems can generate images, text, and more, enabling industries to innovate and improve their offerings. For instance, Viator uses AI to offer personalized travel experiences, while Mercury simplifies financial operations for startups. These are just a few examples of how AI is transforming industries beyond the everyday experiences people have with chatbots.

    • Accelerating application development with generative AI. Generative AI enables users to interactively create product requirements, design systems, and write code in a matter of hours, leading to a productivity explosion across various industries.

      The use of AI, specifically generative AI, has significantly accelerated the application development process from months to weeks or even hours. This transformation is achieved through interactive sessions with AI models like chatbots, where users can iterate on ideas, create product requirements, design systems, and even write code. The AI assists in various aspects such as writing design documents, creating outlines, and even generating creative content. Generative AI is a deep learning technique that focuses on creating specific artifacts, setting it apart from other AI techniques like machine learning and deep learning, which are more mathematical and neural network-focused, respectively. This technology has been adopted across various industries, including media, healthcare, and financial services, leading to a productivity explosion. Users can interact with AI models in different ways, either by providing a specific outline for content creation or by formulating questions to build a prototype from scratch. The ability to generate ideas and build solutions in a matter of hours is a game-changer, enabling individuals and teams to be more creative and efficient in their work.

    • Transformer architecture for generative AI. The transformer architecture enables the creation of content in various modalities by learning relationships between different types of data, such as images and text.

      The transformer architecture is the foundation for generative AI, enabling the creation of content in various modalities such as images, text, and audio. This is achieved by feeding different types of data into the transformer and allowing it to learn relationships between them. For instance, in image generation, the transformer is given a set of frames or a combination of images and text, and it generates new content based on that input. The input can be broken down into vectors using a tokenizer or encoder, and these vectors are then combined using linear algebra. The transformer learns to recognize images and text together as a joint entity, which is the foundation of many AI image generation models like DALL-E. The ease of acquiring text data and the impressive results from generating text have led to the prominence of large language models in the field. However, the transformer architecture can handle any sequential data, making it applicable to various industries and creative fields.
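
      As a loose sketch of how different data types can become the same kind of tokens for one transformer, the snippet below turns made-up text embeddings and image patches into vectors of a shared size and concatenates them into a single sequence; the shapes and the projection matrix are invented for illustration.

```python
# Sketch: text tokens and image patches as one joint token sequence (NumPy, toy shapes).
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Pretend text tokenizer + embedding: 5 text tokens -> 5 vectors of size d_model.
text_tokens = rng.normal(size=(5, d_model))

# Pretend image encoder: split a tiny 8x8 "image" into four 4x4 patches,
# flatten each patch, and project it to d_model.
image = rng.normal(size=(8, 8))
patches = np.array([image[i:i + 4, j:j + 4].ravel() for i in (0, 4) for j in (0, 4)])  # (4, 16)
W_proj = rng.normal(size=(16, d_model))
image_tokens = patches @ W_proj                      # (4, d_model)

# The transformer would see text and image as one stream of same-sized tokens.
joint_sequence = np.concatenate([text_tokens, image_tokens], axis=0)
print(joint_sequence.shape)                          # (9, 8)
```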

    • Transformers convert data into vectors for comparison. Transformers tokenize data and project it into a shared vector space for comparison and analysis between different media types.

      Transformers in machine learning allow for the conversion of various types of data, such as text, images, and audio, into vectors, enabling comparison and analysis between different media types. This process, known as tokenization, involves encoding data into tokens, which can be more complex than a one-to-one word mapping. For images, this may include considering the structure of objects and their relationships within the image. Once data is tokenized, it undergoes embedding, which projects the vector onto a shared vector space, allowing for comparison and analysis between different types of data. This shared vector space allows for the extraction of information, which can be defined as patterns or contextualized to the specific artifact being analyzed. The ultimate goal is to preserve information and enable understanding and comparison between different types of data, making it a valuable tool for multimodal models and information extraction exercises. The conversion of data into vectors and the shared vector space can be thought of as a Rosetta stone, allowing for the translation and comparison of different languages or media types.
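
      A tiny example of the shared vector space idea: once two artifacts are embedded as vectors of the same size, cosine similarity can compare them regardless of modality. The embeddings below are hand-made stand-ins, not the output of a real encoder.

```python
# Sketch: comparing a text embedding and image embeddings in a shared vector space.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

text_embedding  = np.array([0.9, 0.1, 0.3, 0.0])   # pretend embedding of the caption "a photo of a dog"
dog_image_emb   = np.array([0.8, 0.2, 0.4, 0.1])   # pretend embedding of a dog photo
other_image_emb = np.array([0.0, 0.9, 0.1, 0.8])   # pretend embedding of an unrelated photo

print(cosine_similarity(text_embedding, dog_image_emb))    # high: same concept, different modality
print(cosine_similarity(text_embedding, other_image_emb))  # low: different concepts
```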

    • Extracting and synthesizing information from various data sources. AI recognizes patterns and relationships to create new actions, but context matters, and AGI progressively improves models to extract and synthesize information from various data sources.

      Information is derived from identifying patterns and differences within various modalities of data, be it images, text, or other forms. This process of extracting information involves understanding the evolution of patterns and the relationships between them. According to our discussion, this ability to recognize and synthesize information to create new actions is the definition of intelligence. However, it's important to note that the context in which this intelligence operates matters. For instance, a robot's intelligence used for surgery would differ from that used in a restaurant setting. While the concept of Artificial General Intelligence (AGI) implies an AI that can perform any task a human can, the practical realization of this goal may involve progressively improving models and their impact on the world. In essence, the fundamental elements of a general AI lie in its ability to extract and synthesize information from various data sources, and this information can be combined, compared, and even applied to other modalities, such as text and images. This is a significant step towards AGI, but it's essential to remember that the full realization of this goal may require further advancements in our understanding of intelligence and its applications.

    • Multiple smaller AIs for specific tasks. Instead of one all-encompassing AGI system, focus on creating multiple smaller, specialized AIs for specific tasks, improving their planning, scheduling, sensing, and real-world execution abilities.

      Dana Girshauskas, a researcher in artificial intelligence, believes we won't have one all-encompassing AGI (Artificial General Intelligence) system handling everything, but rather multiple smaller, specialized AIs managing specific tasks. This perspective is similar to the debate around the capabilities of the Tesla bot – instead of one human-like robot, we could have numerous smaller robots handling daily tasks. Girshauskas also emphasizes the importance of improving AI's ability to plan, schedule, sense, and act in real-world environments, which can be challenging due to the vast number of possibilities. He suggests focusing on simpler problem spaces to create AGI systems that can effectively learn and execute tasks within those constraints.

    • Exploring new forms of intelligence in AGI. We should consider expanding AGI beyond daily human tasks and be open to new methods of thinking and cognitive strengths as neural networks grow stronger.

      While we can define intelligence in AGI as including intuition, deduction, and the ability to extract information from multiple contexts, there are other forms of intelligence, such as spatial reasoning and dialectical thinking, that we have observed in ourselves. The question is whether we should limit a general-purpose AI to tasks our brains perform daily or if new methods of thinking and cognitive strengths will emerge as neural networks get stronger. Ellis agrees with David's definition of intelligence for its mechanistic implementation in software but acknowledges the potential for emergent abilities. However, we may not fully understand how to program these abilities, and our current AI tools, like deep learning models, may not be the only way to achieve cognitive capabilities. The possibility of discovering new cognitive routes in the way these systems learn is exciting, and we should remain open to the idea that we may stumble upon new scaling mechanisms or interaction modes that could lead to more advanced AI. Science, according to Richard Feynman, is the belief in the ignorance of the expert, so we should keep an open mind and be prepared to incorporate new information as it arises. Practically, deep learning tools like transformers and neural network architectures are currently the best tools in our AI laboratory, but adding interaction mode and information retrieval capabilities within these models could lead to new emerging cognitive abilities.

    • Ensuring truth and ethics in AI responses. While AI models can generate human-like text, they don't have the ability to distinguish truth from fiction or understand ethical implications. It's our responsibility to ensure the output is truthful and ethical through pre-processing and post-processing activities.

      While large language models can generate responses based on probabilities, they don't guarantee truth or accuracy. These models work by predicting the next word or completing a sentence based on the context given, but they don't ensure that the output is grounded in reality or adheres to responsible AI principles. The challenge lies in ensuring the output is truthful, real, and less toxic. This requires additional pre-processing and post-processing activities, such as checking the output against a source of truth or a database, and ensuring it meets certain ethical standards. The models themselves may give you something, but it's up to us to make sure that thing is true. This is why there are mechanisms like the Google button in Bard, which allows for double-checking of responses. In essence, while these models can generate human-like text, they don't have the ability to distinguish truth from fiction or to understand the ethical implications of their responses. Therefore, it's important to approach these technologies with a critical and contextualized perspective, and to remember that it's our responsibility to ensure the output is truthful and ethical.
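
      As a hedged sketch of the post-processing idea described here, the snippet below checks a placeholder model answer against a small "source of truth" table before returning it; generate_answer, the facts table, and the question are hypothetical stand-ins, not a real API or product behavior.

```python
# Sketch: grounding a generated answer against a trusted source before using it.

FACTS = {  # stand-in "source of truth" database
    "boiling point of water at sea level": "100 degrees Celsius",
}

def generate_answer(question: str) -> str:
    # Placeholder for a call to a large language model.
    return "Water boils at 90 degrees Celsius at sea level."

def grounded_answer(question: str, fact_key: str) -> str:
    draft = generate_answer(question)
    truth = FACTS.get(fact_key)
    if truth and truth not in draft:
        # The draft disagrees with the trusted source: flag it instead of presenting it as fact.
        return f"Unverified model output: {draft!r} (source of truth: {truth})"
    return draft

print(grounded_answer("At what temperature does water boil at sea level?",
                      "boiling point of water at sea level"))
```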

    • Google's approach to responsible AI and lowering the barrier to entry. Google's Vertex AI turns responsible AI principles into metrics and guardrails, enabling consumers and professionals to benefit from AI in writing and content creation, while the lowering barrier to entry allows more people to generate, prototype, and commercialize ideas, potentially leading to a new economy.

      While the underlying advancements in AI, such as transformers, have been known for some time, developing and implementing responsible AI principles and technologies that ensure deterministic outcomes has been a significant challenge. Google has been working on turning these principles into metrics and guardrails, which have been built into product capabilities like Vertex AI. Consumer applications of AI, such as chatbot-assisted writing and content creation, are becoming increasingly popular and accessible to everyone, but an often overlooked aspect is the developer experience and the lowering barrier to entry for creating valuable products with these technologies. With assistive AI, more people can generate, prototype, and commercialize ideas regardless of their technical background, which could lead to a new form of economy. It's an exciting time as AI continues to transform the way we create and innovate, and the speaker expressed optimism about the possibilities and potential impact of these advancements.

    • Mechanical keyboard vs Apple keyboard: Improved typing performance. Using the right tool can lead to improved performance and productivity, as demonstrated by the faster typing speed achieved with a mechanical keyboard compared to a default Apple keyboard.

      During a speed typing challenge, using a mechanical keyboard resulted in significantly improved performance compared to the default Apple keyboard, with a time of 8.73 seconds, which was faster than some notable figures in the industry. The importance of having options and finding the right tool for the job was emphasized, as was the need for continuous improvement and learning in the field of technology. The Vertex AI platform is a current project of the speaker, with a focus on designing patterns for deploying large models in enterprise environments.

    Recent Episodes from Waveform: The MKBHD Podcast

    Smartphone Season is Coming Early This Year!
    This week, Marques is back to discuss all of the news he missed last week (and a bunch of new stuff too). With David out on vacation, Marques and Andrew do Headlines in a Hat and go over everything from the Surface phone that never was to icon theming in iOS 18. They also talk about all of the gadgets that are coming this summer including the Pixel 9 and the Galaxy Watch Ultra. Then we wrap it all up with some trivia questions provided by James Carter of the PodQuiz podcast! It's a fun one, enjoy! Links:  The hat: https://bit.ly/4cioPQR PodQuiz weekly: https://www.podquiz.com/ New CMF Teaser: https://bit.ly/3VYsTQq iOS Icon Tinting: https://bit.ly/45GI4kz Surface Duo 3 Patent: https://bit.ly/3WbMVap New Motorola Phones: https://bit.ly/3W0nGY9 Pixel 9 Renders: https://bit.ly/4bnIAVy Volkswagen Group Invests in Rivian: https://cnb.cx/3zmRBRA Samsung Galaxy Watch Ultra Rumors: https://bit.ly/4chnutt Galaxy Watch Renders: https://bit.ly/4bvUxcc Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    20 Years, 1000 Episodes: The Man Behind PodQuiz
    We have another bonus episode! In this one, Andrew sits down and talks with James Carter from PodQuiz who began his popular trivia podcast back in 2005. He just published his 1000th episode so Andrew took the opportunity to pick his brain on how he comes up with his questions and the current state of podcasting.  Links:  PodQuiz Weekly: https://www.podquiz.com/ James Carter: https://www.jfc.org.uk/ Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Is YouTube Adding Community Notes to Videos?
    This week, Marques is out working on a big video project so he left Andrew and David in charge of going over the news of the week. First they give their quick impressions on the new Surface devices that showed up right before we sat down to record. Then they discuss the new Threads API before getting into a new Instagram competitor. After that, they discuss the DOJ suing Adobe and YouTube experimenting with community notes. Lastly, they talk about the CMF Phone 1 teaser before wrapping it all up with trivia. Enjoy! Links:  Threads Launches API: https://bit.ly/3z4Cmwr YouTube Community Notes: http://tcrn.ch/3xsKEhi TikTok Launches Whee: https://bit.ly/4ca44qa DOJ Sues Adobe: https://bit.ly/3RBdFOG CMF Phone Teaser: https://bit.ly/3RBZt7X TLD Health Update Video: https://bit.ly/4eyxYGj Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Everything From WWDC 2024!
    This week was WWDC, and the podcast crew has some thoughts! Marques chats with Andrew and David about everything from Apple Intelligence to the new iPad calculator app. There's so much to get into so we hope you enjoy! Links:  MKBHD Recap video: https://bit.ly/3KOHc3r MKBHD vs Tim Cook: https://bit.ly/3x70kGZ Snazzy Labs Spatial Photos: https://bit.ly/4eeDPAq Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    How Much AI Will We WWDC?
    There was a lot to get into this week! First, Marques, Andrew, and David discuss Instagram testing unskippable ads before getting into some Microsoft Recall news. Then they go deep on what they expect to see from Apple's WWDC 2024 next week. Then we finish it up with a call to action: we want to add some sounds to our soundboard so make sure to leave a comment on YouTube with your favorite soundbite. We then of course round it out with some trivia. Links:  Instagram Unskippable Ads: https://tcrn.ch/4ecanuL Kevin Beaumont Micosoft Recall Security Issues: https://twitter.com/GossiTheDog MacRumors WWDC Predictions: https://bit.ly/4bLeX1y Thread radios in Mac: https://bit.ly/4aQGrSe Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Don’t Ask Google's AI for Advice!
    A lot happened this week! Marques, Andrew, and David jump into a ton of different topics ranging from the special edition Nothing Phone 2a and the death of the Spotify Car Thing. Then they get into Google AI overview telling people to eat rocks and the Coffeezilla series about the Rabbit R1. Lastly, Miles comes on to talk about the new hybrid Porsche 911 with Marques and David. Links:  Thanksgiving Pea video: https://bit.ly/3VmLpkZ Marques Apple Testing: https://bit.ly/4aH2b2S Verge Spotify Car Thing: https://bit.ly/450vRqB Louis Rossmann Spotify Car Things video: https://bit.ly/4aKaPh8 Emma Roth Disable AI overview: https://bit.ly/3R79WIh Coffeezilla Rabbit R1 Part 1: https://bit.ly/3X3N7cl Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin Miles:https://www.youtube.com/@CarswithMiles TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The Waveform Recommendations Gameshow!
    It's bonus episode time! What started as a small segment for Ellis to recommend something, turned into an entire dedicated episode where Marques, Andrew, and David suggest some of their favorite things! The topics range from some of their favorite YouTubers and YouTube videos, all the way to something that you can buy that has to come with a handle. It's a bit chaotic but a ton of fun! YouTube Videos Mentioned: Formula Addict - https://www.youtube.com/@formulaaddict/videos Internet Shaquille: https://www.youtube.com/@internetshaquille/videos Natasha Adams: https://www.youtube.com/@NatashasCars Mii's Daily: https://www.youtube.com/@IAmMiisDaily Roy Hibbert: https://www.youtube.com/@RoyHibbertYT Ben Jordan: https://www.youtube.com/watch?v=2DOd4RLNeT4 Digital Spaghetti w/Gaux: https://www.youtube.com/watch?v=fDAqyQIaUPY Ken (Denky): Exploring tokyo's biggest tech store Time for Sushi: https://www.youtube.com/watch?v=bcXiwNjkhxU& Settled Swamppletics: https://www.youtube.com/results?search_query=runescape+settled+swampletics The Answer is Not a Hut in the Woods: https://www.youtube.com/watch?v=PK2SMIOHYig Products Mentioned: Anker 2-in-1 USB-C Memory Card reader: https://geni.us/C5JdR Omnitype IRL edition: https://geni.us/bC2N Air Blower Dust Bulb: https://geni.us/iD8Ni2f Red XLR Cable: https://geni.us/ebrXo Ryobi One Cordless Drill: https://geni.us/HZLIZ Headlamp: https://geni.us/wVYT DXA Micro Pro: https://geni.us/DbZh Gaffer Tape: https://geni.us/Wafj Mousepad: https://geni.us/ovvUtUd Smallrig Tripod Fluid Head: https://geni.us/oqxgEc Logitech MX Master 3: https://geni.us/k0xNZh Pulsar X2: https://geni.us/HPGW2d8 Artisan Obscura: https://geni.us/p2fI EUFY Keyboard Deadbolt: https://geni.us/Hlbo2M Vacuum Pad Camera Opener: https://geni.us/0eyonvg Neewer Panoramic Pannic: https://geni.us/noblAn dbrand skin: https://geni.us/VA3M RTIC Cooler: https://geni.us/hCPC Ridge Commuter backpack: https://geni.us/szzU01z Brevite backpack: https://geni.us/NpQptdj Moment Everything Backpack: https://geni.us/ggxAylR Peak Design Everyday backpack: https://geni.us/VaMIhn Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Microsoft’s MacBook Killer?
    This week, Marques jumps right into it with Andrew and David about the OpenAI vs Scarlett Johansson drama regarding one of the voices from ChatGPT 4o. Then they dig into the new Surface products from the Microsoft event and David explains why the move to ARM is such a big deal. To close it out they talk about a feature that Rivian is adding to its cars and a different feature Tesla is removing from its cars. Enjoy! Links:  Scarlett Johansson statement: https://bit.ly/4axIUkb Wired article David mentions: https://bit.ly/4c8UpjB Sci-Fi joke: https://bit.ly/3UTutkP MacObserver Circle to Search news: https://bit.ly/452CGrx Decoder interview with Sundar Pichai: https://bit.ly/44S63wF Dave2D Video: https://bit.ly/44YWhJk Rivian Adding YouTube and Cast: https://bit.ly/44XgmQ6 Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Hey ChatGPT, Summarize Google I/O
    This was a week full of AI events! First, Marques gives a few thoughts on the new iPads since he missed last week and then Andrew and David bring him up to speed with all the weirdness that happened during Google I/O and the OpenAI event. Then we finish it all up with trivia. Enjoy! Links:  MKBHD iPad Impressions: https://bit.ly/3WzFFWk MacStories iPadOS: https://bit.ly/3V1G0Qq The Keyword: https://bit.ly/4blfFm5 OpenAI GPT-4o Announcements: https://bit.ly/3V3Sabv 9to5Google I/O 2024 Article: https://bit.ly/3V2rDLv Merch tweet: https://bit.ly/4bnhNcV Shop products mentioned: Apple iPad Air: https://geni.us/SsXTRLt Apple iPad Pro M4: https://geni.us/HXDlXo Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    New OLED iPad and Pixel 8a!
    Marques was out sick when we recorded, so Andrew and David take over the pod and talk about all of the newest gadgets that came out this week! They start with the new Nintendo Switch rumors before digging into the new iPads that were announced. Then they get into the Pixel 8a before we wrap it all up with trivia. It's a surprisingly busy month and we're just getting started. Enjoy! Links:  Cam James Channel: https://bit.ly/3WyhyHq David Imel Sparkle Video: https://bit.ly/4btBVK4 Nintendo Switch 2 pre-announcement : https://bit.ly/4btt2QK Nintendo Switch 2 Joycons: https://bit.ly/3WCjij6 MKBHD iPad Impressions: https://bit.ly/3WzFFWk New iPads: https://apple.co/3QDvbRW MKBHD Pixel 8a: https://bit.ly/4bxvNjW Shop the merch: https://shop.mkbhd.com Socials: Waveform: https://twitter.com/WVFRM Waveform: https://www.threads.net/@waveformpodcast Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David Imel: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin TikTok:  https://www.tiktok.com/@waveformpodcast Join the Discord: https://discord.gg/mkbhd Music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Related Episodes

    Government Shutdown Averted, A Surge in Hate Speech, and Guest Adrian Aoun
    Kara and Scott discuss Nikki Haley's social media stance, Truth Social's waning prospects, and the revelation that Google sends a third of Safari ad revenue to Apple. And don’t worry, they also talk about the Jeff Bezos and Lauren Sanchez photoshoot. You’re welcome/we’re sorry. Then, in the fight club that is DC, a government shutdown has been averted for now. Also, a look at how hate speech has surged on social media since the start of the Israel-Hamas war. Then we’re joined by Friend of Pivot, Adrian Aoun, CEO and Founder of the health tech startup Forward, which is launching what it calls, “the world’s first AI doctor’s office.” You can find Forward on Twitter at @goforward. Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    BM103: Transformers in Machine Learning - Capabilities and Limitations
    In machine learning there are many different approaches, and there are thick books describing them all, but roughly speaking there are a few things you need to know.

    One of them is certainly Performers and Transformers, which were created with improvements to NLP in mind, that is, the field of science and technology devoted to natural language processing, but by now their use is much broader.

    My guest is Krzysztof Choromański, who earned his PhD at Columbia University. He has been working at Google Brain Robotics for 7 years. He is the author of interesting scientific publications and has a lot of valuable things to say on the topic of today's episode.

    EP 124: 5 Ways Generative AI Shows Up in 2024

    2023 has been the year of generative AI. So what's to come in 2024 for AI and what should you expect? Josh Cavalier, founder of Josh Cavalier.AI, joins us as we break down the top 5 ways that Gen AI will show up in 2024.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Josh and Jordan questions about AI
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    [00:01:15] Daily AI news
    [00:03:54] About Josh and JoshCavalier.AI
    [00:06:40] #1 Data Collection
    [00:09:50] #2 Automation
    [00:13:45] #3 Personalization
    [00:18:20] #4 Multimodal
    [00:26:05] #5 Prompt Engineering
    [00:31:56] Audience Questions

    Topics Covered in This Episode:
    1. Data Usage and Organization in 2024
    2. Automation Trends in 2024
    3.  Multimodal Workflows and Future Interactions
    4. Prompt Engineering and its Future
    5. Governance and Legal Aspects of AI

    Keywords:
    McKinsey report, personalized experience, AI, call center, automation, meaningful interactions, accessibility, workflows, AI tools, automations, top line growth, bottom line, personal productivity, ExCel report, AI-powered companies, US tech leaders, market caps, generative AI, NVIDIA's chips, Gen AI, China, Baidu, ErnieBot 4.0, OpenAI's GPT-4, search market, free daily newsletter, LinkedIn, governance of AI, legal cases, copyright, nefarious use of AI, content creation, multimodal, prompt engineering, automation, Zapier, Make, video transcription, AI workflows, Gen AI, data usage, customer privacy, Adobe, FireFly, Sensei, point of sale information, event, NVIDIA cards, tensor chips, multimodal technology, garbage content, AI-generated content, storytelling skills, ChatGPT

    Big Tech Breakups?
    Wall Street reacts to reports that U.S. regulators are preparing to investigate Amazon, Apple, Facebook, and Google over potential antitrust concerns. How worried should investors be? Analysts Andy Cross, Ron Gross, and Jason Moser tackle that topic and debate the age-old investing question, value play or value trap? Plus, we revisit Motley Fool co-founder David Gardner’s conversation with best-selling author Dan Pink about the science of perfect timing. Learn more about your ad choices. Visit megaphone.fm/adchoices

    EP 243: 5 Simple Ways To Use Generative AI Every Day

    If I had to count, I probably use Generative AI at least 100 times a day.  Some are advanced. Some are super simple. I'm going to be sharing some of the simplest ways that I use Generative AI to create Everyday AI and tell you the ways that you can use my same tactics to start winning back time today.
     
    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Learn more in today's newsletter
    Join the discussion: Ask Jordan questions on AI

    Related Episodes:
    Ep 189: The One Biggest ROI of GenAI
    Ep 197: 5 Simple Steps to Start Using GenAI at Your Business Today

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    01:30 Daily AI news
    07:17 Knowledge workers adapt to succeed in AI era.
    10:44 Behind the scenes of Everyday AI production.
    18:21 Steal this method to learn about your career.
    19:10 Customize Google search for efficient news updates.
    23:01 Using shortcuts to gather information efficiently.
    27:15 Using CastMagic and its features.
    29:13 Automated transcription
    33:57 Microsoft 365 Copilot helps with team work.
    37:05 Addicted to Chrome, but now liking Edge.
    40:20 Request to find specific generative AI statistics in PDF.
    44:17 Chat prompt reused for specific AI training.
    45:02 Optimize chat model use with refined method.
    50:12 Use perplexity for efficient research queries, not Google.
    52:03 Generative AI can save knowledge workers time.

    Topics Covered in This Episode:
    1. Practical uses of Generative AI
    2. Tips for saving time using AI
    3. Summarizing web pages for quick updates
    4. Organizing and training ChatGPT for specific tasks

    Keywords:
    Microsoft 365 Copilot, Microsoft 365 Teams, Cast Magic, Chat GPT, Microsoft Edge, Chromium, Chrome Extensions, Copilot Integration, Perplexity, Google Search, Generative AI, McKinsey Digital, AI News, Amazon Web Services, Voila, Organizing Chat GPT, AI in Research, Custom Prompts, Knowledge Workers, New York City AI-powered Chatbot, Google's AI Reply Suggestions, Premium AI Features, AI Automation, Chat GPT Summaries, AI Content Generation, Everyday AI, AI News Headlines, Third-Party Information Connection, Bullet Points Summaries, Web Content Summarization.

    Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/