
    #148 - Imagen 2, Midjourney on web, FunSearch, OpenAI ‘Preparedness Framework’, campaigning voice clone

    December 24, 2023

    Podcast Summary

    • Google releases upgraded AI image generation tool, Imagen 2: Google's new Imagen 2 model generates more accurate images from descriptions, includes an aesthetics model for adjusting visual appeal, and accepts reference images to influence output. Available through the Imagen API.

      Google has released an upgraded version of its AI image generation tool, Imagen 2. The new model is more effective at producing images that align with given descriptions, making it particularly useful for generating logos and other text-dependent images. Its advanced capabilities include an aesthetics model that lets users adjust an image's visual appeal with a tunable dial, and users can provide reference images to influence the generated output. The improved prompt fulfillment and availability through the Imagen API make it a significant development in the field of AI-generated images. For those interested in seeing the tool in action, links to the news article and the Imagen API are provided in the episode description.
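
      For readers who want a feel for prompt-based generation with Imagen, here is a minimal sketch using the Vertex AI Python SDK's preview vision_models interface. The project name, model version string, and prompt are illustrative assumptions, and the aesthetics dial and reference-image options discussed above are not shown.

```python
# Minimal sketch: text-to-image with Imagen via the Vertex AI Python SDK.
# The project name, model version string, and parameters are illustrative
# assumptions, not details confirmed in the episode.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = ImageGenerationModel.from_pretrained("imagegeneration@005")  # assumed Imagen 2 version
images = model.generate_images(
    prompt="A minimalist logo for a coffee shop named 'Driftwood'",
    number_of_images=2,
)
images[0].save("logo_candidate.png")
```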

    • Google's AI generates images with copyright protection: Google's Vertex AI now generates images from text prompts and comes with copyright indemnification, meaning Google will defend users against copyright infringement lawsuits related to the generated content.

      Google has introduced a new feature in its Vertex AI, allowing users to generate images in specific styles using text prompts, and this new capability comes with copyright indemnification, meaning Google will defend users if they face copyright infringement lawsuits related to the generated content. This indemnification goes beyond traditional coverage, as Google also offers to pay for approved settlements or judgments. The move follows similar commitments from other tech companies and reflects a growing industry standard of providing legal protection to users of AI-generated content. The fine print of these indemnification policies may vary, but it's clear that companies are taking steps to address the legal uncertainty surrounding AI-generated content. The podcast also notes that Anthropic, another AI lab, has recently joined this trend by offering similar indemnification to its users.

    • Anthropic offers indemnification for approved settlements or judgments, expands legal protections, and improves its API: Anthropic provides indemnification for approved settlements or judgments, enhances legal protections, and upgrades its API to attract more business.

      Anthropic, an OpenAI competitor, is ramping up its commercialization efforts by offering indemnification for approved settlements or judgments related to its chatbot, Claude. This move, part of expanded legal protections and improvements to their API, aligns with their safety-focused philosophy and may put pressure on other players in the market to follow suit. Additionally, Midjourney, a popular text-to-image model and service, has finally released a web-based version of its tool, allowing users to generate images without needing to sign up on Discord. This is an exciting development for users who have used Midjourney extensively and prefer a more streamlined experience. Instagram, meanwhile, has introduced a new AI-powered background editing tool, adding another use case for generative AI in everyday applications. Overall, these developments highlight the growing importance of indemnification and user experience in the AI market, as well as the continued innovation and expansion of AI tools and services.

    • AI advancements on Instagram and Azure: Meta introduces AI-generated backgrounds on Instagram, Microsoft expands Azure AI Studio with Llama 2 and GPT-4 Turbo, and there are reports of ChatGPT exhibiting strange behavior.

      Both Meta and Microsoft are making significant strides in AI technology. Meta is introducing a new feature on Instagram that allows users to change the background of their stories using AI-generated images. Meanwhile, Microsoft is expanding its Azure AI Studio to offer Llama 2 as a model-as-a-service, along with GPT-4 Turbo with Vision. This expansion makes these advanced AI models accessible to a wider range of organizations and entities. Additionally, there have been reports of ChatGPT, a popular AI model, exhibiting strange behavior and asking users to solve their own problems instead of providing answers. This could be due to a lack of updates or other underlying issues, and the developers are looking into it. Overall, these developments demonstrate the ongoing advancements and potential applications of AI technology in various industries.

    • AI in Creative Industries: New Tools for Music and Image Generation. Suno, via a new Microsoft Copilot extension, lets users generate custom songs from text prompts, joining other AI tools. Commercial use often requires paid memberships or subscriptions. ByteDance reportedly used OpenAI's API without permission, leading to account suspensions.

      The use of AI in creative industries, such as music and image generation, is becoming more accessible and integrated into various platforms. Suno, available as a new Microsoft Copilot extension, allows users to generate custom songs by providing text prompts, joining other tools like Soundful, Magenta, and Soundraw. However, commercial use of the generated content often requires paid memberships or subscriptions, as seen with Suno and Stability AI. ByteDance, the developer of TikTok, has reportedly been using OpenAI's API in violation of the terms of service to build a competing technology, leading to account suspensions. These developments highlight the expanding support for AI in creative industries, the need for effective tool discovery, and the potential legal and ethical considerations surrounding commercial use of AI-generated content.

    • ByteDance uses large language models like OpenAI's GPT for its own projects, potentially violating terms of service: According to internal communications, ByteDance allegedly continues to use OpenAI's GPT for its projects, raising questions about terms enforcement and potential legal consequences.

      The race for artificial intelligence (AI) dominance is intensifying, with companies like ByteDance reportedly using large language models like OpenAI's GPT for their own projects, despite potential violations of terms of service. ByteDance, the Chinese tech giant behind TikTok, is allegedly continuing this practice, using internal communications platforms like Lark to discuss it and employing tactics like data desensitization to avoid detection. The implications are significant, as this raises questions about the ability of companies like OpenAI to enforce their terms and the potential legal consequences of using unauthorized data in AI model development. Meanwhile, the hardware race is also heating up, with Intel's new AI chip, Gaudi 3, set to compete with NVIDIA and AMD in the market. Intel, under CEO Pat Gelsinger, has been focusing on chip fabrication and acquired Habana Labs to bolster its expertise in this area. The new chip is expected to launch next year and compete with NVIDIA's H100 and AMD's MI300X. However, Intel faces a challenge in building a software ecosystem strong enough to rival NVIDIA and AMD, which have both made significant strides in this area. Ultimately, the question is whether Intel can keep up in both hardware and software to compete effectively in the rapidly evolving AI market.

    • Intel's new Core Ultra chips include neural processing units and an in-house 7-nanometer process for local AI computing and chip manufacturing: Intel's new chips with neural processing units and a 7-nanometer process aim to give it a competitive edge in the market, while the Chinese semiconductor industry faces challenges due to US sanctions and oversupply, with companies like SMIC continuing to succeed.

      Intel is making strides in local AI computing and chip manufacturing with the introduction of its new Core Ultra chips, which include neural processing units (NPUs) and are built on Intel's in-house 7-nanometer-class process (Intel 4). This is part of Intel's aggressive roadmap to catch up with TSMC by 2026. Meanwhile, the Chinese semiconductor industry is facing significant challenges due to US sanctions, investment restrictions, and oversupply, leading to a record number of semiconductor company shutdowns. Despite the losses, the focus should be on the companies that succeed, such as SMIC, in China's pursuit of semiconductor self-reliance. Intel's advancements in local AI computing and chip manufacturing could give it a competitive edge in the market.

    • China's Semiconductor Industry Advancements and Open-Source Hardware Shift: China's SMIC is progressing towards semiconductor self-reliance, while TSMC moves forward with smaller nanometer processes for AI optimization and Meta plans to develop RISC-V based hardware, signaling a shift towards open-source alternatives in hardware design.

      China is making significant strides towards semiconductor self-reliance with companies like SMIC, but the industry still faces challenges as many companies struggle. Meanwhile, TSMC, a leading chip fab, is moving forward with the development of smaller process nodes, specifically a new 1.4 nanometer process and a 2 nanometer process planned for 2025. Moving flagship products to these nodes will free up capacity at the current leading edge, 3 nanometers, for AI-optimized hardware. Additionally, Meta has announced plans to develop hardware based on RISC-V, an open-source instruction set architecture, which could potentially reduce reliance on companies like Intel and AMD. This shift towards open-source alternatives in hardware design is a significant development in the tech industry.

    • Companies adapt to new regulations and market realities: Companies are collaborating, responding to export restrictions, and forming new partnerships to navigate challenges and advance in the tech industry.

      Companies are adapting to new regulations and market realities in innovative ways. Medov's decision to validate the RISC-V toolkit through collaboration with Meta and TSMC is a strategic move to advance the ecosystem. NVIDIA's rush order for modified AI GPU chips for the Chinese market is a response to export restrictions, showcasing their agility in navigating complex geopolitical situations. OpenAI's agreement to pay Axel Springer for using their content to train AI models marks a new era of collaboration between media companies and AI firms, with Axel Springer positioning itself to survive in the digital age. These stories illustrate the resilience and adaptability of tech companies in the face of challenges.

    • Regulatory landscape for large language models and AI companies: Discussions around licensing regimes impact smaller firms' ability to build and compete, while open-sourcing smaller, faster, and cheaper models remains economically unclear. Neural architecture search offers potential for significant improvements, and the focus on optimizing serving and deployment continues.

      The regulatory landscape for large language models and AI companies is evolving, with potential implications for competition and innovation. Discussions around licensing regimes for companies like OpenAI raise questions about the impact on smaller firms' ability to build and compete with larger models. Meanwhile, advancements in AI technology continue to push the boundaries, with Deci's new 7-billion-parameter model being open-sourced under the Apache 2.0 license and showcasing impressive performance gains. The trend of smaller, faster, and cheaper models is accelerating, but the economics of open-sourcing these models remain unclear. Neural architecture search, an automated process for discovering network structures, is an intriguing development, with potential for significant improvements in inference speed; a toy sketch of the idea follows below. The focus on optimizing serving and deployment rather than just the models themselves is becoming increasingly apparent, with companies like Deci and Microsoft offering their models at the cheapest and fastest possible rates on their respective compute platforms. Overall, it's an exciting time for AI, with rapid advancements and shifting regulatory landscapes shaping the future of the industry.
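
      To make the neural architecture search idea concrete, here is a toy illustration: random sampling over a tiny search space, scored by a cheap proxy. This is a generic sketch, not Deci's actual (proprietary) method; the search space and proxy score are invented for illustration.

```python
import random

# Toy neural architecture search: each candidate is a (depth, width, kernel)
# choice, scored by a cheap proxy instead of full training.
SEARCH_SPACE = {
    "depth": [4, 8, 12, 16],
    "width": [64, 128, 256],
    "kernel": [3, 5, 7],
}

def proxy_score(arch):
    # Stand-in for a cheap evaluation signal (e.g., accuracy after a few
    # minibatches, penalized by estimated latency). Purely illustrative.
    acc_proxy = arch["depth"] * 0.5 + arch["width"] * 0.01
    latency_penalty = (arch["depth"] * arch["width"] * arch["kernel"]) / 2000
    return acc_proxy - latency_penalty

def random_search(n_trials=100, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = proxy_score(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

print(random_search())
```

      Real systems replace random sampling with smarter search (evolutionary methods, gradient-based relaxations) and far better proxies, but the shape of the loop is the same.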

    • AI Advancements in 3D Object Generation and Code Innovation: Stability AI introduces Stable Zero123, a 3D object generation model that emphasizes data quality, while Google DeepMind's FunSearch generates new code based on prompts and builds off what it learns.

      The field of AI is continuously evolving, with companies like Stability AI pushing boundaries across modalities such as image generation, audio, and now 3D object generation. Stability AI's introduction of Stable Zero123, a model capable of generating high-quality 3D objects from single images, is an interesting development. Unlike the trend towards larger data volumes in the space, Stability AI emphasizes data quality, using a heavily filtered dataset that preserves only high-quality 3D objects. Additionally, they provide the model with an estimated camera angle during training and inference, allowing for more informed and higher quality predictions. Another significant advancement comes from Google DeepMind with their new algorithm, FunSearch. This system generates new pieces of computer code based on a prompt and tests them, adding the successful ones to a database. The system then uses these programs to populate new prompts and runs the language model again, essentially building off what it has learned. However, for this process to lead to improvements and discoveries, there needs to be an injection of fresh information to prevent the model from simply eating its own tail. These developments showcase the ongoing progress in AI research and the potential for AI both to generate high-quality 3D objects and to innovate by generating new computer code.
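
      A schematic of that generate-test-reuse loop, heavily simplified: llm_generate and evaluate are hypothetical placeholder functions, and the real FunSearch uses an island-based program database and sandboxed execution rather than the single top-k list shown here.

```python
import heapq

def funsearch_loop(llm_generate, evaluate, seed_program, iterations=1000, db_size=50):
    """Sketch of a FunSearch-style loop: the LLM proposes programs, a cheap
    automatic evaluator scores them, and high scorers are fed back into
    future prompts so the model builds on what has already worked."""
    database = [(evaluate(seed_program), seed_program)]  # (score, code) pairs
    for _ in range(iterations):
        # Prompt with a couple of high-scoring programs for the LLM to improve on.
        exemplars = heapq.nlargest(2, database)
        prompt = "\n\n".join(code for _, code in exemplars)
        candidate = llm_generate(prompt)  # fresh sample = injection of new information
        try:
            score = evaluate(candidate)   # must be cheap and automatic
        except Exception:
            continue                      # discard programs that crash
        database.append((score, candidate))
        database = heapq.nlargest(db_size, database)  # keep only the best
    return max(database)
```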

    • Using LLMs for mathematical problem solving: Researchers found that large language models can generate novel solutions to classic mathematical problems, improving on existing bounds, when evaluation is cheap and tweaks lead to better results.

      Researchers have developed a new method using large language models (LLMs) to generate novel solutions to classic problems in mathematics, such as the bin packing problem, by applying evolutionary optimization. This approach involves generating potential solutions, evaluating them, and making adjustments based on the evaluation results. The LLM's generated solutions are legible, allowing researchers to understand the logic behind them. While the idea of using LLMs for optimization is not new, the researchers were able to demonstrate significant improvements on existing bounds for certain problems. However, it's important to note that this approach is limited to problems where evaluating the solution is cheap and where making tweaks to the program can lead to better solutions. This discovery, while not fundamentally new, shows that there was latent potential in our pre-existing theories that we have now figured out how to unlock. Additionally, this research may signal that LLMs are capable of learning embedded logic in a deep way, potentially bringing us closer to achieving advanced forms of artificial intelligence.
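
      To make "cheap to evaluate" concrete, here is an illustrative harness for online bin packing: the artifact being evolved is a small priority function, and scoring a candidate just means replaying random item lists. The priority function below is a hand-written best-fit stand-in, not the heuristic the researchers actually discovered.

```python
import random

def priority(item, bin_space):
    # Candidate heuristic: prefer bins the item fills most tightly (best fit).
    # In an evolutionary setup, this is the function the LLM would rewrite.
    return -(bin_space - item)

def pack(items, bin_capacity=1.0):
    bins = []  # current fill level of each open bin
    for item in items:
        feasible = [i for i in range(len(bins)) if bins[i] + item <= bin_capacity]
        if feasible:
            best = max(feasible, key=lambda i: priority(item, bin_capacity - bins[i]))
            bins[best] += item
        else:
            bins.append(item)  # open a new bin
    return len(bins)

def evaluate(n_instances=100, n_items=200, seed=0):
    # Cheap, automatic scoring: average bin count over random instances.
    rng = random.Random(seed)
    total = sum(
        pack([rng.uniform(0.05, 0.6) for _ in range(n_items)])
        for _ in range(n_instances)
    )
    return total / n_instances

print(evaluate())  # lower is better; an evolved priority() should beat this baseline
```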

    • Exploring new ways to control superintelligent AI via weak-to-strong generalization: Researchers at OpenAI found promising results in using a less intelligent model to train a more advanced one, potentially offering a way to control superintelligent AI behavior.

      Researchers at OpenAI are exploring new ways to control and align superintelligent AI by having a less intelligent model train a more advanced one. This approach, called weak-to-strong generalization, showed promising results, with the more advanced model, GPT-4, achieving roughly GPT-3-level performance when fine-tuned on labels generated by GPT-2. However, the model did tend to learn and replicate the mistakes of the weaker model, requiring additional techniques like an auxiliary confidence loss to prevent this. The implications of this research are significant, as it suggests that we may be able to use less intelligent models to guide the learning of superintelligent ones, potentially offering a way to control their behavior. This is an important step forward in the ongoing quest to ensure the alignment of AGI with human values. The researchers also found that the effectiveness of these techniques varies depending on the domain, highlighting the need for further research to develop a robust scientific theory of alignment. Overall, this research underscores the importance of continued exploration and innovation in the field of AGI alignment.
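
      A sketch of the auxiliary confidence loss idea: the strong model is trained partly on the weak supervisor's labels and partly on its own hardened predictions, which is what discourages it from simply copying the weak model's mistakes. The fixed mixing weight and other details are simplified relative to the paper.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(strong_logits, weak_labels, alpha=0.5):
    """Simplified auxiliary confidence loss: cross-entropy against the weak
    supervisor's labels, mixed with cross-entropy against the strong model's
    own hardened predictions. Alpha scheduling from the paper is omitted."""
    # Term 1: imitate the weak supervisor.
    weak_ce = F.cross_entropy(strong_logits, weak_labels)
    # Term 2: reinforce the strong model's own confident predictions, so it
    # is not forced to reproduce the weak supervisor's errors.
    hardened = strong_logits.argmax(dim=-1).detach()
    self_ce = F.cross_entropy(strong_logits, hardened)
    return (1 - alpha) * weak_ce + alpha * self_ce

# Illustrative usage with random tensors:
logits = torch.randn(8, 4)        # batch of 8 examples, 4 classes
weak = torch.randint(0, 4, (8,))  # labels produced by the weak supervisor
print(weak_to_strong_loss(logits, weak))
```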

    • Advancements in Chatbot Technology and AI Hardware Efficiency: Researchers are pushing chatbot technology forward, but AI hardware efficiency is limited, with OpenAI addressing potential risks as more powerful models emerge.

      Researchers are making strides in creating more advanced chatbots that can interact with and use GUI applications, moving beyond simple text-based interactions. However, there are limitations to the current generation of AI hardware, specifically CMOS technology, which may limit the efficiency and scale of AI development. Research organizations like Epoch AI estimate that current strategies will allow for up to 10^35 FLOP of compute, a significant increase from the roughly 10^26 FLOP used for models like GPT-4. Despite these advancements, the efficiency gains available to current AI hardware are expected to be capped at around 200x, and other technologies are expected to take over by the end of the decade. Meanwhile, OpenAI has announced a Preparedness Framework to track and mitigate risks associated with more powerful AI models, focusing on themes like cybersecurity, persuasion, and model autonomy. The framework includes a scorecard to evaluate the safety of models in these areas, with a detailed white paper outlining specific measurements and evaluation methods.

    • OpenAI's Risk Management Framework for AI Systems: OpenAI categorizes risks in AI systems into low, medium, high, and critical levels, with a willingness to deploy low- and medium-risk models but more caution for high and critical risks, and a recognition of unknown unknowns. The CEO makes the initial assessment, but the board can overrule decisions, supporting transparency and governance.

      OpenAI is implementing a risk management framework for their AI systems, categorizing risks into low, medium, high, and critical levels. They are willing to train and deploy models with low and medium risks, but are more cautious with high-risk models that could enable experts to develop novel threats or assist anyone with basic training to create dangerous threats. They are even more reluctant to train models with critical risks, which could lead to unacceptable new capabilities. The framework also includes a category for unknown unknowns, acknowledging the limitations of predicting all potential risks. The CEO makes the initial risk assessment, but the board has the power to overrule decisions. This level of transparency and governance is significant for responsible development and scaling of AI.
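
      Stripped of the surrounding process, the gating rule described above is simple enough to state as code. This is an illustration of the summary only, not OpenAI's actual implementation; the category names follow the framework, and the thresholds are paraphrased.

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def deployment_decision(post_mitigation_risk: Risk) -> str:
    # Paraphrase of the rule described above: deploy at medium or below,
    # proceed cautiously (development only) at high, stop at critical.
    if post_mitigation_risk <= Risk.MEDIUM:
        return "eligible to train and deploy"
    if post_mitigation_risk == Risk.HIGH:
        return "continue development only, with added safeguards"
    return "do not train further"

print(deployment_decision(Risk.MEDIUM))  # eligible to train and deploy
```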

    • China's use of AI in a large-scale social media influence operation: An AI-powered pro-China influence operation on YouTube generates quick responses to current events, drawing 120M views and 730K subscribers. Separately, US regulators warn that AI poses risks to the financial system, from biased or inaccurate results to systemic risks from interacting systems, requiring regulation and oversight.

      China is reportedly using AI at a large scale in an influence operation on social media, specifically on YouTube, to spread pro-China and anti-US narratives. This operation, which has drawn nearly 120 million views and 730,000 subscribers, is able to rapidly respond to current events and has been successful in influencing global opinion. The use of AI allows for quick content generation and a diverse range of formats, but the techniques used are not necessarily advanced or realistic. The Financial Stability Oversight Council (FSOC) in the US has formally classified AI as an emerging vulnerability due to its potential risks to the financial system, including biased or inaccurate results and systemic risks from interacting systems. These developments underscore the growing importance of AI in various domains and the need for regulation and oversight.

    • Anonymous Sudan targets ChatGPT with DDoS attacks over Palestinian views: Politically motivated DDoS attacks on ChatGPT highlight its critical infrastructure status and the potential societal implications of AGI's emergence.

      The Anonymous Sudan hacking group has targeted ChatGPT with DDoS attacks, demanding that OpenAI stop promoting views deemed dehumanizing towards Palestinians. This politically motivated attack raises questions about ChatGPT's status as critical infrastructure, potentially affecting the numerous companies that rely on the service. Simultaneously, the International Monetary Fund (IMF) published a report discussing the potential labor market implications of AGI's emergence, with scenarios ranging from business as usual to complete automation within five years. Effective altruism, a movement focused on maximizing positive impact, has increasingly focused on AI risk. Organizations and individuals within this movement are making significant strides in AI security, with notable funding from entities like Open Philanthropy. These developments underscore the growing importance of AI in various sectors and the need for careful consideration of its societal implications.

    • Effective Altruism and AI Safety: Complex Relationship. Effective altruism, an influential philosophy in charitable giving, brought attention to AI risks, but centralized funding raises concerns about potential conflicts of interest. Nuanced understanding of motivations and complexities is crucial.

      The debate surrounding the influence and funding of effective altruism in the field of AI safety is a complex issue. Effective altruism, an influential philosophy promoting rationality and evidence-based charitable giving, was among the first to draw attention to potential risks from artificial intelligence, particularly alignment failure scenarios. However, the centralization of funding in this space has raised concerns about potential conflicts of interest. While some argue that those advising governments may have good intentions and knowledge, others worry about the influence of effective altruism on the narrative. The speaker, whose company is the only major AI safety firm not reliant on this funding, acknowledges the validity of these concerns but also the complexity of the situation. The issue is not black and white, and it's essential to understand the nuances and motivations behind the various players in the ecosystem. Additionally, the proliferation of AI-generated images on social media platforms like Facebook, which can be used as engagement bait, is another concern that warrants attention.

    • AI in Social Media: Creating Viral Content. AI is being used to generate and optimize content for virality on social media platforms, raising concerns about authenticity and originality.

      AI is increasingly being used to create and optimize content for virality on social media platforms. This was highlighted in a recent article with examples of AI-generated images, and even an AI voice clone used by a former Pakistani Prime Minister for campaigning. The trend is reminiscent of the early days of digital media, when A/B testing was used extensively to optimize headlines. While it's cool to see the possibilities, it also raises questions about authenticity and originality. A group called "Um Isn't That AI?" on Facebook is dedicated to detecting and tracing back AI-generated content. The use of AI in politics is another area where this trend is visible, with examples in India and Korea. This could potentially lead to more efficient and cost-effective campaigning, but it also raises ethical concerns. Overall, it's a reminder of the rapid advancements in AI and its increasing presence in various aspects of our lives.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late, it got delayed in editing -Andrey

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    ARE AI ROBOTS TAKING OVER THE WORLD? FROM CEOS TO LAW ENFORCEMENT, AI ROBOTS ARE QUIETLY ROLLING OUT

    On today's episode, Tara and Stephanie talk about robots popping up everywhere. From airports to restaurants, robots are becoming a way of life in today's world. But at what risk? Your hosts dive into the potential pitfalls of using robots in the military, what world leaders are doing to secure AI, and how this technology is used to target children. This episode is so crazy, it sounds like something out of a science fiction movie. But this is the world we are already living in.

    Read the blog and connect with Stephanie and Tara on TikTok, IG, YouTube, and Facebook.

    https://msha.ke/unapologeticallyoutspoken/

    Support the podcast and join the conversation by purchasing a fun UOP sticker or joining our Patreon community.

    https://www.patreon.com/unapologeticallyoutspoken

    https://www.esty.com/shop/UOPatriotChicks

    Guest Lectures Douglas Murray About Gender & It Gets Ugly Fast | Direct Message | Rubin Report

    Dave Rubin of “The Rubin Report” talks about Douglas Murray’s appearance on “Piers Morgan Uncensored” where guest Ava Santina attempted to lecture him on gender ideology before his show-stopping statement about the real threat against LGBTQ rights; Riley Gaines revealing the shocking details of what it was like for her and her teammates to be forced to share a locker room with trans athlete Lia Thomas in her testimony at the “Protecting Pride: Defending the Civil Rights of LGBTQ+ Americans'' hearing; and HRC President Kelly Robinson being questioned by Senator John Neely Kennedy and having her women’s sports facts corrected by Riley Gaines. Dave also does a special “ask me anything” question-and-answer session on a wide-ranging host of topics, answering questions from the Rubin Report Locals community. WATCH the MEMBER-EXCLUSIVE segment of the show here: https://rubinreport.locals.com/ Check out the NEW RUBIN REPORT MERCH here: https://daverubin.store/ ---------- Today’s Sponsors: Old Guard Pet Co. - Give your dog high quality ingredients and science backed recipes. Don’t compromise ingredient quality, your dog’s health, or your traditional American values. Go to: https://www.oldguardpetco.com/ USE PROMO CODE: DAVE Birch Gold - Protect your retirement from Bidenflation. Convert your IRA or 401k into an IRA in precious metals. Claim your free infokit on gold and talk to one of their precious metals specialists now. Go to: https://birchgold.com/dave Learn more about your ad choices. Visit megaphone.fm/adchoices

    WARNING By Sam Harris: ChatGPT Could Be The Start Of The End. AI "Could Destroy Us, The Internet And Democracy"

    In this new episode Steven sits down with philosopher, neuroscientist, podcast host and author Sam Harris. In 2004, Sam published his first book, ‘The End of Faith’, this stayed on the New York Times bestseller list for 33 weeks and won the PEN/Martha Albrand Award for First Nonfiction. He has gone on to author 5 New York Times bestselling books published in over 20 languages. In 2009, Sam obtained his Ph.D. in cognitive neuroscience from the University of California, Los Angeles. In 2013, he began the ‘Waking Up’ podcast which covers subjects from meditation to AI. Sam is also the co-founder and CEO of Project Reason, a nonprofit foundation devoted to spreading scientific knowledge and secular values in society. In this conversation Sam and Steven discuss topics, such as: How to change peoples beliefs Why he is not optimistic about AI How to live an examined life Why you become what you pay attention to The reason the mind is all you really have moment by moment Why AI is not aligned with human wellbeing How it is too late to turn back the progression of AI The danger of misinformation Why we’re going to have to abandon the internet You can purchase Sam's book, 'Waking Up', here: https://bit.ly/3Qp51D7 Sam has kindly given DOAC listeners 30 days free trial on his app - Waking Up. Here is the link: https://bit.ly/3QxIrrZ Follow Sam: Instagram: https://bit.ly/3DHwOHy YouTube: https://bit.ly/3DE8RAy Watch the episodes on Youtube - https://g2ul0.app.link/3kxINCANKsb My new book! 'The 33 Laws Of Business & Life' pre order link: https://smarturl.it/DOACbook Follow me: Instagram: http://bit.ly/3nIkGAZ Twitter: http://bit.ly/3ztHuHm Linkedin: https://bit.ly/41Fl95Q Telegram: http://bit.ly/3nJYxST Sponsors: Huel: https://g2ul0.app.link/G4RjcdKNKsb Whoop: http://bit.ly/3MbapaY Learn more about your ad choices. Visit podcastchoices.com/adchoices

    AI-pocalypse: predicting the threat from artificial intelligence

    Wiping out a tenth of the world? Possible. Wiping out all of humanity? Less likely, but not entirely impossible. We examine how two groups of experts have arrived at these worrying predictions about AI. Education is giving hope to inmates in a maximum security prison in New York (11:17). And, on Britain’s working men’s clubs which have nurtured rock bands for decades (18:00).


    For full access to print, digital and audio editions of The Economist, try a free 30-day digital subscription by going to www.economist.com/intelligenceoffer




    Hosted on Acast. See acast.com/privacy for more information.


    #133 - ChatGPT multi-document chat, CoreWeave raises $2.3B, AudioCraft, ToolLLM, Autonomous Warfare

    Our 133rd episode with a summary and discussion of last week's big AI news!

    Apologies for pod being a bit late this week!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai

    Timestamps + links: