Detecting deepfakes through heartbeats and addressing ethical concerns in AI: Researchers discovered a method to detect deepfakes with high accuracy using heartbeats, while Google plans to offer AI ethics services to help companies navigate ethical issues. Ethical considerations and potential harms are crucial as AI technology advances.
While AI technology continues to advance, it's important to address the ethical implications and potential risks. Last week, researchers found a way to detect deepfakes with 97% accuracy by analyzing heartbeats via photoplethysmography (PPG) signals extracted from a person's face. However, language models like GPT-3 can mimic human language but lack understanding, raising concerns about trustworthiness. Google plans to launch new AI ethics services to help companies navigate ethical issues, learning from its past experiences with ethical dilemmas. Meanwhile, TikTok's sale attempt may face complications due to new restrictions on AI technology exports from China. Overall, it's crucial to prioritize ethical considerations and mitigate potential harms as AI technology continues to evolve.
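As a loose, self-contained illustration of the underlying idea (not the researchers' actual pipeline), a roughly periodic pulse signal recovered from face video can be turned into a heart-rate estimate from the spacing of its oscillations. The 30 fps sampling rate and the synthetic sine-wave "pulse" below are assumptions for the sketch:

```python
import math

def estimate_bpm(signal, fps):
    """Estimate beats per minute of a roughly periodic signal from the
    average spacing between successive rising mean-crossings."""
    mean = sum(signal) / len(signal)
    # indices where the signal crosses its mean going upward
    crossings = [i for i, (a, b) in enumerate(zip(signal, signal[1:]))
                 if a < mean <= b]
    if len(crossings) < 2:
        raise ValueError("signal too short to estimate a rate")
    # average samples per beat, converted to beats per minute
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return 60 * fps / period

# Synthetic 1.2 Hz "pulse" sampled at 30 fps for 10 seconds (72 bpm).
fps, freq = 30, 1.2
sig = [math.sin(2 * math.pi * freq * (t + 0.5) / fps) for t in range(fps * 10)]
print(round(estimate_bpm(sig, fps)))  # → 72
```

Real PPG signals from video are far noisier than this clean sine wave, which is why the research pairs the biological-signal extraction with learned models rather than simple crossing counts.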
US-China tech dispute and new rules limiting tech exports: Scientists discovered a new way to detect deep fake videos using heartbeat detection, offering a promising solution in the ongoing cat-and-mouse game between creators and detectors.
The US-China tech dispute has escalated with new rules limiting the export of certain technologies, including those used by TikTok, which could lead to increased pressure on both sides. Meanwhile, in the world of AI research, scientists have found a new way to detect deep fake videos using heartbeat detection. This innovative approach, which involves interpreting residuals of biological signals, shows that sometimes, using prior knowledge and intuition can lead to effective neural net techniques. The cat-and-mouse game between deep fake creators and detectors continues, but this research offers a promising solution. It's a reminder that the field of AI is constantly evolving, and staying informed about the latest developments is crucial. Stay tuned for more in-depth discussions on these topics and others.
Advancements in deepfake detection and identifying source models: Researchers are making progress in detecting deepfakes and identifying the models responsible. Knowledge graphs, which extract and organize internet facts, are gaining importance in AI for enhancing reliability and consistency.
Researchers have made strides in not only detecting deepfakes but also identifying the source models responsible for generating them. This is significant as it allows for more accurate identification and mitigation of deepfakes. Additionally, the use of knowledge graphs, which involve extracting and organizing facts from the internet, is becoming increasingly important in the field of AI. Companies like Diffbot are building AI systems that continuously crawl the web and create knowledge graphs, providing true and useful information for various purposes. Google is also investing in this technology, aiming to provide knowledge graphs for every query. While knowledge graphs may not be a new concept, they are gaining renewed attention due to their potential to enhance the reliability and consistency of AI systems. Google is even offering to help other companies navigate the ethical considerations of AI through a new service. These developments highlight the ongoing advancements and evolving priorities in the field of AI.
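As a rough sketch of the data structure involved (the entities and facts below are illustrative, not taken from Diffbot's or Google's actual graphs), a knowledge graph can be modeled as subject-predicate-object triples with lookup by subject:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store: facts are (subject, predicate, object)."""

    def __init__(self):
        # subject -> list of (predicate, object) pairs
        self._by_subject = defaultdict(list)

    def add(self, subject, predicate, obj):
        self._by_subject[subject].append((predicate, obj))

    def facts_about(self, subject):
        """Return every (predicate, object) pair recorded for a subject."""
        return list(self._by_subject[subject])

# Illustrative facts, as if extracted from crawled pages.
kg = KnowledgeGraph()
kg.add("GPT-3", "created_by", "OpenAI")
kg.add("GPT-3", "type", "language model")
print(kg.facts_about("GPT-3"))
```

Production systems layer entity resolution, provenance, and confidence scores onto each edge; the explicit triple structure is what lets a fact be checked for consistency across sources, which is the reliability benefit discussed above.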
Google's Ethics as a Service and IBM's AI-driven Chemical Synthesis: Google enters the market with Ethics as a Service, aiming to help organizations navigate ethical considerations in AI implementation. IBM demonstrates a complex system for chemical and drug synthesis, combining robotics, AI, and cloud computing to supercharge the process.
Google is entering the market with an Ethics as a Service (EaaS) offering, aiming to help organizations navigate ethical considerations in AI implementation. This move comes after Google itself faced controversies and changed products over AI ethics issues. While the specifics of the service are vague, it may include classes and consulting. The idea of outsourcing ethics raises concerns, but Google's expertise in AI ethics could be valuable to smaller organizations and governments, and the service could be particularly useful for startups lacking the resources to consider ethical implications thoroughly. The article does not mention whether the service will be paid or free. Meanwhile, in a separate development, IBM demonstrated a complex system for chemical and drug synthesis, combining robotics, AI, and cloud computing. This system is expected to supercharge the process, reducing time and costs significantly, and is a significant step toward industrial-scale, automated chemical and drug synthesis. Both developments represent significant strides in their respective fields, with Google focusing on ethical considerations and IBM on efficiency and automation in chemical and drug synthesis. The intersection of technology and ethics is becoming increasingly important as AI becomes more prevalent across industries.
IBM's AI and robotics in drug discovery: IBM's AI and robotics integration in drug discovery aims to cut discovery and verification time in half, potentially revolutionizing the field during the pandemic.
IBM is utilizing AI and robotics to revolutionize the field of drug discovery and chemical research. The AI component is used to predict chemical reactions and suggest potential experiments, while the robotics component automates the setup and running of these experiments. This integration aims to reduce typical drug discovery and verification time by half. IBM's efforts in this area, specifically with their RoboRXN system, demonstrate how robotics can accelerate science and pave the way for future advancements. Despite IBM's past struggles with commercialized AI services, the potential impact of this technology in the field of drug discovery is significant, especially during the ongoing COVID-19 pandemic. Other companies like Daphne Koller's startup are also exploring similar AI-driven drug discovery approaches. Overall, the combination of AI and robotics in this context represents a more embodied AI, going beyond typical software applications, and is a symbol of what may come in various sectors in the next few decades.
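The predict-then-run loop described above can be sketched in miniature. Both functions below are hypothetical stand-ins (a toy hill-climbing "model" and a fake lab whose yield peaks at a reagent ratio of 0.8), not IBM's RoboRXN interfaces:

```python
# Hypothetical closed-loop sketch: an AI suggests the next experiment,
# a robotic lab runs it, and the result feeds back into the next suggestion.
# None of these functions correspond to IBM's actual RoboRXN system.

def suggest_experiment(history):
    """Stand-in for the AI model: propose the next reagent ratio to try,
    here by naive hill-climbing on the best result so far."""
    if not history:
        return 0.5
    best_ratio, _ = max(history, key=lambda h: h[1])
    return min(1.0, best_ratio + 0.1)

def run_experiment(ratio):
    """Stand-in for the robotic lab: simulated yield peaks at ratio 0.8."""
    return 1.0 - abs(ratio - 0.8)

history = []
for _ in range(5):
    ratio = suggest_experiment(history)
    history.append((ratio, run_experiment(ratio)))

best = max(history, key=lambda h: h[1])
print(round(best[0], 3), round(best[1], 3))  # → 0.8 1.0
```

The time savings come from closing this loop without a human in the middle: each iteration that would take a chemist days of bench work runs unattended.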
Experts Discuss the Role of AI in Crisis Management: AI and automation are vital tools in managing crises, but we must be cautious of unintended consequences and ensure responsible development and implementation.
The experts discussed on this week's episode of Skynet Today's Let's Talk AI Podcast agree that AI and automation have become essential tools in managing and responding to crises, including pandemics. However, they also warn that we must be cautious and prepared for potential unintended consequences, such as job displacement and ethical dilemmas. They emphasized the importance of responsible AI development and implementation, as well as the need for ongoing education and collaboration between various stakeholders. In the end, they concluded that while AI can help us navigate current challenges, it's crucial to approach it with a long-term perspective and a commitment to ethical and sustainable use. To stay informed about the latest developments in AI and related topics, be sure to check out the articles we discussed on today's episode and subscribe to our weekly newsletter at skynettoday.com. Don't forget to subscribe to our podcast and leave us a review if you enjoyed the show. Tune in next week for more insights and discussions on AI and its impact on our world.
Heartbeat DeepFake Detection, Robot Drug Tests, Ethics as a Service
Recent Episodes from Last Week in AI
#182 - Alexa 2.0, MiniMax, Sutskever raises $1B, SB 1047 approved
Our 182nd episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov and Jeremie Harris.
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Sponsors:
- Agent.ai is the global marketplace and network for AI builders and fans. Hire AI agents to run routine tasks, discover new insights, and drive better results. Don't just keep up with the competition—outsmart them. And leave the boring stuff to the robots 🤖
- Pioneers of AI is your trusted guide to this emerging technology. Host Rana el Kaliouby (RAH-nuh el Kahl-yoo-bee) is an AI scientist, entrepreneur, author and investor exploring all the opportunities and questions AI brings into our lives. Listen to Pioneers of AI, with new episodes every Wednesday, wherever you tune in.
In this episode:
- OpenAI's move into hardware production and Amazon's strategic acquisition in AI robotics.
- Advances in training language models with long-context capabilities and California's pending AI regulation bill.
- Strategies for safeguarding open weight LLMs against adversarial attacks and China's rise in chip manufacturing.
- Sam Altman's infrastructure investment plan and debates on AI-generated art by Ted Chiang.
Timestamps + Links:
- (00:00:00) Intro / Banter
- (00:05:15) Response to listener comments / corrections
- Tools & Apps
- Applications & Business
- (00:14:56) Ilya Sutskever’s startup, Safe Superintelligence, raises $1B
- (00:22:20) TSMC’s A16 Process Creates a Buzz Before Mass Production, as OpenAI Reportedly Secures Capacity
- (00:29:13) Amazon hires the founders of AI robotics startup Covariant
- (00:33:33) OpenAI weighs changes to corporate structure amid latest funding talks
- (00:37:43) Chinese GPU-maker XCT, once valued at $2.1B, is on the verge of collapse — shareholders now suing founder
- (00:40:34) TSMC aims to ready next-gen silicon photonics for AI in 5 years
- Projects & Open Source
- Research & Advancements
- (00:48:32) Fire-Flyer AI-HPC: A Cost-Effective Software-Hardware Co-Design for Deep Learning
- (00:55:52) 100M Token Context Windows
- (01:03:50) Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling
- (01:06:16) AnyGraph : An Effective and Efficient Graph Foundation Model Designed to Address the Multifaceted Challenges of Structure and Feature Heterogeneity Across Diverse Graph Datasets
- Policy & Safety
- (01:08:16) California Legislature Approves Bill Proposing Sweeping A.I. Restrictions
- (01:11:14) Tamper-Resistant Safeguards for Open-Weight LLMs
- (01:17:12) China's chip capabilities just 3 years behind TSMC, teardown shows
- (01:20:50) China Threatens to Cut Off ASML Over New US Chip Curbs
- (01:23:22) Altman Infrastructure Plan Aims to Spend Tens of Billions in US
- Synthetic Media & Art
- (01:34:28) Outro
#181 - Google Chatbots, Cerebras vs Nvidia, AI Doom, ElevenLabs Controversy
Our 181st episode with a summary and discussion of last week's big AI news!
With hosts Andrey Kurenkov and Jeremie Harris
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
In this episode:
- Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.
- Microsoft's Inflection usage caps for Pi, and new AI inference services by Cerebras Systems competing with Nvidia.
- Biases in AI, prompt leak attacks, transparency in models, and distributed training optimizations, including the DisTrO optimizer.
- AI regulation discussions including California's SB1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.
Timestamps + Links:
- (00:00:00) Intro / Banter
- (00:03:08) Response to listener comments / corrections
- Tools & Apps
- (00:09:19) Google’s custom AI chatbots have arrived
- (00:12:52) Google releases three new experimental AI models
- (00:17:14) Google Gemini will let you create AI-generated people again
- (00:22:32) Five months after Microsoft hired its founders, Inflection adds usage caps to Pi
- (00:26:42) Plaud takes a crack at a simpler AI pin
- Applications & Business
- (00:30:31) Cerebras Systems throws down gauntlet to Nvidia with launch of ‘world’s fastest’ AI inference service
- (00:41:06) Nvidia announces $50 billion stock buyback
- (00:46:24) OpenAI in talks to raise funding that would value it at more than $100 billion
- (00:50:44) OpenAI Aims to Release New AI Model, ‘Strawberry,’ in Fall
- (00:52:53) 3 Co-Founders Leave French AI Startup H Amid ‘Operational Differences’
- (00:57:29) Samsung to Adopt High-NA Lithography Alongside Intel, Ahead of TSMC
- (01:02:11) Unitree's $16,000 G1 could become the first mainstream humanoid robot
- Projects & Open Source
- Research & Advancements
- Policy & Safety
- (01:47:12) U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI
- (01:50:46) China’s Views on AI Safety Are Changing—Quickly
- (01:56:27) Poll: 7 in 10 Californians Support SB1047, Will Blame Governor Newsom for AI-Enabled Catastrophe if He Vetoes
- (02:01:31) Elon Musk voices support for California bill requiring safety tests on AI models
- (02:03:55) Chinese Engineers Reportedly Accessing NVIDIA’s High-End AI Chips Through Decentralized “GPU Rental Services”
- (02:08:25) U.S. gov't tightens China restrictions on supercomputer component sales
- Synthetic Media & Art
- (02:14:06) Outro
#180 - Ideogram v2, Imagen 3, AI in 2030, Agent Q, SB 1047
Our 180th episode with a summary and discussion of last week's big AI news!
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Episode Highlights:
- Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.
- Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results.
- Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.
- Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.
Timestamps + Links:
- (00:00:00) Intro / Banter
- (00:01:08) Response to Listener Comments / Corrections
- Tools & Apps
- (00:03:58) Ideogram AI expands its features with v2 model and color palette options
- (00:07:48) Google Releases Powerful AI Image Generator You Can Use for Free
- (00:11:41) Perplexity adds Flux.1 model for Pro users alongside Playground v3 update
- (00:13:58) Luma drops Dream Machine 1.5 — here’s what’s new
- (00:17:49) Runway’s Gen-3 Alpha Turbo is here and can make AI videos faster than you can type
- (00:20:21) Perplexity’s latest update improves code interpreter, charts included
- Applications & Business
- (00:24:14) AMD buying server maker ZT Systems for $4.9 billion as chipmakers strengthen AI capabilities
- (00:28:55) Ars Technica content is now available in OpenAI services
- (00:34:08) Anysphere, a GitHub Copilot rival, has raised $60M Series A at $400M valuation from a16z, Thrive, sources say
- (00:38:32) Stability AI appoints new Chief Technology Officer
- (00:41:45) Cruise’s robotaxis are coming to the Uber app in 2025
- Projects & Open Source
- (00:44:16) AI21 Introduces the Jamba Model Family: The most powerful and efficient long-context models for the enterprise
- (00:53:47) Microsoft reveals Phi-3.5 — this new small AI model outperforms Gemini and GPT-4o
- (00:57:33) Nvidia’s Llama-3.1-Minitron 4B is a small language model that punches above its weight
- (01:00:58) Open source Dracarys models ignite generative AI fired coding
- Research & Advancements
- Policy & Safety
- (01:38:20) California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic
- (01:48:14) Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online
- (01:52:44) Showing SAE Latents Are Not Atomic Using Meta-SAEs
- Synthetic Media & Art
- (02:01:43) Outro
#179 - Grok 2, Gemini Live, Flux, FalconMamba, AI Scientist
Our 179th episode with a summary and discussion of last week's big AI news!
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Episode Highlights:
- Grok 2's beta release features new image generation using Black Forest Labs' tech.
- Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.
- Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.
- Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.
Timestamps + Links:
- (00:00:00) Intro / Banter
- (00:02:15) Response to listener comments / corrections
- Tools & Apps
- (00:04:24) Grok-2 is out in beta, now with added AI image generation
- (00:11:28) OpenAI reveals an updated GPT-4o model - but can't quite explain how it's better
- (00:13:48) Google Gemini’s voice chat mode is here
- (00:16:18) Google’s Pixel Buds Pro 2 bring Gemini to your ears
- (00:19:55) Google’s AI-generated search summaries change how they show their sources
- (00:23:13) Prompt Caching is Now Available on the Anthropic API for Specific Claude Models
- Applications & Business
- (00:26:56) Meet Black Forest Labs, the startup powering Elon Musk’s unhinged AI image generator
- (00:26:56) Huawei readies new AI chip to challenge Nvidia in China, WSJ reports
- (00:37:53) ASML and Imec Announce High-NA Lithography Breakthrough
- (00:43:07) Chinese startup WeRide gets nod to test robotaxis with passengers in California
- (00:45:49) Perplexity’s popularity surges as AI search start-up takes on Google
- (00:51:55) Lisa Su formally welcomes Silo AI team to AMD after completing $665 million acquisition
- Projects & Open Source
- (00:54:31) FalconMamba 7B Released: The World’s First Attention-Free AI Model with 5500GT Training Data and 7 Billion Parameters
- (00:59:25) OpenAI has introduced SWE-bench Verified to evaluate AI performance
- (01:04:21) Nous Research presents Hermes 3
- (01:11:07) New supercomputing network could lead to AGI, scientists hope, with 1st node coming online within weeks
- Research & Advancements
- (01:14:40) The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
- (01:30:24) Imagen 3
- (01:32:48) The Data Addition Dilemma
- (01:37:35) LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
- Policy & Safety
- Synthetic Media & Art
- (01:56:21) AI Song Outro
#178 - More Not-Acquihires, More OpenAI drama, More LLM Scaling Talk
Our 178th episode with a summary and discussion of last week's big AI news!
NOTE: this is a re-upload with fixed audio, my bad on the last one! - Andrey
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
In this episode:
- Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.
- OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.
- Rapid advancements in humanoid robotics exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.
- Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.
Timestamps + Links:
- (00:00:00) Intro / Banter
- (00:03:14) Response to listener comments / corrections
- Applications & Business
- (00:06:56) Google’s hiring of Character.AI’s founders is the latest sign that part of the AI startup world is starting to implode
- (00:15:12) Investors in Adept AI will be paid back after Amazon hires startup’s top talent
- (00:22:36) AI chip start-up Groq’s value rises to $2.8bn as it takes on Nvidia
- (00:29:22) OpenAI co-founder Schulman leaves for Anthropic, Brockman takes extended leave
- (00:36:18) Elon Musk files new lawsuit against OpenAI and Sam Altman
- (00:41:40) Figure’s new humanoid robot leverages OpenAI for natural speech conversations
- (00:47:01) ASML, Tokyo Electron dodge new US chip export rules, for now
- (00:53:10) OpenAI reportedly leads $60M round for webcam startup Opal
- Tools & Apps
- Research & Advancements
- (01:06:35) Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters
- (01:16:27) Achieving Human Level Competitive Robot Table Tennis
- (01:20:19) Self-Compressing Neural Networks
- (01:28:30) Let Me Speak Freely? A Study on the Impact of Format Restrictions on Performance of Large Language Models
- (01:32:43) Berkeley Humanoid: A Research Platform for Learning-based Control
- Policy & Safety
- (01:33:35) METR announces results of study on comparative capabilities of humans and agents
- (01:39:35) ‘The Godmother of AI’ says California’s well-intended AI bill will harm the U.S. ecosystem
- (01:49:13) Google Monopolized Search Through Illegal Deals, Judge Rules
- (01:54:56) Amazon faces UK merger probe over $4B Anthropic AI investment
- (01:55:44) GPT-4o System Card
- (02:03:09) Outro
#177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2
Our 177th episode with a summary and discussion of last week's big AI news!
NOTE: apologies for this episode again coming out about a week late, next one will be coming out soon...
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast
If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.
In this episode, hosts Andrey Kurenkov and Jeremie Harris dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless cars rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.AI CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
- (00:00:00) AI Song / Intro Banter
- (00:05:32) Response to listener comments / corrections
- Tools & Apps
- (00:10:16) Apple Intelligence to Miss Initial Launch of Upcoming iOS 18 Overhaul
- (00:16:35) Instagram starts letting people create AI versions of themselves
- Lightning round
- Applications & Business
- (00:31:44) Character.AI CEO Noam Shazeer returns to Google
- (00:39:41) Perplexity is cutting checks to publishers following plagiarism accusations
- Lightning round
- Projects & Open Source
- Research & Advancements
- Policy & Safety
- (01:50:10) AI Outro
#176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2
Our 176th episode with a summary and discussion of last week's big AI news!
NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
- (00:00:00) Intro Song
- (00:00:34) Intro Banter
- Tools & Apps
- (00:03:39) OpenAI announces SearchGPT, its AI-powered search engine
- (00:08:03) Google gives free Gemini users access to its faster, lighter 1.5 Flash AI model
- (00:09:10) X launches underwhelming Grok-powered ‘More About This Account’ feature
- (00:11:36) Kuaishou Launches Full Beta Testing for 'Kling AI' to Global Users, Elevates Model Capabilities
- (00:13:39) Adobe rolls out more generative AI features to Illustrator and Photoshop
- (00:14:25) Meta AI gets new ‘Imagine me’ selfie feature
- Projects & Open Source
- (00:15:19) Meta releases open-source AI model it says rivals OpenAI, Google tech
- (00:28:23) Mistral AI Unveils Mistral Large 2, Beats Llama 3.1 on Code and Math
- (00:34:00) Groq’s open-source Llama AI model tops leaderboard, outperforming GPT-4o and Claude in function calling
- (00:36:35) Apple shows off open AI prowess: new models outperform Mistral and Hugging Face offerings
- Applications & Business
- (00:40:25) Elon Musk wants Tesla to invest $5 billion into his newest startup, xAI — if shareholders approve
- (00:43:01) Nvidia said to be prepping Blackwell GPUs for Chinese market
- (00:46:28) Toronto AI company Cohere to indemnify customers who are sued for any copyright violations
- (00:49:09) AI startup Cohere raises US$500-million, valuing company at US$5.5-billion
- Research & Advancements
- Policy & Safety
- (01:02:56) Improving Model Safety Behavior with Rule-Based Rewards
- (01:06:39) Senators demand OpenAI detail efforts to make its AI safe
- (01:10:59) OpenAI reassigns top AI safety executive Aleksandr Madry to role focused on AI reasoning
- (01:13:08) As new tech threatens jobs, Silicon Valley promotes no-strings cash aid
- (01:17:33) Democratic senators seek to reverse Supreme Court ruling that restricts federal agency power
- Synthetic Media & Art
- (01:23:03) Outro
- (01:23:58) AI Song
#175 - GPT-4o Mini, OpenAI's Strawberry, Mixture of A Million Experts
Our 175th episode with a summary and discussion of last week's big AI news!
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
In this episode of Last Week in AI, hosts Andrey Kurenkov and Jeremie Harris explore recent AI advancements including OpenAI's release of GPT-4o mini and Mistral's open-source models, covering their impacts on affordability and performance. They delve into enterprise tools for compliance, text-to-video models like Haiper 1.5, and YouTube Music enhancements. The conversation further addresses AI research topics such as the benefits of numerous small expert models, novel benchmarking techniques, and advanced AI reasoning. Policy issues including U.S. export controls on AI technology to China and internal controversies at OpenAI are also discussed, alongside Elon Musk's supercomputer ambitions and OpenAI's Prover-Verifier Games initiative.
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Timestamps + links:
- (00:00:00) AI Song Intro
- (00:00:40) Intro / Banter
- Tools & Apps
- (00:03:57) OpenAI unveils GPT-4o mini, a small AI model powering ChatGPT
- (00:11:38) Meet Haiper 1.5, the new AI video generation model challenging Sora, Runway
- (00:16:32) Anthropic releases Claude app for Android
- (00:18:59) Google Vids is available to test out Gemini AI-created video presentations
- (00:20:27) YouTube Music sound search rolling out, AI ‘conversational radio’ in testing
- Applications & Business
- (00:23:30) OpenAI working on new reasoning technology under code name ‘Strawberry’
- (00:30:45) Inside Elon Musk’s Mad Dash To Build A Giant xAI Supercomputer In Memphis
- (00:37:15) Apple, NVIDIA and Anthropic reportedly used YouTube transcripts without permission to train AI models
- (00:41:05) After Tesla and OpenAI, Andrej Karpathy’s startup aims to apply AI assistants to education
- (00:43:40) Menlo Ventures and Anthropic team up on a $100M AI fund
- Projects & Open Source
- (00:46:27) Mistral releases Codestral Mamba for faster, longer code generation
- (00:50:36) Mistral AI and NVIDIA Unveil Mistral NeMo 12B, a Cutting-Edge Enterprise AI Model
- (00:52:51) Hugging Face Releases SmolLM, a Series of Small Language Models, Beats Qwen2 and Phi 1.5
- (00:56:11) Stable Diffusion 3 License Revamped Amid Blowback, Promising Better Model
- Research & Advancements
- Policy & Safety
- (01:20:50) Prover-Verifier Games improve legibility of language model outputs
- (01:28:05) Trump allies draft AI order to launch ‘Manhattan Projects’ for defense
- (01:34:40) On scalable oversight with weak LLMs judging strong LLMs
- (01:36:24) Google, Microsoft offer Nvidia chips to Chinese companies, the Information reports
- (01:38:26) U.S. planning 'draconian' sanctions against China's semiconductor industry: Report
- (01:48:47) OpenAI illegally barred staff from airing safety risks, whistleblowers say
- (01:44:59) Outro + AI Song
#174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues
Our 174th episode with a summary and discussion of last week's big AI news!
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
In this episode of Last Week in AI, we delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure. Key topics include AMD's acquisition of Silo AI, Elon Musk's GPU cluster plans for XAI, unique AI model training methods, and the nuances of AI copying and memory constraints. We discuss developments in AI's visual perception, real-time knowledge updates, and the need for transparency and regulation in AI content labeling and licensing.
See full episode notes here.
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Timestamps + links:
- (00:00:00) Intro AI Song
- (00:00:41) Pre News Banter
- Tools & Apps
- (00:07:09) Odyssey Building 'Hollywood-Grade' AI Text-to-Video Model to Compete With Sora, Gen-3 Alpha
- (00:10:28) Anthropic’s Claude adds a prompt playground to quickly improve your AI apps
- (00:15:06) Figma pauses its new AI feature after Apple controversy
- (00:18:30) Quora’s Poe now lets users create and share web apps
- (00:20:54) Suno launches iPhone app — now you can make AI music on the go
- Applications & Business
- (00:21:42) Groq unveils lightning-fast LLM engine; developer base rockets past 280K in 4 months
- (00:27:03) Microsoft and Apple ditch OpenAI board seats amid regulatory scrutiny
- (00:29:39) OpenAI and Arianna Huffington are working together on an ‘AI health coach’
- (00:33:38) AI coding startup Magic seeks $1.5-billion valuation in new funding round, sources say
- (00:37:01) Sequoia and Andreessen Horowitz Clash Over AI Chip Supplies Amid Gen AI Boom
- (00:43:30) Elon Musk Reveals Plans To Make World’s “Most Powerful” 100,000 NVIDIA GPU AI Cluster
- (00:46:25) AMD plans to acquire Silo AI in $665 million deal
- (00:48:00) AI robotics startup raises US$300 million, including from Jeff Bezos
- (00:52:11) Intel begins groundwork on Magdeburg chip fab despite 13 remaining regulatory and environmental objections
- Research & Advancements
- (00:55:21) Learning to (Learn at Test Time): RNNs with Expressive Hidden States
- (01:03:12) Data curation via joint example selection further accelerates multimodal learning
- (01:09:11) CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation
- (01:13:25) Just read twice: closing the recall gap for recurrent language models
- (01:15:25) CodeUpdateArena: Benchmarking Knowledge Editing on API Updates
- (01:18:31) Composable Interventions for Language Models
- (01:24:09) Mind-reading AI recreates what you're looking at with amazing accuracy
- Policy & Safety
- (01:26:49) Covert Malicious Finetuning
- (01:31:23) OpenAI’s week of security issues
- (01:36:39) Here’s how OpenAI will determine how powerful its AI systems are
- (01:39:56) Me, Myself and AI: The Situational Awareness Dataset for LLMs
- (01:44:34) Exclusive: OpenAI partners with Los Alamos to study AI in the lab
- (01:47:36) Judge dismisses coders’ DMCA claims against Microsoft, OpenAI and GitHub
- (01:49:55) A former OpenAI safety employee said he quit because the company's leaders were 'building the Titanic' and wanted 'newer, shinier' things to sell
- Synthetic Media & Art
- (02:02:05) Outro + AI Song
#173 - Gemini Pro, Llama 400B, Gen-3 Alpha, Moshi, Supreme Court
Our 173rd episode with a summary and discussion of last week's big AI news!
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
See full episode notes here.
Check out our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen 3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.
Timestamps + links:
- (00:00:00) Intro / Banter
- Tools & Apps
- (00:03:24) Google opens up Gemini 1.5 Flash, Pro with 2M tokens to the public
- (00:08:47) Meta is about to launch its biggest Llama model yet — here’s why it’s a big deal
- (00:12:38) Runway’s Gen-3 Alpha AI video model now available – but there’s a catch
- (00:16:28) This is Google AI, and it's coming to the Pixel 9
- (00:17:30) AI Firm ElevenLabs Sets Audio Reader Pact With Judy Garland, James Dean, Burt Reynolds and Laurence Olivier Estates
- (00:20:06) Perplexity’s ‘Pro Search’ AI upgrade makes it better at math and research
- (00:23:12) Gemini’s data-analyzing abilities aren’t as good as Google claims
- Applications & Business
- (00:26:38) Quora’s Chatbot Platform Poe Allows Users to Download Paywalled Articles on Demand
- (00:32:04) Huawei and Wuhan Xinxin to develop high-bandwidth memory chips amid US restrictions
- (00:34:57) Alibaba’s large language model tops global ranking of AI developer platform Hugging Face
- (00:39:01) Here comes a Meta Ray-Bans challenger with ChatGPT-4o and a camera
- (00:43:35) Apple’s Phil Schiller is reportedly joining OpenAI’s board
- (00:47:26) AI Video Startup Runway Looking to Raise $450 Million
- Projects & Open Source
- (00:48:10) Kyutai Open Sources Moshi: A Real-Time Native Multimodal Foundation AI Model that can Listen and Speak
- (00:50:44) MMEvalPro: Calibrating Multimodal Benchmarks Towards Trustworthy and Efficient Evaluation
- (00:53:47) Anthropic Pushes for Third-Party AI Model Evaluations
- (00:57:29) Mozilla Llamafile, Builders Projects Shine at AI Engineers World's Fair
- Research & Advancements
- (00:59:26) Researchers upend AI status quo by eliminating matrix multiplication in LLMs
- (01:05:55) AI Agents That Matter
- (01:12:09) WARP: On the Benefits of Weight Averaged Rewarded Policies
- (01:17:20) Scaling Synthetic Data Creation with 1,000,000,000 Personas
- (01:24:16) Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization
- Policy & Safety
- (01:26:32) With Chevron’s demise, AI regulation seems dead in the water
- (01:33:40) Nvidia to make $12bn from AI chips in China this year despite US controls
- (01:37:52) Uncle Sam relies on manual processes to oversee restrictions on Huawei, other Chinese tech players
- (01:40:57) U.S. government addresses critical workforce shortages for the semiconductor industry with new program
- (01:42:42) Bridgewater starts $2 billion fund that uses machine learning for decision-making and will include models from OpenAI, Anthropic and Perplexity
- (01:47:57) Outro
Related Episodes
167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson
In this highly anticipated sequel to our 1st AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University.
Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.
------
✨ DEBRIEF | Unpacking the episode:
https://www.bankless.com/debrief-robin-hanson
------
✨ COLLECTIBLES | Collect this episode:
https://collectibles.bankless.com/mint
------
✨ NEW BANKLESS PRODUCT | Token Hub
https://bankless.cc/TokenHubRSS
------
In this episode, we explore:
- Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
- The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
- Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
- A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
- Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.
Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.
------
BANKLESS SPONSOR TOOLS:
⚖️ ARBITRUM | SCALING ETHEREUM
https://bankless.cc/Arbitrum
🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
https://bankless.cc/kraken
🦄UNISWAP | ON-CHAIN MARKETPLACE
https://bankless.cc/uniswap
👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
https://bankless.cc/phantom-waitlist
🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE
https://bankless.cc/MetaMask
------
Topics Covered
0:00 Intro
8:42 How Robin is Weird
10:00 Are We All Going to Die?
13:50 Eliezer’s Assumption
25:00 Intelligence, Humans, & Evolution
27:31 Eliezer Counter Point
32:00 Acceleration of Change
33:18 Comparing & Contrasting Eliezer’s Argument
35:45 A New Life Form
44:24 AI Improving Itself
47:04 Self Interested Acting Agent
49:56 Human Displacement?
55:56 Many AIs
1:00:18 Humans vs. Robots
1:04:14 Pause or Continue AI Innovation?
1:10:52 Quiet Civilization
1:14:28 Grabby Aliens
1:19:55 Are Humans Grabby?
1:27:29 Grabby Aliens Explained
1:36:16 Cancer
1:40:00 Robin’s Thoughts on Crypto
1:42:20 Closing & Disclaimers
------
Resources:
Robin Hanson
https://twitter.com/robinhanson
Eliezer Yudkowsky on Bankless
https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky
What is the AI FOOM debate?
https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate
Age of Em book - Robin Hanson
https://ageofem.com/
Grabby Aliens
https://grabbyaliens.com/
Kurzgesagt video
https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s
-----
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.
Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
https://www.bankless.com/disclosures
Ep. 3 - Artificial Intelligence: Opening Thoughts on the Most Important Trend of our Era
Artificial Intelligence has already changed the way we all live our lives. Recent technological advancements have accelerated the use of AI by ordinary people to answer fairly ordinary questions. It is becoming clear that AI will fundamentally change many aspects of our society and create huge opportunities and risks. In this episode, Brian J. Matos shares his preliminary thoughts on AI in the context of how it may impact global trends and geopolitical issues. He poses foundational questions about how we should think about the very essence of AI and offers his view on the most practical implications of living in an era of advanced machine thought processing. From medical testing to teaching to military applications and international diplomacy, AI will likely speed up discoveries while forcing us to quickly determine how its use is governed in the best interest of the global community.
Join the conversation and share your views on AI. E-mail: info@brianjmatos.com or find Brian on your favorite social media platform.
"Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2
Artificial Intelligence
BONUS Episode “Scary Smart” Artificial Intelligence with Mo Gawdat
You might have noticed over the last few episodes that I’ve been keen to discuss subjects slightly leftfield of nutrition and what I’ve traditionally talked about, but fascinating nonetheless. And I hope you as a listener, whose time and attention I value so greatly, will trust me as I take you on a bit of a ride. Because ultimately, I hope you agree that the topics I share are always very important.
Mo Gawdat, who you may remember from episode #91 Solving Happiness, is a person I cherish and with whom I had a very impactful conversation on a personal level. He is the former Chief Business Officer of Google [X], Google’s ‘moonshot factory’, author of the international bestselling book ‘Solve for Happy’ and founder of ‘One Billion Happy’. After a long career in tech, Mo made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world on “Slo Mo: A Podcast with Mo Gawdat”.
Mo is an exquisite writer and speaker with deep expertise in technology as well as a passionate appreciation for the importance of human connection and happiness. He possesses a set of overlapping skills and a breadth of knowledge in the fields of both human psychology and tech which is a rarity. His latest piece of work, a book called “Scary Smart”, is a timely prophecy and call to action that puts each of us at the center of designing the future of humanity. I know that sounds intense, right? But it’s very true.
During his time at Google [X], he worked on the world’s most futuristic technologies, including Artificial Intelligence. During the pod he recalls a story of when the penny dropped for him, just a few years ago, and felt compelled to leave his job. And now, having contributed to AI's development, he feels a sense of duty to inform the public on the implications of this controversial technology and how we navigate the scary and inevitable intrusion of AI as well as who really is in control. Us.
Today we discuss:
The pandemic of AI and why the handling of COVID is a lesson to learn from
The difference between collective intelligence, artificial intelligence, and superintelligence or artificial general intelligence
How machines started creating and coding other machines
The 3 inevitable outcomes, including the fact that AI is here and will outsmart us
Machines will become emotional, sentient beings with a superconsciousness
To understand this episode you have to submit yourself to accepting that what we are creating is essentially another lifeform. Albeit non-biological, it will have human-like attributes in the way it learns as well as a moral value system which could immeasurably improve the human race as we know it. But our destiny lies in how we treat and nurture it as our own. Literally like an infant, with (as strange as it is to say it) love, compassion, connection and respect.
Full show notes for this and all other episodes can be found on The Doctor's Kitchen.com website
Hosted on Acast. See acast.com/privacy for more information.