
    technological advancement

    Explore "technological advancement" with insightful episodes like "How venture capital built Silicon Valley", "Xi Jinping Opens a New Chapter for China", "Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power" and "#83 – Nick Bostrom: Simulation and Superintelligence" from podcasts like "The Indicator from Planet Money", "The Daily", "The Ezra Klein Show" and "Lex Fridman Podcast", and more!

    Episodes (4)

    How venture capital built Silicon Valley
    In 1957, a group of scientists fed up with their boss set the modern venture capital model in motion. Today, the story of the unconventional investment idea behind Silicon Valley startup culture and so much of the technology we use today.


    Xi Jinping Opens a New Chapter for China

    Four years ago, Xi Jinping set himself up to become China’s leader indefinitely.

    At last week’s Communist Party congress in Beijing, he stepped into that role, making a clean sweep of the country’s other top leaders and placing even greater focus on national security.

    Guest: Chris Buckley, chief China correspondent for The New York Times.

    For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday. 

    Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power

    “The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel,” writes Sam Altman in his essay “Moore’s Law for Everything.” “This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.”

    Altman is the C.E.O. of OpenAI, one of the biggest, most important players in the artificial intelligence space. His argument is this: Since the 1970s, computers have gotten exponentially better even as they’ve gotten cheaper, a phenomenon known as Moore’s Law. Altman believes that A.I. could get us closer to Moore’s Law for everything: it could make everything better even as it makes everything cheaper. Housing, health care, education, you name it.
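    To make the exponential claim concrete, here is a minimal sketch of the arithmetic behind Moore’s Law, assuming the classic two-year doubling period; the starting figure and the function name are illustrative choices, not numbers from the episode.

    # Moore's Law modeled as exponential growth: capability roughly doubles
    # (equivalently, cost per unit of capability halves) every fixed period.
    # Illustrative figures only; not data cited in the episode.
    def project(start: float, years: float, doubling_period: float = 2.0) -> float:
        """Project a quantity forward, doubling every `doubling_period` years."""
        return start * 2 ** (years / doubling_period)

    # From the ~2,300 transistors of Intel's 4004 (1971), fifty years of
    # two-year doublings yields roughly 77 billion transistors, the right
    # order of magnitude for a high-end chip of the early 2020s.
    print(f"{project(2300, 50):,.0f}")  # 77,175,193,600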

    But what struck me about his essay is that last clause: “if we as a society manage it responsibly.” Because, as Altman also admits, if he is right then A.I. will generate phenomenal wealth largely by destroying countless jobs — that’s a big part of how everything gets cheaper — and shifting huge amounts of wealth from labor to capital. And whether that world becomes a post-scarcity utopia or a feudal dystopia hinges on how wealth, power and dignity are then distributed — it hinges, in other words, on politics.

    This is a conversation, then, about the political economy of the next technological age. Some of it is speculative, of course, but some of it isn’t. That shift of power and wealth is already underway. Altman is proposing an answer: a move toward taxing land and wealth, and distributing it to all. We talk about that idea, but also the political economy behind it: Are the people gaining all this power and wealth really going to offer themselves up for more taxation? Or will they fight it tooth and nail?

    We also discuss who is funding the A.I. revolution, the business models these systems will use (and the dangers of those business models), how A.I. would change the geopolitical balance of power, whether we should allow trillionaires, why the political debate over A.I. is stuck, why a pro-technology progressivism would also need to be committed to a radical politics of equality, what global governance of A.I. could look like, whether I’m just “energy flowing through a neural network,” and much more.

    Mentioned: 

    “Moore’s Law for Everything” by Sam Altman

    Recommendations: 

    “Crystal Nights” by Greg Egan

    “The Last Question” by Isaac Asimov

    “The Gentle Seduction” by Marc Stiegler

    “Meditations on Moloch” by Scott Alexander 

    If you enjoyed this episode, check out our previous conversation “Is A.I. the Problem? Or Are We?”


    You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein.

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    “The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.

    #83 – Nick Bostrom: Simulation and Superintelligence
    Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

    Support this podcast by signing up with these sponsors:
    - Cash App - use code "LexPodcast" and download:
    - Cash App (App Store): https://apple.co/2sPrUHe
    - Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    Nick's website: https://nickbostrom.com/
    Future of Humanity Institute:
    - https://twitter.com/fhioxford
    - https://www.fhi.ox.ac.uk/
    Books:
    - Superintelligence: https://amzn.to/2JckX83
    Wikipedia:
    - https://en.wikipedia.org/wiki/Simulation_hypothesis
    - https://en.wikipedia.org/wiki/Principle_of_indifference
    - https://en.wikipedia.org/wiki/Doomsday_argument
    - https://en.wikipedia.org/wiki/Global_catastrophic_risk

    This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 - Introduction
    02:48 - Simulation hypothesis and simulation argument
    12:17 - Technologically mature civilizations
    15:30 - Case 1: if something kills all possible civilizations
    19:08 - Case 2: if we lose interest in creating simulations
    22:03 - Consciousness
    26:27 - Immersive worlds
    28:50 - Experience machine
    41:10 - Intelligence and consciousness
    48:58 - Weighing probabilities of the simulation argument
    1:01:43 - Elaborating on Joe Rogan conversation
    1:05:53 - Doomsday argument and anthropic reasoning
    1:23:02 - Elon Musk
    1:25:26 - What's outside the simulation?
    1:29:52 - Superintelligence
    1:47:27 - AGI utopia
    1:52:41 - Meaning of life
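    As a companion to the simulation-argument segments in the outline above (particularly Cases 1 and 2), here is a minimal sketch of the core fraction from Bostrom’s 2003 simulation-argument paper; the function and argument names are illustrative labels, not notation from the episode.

    # Bostrom's simulation argument reduces to one fraction: the share of
    # observers with human-type experiences who live inside simulations.
    def simulated_fraction(f_posthuman: float, f_interested: float, avg_sims: float) -> float:
        """Fraction of human-type observers that are simulated.

        f_posthuman:  share of civilizations reaching a "posthuman" stage
                      (Case 1 is the claim that this is ~0)
        f_interested: share of posthuman civilizations that run ancestor
                      simulations (Case 2 is the claim that this is ~0)
        avg_sims:     average number of ancestor simulations each one runs
        """
        x = f_posthuman * f_interested * avg_sims
        return x / (x + 1)

    # If both cases fail even slightly -- 1% of civilizations mature, 1% of
    # those simulate, a million runs each -- then ~99% of observers are
    # simulated, which is the trilemma's third horn:
    print(simulated_fraction(0.01, 0.01, 1_000_000))  # 0.9900990...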