
    This Is How Algorithms Impact Every Aspect Of Our Lives, from News to Credit Scores to Stocks

    December 18, 2017

    Podcast Summary

    • The Value of Human Insights in a Data-Driven World: In a world dominated by algorithms and quantitative strategies, it's essential to remember the importance of human insights and empathy in navigating complex situations where algorithms may fall short.

      While algorithms and quantitative strategies dominate many industries, including finance, it's essential to remember their limitations and the importance of human insight. Hosts Tracy Alloway and Joe Weisenthal discuss the need to question the assumptions behind algorithms and their potential shortcomings. Algorithms are prevalent across sectors, from finance to content curation, and should not be viewed as infallible. Instead, we must recognize the value of human empathy and understanding, which can help us navigate complex situations where algorithms fall short. This perspective is crucial as we continue to rely on algorithms to shape our experiences and make decisions.

    • Algorithms and their unintended consequences: Algorithms, while beneficial, can create opaque processes and unintended consequences that affect our daily lives in non-financial contexts, raising concerns about bias and online bubbles. Regulatory bodies are starting to address these issues.

      Algorithms, while offering numerous benefits, can also create unintended consequences and opaque processes that shape society in significant ways. Frank Pasquale, a professor of law at the University of Maryland and author of "The Black Box Society," has explored these issues extensively, starting with his early work on search engines in the mid-2000s. He sees parallels between the use of algorithms in technology and in finance, and he's concerned about the creation of "stealth health profiles" based on our digital footprints. These profiles, compiled from data brokers and online companies like Facebook, Google, and Twitter, can affect our daily lives in non-trading financial contexts. As awareness grows about the origins and implications of the ads, bots, and other content we encounter online, regulatory bodies are starting to take notice and address these concerns. The debate around algorithms feels like déjà vu of the early days of search engines, and the potential for reinforcing biases and creating online bubbles is a cause for concern. Pasquale testified before the House Judiciary Committee in 2008 and, more recently, before another committee, indicating a shift in understanding and a growing recognition of the need for transparency and accountability in the use of algorithms.

    • History of data-driven algorithms raising concerns: Data-driven algorithms, while promising objective decision-making, raise concerns about accuracy, transparency, and potential manipulation. Learning from the history of credit scores, it's vital to ensure these models are transparent, accurate, and used ethically.

      The use of data-driven algorithms and scoring models, while intended to provide objective decision-making, can lead to concerns around accuracy, transparency, and potential for manipulation. The history of credit scores serves as a parallel, with their development in the 1950s and increasing demand in the 1960s and 70s due to concerns over discriminatory loan decisions. However, as these models became more complex and added more data, they also became more secretive, leading to issues with accountability and accuracy. The emergence of new data-driven algorithms in various domains, from marketing to employment, raises similar concerns. It's crucial to ensure that these models are transparent, accurate, and used ethically to avoid negative consequences for individuals.

    • AI systems can have inherent biases despite being presented as objective solutions: The data used to train AI systems can contain human biases, and algorithms can create unintended discriminatory side effects, requiring ongoing research and vigilance to ensure fairness.

      While the use of algorithms and AI in systems like credit scoring and law enforcement may seem like an objective solution to potential human biases, these systems are not immune to biases themselves. The data used to train these algorithms is still collected by humans and can contain inherent biases. Furthermore, the algorithms can create systemic effects that are difficult to anticipate. The controversy surrounding these systems is not new, as it echoes the concerns over credit scoring systems that emerged decades ago. While these systems can lead to increased credit availability and efficiency, it is crucial to address the potential biases and discriminatory side effects. As the author of "A Call For Judgment" argued, these systems can give a false sense of accuracy and lead to unintended consequences. The ongoing research in the fairness and machine learning community aims to strike a balance between maintaining efficiency and eliminating discriminatory side effects. It is essential to remain vigilant and continue the conversation around these issues to ensure that these systems are fair and unbiased.

    • Timeliness of payments is the primary driver of credit scores: Empirical research shows that timely payments have the biggest impact on credit scores, while transparency and access help ensure fairness. However, bespoke scores and targeted advertising built on extensive data raise concerns about privacy and potential misuse.

      While there are ongoing debates about manipulating credit scores through obscure data or signals, empirical research suggests that timeliness of payments is the primary driver of credit scores. Transparency and access to credit scores have been crucial in ensuring fairness and accountability. However, companies are now finding ways to offer bespoke scores that go beyond the core credit score, which raises concerns about the need for financial regulators and legislators to keep up with industry trends and prevent arbitrage. When it comes to targeted advertising, the focus is often on specific attributes or vulnerabilities rather than targeting individuals directly. The visibility of individual-level data is limited, but the data used for targeting can be quite extensive and detailed, allowing for precise targeting based on demographics, interests, and behaviors. The use of such data raises concerns about privacy and potential misuse, highlighting the need for clear regulations and transparency.

    • Growing concerns about individual privacy and the potential negative consequences of data analysis in marketing and finance: Advances in data analysis and algorithmic processes bring benefits, but they also raise concerns about privacy, fairness, and transparency, and can undermine market stability and frustrate traders.

      While marketing and finance have benefited from significant advances in data analysis and algorithmic processes at the aggregate level, there are growing concerns about individual privacy and the potential for re-identification of anonymized data. In finance, the rise of algorithmic trading has increased speed and efficiency, but it has also raised questions about fairness, transparency, and side effects on market stability. Despite some initial concerns about financial stability, the market seems to have adapted to these changes; still, the emphasis on speed may have unintended consequences and can leave traders confused and frustrated. In marketing, the ability to identify and target individuals raises privacy concerns, and advances in re-identification research continue to erode the protection that anonymization once offered. Overall, while these developments offer benefits, it's important to weigh the potential risks and negative consequences.

    • The Importance of Human Touch in Complex Professions: While algorithms can be useful in simpler roles, complex professions such as teaching, medicine, and personal training require a human touch and judgment. Algorithms in finance and risk management also have limitations and may not accurately predict future conditions when built on biased or discriminatory past data.

      While there are valid concerns about automation and algorithms replacing human jobs, particularly in simpler roles, there are also many complex professions where human touch and decision-making abilities are essential. These professions, such as teaching, medicine, and personal training, require a level of nuance and understanding that goes beyond mathematical equations. Additionally, algorithms in finance and risk management are primarily backward-looking, relying on historical data to predict the future. However, there is a risk that these predictions may not accurately reflect future conditions, especially if they are based on biased or discriminatory past data. Therefore, it is crucial to consider the limitations of algorithms and the importance of human judgment and decision-making in various fields. Ultimately, the future of work will likely involve a balance between automation and human expertise.

    • Addressing issues in media companies caused by automation and AI: Media companies like Facebook and YouTube face challenges with discrimination and exploitative content arising from automation and AI. Hiring more people to address these concerns and improve quality will cut into profit margins but create better companies. Human oversight and intervention remain crucial.

      While automation and AI have brought significant efficiency and profits to media companies like Facebook and YouTube, they have also led to concerning issues such as discrimination and exploitative content. These companies cannot be run solely by robots or algorithms, and it's essential to fundamentally retool how they operate. The discovery of these issues is similar to the realization of the costs of carbon emissions and the retooling that followed. Companies like YouTube are hiring more people to address these concerns and improve quality, which will cut into profit margins but make for better companies. This precedent could extend to other fields, including journalism. While automation can do a good job, it's not a replacement for human oversight and intervention. The conversation also touched on various news items, including concerns about racial discrimination, and highlighted the significant role algorithms play in many aspects of our lives.

    • The Significance of Algorithms and Data in Society: Despite their benefits, algorithms and data can also lead to inaccuracies and negative consequences. A system of redress and careful data management is necessary to ensure fairness and accuracy across domains.

      Algorithms and data play a significant role in many areas of society, including law enforcement and markets. However, these technologies can also exacerbate problems and produce inaccuracies. Accurate data has become crucial in today's competitive landscape, and as Frank Pasquale pointed out, there is a need for a system of redress to address the potential negative consequences of algorithms and data usage. Cleaning and managing data is a mammoth task that humans will likely need to tackle; the example of personalized TV recommendations on streaming services illustrates how prevalent these issues are. Addressing these challenges is essential to ensuring fairness and accuracy across domains.

    Recent Episodes from Odd Lots

    The Theory That Explains Why Everyone Went Crazy

    Does it feel to you like society has gone crazy? Well, you're not alone. There's a general view that all around the world, in the realms of politics, culture, business, and so forth, a lot of people are losing their minds. So if this is true, what's the reason for it? On this episode we speak with Dan Davies, the author of the new book The Unaccountability Machine: Why Big Systems Make Terrible Decisions - And How The World Lost Its Mind. Dan talks about the field of study known as cybernetics, and the inevitable outcomes of systems that grow more and more complex. This complexity, which describes many things in the modern world, leads to what Dan calls "accountability sinks": entities that basically exist just to be blamed for things that have gone wrong. Dan walks us through how these emerged in the modern world, where things are headed, and how the trend could theoretically be reversed.


    Odd Lots
    July 01, 2024

    Lots More With Neil Dutta on a Looming Fed Policy Error

    Neil Dutta, the top economist over at Renaissance Macro, has generally been sunny and optimistic about the economy over the last four years or so. But now he's warning of a possible mistake by the Federal Reserve. In his view, the central bank is waiting too long to get confirmation that inflation is coming back to target. Meanwhile, unemployment is starting to creep up in a meaningful way. As he sees it, if you're still worried about upside risk to inflation at this point, you need to have a theory about where that inflation is going to come from — and it's really hard to come up with an answer for that right now, given the general downward momentum in hiring and the overall economy. In this episode of Lots More, we catch up with Neil to talk about the risk that the Fed will blow the soft landing.


    Odd Lots
    June 28, 2024

    The American Entrepreneurs Who First Opened The Chinese Market

     From cars to toys to clothes, we're just used to seeing the label "Made In China" on all sorts of things. But how did China become a go-to destination for manufactured goods in the first place? Who actually recognized that there was a huge opportunity to tap the abundant, low-cost labor to sell goods to Western consumers? On this episode of the podcast we speak with Elizabeth Ingleson, a professor at the London School of Economics and the author of the book Made in China: When US-China Interests Converged to Transform Global Trade. Ingleson traces the roots of the US-China trade relationship to a handful of US entrepreneurs in the early 1970s who first went into the country and recognized its opportunity as an export powerhouse. We discuss who these individuals were, the obstacles they had to overcome, and how they reshaped the entire global economy.


    Odd Lots
    June 27, 2024

    Why Tom Lee Thinks We Could See S&P 15,000 by 2030

    The stock market has had a torrid run in 2024 despite the fact that interest rate cuts haven't materialized in the way people had expected at the start of the year. In fact, outside of a few blips here and there (like spring 2020), US stocks have been phenomenal performers for years. Tom Lee, the founder of Fundstrat and FS Insight has been bullish for a long time, having caught the correct side of this lengthy trend. On this episode, we speak to the former JPMorgan strategist about how he thinks about the market, what he sees happening right now in macro and demographic trends, and why he thinks it’s plausible that the market could roughly triple in the next six years.


    Odd Lots
    June 24, 2024

    CoreWeave's CSO on the Business of Building AI Datacenters

    Everyone knows that the AI boom is built upon the voracious consumption of chips (largely sold by Nvidia) and electricity. And while the legacy cloud operators, like Amazon or Microsoft, are in this space, the nature of the computing shift is opening up new space for new players in the market. One of the hottest companies is CoreWeave, a company backed in part by Nvidia, which has grown its datacenter business massively. So how does their business actually work? How do they get energy? Where do they locate operations? How are they financed? What's the difference between an AI cloud and a legacy cloud? On this episode, we speak with CoreWeave's Chief Strategy Officer Brian Venturo about what it takes to build out operations at this scale.


    Odd Lots
    June 21, 2024

    John Arnold on Why It's So Hard To Build Things in America

    Virtually everyone, across the ideological spectrum, has the view right now that it's too hard to build things (or get things done generally) in America. New infrastructure is thwarted by red tape and permitting. New housing is thwarted by NIMBYism. Even something that doesn't require much new construction -- like NYC's attempt to impose congestion pricing -- is difficult to get done after years and years of wrangling. What is the core problem? And what can be done to address it? On this episode, we speak with John Arnold, who started his career as an energy trader at Enron, before going on to found a highly successful energy hedge fund. Now in his role as the co-founder of Arnold Ventures, he works on policy solutions to address these key bottlenecks. We discuss how he goes about philanthropy to affect policy change, the problems he's identified, and what solutions could be put in place to improve domestic development.


    Odd Lots
    June 20, 2024

    Evolving Money: Money Without Borders (Sponsored Content)

    Throughout history, financial markets have struggled with the issue of borders. Borders create friction, add cost and cause headaches for anyone who wants to spend money across them. On top of that, various national currencies can be wildly unstable.

    Could a borderless, global currency ease friction and enhance financial inclusion and stability around the world? Cryptocurrencies offer an intriguing possible solution to money’s border problem. And a particular kind of cryptocurrency, called stablecoins, could become a powerful medium of exchange for international payments - and offer people around the world increased economic freedom.

    This episode is sponsored by Coinbase.


    Odd Lots
    June 18, 2024

    The Big Trade Underneath the Strangely Calm Surface of the S&P 500

    For much of this year, the S&P 500 has marched steadily higher while measures of stock market volatility, like the VIX, have stayed pretty low. But looking at the headline index only tells you part of the story. Beneath the surface of the S&P 500, individual stocks have been moving up and down a lot. And of course, traders have figured out a way to make money on the difference between the quiet overall index and all that volatility happening in individual stocks. This is the dispersion trade that's gotten quite a bit of attention in recent months. But figuring out exactly who's doing it and how pervasive it is isn't that easy. In this episode, we speak with Michael Purves, CEO and founder of Tallbacken Capital Advisors, and Josh Silva, managing partner and CIO at Passaic Partners, about this new volatility trade and what it means for the overall stock market.


    Odd Lots
    June 17, 2024

    What a 'Degen' Crypto Trader Really Does All Day

    A few lucky people have made generational wealth trading the ups and downs of the crypto market. And some finance professionals have shifted gears to focus primarily on the space. But what is it like to actually trade these coins day-to-day? How do people pick which ones to buy? How do they analyze the coins themselves? How do they get reliable information? And what is it like, emotionally, to trade such an infamously volatile asset? On this episode of the Odd Lots podcast, we speak with Julian Malinak. In his day job, Julian works in healthcare tech. But the rest of the time, he's looking on message boards for the next 100-bagger. At one point he had made enough to retire on. And then it all went poof. But he keeps grinding and trying to improve his craft. Julian — who we found on the Odd Lots Discord server — explains what he does all day, and how the market really works from a trading perspective. 


    Odd Lots
    June 14, 2024

    How Indonesia and China Cornered the Nickel Market

    There's been a huge change in the market for nickel, which goes into everything from electric vehicles to steel. Indonesia has grown to absolutely dominate production and now provides more than 55% of the world's supply. A lot of that is going to China, which has partnered with Indonesia to help grow its nickel industry at a phenomenal rate. Now, there are accusations that low-grade and low-priced Indonesian nickel is flooding the global market, to the detriment of other producers. Western miners like BHP and Anglo American have been shuttering their own nickel operations, and have written them down by billions of dollars in recent years. On this episode, we speak with Michael Widmer, head of metals research at Bank of America, about the sea change that's taken place in the world's nickel market and what it says about the green energy transition, as well as the scramble for other strategically important metals. We also talk about all those bullish calls on copper, and general volatility in the metals space.


    Odd Lots
    June 13, 2024

    Related Episodes

    Instagram and the Dangers of Non-Transparent AI: Mechanistic Interpretability

    In this episode, we explore the AI concept of mechanistic interpretability - understanding how and why an AI model makes certain decisions. Using Instagram's machine learning-based feed ranking algorithm as an example, we discuss the dangers of algorithms that operate as black boxes. When the mechanics behind AI systems are opaque, issues like bias can go undetected. Through explaining ideas like transparency in AI and analyzing a case study on potential racial bias, we underscore why interpretable AI matters for fairness and accountability. This podcast aims to make complex AI topics approachable, relating them to real-world impacts. Join us as we navigate the fascinating intersection of technology and ethics.

    Want more AI Infos for Beginners? 📧 Join our Newsletter!

    This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output.

    Music credit: "Modern Situations by Unicorn Heads"

    Top AI Trends for 2024 | The AI Moment, Episode 7

    On this episode of The AI Moment, we discuss the latest developments in enterprise AI and the top seven AI trends for 2024.

    After the year AI had in 2023, what could possibly be next? I'll give you a hint: AI won't slow down much in 2024. There are seven key trends I think will shape the adoption of AI in 2024. I walk through what those trends are and why they are so important.

    Bias in Twitter & Zoom, LAPD Facial Recognition, GPT-3 Exclusivity

    Our latest episode with a summary and discussion of last week's big AI news!

    This week: Twitter and Zoom's algorithmic bias issues; despite past denials, LAPD has used facial recognition software 30,000 times in the last decade, records show; we're not ready for AI, says the winner of a new $1m AI prize; how humane is the UK's plan to introduce robot companions in care homes?; and OpenAI is giving Microsoft exclusive access to its GPT-3 language model.

    0:00 - 0:40 Intro
    0:40 - 5:00 News Summary segment
    5:00 - News Discussion segment

    Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-fourth

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

    Holger Hoos, Responsible AI

     


     

    Holger Hoos is the Alexander von Humboldt Professor of AI at RWTH Aachen University (Germany), Professor of Machine Learning at Universiteit Leiden (the Netherlands), and Adjunct Professor of Computer Science at the University of British Columbia (Canada), where he also holds an appointment as Faculty Associate at the Peter Wall Institute for Advanced Studies. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Artificial Intelligence (EurAI). He is board chairman of the Confederation of Laboratories of Artificial Intelligence Research in Europe (CLAIRE), vice-president of EurAI, past president of the Canadian Association for Artificial Intelligence (CAIAC), member of the Advisory Board, and former Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR). Prof. Hoos leads the VISION coordination mandate for the four European networks of centers of excellence in AI established in 2020.

    Holger’s research focuses on Human-Centered AI, AI for Good, and AI for All. His goal is to improve the efficiency of AI methods by automatically increasing performance and reducing resource needs; and to broaden access to and use of cutting-edge AI methods. Overall, Holger and his group develop and study AI methods that augment rather than replace human intelligence, and that help human experts to overcome their biases and limitations. Known for his work on machine learning and optimization methods for the automated design of high-performance algorithms and on stochastic local search, Holger has developed – and vigorously pursues – the paradigm of programming by optimization (PbO). He is also one of the originators of automated machine learning (AutoML). Holger works at the boundaries between computer science and other disciplines. Much of his work is inspired by and has broad impact on real-world applications.

    In 2018, Holger co-founded CLAIRE, an initiative to advance European AI research and innovation. Incorporated as an international nonprofit in 2019, CLAIRE promotes excellence across all of AI, for all of Europe, with a human-centered focus. In 2021, CLAIRE received, jointly with the European Laboratory for Intelligent Systems (ELLIS), the €100 000 German AI Innovation Prize (Deutscher KI-Innovations-Preis) in recognition of outstanding contributions to development and research in AI. This prize is the largest of its kind in Europe.

    In November 2021, Holger was selected for the Alexander von Humboldt Professorship in AI, Germany's most highly endowed research award, which honors its recipients for their outstanding research record and aims to facilitate long-term and groundbreaking research contributions. Supported by this award and substantial additional resources made available by the university, in January 2022 he started building a new research group at Rhine-Westphalia Technical University of Aachen (RWTH Aachen, Germany) dedicated to advancing Human-Centered AI, AI for Good, and AI for All. He also serves on the board of directors of the RWTH AI Center.


    Ousted OpenAI board member on AI safety concerns

    Sam Altman returns and OpenAI board members are given the boot; US authorities foil a plot to kill Sikh separatist leader on US soil; plus, the UK’s Autumn Statement increases the tax burden.


    Mentioned in this podcast:

    US thwarted plot to kill Sikh separatist on American soil

    Hunt cuts national insurance but taxes head to postwar high

    OpenAI says Sam Altman to return as chief executive under new board 


    The FT News Briefing is produced by Persis Love, Josh Gabert-Doyon and Edwin Lane. Additional help by Peter Barber, Michael Lello, David da Silva and Gavin Kallmann. Our engineer is Monica Lopez. Manuela Saragosa is the FT’s executive producer. The FT’s global head of audio is Cheryl Brumley. The show’s theme song is by Metaphor Music. 


    Read a transcript of this episode on FT.com


