Podcast Summary
From manual to software-driven trading on Wall Street: Marty Chavez's career illustrates the transformative role of technology in financial services, from supercomputers to SecDB and now to AI.
The central theme of this conversation with Marty Chavez is the transformative role technology has played in the financial services industry, particularly in the evolution of Wall Street trading from a manual process to a software-driven business. Marty's career, spanning from his early days working on supercomputers simulating bomb explosions to his tenure at Goldman Sachs, where he helped develop the legendary SecDB system, showcases this trend. He continues to shape the industry as a partner and vice chairman at Sixth Street Partners, and his experience has led him to serve on the boards of esteemed organizations including the Broad Institute and Stanford Medicine. Throughout his journey, his father's advice that "computers are the future" has proven prophetic. Marty also discusses the potential impact of artificial intelligence on the industry, offering valuable insights for anyone interested in the future of financial services.
Steve Harrison's career-defining advice at Harvard: Combining computer science with biochemistry led to a custom major, digital twins in biology, and a career applying digital-twin technology to finance.
Steve Harrison's advice at Harvard in 1981, encouraging the speaker to combine computer science with biochemistry, set the stage for his entire career. It led him to construct a custom biochemistry major focused on simulation and digital twins of living systems, an experience that inspired him to build digital twins of other realities, including financial systems on Wall Street. The ability to experiment and ask questions in a digital-twin environment allowed for better prediction and decision-making. After graduation, his fascination with digital twins in biology led him to graduate work in healthcare and AI, and his mother's encouragement to earn a PhD kept him in academia for a time. Eventually he moved to Wall Street, where he continued to apply digital-twin technology, now to finance.
The Challenges of Early AI and Unexpected Career Turns: During the 'nuclear winter' of AI in the 1990s, perseverance and adaptability led a researcher to leave academia and find success in an unexpected role on Wall Street.
The field of artificial intelligence (AI) faced significant challenges in the early 1990s due to the computational limits of the era. The speaker, then working on an AI project at Stanford, confronted the problem of computing the joint probability distribution in medicine, which was intractable given the enormous number of disease categories and clinical findings. Despite his efforts, the computers were simply not fast enough to make significant progress. This period became known as a "nuclear winter" in AI. Feeling despondent and unsure of what to do next, the speaker received a letter from a headhunter at Goldman Sachs, leading him to leave academia for finance. There he found himself in an unexpected role as a commodity strategist on the oil trading desk, an odd fit for a gay Hispanic computer geek on Wall Street in 1994. The experience was a turning point in his career, ultimately leading to new opportunities and successes. The story illustrates the importance of perseverance and adaptability in the face of adversity, and the unexpected twists and turns that life can bring.
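To make the scale of that problem concrete, here is a toy calculation (an illustration of the combinatorics, not something from the podcast) showing how fast an explicit joint probability table grows:

```python
# Illustration: an explicit joint probability table over n binary clinical
# findings and m two-state disease categories has 2**m * 2**n entries.
# (Toy numbers for scale, not from the podcast.)

def joint_table_size(n_findings: int, n_diseases: int) -> int:
    """Entries in a fully explicit joint probability table."""
    return (2 ** n_diseases) * (2 ** n_findings)

print(joint_table_size(10, 5))     # 32,768 entries -- barely feasible
print(joint_table_size(50, 100))   # ~1.4e45 entries -- hopeless to store
```

This exponential blow-up is exactly why Bayesian networks, which factor the joint distribution into smaller conditional pieces, were the research focus at the time, and why the hardware of the early 1990s still fell short.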
The intersection of expertise and technology in financial services: Goldman Sachs' use of SecDB during the financial crisis showcased the value of analyzing complex relationships and having all necessary data in one virtual place, leading to innovative solutions and limited losses.
The intersection of expertise in different fields can lead to innovative solutions, as demonstrated by Goldman Sachs' use of SecDB during the financial crisis. SecDB, a system that simulated the pricing of financial instruments, helped Goldman Sachs navigate the crisis by surfacing large unhedged positions in collateralized debt obligations (CDOs) so they could be hedged. While other firms lost billions, Goldman Sachs' ability to analyze complex relationships, with all the necessary data in one virtual place, allowed it to act quickly and limit its losses. Regulation has historically driven technological change in financial services, particularly the shift to electronic trading of assets like stocks and bonds, though emergent technologies have also shaped the industry. Looking forward, the intersection of regulation and technology will continue to shape the future of financial services, with a focus on transparency, efficiency, and risk management.
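As a rough sketch of what "all the necessary data in one virtual place" buys you, the following hypothetical example (illustrative only; not SecDB's actual design, data, or API) shows how a single consistently priced book makes net exposure to one risk factor a one-line query:

```python
# Hypothetical sketch in the spirit of SecDB: every position is priced by
# the same models against the same market data, so aggregate exposure to a
# single factor is one query over one book.
# (Illustrative positions and sensitivities -- not real Goldman data.)

from dataclasses import dataclass

@dataclass
class Position:
    instrument: str
    notional: float            # dollars
    delta_to_housing: float    # P&L sensitivity per 1% move in the factor

book = [
    Position("CDO_tranche_A", 500e6, -0.8),
    Position("CDO_tranche_B", 300e6, -0.9),
    Position("ABX_hedge",     400e6, +0.7),
]

# One virtual place: net P&L for a 1% housing-market move.
net = sum(p.notional * p.delta_to_housing for p in book) / 100
print(f"Net P&L per 1% housing move: ${net:,.0f}")
```

The point is not the pricing math but the architecture: when positions live in incompatible silos, this sum cannot even be formed, let alone acted on quickly.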
Effective regulation in a capitalist economy: Rules and simulations ensure financial stability and prevent market manipulation or crashes. Understanding and managing boundaries between AI and the real world is crucial as AI becomes more agentive.
Effective regulation is crucial in a capitalist economy. The speaker emphasizes the importance of rules and simulations in ensuring financial stability and preventing market manipulation or crashes, citing the Dodd-Frank legislation and the Federal Reserve's implementation of the DFAST (Dodd-Frank Act Stress Tests) simulation. He also discusses the role of regulators in understanding complex systems, such as the algorithms behind electronic trading, and the importance of establishing safety measures at the boundaries between those systems and the real world, drawing a parallel to safety measures at railroad junctions. Turning to the present day and the rise of generative AI, the speaker acknowledges how far the technology has advanced since his PhD in 1991, and stresses that managing the boundary between AI and the real world matters all the more as AI becomes agentive and capable of effecting change. He also notes the potential impacts of generative AI on industries beyond financial services, though the conversation did not explore those impacts in depth.
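A toy version of the kind of stress simulation DFAST mandates might look like the following; the balance-sheet numbers, scenario losses, and capital floor are illustrative assumptions, not the Fed's actual methodology:

```python
# Toy stress test in the spirit of DFAST (Dodd-Frank Act Stress Tests):
# apply a severely adverse scenario to a bank's balance sheet and check
# whether capital stays above a regulatory floor.
# (Illustrative numbers and shocks only -- not the Fed's methodology.)

capital = 120.0                 # $bn of loss-absorbing capital
risk_weighted_assets = 1000.0   # $bn

scenario_losses = {             # projected losses under the scenario, $bn
    "credit": 45.0,
    "trading": 20.0,
    "operational": 10.0,
}

stressed_capital = capital - sum(scenario_losses.values())
stressed_ratio = stressed_capital / risk_weighted_assets

FLOOR = 0.045  # assumed 4.5% minimum capital ratio
print(f"Stressed capital ratio: {stressed_ratio:.1%}")
print("PASS" if stressed_ratio >= FLOOR else "FAIL")
```

The regulatory idea is the same as the railroad-junction analogy: rather than policing every internal decision, the rule defines a boundary condition the simulated system must not cross.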
Connectionist neural networks are delivering remarkable results and being combined with Bayesian decision networks: Once dismissed, connectionist neural networks now lead AI research. They are effective at recognizing static patterns but less proven on dynamic systems, and Fortune 100 companies are adopting them for productivity gains.
The connectionist neural network approach to artificial intelligence (AI), once dismissed by some researchers, has become the leading thread of research in the field and is delivering remarkable results. Bayesian decision networks and neural networks were initially seen as distinct approaches, but they are now being combined. The research began with simple image-recognition tasks, such as identifying cats in images, and has since expanded to more complex tasks like predicting what comes next or filling in missing information. The success of these techniques depends heavily on the training set and on a stationary distribution, that is, on the assumption that future data resembles the data the model was trained on, which makes them particularly effective for recognizing things that don't change rapidly, like cats. Their application to dynamic systems, such as markets, is less clear. Despite the risks, these techniques are being adopted by many Fortune 100 companies, and both employees and executives recognize their potential impact on business productivity. The widespread use of these tools, bottom-up among developers and organizations and top-down from CEOs and boards, suggests a unique moment of momentum in AI.
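The stationarity caveat can be shown with a small synthetic example (not from the podcast): a model fit on one distribution quietly degrades once that distribution drifts, which is exactly the concern with markets:

```python
# Sketch of the stationarity caveat: a model fit on one distribution
# degrades when the distribution drifts -- the worry with markets.
# (Toy example with synthetic data.)
import random

random.seed(0)

def sample(mean):
    """One labeled point: the positive class sits above the true mean."""
    x = random.gauss(mean, 1.0)
    return x, x > mean

# "Train": learn a decision threshold from a stationary distribution.
train = [sample(0.0) for _ in range(10_000)]
threshold = sum(x for x, _ in train) / len(train)  # ~0.0

def accuracy(data):
    return sum((x > threshold) == y for x, y in data) / len(data)

print(f"in-distribution: {accuracy([sample(0.0) for _ in range(10_000)]):.0%}")
# The world drifts: the mean moves, and the old threshold misclassifies
# roughly half the points that now fall between the old and new means.
print(f"after drift:     {accuracy([sample(2.0) for _ in range(10_000)]):.0%}")
```

Cats keep looking like cats, so the trained threshold keeps working; market regimes move, and yesterday's decision boundary silently stops being right.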
Effective AI implementation relies on a reliable data source: To maximize AI's potential, businesses must ensure they have a trustworthy data foundation, collaborate with regulators, and build ethical AI systems.
Having a reliable and accurate single source of truth for data is crucial for effective implementation of AI in organizations. This data engineering problem is often overlooked, but it is essential for ensuring that AI systems are trained on correct and actionable information. The larger the context window a model can work with, the more complex the workflows that can be augmented with AI, leading to significant improvements in sectors such as legal, compliance, vendor onboarding, and risk management. Businesses must also collaborate with regulators and lawmakers to ensure the ethical and responsible adoption of AI technology. By building trust and understanding, they can accelerate adoption while meeting regulatory requirements.
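One way to picture the "single source of truth" discipline is a validation gate that records must pass before an AI system is allowed to train on or retrieve them; the record fields, trusted sources, and rules below are hypothetical, purely for illustration:

```python
# Hypothetical sketch of a data-quality gate in front of an AI corpus:
# records enter the single source of truth only if they validate.
# (Field names, sources, and rules are illustrative assumptions.)
from datetime import datetime, timezone

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    if not record.get("id"):
        problems.append("missing id")
    if record.get("source") not in {"ledger", "crm", "vendor_feed"}:
        problems.append(f"untrusted source: {record.get('source')!r}")
    if "as_of" not in record:
        problems.append("missing as_of timestamp")
    return problems

record = {"id": "txn-42", "source": "ledger",
          "as_of": datetime.now(timezone.utc).isoformat()}
issues = validate_record(record)
print("accepted" if not issues else f"rejected: {issues}")
```

The design choice is that correctness is enforced at ingestion, once, rather than re-litigated by every downstream model and workflow.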
Regulating large language models at key interfaces: Focus on creating standards and attestations at LLM interfaces instead of making creators liable for all issues. Regulations should prioritize safety and ethical considerations in AI and biotech.
Regulation of large language models (LLMs) in technology and other industries should focus on creating standards and attestations at key interfaces, rather than making LLM creators liable for every issue that arises. The approach mirrors the securing of electronic trading systems: preventing human actions like shouting orders into phones is difficult, but controls at the interfaces between computers are tractable and crucial. The analogy is that LLMs function like operating systems, and regulation should address those boundaries first. The speaker also argues for putting such targeted rules in place before major issues arise, pointing to the Dodd-Frank Act as a reactive response in which, in his view, most of the regulation amounted to unnecessary red tape. In life sciences and biotech, generative AI holds immense potential. The speaker recounts a fascinating conversation with Jensen Huang, founder of NVIDIA, about the intersection of chip design and software: Jensen, a pioneer in using software to design chips, saw NVIDIA as a software company despite its reputation as a hardware company. The speaker believes similar advances in AI will revolutionize biotech, and that regulation should support this progress while ensuring safety and ethical considerations.
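A minimal sketch of what "controls at the interface" could mean in practice follows; the action allow-list and attestation check are hypothetical stand-ins for whatever standard emerges, not a real framework:

```python
# Hypothetical sketch of interface-level control: an LLM may propose any
# action, but only attested, allow-listed actions cross the boundary into
# the real world. (Illustrative design, not a real standard or API.)

ALLOWED_ACTIONS = {"read_report", "draft_email"}  # no trading, no payments

def boundary_gate(proposed: dict) -> bool:
    """Admit an LLM-proposed action only if it passes the interface checks."""
    return (
        proposed.get("action") in ALLOWED_ACTIONS
        and proposed.get("attestation") == "model-v1-signed"  # stand-in for a real signature check
    )

for proposal in [
    {"action": "draft_email",  "attestation": "model-v1-signed"},
    {"action": "submit_order", "attestation": "model-v1-signed"},
]:
    verdict = "executed" if boundary_gate(proposal) else "blocked at boundary"
    print(proposal["action"], "->", verdict)
```

As with electronic trading, liability and enforcement attach to the gate, not to everything the model might say internally.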
Predicting Drug Success with AI and Simulations: While AI and simulations have made progress in drug discovery, the complexity of biology and the vast number of organic compounds make accurate prediction a daunting task with a long way to go. Still, the increasing availability of data offers hope that AI and LLMs can infer the rest of biology and improve drug-trial success rates.
While advancements in simulations and artificial intelligence (AI) have made significant strides in various industries, including drug discovery, there is still a long way to go before we can accurately predict whether a potential drug will succeed, or even find the drug in the first place. The complexity of biology is vastly greater than that of the most complicated chips, and we do not yet know how many layers of abstraction will be required to fully understand it. The sheer number of possible organic compounds also makes navigating the space a daunting task. However, with the increasing availability of data, there is hope that AI and large language models (LLMs) can help us infer the rest of biology and improve the probability of success in drug trials. Despite the challenges, the potential rewards are immense, making it an exciting time to be alive and to work in the fields of computer science and biology.
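For a sense of why the compound space is daunting, here is a back-of-envelope comparison using the often-cited estimate of roughly 10^60 drug-like small molecules, with round, illustrative numbers:

```python
# Back-of-envelope on the scale of chemical space: an often-cited estimate
# puts drug-like small molecules at ~1e60, while even the largest virtual
# screening libraries cover billions. (Rough, illustrative numbers.)

chemical_space = 1e60      # often-cited estimate of drug-like molecules
largest_libraries = 1e10   # roughly tens of billions of compounds

fraction_explored = largest_libraries / chemical_space
print(f"Fraction of chemical space ever screened: ~{fraction_explored:.0e}")
# ~1e-50 -- brute-force search is hopeless; models must generalize instead.
```

This is the argument for AI in the loop: since exhaustive search can never touch more than a vanishing sliver of the space, the hope is that models trained on growing biological data can generalize to the regions no one has measured.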