
    Compute Express Link

    Explore "Compute Express Link" with insightful episodes like "133 - Recap of the OCP Global Summit 2022", "#157: Compute Express Link 2.0: A High-Performance Interconnect for Memory Pooling", "#155: Innovations in Load-Store I/O Causing Profound Changes in Memory, Storage, and Compute Landscape", and "#146: Understanding Compute Express Link" from podcasts like "The French Storage Podcast" and "Storage Developer Conference", and more!

    Episodes (4)

    #157: Compute Express Link 2.0: A High-Performance Interconnect for Memory Pooling

    Data center architectures continue to evolve rapidly to support the ever-growing demands of emerging workloads such as artificial intelligence, machine learning, and deep learning. Compute Express Link™ (CXL™) is an open industry-standard interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. CXL technology is designed to address the growing needs of high-performance computational workloads by supporting heterogeneous processing and memory systems for applications in artificial intelligence, machine learning, communication systems, and high-performance computing (HPC). These applications deploy a diverse mix of scalar, vector, matrix, and spatial architectures through CPUs, GPUs, FPGAs, smart NICs, and other accelerators.

    During this session, attendees will learn about the next generation of CXL technology. The CXL 2.0 specification, announced in 2020, adds support for switching for fan-out to connect to more devices; memory pooling for increased memory utilization efficiency and memory capacity on demand; and support for persistent memory. This presentation will explore the memory pooling features of CXL 2.0 and how CXL technology will meet the performance and latency demands of emerging workloads for data-hungry applications like AI and ML.

    Learning Objectives:
    1) Learn about CXL 2.0, the next generation of Compute Express Link technology.
    2) Understand the memory pooling features of CXL 2.0.
    3) See how CXL will meet the performance and latency demands of emerging workloads for data-hungry applications like AI and ML.
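    The memory pooling idea described above can be illustrated with a small sketch. This is a toy software model only, assuming a switch-managed pool that grants and reclaims capacity per host; the class and method names (MemoryPool, allocate, release) are hypothetical and not part of any CXL API, since real CXL 2.0 pooling is implemented in hardware and platform firmware.

    ```python
    # Illustrative sketch: CXL 2.0-style memory pooling, modeled as a shared
    # pool that assigns capacity to hosts on demand and reclaims it on release.
    # All names here are hypothetical, not drawn from the CXL specification.

    class MemoryPool:
        def __init__(self, capacity_gb: int):
            self.capacity_gb = capacity_gb
            self.allocations = {}  # host name -> GB currently assigned

        def free_gb(self) -> int:
            """Capacity not yet assigned to any host."""
            return self.capacity_gb - sum(self.allocations.values())

        def allocate(self, host: str, gb: int) -> bool:
            """Grant capacity on demand if the pool can satisfy the request."""
            if gb > self.free_gb():
                return False
            self.allocations[host] = self.allocations.get(host, 0) + gb
            return True

        def release(self, host: str) -> int:
            """Return a host's capacity to the pool for reuse by others."""
            return self.allocations.pop(host, 0)

    pool = MemoryPool(capacity_gb=1024)
    pool.allocate("host-a", 256)
    pool.allocate("host-b", 512)
    pool.release("host-b")               # freed capacity returns to the pool
    assert pool.allocate("host-c", 700)  # host-c can now claim it
    ```

    The point of the sketch is the utilization argument from the episode description: capacity released by one host becomes immediately available to another, instead of sitting stranded in a single server.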

    #155: Innovations in Load-Store I/O Causing Profound Changes in Memory, Storage, and Compute Landscape

    Emerging and existing applications in cloud computing, 5G, IoT, automotive, and high-performance computing are causing an explosion of data. This data needs to be processed, moved, and stored in a secure, reliable, available, cost-effective, and power-efficient manner. Heterogeneous processing, tiered memory and storage architectures, accelerators, and infrastructure processing units are essential to meet the demands of this evolving compute, memory, and storage landscape. These requirements are driving significant innovations across compute, memory, storage, and interconnect technologies.

    Compute Express Link* (CXL) with its memory and coherency semantics on top of PCI Express* (PCIe) is paving the way for the convergence of memory and storage with near-memory compute capability. Pooling of resources with CXL will lead to rack-scale efficiency, with efficient low-latency access mechanisms across multiple nodes in a rack with advanced atomics, acceleration, smart NICs, and persistent memory support. In this talk we will explore how the evolution in load-store interconnects will profoundly change the memory, storage, and compute landscape going forward.

    #146: Understanding Compute Express Link

    Compute Express Link™ (CXL™) is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. Data center architectures are evolving to support the workloads of emerging applications in artificial intelligence and machine learning that require a high-speed, low-latency, cache-coherent interconnect. The CXL specification delivers breakthrough performance while leveraging PCI Express® technology to support rapid adoption. It addresses resource sharing and cache coherency to improve performance, reduce software stack complexity, and lower overall system costs, allowing users to focus on target workloads.

    Attendees will learn how CXL technology maintains a unified, coherent memory space between the CPU (host processor) and CXL devices, allowing a device to expose its memory as coherent in the platform and to directly cache coherent memory. This lets both the CPU and the device share resources for higher performance and reduced software stack complexity. In CXL, the CPU host is primarily responsible for coherency management, abstracting peer device caches and CPU caches. The resulting simplified coherence model reduces the device cost, complexity, and overhead traditionally associated with coherency across an I/O link.

    Learning Objectives:
    1) Learn how CXL supports dynamic multiplexing between a rich set of protocols that includes I/O (CXL.io, based on PCIe®), caching (CXL.cache), and memory (CXL.mem) semantics.
    2) Understand how CXL maintains a unified, coherent memory space between the CPU and any memory on the attached CXL device.
    3) Gain insight into the features introduced in the CXL specification.
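    The three multiplexed protocols named in the episode description can be summarized in a small sketch. This is a toy software model for orientation only: real CXL multiplexing happens in hardware at the link layer, and the transaction names used in the routing table here (config_read, cache_fill, host_load) are hypothetical, not CXL terminology.

    ```python
    # Illustrative sketch: the three CXL protocols carried over one link,
    # modeled as a dispatcher that picks the protocol a transaction would use.
    from enum import Enum

    class Protocol(Enum):
        CXL_IO = "CXL.io"        # discovery, configuration, DMA (PCIe-based)
        CXL_CACHE = "CXL.cache"  # device coherently caches host memory
        CXL_MEM = "CXL.mem"      # host load/store access to device memory

    def route(transaction: str) -> Protocol:
        """Toy dispatcher mapping a transaction type to its CXL protocol."""
        table = {
            "config_read": Protocol.CXL_IO,
            "dma_write": Protocol.CXL_IO,
            "cache_fill": Protocol.CXL_CACHE,
            "host_load": Protocol.CXL_MEM,
        }
        return table[transaction]

    assert route("host_load") is Protocol.CXL_MEM
    ```

    The design point the sketch mirrors is that a single physical link carries all three semantics, so a device can mix plain I/O with coherent caching and memory expansion without separate interconnects.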

    © 2024 Podcastworld. All rights reserved

    For any inquiries, please email us at hello@podcastworld.io