IP meets low latency HPC and AI design demands

Electronics Weekly.com

Based on Synopsys’ Ethernet and PCIe IP, the Ultra Ethernet IP helps developers of AI/HPC infrastructure chips and systems.

[Image: Ultra Ethernet and UALink enable connecting massive AI accelerator clusters]

Hyperscale data centres will need to scale to hundreds of thousands of accelerators and rely on fast, efficient connections to support large language models that process trillions of parameters for HPC and AI operations, said Ron Lowman, principal project manager for PCIe at Synopsys.

For example, the Llama 3 AI model, introduced in 2024, has 400bn parameters. Given this rate of growth, the industry’s goal is to reduce training and inference time, he continued.
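For a sense of scale, the rough Python sketch below estimates the memory footprint of a 400bn-parameter model and why it cannot live on a single device; the FP16 precision and 80GB-per-accelerator capacity are illustrative assumptions, not figures from Synopsys or the article.

```python
# Back-of-envelope: why a 400bn-parameter model forces multi-accelerator clusters.
# Assumptions (illustrative only): FP16 weights (2 bytes/param) and 80GB of
# memory per accelerator.

PARAMS = 400e9           # parameter count cited for Llama 3 in the article
BYTES_PER_PARAM = 2      # assumed FP16 storage
MEM_PER_ACCEL_GB = 80    # assumed memory capacity per accelerator

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
min_accelerators = -(-weights_gb // MEM_PER_ACCEL_GB)  # ceiling division

print(f"Weights alone: {weights_gb:.0f} GB")
print(f"Accelerators needed just to hold the weights: {min_accelerators:.0f}")
# Training also needs gradients and optimiser state, multiplying the footprint,
# which is why clusters scale to hundreds of thousands of accelerators.
```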

[Image: Synopsys launches industry’s first Ultra Ethernet and UALink IP solutions]

The Ultra Ethernet IP and UALink IP are claimed to be the industry’s first to connect massive AI accelerator clusters. Both are based on open industry standards: UALink, an interconnect standard for AI accelerators, and Ultra Ethernet, a standard for an interoperable, high-performance stack architecture for AI/HPC networks.

The UALink IP is designed to scale up local networks, while the Ultra Ethernet IP is designed to scale out larger ones; together they meet the demand for standards-based, high-bandwidth, low-latency HPC and AI accelerator interconnects.

Comprising a MAC and PCS controller, verification IP and the 224G Ethernet PHY IP, the Ultra Ethernet IP will enable up to 1.6Tbps of bandwidth to connect up to one million endpoints in a single network; sufficient for real-time processing of AI workloads.
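As a rough illustration of what 1.6Tbps means in practice, the sketch below converts the line rate into bytes per second and estimates the time to move an 800GB set of model weights over a single link; the payload size is an assumption carried over from the earlier example, and protocol and FEC overheads are ignored.

```python
# What 1.6Tbps of Ultra Ethernet bandwidth means in practice: a rough
# transfer-time estimate at line rate, ignoring protocol/FEC overhead.

LINK_TBPS = 1.6
link_gbytes_per_s = LINK_TBPS * 1e12 / 8 / 1e9   # 200 GB/s at line rate

payload_gb = 800   # assumed: FP16 weights of a 400bn-parameter model (see above)

print(f"Line rate: {link_gbytes_per_s:.0f} GB/s")
print(f"Time to move {payload_gb} GB over one link: "
      f"{payload_gb / link_gbytes_per_s:.1f} s")
```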

The verification IP helps ensure protocol adherence to evolving standards for faster validation of AI and HPC systems.

The UALink IP also comprises controller, PHY and verification IP. To increase AI compute capacity, it offers up to 200Gbps throughput per lane to link up to 1,024 accelerators within one rack.
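The sketch below turns those UALink figures into aggregate per-rack numbers; the article does not state how many lanes each accelerator uses, so the four-lane link assumed here is purely for illustration.

```python
# Rough scale-up arithmetic for the UALink figures in the article.
# Lanes per accelerator is NOT given; 4 lanes is an assumption for illustration.

LANE_GBPS = 200
ACCELERATORS = 1024
LANES_PER_ACCEL = 4   # assumed

per_accel_gbytes_s = LANE_GBPS * LANES_PER_ACCEL / 8
aggregate_tbytes_s = per_accel_gbytes_s * ACCELERATORS / 1e3

print(f"Per-accelerator link: {per_accel_gbytes_s:.0f} GB/s "
      f"(at {LANES_PER_ACCEL} lanes)")
print(f"Aggregate across {ACCELERATORS} accelerators: {aggregate_tbytes_s:.0f} TB/s")
```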

The controller IP synchronises and shares memory across these accelerators to reduce latency and ease bottlenecks within AI hardware infrastructure, explained Lowman.

The MAC and PCS IP support an interface to the higher layers of the Ultra Ethernet stack, providing a full silicon implementation for switches, AI accelerators and smart NICs (network interface controllers).

The Synopsys Ultra Ethernet IP is scheduled to be available in the first half of 2025, with the UALink IP following in the second half of 2025.

www.synopsys.com