Broadcom launches new Tomahawk Ultra networking chip in AI battle against Nvidia

Broadcom has launched the Tomahawk Ultra, an Ethernet switch chip designed specifically to accelerate high-performance computing (HPC) and artificial intelligence (AI) workloads. It aims to challenge Nvidia’s dominance in AI networking by providing an open, ultra-low-latency, high-throughput fabric for tightly coupled AI clusters and HPC environments.

Let’s look at the key features of the Tomahawk Ultra:

  • Latency and Throughput: The chip delivers an ultra-low latency of 250 nanoseconds and a massive throughput of 51.2 terabits per second (Tbps) at 64-byte packet sizes, enabling rapid data transfer between numerous chips in close proximity, such as inside a server rack.
  • Lossless Ethernet Fabric: Implements advanced technologies like Link Layer Retry (LLR) and Credit-Based Flow Control (CBFC) to eliminate packet loss, creating a lossless network fabric, which is crucial for AI training workloads.
  • In-Network Compute: Supports in-network collective operations (e.g., AllReduce, Broadcast), offloading compute tasks from XPUs (accelerators) onto the switch itself, speeding up AI job completion and reducing network congestion.
  • Optimized Ethernet Headers: Reduces Ethernet header overhead from 46 bytes to as low as 10 bytes (6 bytes, per some sources) while maintaining full Ethernet compatibility, significantly improving effective bandwidth for small packets.
  • Topology Awareness: Supports complex HPC network topologies, including Dragonfly, Mesh, and Torus, via topology-aware routing.
  • Compatibility: The chip is pin-compatible with previous-generation Tomahawk switches, enabling straightforward upgrades for data centers already using Broadcom networking hardware.
  • Manufacturing: Produced using Taiwan Semiconductor Manufacturing Company’s 5-nanometer process technology.
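The in-network compute feature above can be pictured in software: rather than every accelerator exchanging partial results with every peer, the switch reduces the incoming contributions once and returns the same result to all endpoints. The Python sketch below is purely illustrative of that AllReduce pattern, not Broadcom’s implementation; the function name and message-count comparison are assumptions for exposition.

```python
def in_network_allreduce(contributions):
    """Illustrative model of a switch-resident AllReduce (sum).

    Each endpoint sends its vector to the switch once; the switch
    reduces element-wise and broadcasts the identical result back
    to every endpoint.
    """
    # Element-wise sum across all endpoints' vectors.
    reduced = [sum(vals) for vals in zip(*contributions)]
    # Every endpoint receives a copy of the reduced vector.
    return [list(reduced) for _ in contributions]

# With N endpoints this costs roughly 2*N messages through the switch
# (N up, N down), versus N*(N-1) pairwise messages if the accelerators
# reduced among themselves without switch assistance.
results = in_network_allreduce([[1, 2], [3, 4], [5, 6]])
```

Offloading the reduction this way is why the feature both shortens AI job completion time (one reduction instead of many partial exchanges) and cuts network congestion.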
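The header-overhead reduction matters most at small packet sizes. Taking the figures quoted above at face value (64-byte packets, 46 bytes of overhead trimmed to 10), a quick wire-efficiency calculation shows roughly why:

```python
def wire_efficiency(payload_bytes, header_bytes):
    # Fraction of bytes on the wire that carry payload rather than headers.
    return payload_bytes / (payload_bytes + header_bytes)

standard = wire_efficiency(64, 46)   # ~0.58 of the wire carries payload
optimized = wire_efficiency(64, 10)  # ~0.86 of the wire carries payload
```

At 64-byte packets that works out to roughly a 1.5x gain in effective payload throughput at the same line rate; with large packets the header term is amortized and the gain shrinks, which is why the optimization targets the small-message traffic typical of tightly coupled AI and HPC workloads.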

Strategic Importance vs. Nvidia

  • Broadcom’s Tomahawk Ultra targets the scale-up AI computing market, where many processors must be linked to handle massive AI models. It competes directly with Nvidia’s NVLink Switch chip, with the key differentiator being the Tomahawk Ultra’s ability to connect four times as many chips using an enhanced Ethernet protocol rather than proprietary links.
  • The chip runs over standard Ethernet infrastructure, fostering openness and potentially lower costs compared with Nvidia’s NVLink and InfiniBand-based offerings, making it attractive to cloud providers and enterprise AI data centers.
  • The move reflects Broadcom’s broader push into AI infrastructure, leveraging its switching expertise to take on Nvidia’s dominance in GPU and AI interconnect technologies.

Market Reception and Availability

  • Broadcom began shipping the Tomahawk Ultra in July 2025, with volume production and deployment expected in 2026. Leading cloud providers and networking partners such as Quanta Cloud Technology and Arista are involved in sample testing and early adoption plans.
  • Market analysts see this launch as a significant escalation in competition against Nvidia in the AI data center networking segment, potentially giving customers more choice in scaling AI workloads efficiently.

Broadcom’s Tomahawk Ultra Ethernet switch chip is a major innovation targeting the AI and HPC markets with exceptional latency, throughput, and lossless performance. It is built to rival Nvidia’s proprietary interconnects by leveraging advanced Ethernet to support next-generation AI scale-up, potentially reshaping the landscape of AI hardware networking.
