SAN FRANCISCO – Broadcom’s semiconductor division on Monday launched Jericho4, its latest networking chip designed to connect data centers up to 60 miles (96.5 km) apart while accelerating AI-driven data processing. The new chip enhances bandwidth and efficiency for large-scale AI workloads, addressing the exploding demand for high-speed, secure data transfers in cloud and AI infrastructure.
Why It Matters for AI and Cloud Computing
- AI computation is becoming more complex, requiring thousands of GPUs (like those from Nvidia and AMD) to be linked across distributed data centers.
- Cloud giants like Microsoft (MSFT.O) and Amazon (AMZN.O) need faster, more scalable networking solutions to prevent bottlenecks in AI model training and inference.
- Security is critical: data traveling beyond physical data centers faces interception risks, making robust encryption and low-latency transfers essential.
Key Features of Jericho4
- Massive Scalability: A single system can integrate roughly 4,500 Jericho4 chips, enabling seamless operation across sprawling AI clusters.
- High-Bandwidth Memory (HBM): Using the same memory technology found in Nvidia's and AMD's AI processors, Jericho4 minimizes congestion by rapidly handling vast data volumes in memory.
- Long-Range Efficiency: Optimized for inter-data-center communication, reducing latency for AI workloads spread across geographic locations.
Industry Impact
With AI pushing cloud providers to rethink networking infrastructure, Broadcom’s Jericho4 positions the company as a critical enabler of next-gen AI scalability. As hyperscalers invest billions in GPU clusters, innovations like Jericho4 could become the backbone of future AI data centers.
The chip is now available for deployment, with major cloud players expected to adopt it for AI expansion projects in the coming years.