
Nvidia's Invisible Empire: How Networking Became an $11B Pillar of AI Infrastructure

Nvidia's networking division reached $11 billion in quarterly revenue, solidifying its position as the company's second-largest pillar. Leveraging InfiniBand and Spectrum-X, Nvidia has transformed individual GPUs into integrated data center platforms, redefining the AI infrastructure market.

Jason
· 2 min read
Updated Mar 19, 2026
[Image: A futuristic visualization of a vast data center glowing with blue and green fiber optic lines]

⚡ TL;DR

Nvidia's networking business hit $11B in quarterly revenue, proving that connectivity is as crucial as compute in the AI era.

Growth Beyond the Silicon: Nvidia's New Pillar

While the world's gaze remains fixed on Nvidia’s H100 and Blackwell GPU chips, another segment within the company has quietly evolved into a juggernaut. According to the latest analysis from TechCrunch, Nvidia’s networking division pulled in a staggering $11 billion in revenue last quarter. This figure is not just a record high; it signals that networking has become the company's second-largest pillar, rivaling its core chip business and surpassing the total market cap of many independent semiconductor giants.

Nvidia CEO Jensen Huang has repeatedly emphasized that modern AI is no longer about individual chips, but about "data center-scale computing." The networking business acts as the "nervous system" that connects thousands of GPUs, allowing them to function as a single, massive computer. As enterprises pivot from training solitary models to deploying vast agentic ecosystems, the demand for high-speed, low-latency connectivity is experiencing explosive growth.

Spectrum-X and InfiniBand: The Keys to Technical Dominance

Nvidia’s dominance in networking is rooted in its strategic acquisition of Mellanox in 2020. Today, this business is driven by two core products: InfiniBand and Spectrum-X (Ethernet). InfiniBand remains the gold standard for supercomputers and top-tier AI clusters due to its unparalleled throughput and near-zero latency. Meanwhile, Nvidia is aggressively pushing its Spectrum-X Ethernet platform into enterprise data centers, enabling companies built on traditional network architectures to run AI workloads efficiently.

Industry benchmarks suggest that Spectrum-X significantly outperforms conventional Ethernet, particularly in handling the "collective communications" essential for AI training. This vertical integration allows Nvidia to offer a "full-stack solution"—from chips and software libraries to switches and network interface cards (NICs). Customers buying the entire ecosystem from Nvidia are guaranteed optimal performance, creating a powerful lock-in effect.
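The "collective communications" mentioned above refer to operations such as all-reduce, in which every GPU ends up holding the sum of the gradients computed on all GPUs. Purely as an illustration (this is not Nvidia's implementation, and real systems operate on large tensors over RDMA), the classic ring all-reduce pattern can be sketched in plain Python, with lists standing in for GPU buffers:

```python
import copy

def ring_allreduce(data):
    """Simulate ring all-reduce over n 'ranks' (GPUs).

    data[r] is rank r's buffer, pre-split into n chunks (one number per
    chunk here for simplicity). Returns the state after the run: every
    rank holds the element-wise sum of all buffers.
    """
    n = len(data)
    data = copy.deepcopy(data)

    # Phase 1 (reduce-scatter): in each of n-1 steps, rank r sends one
    # chunk to its ring neighbor, which adds it into its own copy.
    for step in range(n - 1):
        sends = [(r, (r - step) % n) for r in range(n)]
        vals = [data[r][c] for r, c in sends]   # snapshot: sends are "simultaneous"
        for (r, c), v in zip(sends, vals):
            data[(r + 1) % n][c] += v

    # Phase 2 (all-gather): circulate the fully reduced chunks so every
    # rank ends up with every summed chunk.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n) for r in range(n)]
        vals = [data[r][c] for r, c in sends]
        for (r, c), v in zip(sends, vals):
            data[(r + 1) % n][c] = v

    return data

# Three "GPUs", each holding a 3-chunk gradient buffer:
print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# → every rank holds [12, 15, 18]
```

Because every step moves only one chunk per rank, the pattern is bandwidth-efficient, but each of the 2(n−1) steps pays a network round trip; that per-hop latency is exactly where fabrics like InfiniBand and Spectrum-X earn their keep.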

Market Trends: The Taiwan Connection and Global Interest

Nvidia’s success is vividly reflected in global market data. According to Google Trends, search interest for "Semiconductor" and "Nvidia" in Taiwan remains at a high of 74. This aligns with Taiwan’s role as the primary manufacturing hub for Nvidia’s products through TSMC and a vast supply chain of server assemblers. Companies like Foxconn, Quanta, and Wiwynn are integral to the production of Nvidia’s networking hardware, making the division’s growth a direct economic boon for Taiwan’s electronics sector.

In California, where search interest stands at 46, the conversation is shifting toward the return on investment (ROI) of "AI Infrastructure." Investors are debating whether networking technology will become the next bottleneck for AI expansion as chip supply stabilizes. For now, Nvidia’s foresight in networking has successfully built a formidable moat against competitors.

Competitors and Industry Challenges

Despite Nvidia’s current lead, the competition is mounting. Broadcom and Cisco are championing the Ultra Ethernet Consortium (UEC) standards in an attempt to break Nvidia’s stranglehold on the high-performance networking market. Additionally, as AI computing shifts toward the "edge," demand for more power-efficient and cost-effective networking is rising, providing opportunities for other players to capture market share in specialized niches.

Future Outlook: Connectivity is Power

In 2026, the trend is clear: compute power is essential, but connectivity defines the ceiling of AI performance. The rise of the networking business completes Nvidia’s transition from a "chip designer" into a "data center computing platform."

As the Blackwell architecture scales, Nvidia’s interconnect technologies, such as NVLink 5.0, will further blur the line between a single chip and a massive cluster. For any company hoping to challenge Nvidia’s throne, building a faster chip is no longer enough. Challengers must now engineer a nervous system of hardware capable of orchestrating hundreds of thousands of chips in perfect harmony. As one analyst put it: "In the world of AI, speed is meaningless without the reach and stability of the network."

FAQ

Why is Nvidia's networking business so important to AI?

AI training requires tens of thousands of GPUs working in concert, and networking technology is the "highway" that connects these chips. If the network is too slow, GPU compute is wasted waiting on data transfers.
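To make the "wasted compute" point concrete, here is a toy back-of-the-envelope model (all numbers hypothetical, not measured data): if a fixed volume of gradients must cross a network link each training step, and some fraction of the transfer cannot be overlapped with computation, the useful fraction of each step shrinks as the link slows:

```python
def step_utilization(compute_ms, grad_bytes, link_gbps, overlap=0.0):
    """Fraction of a training step spent doing useful compute.

    Toy model: assumes `grad_bytes` of gradients must cross a
    `link_gbps` network link every step, and that a fraction `overlap`
    of that transfer is hidden behind computation.
    """
    comm_ms = grad_bytes * 8 / (link_gbps * 1e9) * 1e3  # transfer time in ms
    exposed_ms = comm_ms * (1.0 - overlap)              # time the GPU sits idle
    return compute_ms / (compute_ms + exposed_ms)

# Hypothetical 10 GB gradient exchange per 100 ms compute step:
print(round(step_utilization(100, 10e9, 100), 3))  # 100 Gb/s link → 0.111
print(round(step_utilization(100, 10e9, 400), 3))  # 400 Gb/s link → 0.333
```

In this sketch, quadrupling link bandwidth triples the GPU's useful time per step, which is why network spending scales alongside GPU spending.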

What is the difference between InfiniBand and Ethernet?

InfiniBand is purpose-built for high-performance computing, with extremely low latency but higher cost. Ethernet is the mainstream standard, and Nvidia's Spectrum-X raises Ethernet to near-InfiniBand performance, making it suitable for the broader enterprise market.

Who are the main customers of Nvidia's networking business?

Primarily cloud service providers (such as AWS, Azure, and Google Cloud), along with enterprises and research institutions building their own large-scale AI clusters.

What does this mean for Taiwan's supply chain?

Taiwan is the most important manufacturing base for Nvidia's networking equipment. With the division's quarterly revenue now past ten billion US dollars, Taiwan's server ODMs and component suppliers stand to see significant order growth.