Blog
01.2026

AI’s Next Bottleneck: Why Networking Is the New Chokepoint

AI models have outgrown the boundaries of a single chip. They now run across large clusters of highly specialized, extremely expensive silicon. When those chips sit idle, waiting for data, the system breaks down economically.

In the AI era, networking is no longer a feature; it’s a necessity.

AI is driving a fundamental reset of the hardware stack, a $7 trillion Hardware Renaissance I wrote about last year (read more here). We are in a golden era of innovation across semiconductors, systems, and infrastructure, with unprecedented capital flowing into compute.

While compute has advanced rapidly, the surrounding infrastructure has not kept pace.

Traditional data center networks were built for servers and storage, not for the unique demands of AI workloads.

Why AI networking is the new chokepoint

The AI infrastructure challenge is only getting bigger. Models are becoming larger, faster, and more specialized. There will not be a single “winning” AI chip.

Training, inference, and edge applications will each require different architectures. Prefill and decode workloads need specialized solutions. The future requires an open, AI-native fabric that allows diverse compute, memory, and accelerators to operate as a coordinated system.

This creates an entirely new, scalable infrastructure layer: AI-native networking.

Founder-market fit matters at the infrastructure layer

These problems are hard by design. They sit at the intersection of hardware, systems engineering, and distributed software. What matters is deep domain experience, judgment shaped by prior cycles, and the ability to design end-to-end systems.

This is where founder-market fit matters most.

Upscale AI: A unicorn in six months

Upscale AI, which today announced a $200M Series A and reached unicorn scale in under six months, is building an AI-native networking platform.

Their speed is notable, but the signal is stronger: the market is already demanding solutions to this bottleneck.

Upscale AI is aligned with and actively contributing to the major open standards emerging across AI networking, spanning silicon, systems, and software. As AI infrastructure becomes increasingly heterogeneous, openness is not optional. It is foundational.

The company is also seeing early engagement from multiple hyperscalers and GPU and xPU vendors. This traction validates the need for a purpose-built networking layer and reflects confidence in the team’s ability to execute at the infrastructure level.

Barun Kar and Rajiv Khemani have built category-defining companies multiple times across security, silicon, and systems. This is our third time partnering with them at Mayfield: Auradine, Aurascape, and now Upscale AI, which was incubated within Auradine. We believe people build products and people build companies, and this team can execute.

Every major technology shift produces a defining infrastructure company. Mayfield has seen this pattern before, beginning with our investment in 3Com in the 1980s, when networking became foundational to the PC era. In the Telco era, it was Cisco and Juniper. In the cloud era, Arista emerged. In the AI era, the architecture itself must change. Upscale AI is fundamentally re-architecting networking by unifying GPUs, AI accelerators, memory, storage, and networking into a single, synchronized AI engine.

Originally published on LinkedIn.
