This article is part of our Future of AI series from Imagination in Action 2025 Silicon Valley Summit — where founders, leaders, and investors explored what’s next for AI. Explore the magazine.
Because of generative AI’s rapid advances over just three years, it’s easy to forget that the technology is built on a decades-long effort to massively increase computational power—and that AI models in development will need even more power very soon.
Large language models (LLMs) became possible thanks to a remarkable thousand-fold increase in computing performance over the past 20 years. That improvement was driven in part by Moore’s Law, but also by the introduction of new tools that enabled the building of scalable, distributed systems based on commodity hardware, said Google Fellow Amin Vahdat.
“Just as the underlying trends slow, the demand for computing exploded.”
Amin Vahdat, Google
However, performance growth has begun to plateau at exactly the moment when AI developers need more of it. “Just as the underlying trends slow, the demand for computing exploded,” Vahdat said.
He predicted that the AI industry needs another 1000x performance improvement in just the next three years if its current pace of innovation is to continue.
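Those figures imply a striking acceleration. A rough, illustrative calculation (the thousand-fold figures come from the article; the arithmetic is ours) shows what the shift from 20 years to three years means for the required annual growth rate:

```python
# Back-of-envelope: annualized growth implied by the figures above.
# Illustrative arithmetic only, based on the 1000x numbers in the article.

def annual_factor(total_gain: float, years: float) -> float:
    """Constant yearly multiplier that compounds to total_gain over years."""
    return total_gain ** (1 / years)

past = annual_factor(1000, 20)    # the past two decades' thousand-fold gain
needed = annual_factor(1000, 3)   # Vahdat's projected three-year 1000x

print(f"past:   ~{past:.2f}x per year")    # ~1.41x per year
print(f"needed: ~{needed:.1f}x per year")  # ~10.0x per year
```

In other words, sustaining the pace Vahdat describes would require roughly a tenfold improvement every year, versus the ~1.4x annual compounding that produced the last thousand-fold gain.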
Many in the infrastructure field are excited about the challenges and opportunities in delivering those performance improvements, including new solutions for computing, networking, and power generation.
AI infrastructure needs new solutions
Despite massive amounts of new data center construction, there may simply not be enough compute capacity to meet expected demand, and big AI companies are snapping it up wherever they can. OpenAI recently agreed to pay Oracle $300 billion over five years for compute power. McKinsey & Company estimated in a report that data center investment will need to reach $7 trillion by 2030, with $5.2 trillion of that going to AI workloads alone.
GPUs, the chips that perform the massively parallel computations behind AI models, are essential to training and running neural networks. But GPUs are expensive, consume enormous quantities of electricity, and generate massive amounts of heat. What AI companies want are less expensive computing solutions that deliver more performance while consuming less power. Other bottlenecks include networking speed and limited memory capacity.
“The fundamental problem in our industry is that chips take too damn long to design.”
Faraj Aalaei, Cognichip
The first step to creating bold new solutions is to understand the limitations of the current systems. For example, chips used by the AI industry today were designed as long as eight years ago, and were not meant to support today’s AI models, said Faraj Aalaei, founder and CEO of Cognichip.
“There’s no way that [semiconductor companies such as NVIDIA] could have comprehended this kind of scaling this fast,” Aalaei said. “The fundamental problem in our industry, the hidden problem, is that chips take too damn long to design.”
Hardware evolution ahead
Anirudh Devgan, CEO of Cadence Design Systems and a pioneer in electronic design automation and circuit simulation, is certain that semiconductors will keep changing, and that they will remain a growth driver for AI and other sectors, including robotics.
“Chip design complexity is growing exponentially,” Devgan said, adding that hardware innovations will continue to drive major changes, particularly as specialized silicon emerges for robotics and other applications.
Many data centers also weren’t built to support AI, said Seshu Madhavapeddy, co-founder and CEO of Frore Systems, a developer of thermal technologies. The designers of most of today’s existing data centers had no way to anticipate that they would need to cram so many high-performance GPUs into their racks. Just cooling these super-powerful processors is an engineering challenge of its own.
“The reality is that all the power converts to heat, and if you don’t have a very efficient means of removing heat, then you’re not going to be able to actually run your data centers at a high level of performance,” Madhavapeddy said.
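Madhavapeddy’s point can be made concrete with a back-of-envelope estimate: since essentially all electrical input exits as heat, a rack’s cooling load roughly equals its power draw. The figures below (per-GPU draw, server overhead, rack density) are illustrative assumptions, not numbers from the article:

```python
# Rough cooling-load estimate for a GPU rack. All figures below are
# illustrative assumptions, not numbers from the article.

GPU_POWER_W = 700          # assumed draw of one high-end accelerator
GPUS_PER_SERVER = 8
SERVER_OVERHEAD_W = 2000   # assumed CPUs, memory, fans, NICs per server
SERVERS_PER_RACK = 4

server_w = GPU_POWER_W * GPUS_PER_SERVER + SERVER_OVERHEAD_W
rack_w = server_w * SERVERS_PER_RACK

# Nearly all electrical power converts to heat, so the cooling system
# must remove roughly the same power the rack consumes.
print(f"per server: {server_w / 1000:.1f} kW")                 # 7.6 kW
print(f"per rack:   {rack_w / 1000:.1f} kW of heat to remove") # 30.4 kW
```

Even under these conservative assumptions, a single AI rack demands tens of kilowatts of continuous heat removal, well beyond what many older facilities were engineered to handle.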

Golden age for infrastructure startups
“This is a great time for startups,” said Frore’s Madhavapeddy. “Because the data center market is growing so fast and changing so quickly, and everything is changing every year or every two years, this type of dynamic environment is where startups can really thrive.”
When it comes to disrupting infrastructure, some opportunities are better than others. Aalaei recommends that startup founders take big swings and create discontinuity. Making incremental changes to existing platforms won’t deliver the big rewards that VCs seek.

Still, he doesn’t recommend competing directly with NVIDIA for the GPU market; the company is too entrenched and too well capitalized. Instead, he recommends that founders sizing up a market study the lessons of NVIDIA’s success story.
“When NVIDIA came up as a startup, there were already people building graphic chips,” Aalaei said. “Intel was still the king of the jungle. And NVIDIA found their way of creating discontinuity… this did not happen by accident. They had a thought process about what a GPU could do and what the future held. And they spent an enormous amount of money as a public company doing this, and they had the courage to do it and stick to it.”
Rajiv Khemani, co-founder and CEO of Auradine, a maker of blockchain and AI infrastructure, agreed, but cautioned that future founders should not overlook service or technology gaps in existing market segments.
“If you’re doing a new startup, you have to be very clear on what you’re going to do,” he said. “There’s a phenomenal opportunity to segment. You have to pick a segment where you don’t have a dominant player and where you can deliver a five-X value.”
The infrastructure crisis threatening AI’s future is at the same time the industry’s greatest opportunity. For founders who can solve these fundamental bottlenecks, the potential is enormous.