Thermodynamics: The New Frontier of AI Infrastructure – Issue #22
Spotlight: Thermodynamics – The New Frontier of AI Infrastructure
For decades, we’ve talked about the hardware compute stack and the software stack. There’s a third layer that’s now just as foundational: the Thermal Stack.
The AI Thermal Stack is the integrated cooling architecture required to keep AI running at scale. It handles heat extraction from compute, networking, and other semiconductors, and heat rejection into the atmosphere, across both data centers and edge platforms. As AI workloads scale, thermal architecture directly determines compute density, energy efficiency, and performance. You cannot separate the two anymore. Performance is no longer just a function of compute; it is a function of thermodynamics.
The thermal constraint isn’t isolated. It is the result of five converging pressures. Training intensity is increasing as frontier models require massive GPU clusters running for weeks. Inference at scale is turning AI into a continuous, always-on energy load. Data center limits are emerging as power availability and cooling capacity become bottlenecks. Sustainability pressure is rising as 24/7 workloads collide with climate goals. And energy is becoming a primary cost driver, directly impacting model economics and pricing.
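To make the energy-as-cost-driver point concrete, here is a hypothetical back-of-envelope sketch of a single dense GPU rack. Every figure in it (per-chip power, rack density, PUE, electricity price) is an assumption chosen for illustration, not a vendor specification; the point is simply that cooling overhead is paid around the clock, so it compounds directly into model economics.

```python
# Back-of-envelope rack heat-load and energy-cost sketch.
# All figures below are illustrative assumptions, not vendor specs.

GPU_TDP_W = 700          # assumed per-accelerator thermal design power
GPUS_PER_RACK = 72       # assumed rack density
PUE = 1.3                # assumed power usage effectiveness (facility / IT power)
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10     # assumed electricity price in USD

it_load_kw = GPU_TDP_W * GPUS_PER_RACK / 1000        # compute (IT) power per rack
facility_load_kw = it_load_kw * PUE                  # total draw incl. cooling overhead
cooling_overhead_kw = facility_load_kw - it_load_kw  # power spent on cooling, not compute
annual_cost = facility_load_kw * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"IT load per rack:      {it_load_kw:.1f} kW")
print(f"Cooling/overhead load: {cooling_overhead_kw:.1f} kW")
print(f"Annual energy cost:    ${annual_cost:,.0f}")
```

Under these assumed numbers, roughly a quarter of the facility power goes to cooling rather than compute, which is exactly the overhead an improved thermal stack tries to claw back.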
The broader signals this week reinforce the same shift. Infrastructure providers are redesigning systems for sustained, high-intensity workloads, while enterprises are deploying AI systems that run continuously at scale. Across the stack, the bottleneck is no longer just compute availability; it is the ability to power and cool it efficiently.
We are seeing a new category emerge in response. Frore, which recently raised $143 million at a $1.64 billion valuation, is redefining the Thermal Stack with an integrated architecture that manages heat from chip to atmosphere across data centers, industrial edge systems, and consumer devices.
The brain of AI moved from CPUs to GPUs and TPUs. The next critical unit isn't faster. It is cooler. Cooling Processing Units will define how much AI the world can run.
Here is your Saturday guide to the signals shaping the future of AI: 👇
Signals Shaping the Future of AI:
Infrastructure
NVIDIA to supply 1M GPUs to AWS through 2027. The deal signals continued hyperscaler demand for compute, with NVIDIA expanding deeper into cloud infrastructure and inference workloads. Click here
NVIDIA launches BlueField-4 storage architecture to fix AI data bottlenecks. The new system targets faster data access for agent workloads, improving performance and efficiency as context sizes scale. Click here
Trane improves cooling efficiency for AI factories. The company introduced new thermal designs integrated with NVIDIA systems, boosting efficiency and freeing more power for compute at gigawatt-scale data centers. Click here
Blue Origin proposes massive orbital data center network to power AI from space. The company filed plans for up to 51,600 satellites, betting space-based compute can bypass terrestrial energy and scaling constraints. Click here
Enterprise
Meta signs $27 billion deal with Nebius to secure AI compute capacity. The agreement includes $12 billion in dedicated infrastructure and up to $15 billion more in optional capacity, tied to next-gen NVIDIA systems. Click here
Mastercard launches AI model for payments and fraud detection. The system is trained on billions of transactions to improve fraud detection, personalization, and real-time decisioning across global commerce. Click here
NVIDIA launches NemoClaw to standardize enterprise AI agents. The open framework adds security, privacy, and orchestration layers, positioning NVIDIA to define how agents are deployed across enterprise environments. Click here
Adobe and NVIDIA partner to embed agents into creative workflows. The collaboration integrates AI across content creation, marketing, and production pipelines, accelerating enterprise adoption of agent-driven work. Click here
Cursor builds new AI coding model to rival OpenAI and Anthropic. The startup is moving beyond tooling into model development, signaling rising competition in developer-centric AI platforms. Click here
Capital Flows
OpenAI acquires Python startup Astral. The move strengthens its position in developer tools and expands its footprint in the coding ecosystem. Click here
Jeff Bezos explores raising $100 billion to buy and modernize industrial firms with AI. The strategy signals AI-driven reinvention of traditional industries at massive scale. Click here
Oasis Security raises $120 million to secure non-human identities. The company focuses on protecting AI agents and machine identities as enterprise automation expands. Click here
Ecolab acquires cooling firm CoolIT Systems for $4.75 billion. The deal targets liquid cooling infrastructure as a critical bottleneck in scaling AI data centers. Click here
Research
Xiaomi unveils MiMo V2 model approaching frontier LLM performance. The release highlights rising global competition beyond U.S. labs in high-end model development. Click here
NVIDIA introduces an open Physical AI data factory blueprint. The framework aims to accelerate robotics, autonomous systems, and real-world AI training pipelines. Click here
Microsoft releases MAI image model. The system advances text-to-image generation as competition intensifies in multimodal AI. Click here
Policy
Gartner predicts 80% of governments will use AI agents for decision-making by 2028. The shift signals AI moving from tools to automated public-sector decisions, with growing emphasis on explainability, governance, and trust. Click here
White House prepares federal AI framework amid regulatory gridlock. The proposal aims to set national rules, including child safety and preemption of state laws, but faces ongoing disagreement in Congress. Click here
Global AI Strategy
NVIDIA wins approval to resume H200 chip sales in China and prepares Groq chips for the market. The move reopens a major revenue channel while signaling a shift toward inference-focused competition inside China’s AI ecosystem. Click here
UK commits £2.5 billion to AI and quantum to accelerate national adoption. The government aims to lead the G7 in AI deployment while funding sovereign AI, scaling quantum computing, and strengthening domestic tech leadership. Click here
Alibaba launches new enterprise AI agent platform. The company is pushing agent-based systems as a core layer of enterprise software across China and global markets. Click here
Talent Signals
Each week, we spotlight key roles tied to the themes shaping this week’s AI headlines, connecting talent to the companies driving the news.
Oasis Security builds AI-powered identity security software focused on managing and securing non-human identities like API keys, tokens, and service accounts. As AI systems and agents proliferate across enterprise environments, Oasis helps teams gain visibility, enforce governance, and reduce risk across increasingly complex access layers. Open roles are listed on its careers page. Click here
Framer builds an AI-powered website and product design platform that lets teams create, launch, and manage sites without code. As AI compresses design, content, and development workflows into a single SaaS layer, Framer is expanding across engineering, product, and growth. Open roles are listed on its careers page. Click here
David AI builds high-quality audio datasets and research infrastructure for training speech and conversational AI systems. Founded by former Scale AI operators, the company focuses on solving the data bottleneck as voice becomes a core interface for AI applications. Open roles are listed on its careers page. Click here
You can see all the opportunities at Mayfield-backed AI companies here, and across the broader ecosystem here.
Social Signals
The most important conversations in AI are unfolding across social media, where top voices are shaping the next wave of signals and strategy. Here are some of the top social signals and their takes from the past week.
Sundar Pichai (Click here) — “We trained a new flood forecasting model designed to predict flash floods in urban areas up to 24 hours in advance. To help address a flash flood data gap, we created Groundsource, a new AI methodology using Gemini to identify 2.6M+ historical events across 150+ countries.” In a post viewed 564K+ times, Pichai announces a new urban flash-flood prediction system now live in Flood Hub, alongside the open-sourcing of a large global dataset to support research. The effort combines frontier models with geospatial and historical event analysis to expand early warning coverage in densely populated cities.
Nav Toor (Click here) — “Stanford proved that ChatGPT tells you you’re right even when you’re wrong. Researchers tested 11 of the most popular AI models across 11,500 real advice-seeking conversations. Every single model agreed with users 50% more than a human would.” In a post viewed 9.5M+ times, Toor points to research showing that leading models consistently validate users, even when they describe manipulation or harmful behavior. In a study of 1,604 participants, those paired with more sycophantic AI became less willing to apologize or compromise, yet rated the flattering system as higher quality. For founders and product leaders, this is the tension. Engagement and retention may be rewarded by affirmation, but long-term trust requires challenge, judgment, and alignment with human values.
Yann LeCun (Click here) — “Unveiling our new startup, Advanced Machine Intelligence (AMI Labs). We just completed our seed round: $1.03B, one of the largest seed rounds ever, probably the largest for a European company. We’re hiring.” In a post viewed 2.5M+ times, LeCun announces AMI Labs, which aims to build AI systems with world models, persistent memory, reasoning, planning, and stronger controllability. The $1.03B seed round signals significant early capital backing behind next-generation architectures focused on long-term reasoning and safety.