Blog
12.2025

Corporate Alliances – Issue #9

Cover image: “Founder Insights: Weekend Edition – Issue #9,” highlighting the theme “Corporate Alliances.”

This week’s theme is corporate alliances. The biggest moves were not new model drops, but the alliances and dependencies forming across the stack. Companies are choosing sides, tightening partnerships, and building long-term ties that will shape who has leverage in the next cycle.

Infrastructure showed this clearly. AWS pushed Trainium3 and previewed Trainium4, reshaping its long-term position with NVIDIA. Google accelerated its next TPU line to keep pace. NVIDIA strengthened its hold on inference with new MoE systems, while Anthropic deepened its dependence on AWS. These are multi-year relationships that decide where future models can run and how fast they can grow.

In the enterprise, companies are fighting to own the user relationship. OpenAI declared a “Code Red” as Google’s Gemini 3 surged, redirecting resources toward strengthening ChatGPT and delaying other projects. Microsoft expanded Copilot across the business stack. Amazon countered with Nova agents and AgentCore. Salesforce’s Agentforce momentum and Apple’s high-profile talent hire highlight how fast companies are adjusting their alliances to stay relevant.

Capital and research followed the same pattern. OpenAI’s partners took on massive debt to support its growth. Chipmakers moved to secure optical interconnect technology. Agent companies like Sierra kept raising the promise of deeper customer integration. Regulators and global players, from health agencies to semiconductor alliances, focused on new partnership models rather than isolated rules.

AI continues to accelerate, but this week’s signals point to a simpler truth. Power is accumulating around the relationships companies build and the commitments they are willing to make.

Here’s your Saturday guide to the signals shaping the future of AI:

Infrastructure

  • AWS escalates its fight with NVIDIA by launching Trainium3 and previewing Trainium4, debuting its next-generation accelerator with a claimed 4x performance boost and signaling a push toward self-sufficiency in AI infrastructure that aims to erode NVIDIA’s dominance in the hardware supply chain. Click here
  • Google accelerates internal chip design to meet Gemini 4 compute demands as the company ramps up investment in next-generation TPU development, signaling a fast-rising hardware race in response to AWS’s Trainium push and the anticipated 10x compute jump required for the upcoming model. Click here
  • NVIDIA’s new 72-GPU servers boost China’s leading MoE models tenfold as new benchmark data shows massive inference gains on Moonshot AI’s Kimi K2 Thinking model and DeepSeek’s open models, underscoring NVIDIA’s strategy to dominate deployment even as rivals challenge it in training. Click here
  • AWS introduces AI Factories to secure sovereign market lock-in as the company rolls out fully managed on-premises supercomputing clusters for governments and major enterprises, aiming to win long-term sovereign AI contracts by delivering the entire stack outside rival public clouds. Click here
  • Anthropic deepens its reliance on AWS compute following the Trainium3 rollout as the company reportedly consumes more than 500,000 Trainium chips across Amazon data centers, tightening its strategic dependence on AWS and sharpening the competitive divide with OpenAI’s Oracle and CoreWeave infrastructure stack. Click here

Enterprise

  • Sam Altman declares a “Code Red” as Google’s Gemini 3 surges, with OpenAI redirecting resources to strengthen ChatGPT in response to Gemini 3’s strong benchmarks and rapid user growth while delaying other projects. With Google regaining momentum and OpenAI facing talent losses and major funding needs, the company is working to release a new reasoning model that it says will outperform Gemini 3. Click here
  • Microsoft expands Copilot across organizations, including SMBs, to tighten enterprise lock-in as the company launches Microsoft 365 Copilot Business at an SMB-friendly price and rolls out new Copilot Chat and Security Copilot integrations across the Office suite and Defender, Entra, Intune, and Purview, extending its AI moat from large enterprises into the broader business market. Click here
  • AWS pitches Nova agents and Bedrock AgentCore as the enterprise alternative to Copilot and Agentforce as Amazon positions its Nova models, Nova Act, and AgentCore as customizable workflow agents with strong policy controls, aiming to rival Microsoft and Salesforce while still powering parts of Salesforce’s stack. Click here
  • Salesforce raises forecasts as enterprise adoption of AI agents accelerates, with the company reporting surging demand for its Agentforce platform, hitting nearly $1.4 billion in ARR with more than 100 percent year-over-year growth and signaling that autonomous AI tooling is rapidly becoming a core enterprise workflow driver. Click here
  • Apple poaches Microsoft and Google veteran to accelerate its lagging AI strategy as the company hires Amar Subramanya, a senior leader who moved from Google’s Gemini team to Microsoft’s Copilot group, signaling a high-stakes talent push to rescue the slow Apple Intelligence enterprise rollout. Click here

Capital Flows

  • OpenAI’s partners take on $100 billion in debt to fuel its expansion as Oracle, SoftBank, and CoreWeave absorb massive leverage to fund OpenAI’s compute demands, creating a circular financing structure that enables the company’s trillion-dollar procurement ambitions while leaving its vendors financially stretched and increasingly dependent. Click here
  • Marvell moves to acquire Celestial AI as chip vendors race to supply hyperscalers with next-generation optical interconnects, using major M&A to stay competitive against NVIDIA and the growing wave of in-house silicon from Amazon, Google, Microsoft, and Apple. Click here
  • Excelsior Sciences raises $95 million for AI-driven drug discovery as the company advances its “smart bloccs” platform to speed small molecule design and draws backing from Khosla Ventures, Deerfield, and Sofinnova, highlighting growing investor momentum behind AI and biotech crossovers. Click here
  • OpenAI takes a stake in Thrive Holdings, deepening ties between AI firms and private equity by trading access to its technology for potential future returns. At the same time, Thrive uses its tools to modernize service firms, signaling a shift toward AI companies embedding themselves in investment structures rather than relying solely on product sales. Click here
  • Sierra secures new investment from SoftBank Vision Fund 2 and expands to Japan as the enterprise AI agent startup, already valued at $10 billion after a $350 million raise earlier this year, pushes into global markets with new customers and accelerates its bid to lead the growing agent wars in enterprise workflows. Click here

Research

  • OpenAI unveils a truth serum training method that makes models confess to deception as researchers showcase a new confessions technique where models generate a secondary explanation admitting when they lied, gamed rewards, or violated policy, offering a stronger way to detect scheming and hallucinations in high-stakes deployments. Click here
  • AI adoption surges in scientific research, boosting productivity but raising new risks as a survey shows 62% of researchers now use AI for writing, analysis, and data processing, reporting major gains in speed and output while expressing concerns about errors, security, and reduced scientific diversity. Click here

Policy

  • HHS unveils a new AI strategy for the US health sector as the agency releases a 20-page plan to expand AI across healthcare and public health, focusing on risk governance, tool development, staff enablement, and R&D funding, while raising ongoing concerns about protecting sensitive patient data. Click here
  • SAFE CHIPS Act seeks to lock in AI chip export controls to China as US lawmakers propose a 30-month freeze on easing restrictions for advanced AI chips, aiming to preserve American leverage in the AI hardware race and further harden the boundary between US vendors and Chinese customers. Click here
  • EU opens antitrust probe into Meta over WhatsApp AI chatbot restrictions as regulators scrutinize new API rules that block general-purpose AI chatbots and may disadvantage rivals like OpenAI and Microsoft, putting Meta’s control of the platform under direct EU challenge. Click here
  • UK FCA opts for live testing of AI with banks instead of new rules as the regulator expands collaborative pilots with banks and fintechs to co-create guardrails for deployments like AI agents in lending and customer service, signaling a shift toward sandbox-style oversight rather than rigid tech-specific regulation. Click here

Global AI Strategy

  • China doubles down on domestic AI hardware as US export controls backfire, with new analyses claiming the restrictions have pushed China to accelerate its own AI chip and model ecosystem while squeezing more performance from existing NVIDIA hardware, strengthening state-industry collaboration, and raising long-term competition with the United States. Click here
  • SoftBank’s Arm and South Korea sign chip design training pact to expand national semiconductor talent as the government partners with Arm to create a chip design school that will train more than a thousand specialists and strengthen Korea’s fabless and system semiconductor ecosystem, tying the country’s long-term AI ambitions closely to a single IP licensor. Click here
  • Meta signs new AI content licensing deals with major global media outlets as the company secures agreements with publishers including USA Today, CNN, Fox News, and Le Monde to feed real-time news into Meta AI, using revenue-sharing and neighboring rights structures that turn media companies into active strategic partners in the AI content race. Click here

📱 Social Signals

The most important conversations in AI are unfolding across new media, where top voices are shaping the next wave of signals and strategy. Here are some of the top voices and their takes from the past week.

  • Ilya Sutskever (Click here) — “Scaling the current thing will keep leading to improvements… it won’t stall. But something important will continue to be missing.” In his broader podcast discussion, Sutskever lays out a roadmap toward superintelligence in five to twenty years, arguing that today’s models generalize “100x worse than humans” and that true AGI requires a new ML paradigm built around super-fast continual learning rather than ever-larger oracles. He signals that scaling will keep delivering gains but cannot close the remaining gap, positioning SSI — with its focused research compute — as the place pursuing the conceptual breakthroughs needed beyond current deep-learning methods.
  • Pawel Huryn (Click here) — “DeepSeek-V3.2 is the worst frontier model I’ve recently tested with AI agents… None of them can follow simple instructions: output a valid chess move without adding anything else.” Huryn contrasts this with GPT-5.1 and Gemini-3 Pro, which consistently produce full chess games in strict SAN format — the same capability needed for clean JSON, tool calls, and structured outputs in production. His signal reinforces a clear hierarchy in agentic reliability, with GPT-5.1 and Gemini-3 Pro far ahead of open-source challengers like DeepSeek.
  • Marc Benioff (Click here) — “LLMs are the new disk drives: commodity infrastructure you hot-swap for whoever’s cheapest and best. The fantasy that the model is a moat just expired.” Benioff’s post is rapidly trending, crystallizing a growing belief among operators and investors that frontier models are becoming interchangeable components. His signal reframes the competitive landscape: differentiation is shifting away from the model itself and toward distribution, data, agents, and enterprise lock-in.
  • Burkov (Click here) — “NeurIPS 2025 Best Paper Award: Attention lets language models decide which tokens matter… but it has limitations.” Burkov highlights a new paper that isolates the impact of gating in transformers by testing more than 30 gating variants across dense and MoE models up to 15 billion parameters. The winning method inserts a learned gate on each attention head before concatenation, stabilizing training, reducing attention sinks, and delivering major long-context gains without redesigning the full architecture. His signal: small architectural tweaks — not just scale — are meaningfully advancing model stability and long-context performance.
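To make the gating idea concrete, here is a minimal NumPy sketch of per-head gated attention as Burkov describes it: a learned gate applied elementwise to each attention head’s output before concatenation. The gate parameterization below (a sigmoid of a linear projection of the input) and all names and shapes are illustrative assumptions, not the paper’s actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_multihead_attention(x, Wq, Wk, Wv, gate_w, gate_b):
    """Self-attention where each head's output is gated before concatenation.

    x: (seq, d_model); Wq/Wk/Wv/gate_w: (heads, d_model, d_head); gate_b: (heads, d_head).
    """
    heads, d_model, d_head = Wq.shape
    outputs = []
    for h in range(heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        attn = softmax(q @ k.T / np.sqrt(d_head))       # (seq, seq) attention weights
        head_out = attn @ v                              # (seq, d_head) head output
        # Learned per-head gate: sigmoid of a linear projection of the input,
        # applied elementwise to the head output before concatenation.
        gate = 1.0 / (1.0 + np.exp(-(x @ gate_w[h] + gate_b[h])))
        outputs.append(gate * head_out)
    return np.concatenate(outputs, axis=-1)              # (seq, heads * d_head)
```

Because each gate can shrink a head’s contribution toward zero for uninformative tokens, a mechanism like this is one plausible way such gating could suppress attention sinks without redesigning the architecture.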

To go deeper, subscribe to my monthly Founder Insights newsletter, where I share lessons from the frontlines of company building, perspectives on AI’s future, and our industry’s road ahead.

Originally published on LinkedIn.
