Blog
12.2025

Future of AI Policy and Governance

This article is part of our Future of AI series from Imagination in Action 2025 Silicon Valley Summit — where founders, leaders, and investors explored what’s next for AI. Explore the magazine.

Winning Over Lawmakers and Public Means Earning Trust

At home and abroad, companies working with AI should keep close tabs on how their AI products or services are perceived—by employees, the public, and especially lawmakers.

AI remains a widely misunderstood technology, and startups attempting to expand into new markets must navigate rapidly changing legal and regulatory environments.

Meanwhile, breathless stories in the media continue to stoke fears about AI replacing workers, criminals misusing AI to commit fraud, and students relying on AI to cheat on tests. 

Setting the record straight is critical. If AI leaders can’t clearly communicate the advantages of AI, legislative and consumer backlash is a real possibility.

Navigating complex regulatory environments

Companies operating globally already must adhere to varying rules in Europe, the United States, China, and other countries—some more complex and burdensome than others. 

According to Jeff Hancock, the director of the Social Media Lab at Stanford, Europe emphasizes precautionary oversight, with lots of regulation but spotty enforcement. The U.S. leans toward market-driven innovation and more lax regulation, but with strong enforcement of the regulations that do exist. China prioritizes state control and social stability, with strict regulation and enforcement. And other Asian markets, like Japan and Korea, have very little regulation or enforcement, because they want to encourage as much innovation as possible.

Regulation affects many aspects of the technological supply chain, creating uncertainty for companies unprepared to deal with the varied legal landscape. For example, in recent years, the U.S. and China have each attempted to prevent NVIDIA and AMD from selling high-performance GPUs in China.

“We have shifted our entire supply chain to make sure that any final aspect of the supply chain is not done in China,” said Rajiv Khemani, co-founder and CEO of Auradine. “There is some uncertainty with regard to what could happen.” 

Even within the U.S., state-level regulations are proliferating. Illinois recently passed the Wellness and Oversight for Psychological Resources Act, which bans the use of chatbots to provide therapy without medical supervision. The law also prohibits other AI uses in mental health care, even by licensed professionals. Critics argue it unfairly prevents legitimate AI-powered applications from helping people who could benefit from them. It’s a cautionary tale about what happens when innovation outpaces public understanding.

Sovereign AI and military defense

Adding to the confusion is the rise of sovereign AI, or AI models that are exclusively controlled by a country in order to adhere to its specific national interests. 

Howard Wright, head of the startup ecosystem for NVIDIA, described sovereign AI as an enormous opportunity.

As they compete to become the world’s dominant AI power, China and the United States have begun adopting different sovereign AI strategies. Other countries are pursuing sovereign AI as well, building their own LLMs that run in secure domestic datacenters in order to protect state secrets.

AI also has a role to play in warfighting. The war in Ukraine shows that the era of autonomous weapons is here. In mid-2025, Ukraine launched an unprecedented and orchestrated attack on five Russian airbases by AI-enhanced and semi-autonomous drones. The attack damaged or destroyed 41 aircraft. 


Military experts now agree that AI may become the catalyst for a new autonomous weapons age. 

“We’re going to see more AI tooling used to control and orchestrate drone attacks,” said U.S. Air Force Col. Jason Hansberger. “But if we build this complexity and this level of mass (attacks), then we’re going to need a lot of AI tools to help in decision making.” 

“The solution to open source, AI, and any potential competition that may arise out of that, is open data.”

Jose Plehn, BrightQuery

Alongside the growth of sovereign AI is a fear that international competition might stifle the overall growth of the AI industry by limiting global information sharing.

International rivalries might be a speed bump, but won’t fundamentally change the advantage of open source, said Jose Plehn, CEO of BrightQuery.

“I strongly believe that the solution to open source, AI, and any potential competition that may arise out of that, is open data,”  Plehn said. “Data and knowledge have no borders. Once knowledge is out there, it’s out there. It’s very hard to contain.”

Start by building trust

When it comes to influencing policy and winning over lawmakers and the public, the best thing startups can do is earn trust. Achieving that starts with proving that their AI works the way it’s supposed to and isn’t a threat, Anna Makanju, vice president of Global Affairs for OpenAI, said during one panel discussion.  

“There’s a business imperative for these things to be safe.”

Anna Makanju, OpenAI

“The thing that gives me the most optimism (about AI) is that there’s a business imperative for these things to be safe,” said Makanju, a former White House foreign affairs official. “Ultimately, consumers and governments don’t want to integrate something that’s going to be unsafe.”


Another way to build trust, Makanju added, is to design AI systems that support core human needs, specifically autonomy and competence. When serving regulated sectors, such as financial services and healthcare, companies must incorporate accountability, traceability, and human decision-making into their systems.

One way to understand a market’s regulatory environment is to hire experts with a global political perspective. Quentin Clark, managing director at venture firm General Catalyst, said during a panel discussion that his company works with the Tony Blair Institute for Global Change to help think through the impacts of AI applications. 

Real risks and anxieties remain

Criminals continue to devise new ways to use AI to defraud businesses and individuals. Parents and educators are alarmed about how some schoolchildren use AI to cheat on tests. Workers around the world still fear AI will cast them into unemployment.

Anxiety over potential job loss exists even in the tech sector.

“When you look at AI and its capabilities without proper context, you feel like AI might be replacing software engineers,” said Ursheet Parikh, a partner at venture firm Mayfield. “AI agents are writing code and fixing bugs… but if AI can do 50% of your work, you now have the other 50% of your time to review the outcome of the AI and train the AI to do better next time.”

Parikh said that everyone knows AI-generated code is not perfect and requires human review.

Others at the conference predicted AI automation will mean job augmentation rather than job loss. Several speakers said they see a day coming when every worker will supervise and manage their own groups of AI-agent underlings. But that optimistic future requires clear communication now.

The cybersecurity opportunity

Defending against AI-powered crime represents an opportunity for AI startups. Eric Schmidt, the former Google CEO, said the world faces three key potential threats: 1) a new kind of bio-weapon, 2) misinformation amplified by AI-powered deepfakes, which can undermine democracies, and 3) more sophisticated cyberattacks.

“If you can write code, you can also write cyber attacks,” Schmidt said. “AI models program better than I ever did.”

AI security companies need to become skilled AI ambassadors to help important constituencies understand the nature of the problems and how AI can help. 

“AI models program better than I ever did.”

Eric Schmidt

“Everybody has a firewall to make sure the bad guys aren’t coming in, and everybody is using some type of web proxies or security service edge to keep the data safe,” said Moinul Khan, cofounder and CEO of Aurascape, an AI security company. “AI (security) tools are different. They’re not static software applications. AI tools are really dynamic. They’re learning. They’re adapting and evolving in real time.”

To deal with dynamic, AI-powered attackers, enterprises need a dynamic, AI-powered defense, Khan added. Traditional cybersecurity defenses won’t cut it.

“Your existing enterprise network, the inline stack that is supposed to protect you? 90% of the time they’re completely lost.”

Moinul Khan

Charting a path forward

The companies that successfully navigate AI’s regulatory future will share common characteristics: they’ll prioritize safety and transparency, invest in policy expertise early, communicate benefits clearly to multiple audiences, and build compliance into their products from day one.

Most importantly, they’ll recognize that regulation isn’t just a constraint—it can be a competitive moat. The founders who master regulatory complexity will be positioned to scale globally while others remain stuck in single markets. They’ll be the ones who earned trust, educated stakeholders, and turned policy challenges into strategic advantages.

Founder Takeaways

  • Global AI regulation is fragmenting rapidly across the U.S., Europe, China, and Asia. Understanding these differences isn’t optional—it’s survival.
  • Earning trust from policymakers and consumers is the foundation for influencing AI policy. Safety and transparency are business imperatives, not just compliance boxes.
  • AI leaders must become educators, not just innovators. The ability to communicate benefits clearly can prevent legislative backlash.


Explore The Future of AI | This article is part of our Future of AI series from Imagination in Action 2025 Silicon Valley Summit — where founders, leaders, and investors explored the next revolution of AI. We explored how AI is changing scientific research, creating new startup economics, straining power grids, and challenging us to rethink everything from enterprise software to regulatory frameworks. Dive into the Future of AI magazine to see the full picture.
