This article is part of our Future of AI series from Imagination in Action 2025 Silicon Valley Summit — where founders, leaders, and investors explored what’s next for AI. Explore the magazine.
In 2017, Google scientists published “Attention Is All You Need,” the paper that introduced the transformer architecture. That seminal moment in AI research was the jumping-off point for the wild success of companies like OpenAI and Anthropic, and helped the AI industry explode from an estimated $16 billion in revenue in 2017 to as much as $757 billion this year.
Generative AI tools like ChatGPT offer widespread benefits for the daily lives of everyday consumers. Indeed, consumer AI benefits are currently outpacing those enjoyed within the enterprise.
A big reason: generative AI tools like ChatGPT are remarkably easy for ordinary consumers to use. Everyday users can’t get enough of doing research, running semantic searches, analyzing their finances, or just entertaining themselves and their friends with AI-generated images.
One stat sums this dynamic up best: In 2024 alone, generative AI tools accounted for $97 billion in “consumer surplus,” the gap between what consumers would be willing to pay for something and what they actually pay.
Opening up new experiences
But it’s not just OpenAI and rivals who are driving massive consumer AI adoption. Estimates show that, globally, as many as 1.7 billion people use AI tools, with 500 million using them daily.
And while OpenAI generates about 40% of global consumer AI spend, a recent report suggests that more than 130 brands are providing AI tools in areas like video, business, voice and transcription, and writing.
Agents are likely one of the next big growth areas. These will offer far-reaching use cases and lead to many new consumer AI-oriented businesses that span shopping, healthcare, entertainment, and more.
For example, Manish Chandra, cofounder of Poshmark, envisions a personal style agent that knows who you are, what you like to wear, what appointments you have that day, and even what mood you’re in—and is able to use all that information to recommend outfits from your closet each morning.
Meta’s head of business AI, Clara Shih, also predicted an imminent future in which consumers employ AI assistants to help them find the specific products they want.
“Personality is incredibly important in how users receive a model.”
Madhavi Sewak, Google DeepMind
To do that effectively, developers of AI agents will need to figure out how to get them to respond appropriately to a variety of wildly different requests with the right “personality.” For instance, someone asking for help on a math assignment expects a very different response from when they’re asking a model to generate some code. Yet it’s often the same model powering both responses.
“Personality is incredibly important in how users receive a model,” said Madhavi Sewak, a distinguished researcher at Google DeepMind. And yet it remains an unsolved problem. “We don’t really yet know how exactly we can train personalities into models, but models obviously have personalities,” Sewak said.
Opportunities in personalized medicine
Experts see potential in using AI to transform generic health data and advice into hyper-personalized care. Whether it is offering clinical support or analyzing trends for mental health professionals, AI systems can give more tailored insights—sometimes with more accuracy than professionals.
General Catalyst managing director Quentin Clark said he expects one solution to America’s often-dysfunctional medical system could be every consumer eventually having an AI concierge—a virtual doctor.
Erick Tseng, CEO of Next Chapter, is developing an AI-assisted mental health coaching service that uses a custom model trained on bespoke clinical data.
Tseng tells an anecdote in which a caregiver he knew suspected early dementia in his father. While a general practitioner dismissed the concern, an AI flagged early signs of Alzheimer’s. A specialist later confirmed the diagnosis.
Backed by Sam Altman, Retro Biosciences aims to find therapeutics that can reverse aging. Two of the company’s programs are particularly notable for tackling aging itself through therapeutics for age-related diseases.
Using AI research, the company is working on treatments that target autophagy, the natural recycling process by which cells break down old proteins. That process can stall in advanced age and in neurodegenerative diseases like Alzheimer’s. Another program is a cell-replacement therapy that swaps out old microglia (one of the four main cell types in the brain) for new, “zero age” cells created in the lab, according to CEO Joe Betts-LaCroix.
Concerns about privacy and data
AI is democratizing expertise in health care, offering second opinions to those who might never afford them or who face other obstacles in getting care.
But many of these systems aren’t HIPAA-compliant, and sensitive data is often fed into general-purpose models. That’s a risk for patients and the companies alike.
The paradox is that the more helpful the AI, the more vulnerable the patient. That’s why keeping humans in the loop will become more critical than ever: using AI to assist and augment health professionals rather than replace them. For instance, a therapist could enlist AI to monitor a session and surface care prompts in its summaries. It’s a collaborative model that builds trust, with final decisions and nuanced care still residing with a human.
“If we can build AI and protect our privacy, there’s a big market for that.”
Vanessa Parli, Stanford
Outside of healthcare, it’s long been said that when we use free technologies, we are the product—and that raises additional issues about personal data.
“When Facebook and Google first [started],” said Vanessa Parli, the director of research programs at Stanford HAI, “we didn’t even [ask], ‘how are they using my data?’” But now we’re less trusting and demand more transparency and protection.
While consumers are making it clear they’re eager for more and more things to do with AI, they’re also signaling their wariness about giving away too much of their personal data. As Sichao Wang, a senior manager at Cisco, put it, “we’re actually playing on…very risky ground” if there’s no clear security or privacy foundation built into today’s (and tomorrow’s) AI tools.
That idea was echoed by Sandy Pentland, a professor of Media Arts & Sciences at Stanford. There’s a growing fear, Pentland said, that although AI agents give consumers more power, they also increase the risk of trust and security breaches.
That was the motivation behind LoyalAgents, a partnership between Consumer Reports and the Stanford Digital Economy Lab that aims to “make AI agents secure, loyal, and effective advocates for consumers everywhere.”
For Parli, there are lessons to be learned from industries like healthcare and finance, which have strong mandates to protect consumers’ privacy. Now, she believes, there’s a big opportunity to incorporate that mandate across consumer AI without harming corporate business interests. “If we can build AI and…protect our privacy,” she said, “there’s a big market for that, and I would prefer that it’s about educating the public.”
Going forward boldly
It’s already clear that consumers want AI and that their enthusiasm will only create more business opportunities. Companies seeking to seize that opportunity should build boldly but responsibly.
Those that thrive in this burgeoning market will be those that both create tools people want and take their privacy and security seriously. They’ll be the ones that recognize AI’s power to hyper-personalize experiences, understanding individual mood, health, and context, while building the trust frameworks necessary to make personalization feel safe rather than invasive.
