A People-First View of the AI Economy
Today marks nine months since ChatGPT was released, and six weeks since we announced our AI Start seed fund. Based on our conversations with scores of inception- and early-stage AI founders, and hundreds of leading CXOs, I can attest that we are definitely in exuberant times. In less than a year, AI investments have become de rigueur in any portfolio, new private-company unicorns are being created every week, and the idea that AI will drive a stock market rebound is taking root. People outside of tech are becoming familiar with new vocabulary. Large language models. ChatGPT. Deep-learning algorithms. Neural networks. Reasoning engines. Inference. Prompt engineering. Co-pilots. Leading strategists and thinkers are sharing their views on how AI will transform business, unlock potential, and contribute to human flourishing.
While there are still many unknowns, and it is prudent for us to be aware of the risks as well as the potential of any new technology (Oppenheimer, anyone?), one firm conviction makes me optimistic. We are guided by a People-First philosophy at Mayfield, one in which the start-up founder’s bold vision elevates the customer of their product and ignites a community. When applied to AI, People-First has even more powerful resonance.
I believe that two dynamics will combine to establish AI as a powerful force that will allow any human to become what I call Human2—as in Human Squared.
First, our main form of interacting with computing devices will change. It will become conversational. Whereas we once relied on a command line, then the GUI, the browser, and the mobile device, we are now going to primarily communicate with computers through rich and layered conversations. The impact of that change will be compounded by a second one: for the first time, technology will be able to perform cognitive tasks that augment our own capabilities. Rather than merely speed up and automate repetitive tasks, AI will generate net new things much like humans do. The result is that we'll be able to multiply our own capabilities with a human-like co-pilot—or teammate, or coach, or assistant, or genie. AI x Human = Human2. And precisely because the potential and power of AI is so great, the need to focus on responsible development is paramount.
We have customized our People-First framework to apply to AI companies and are using it to guide our investment decisions. Today, we are publishing the five key pillars of that framework in the spirit of fostering responsible AI investing:
Mission and Values Count
Founding values drive culture. They are not something that can be bolted on as a company grows. We saw this in the missions of three of our most successful companies over the last decade.
Improving people’s lives with the best transportation
Putting people at the heart of commerce & empowering everyone to thrive
Building the infrastructure that enables innovation
This time around, we are having similar discussions with AI-first founders to see if they have a human-centric mission and authentic values. We want to understand what drives their thinking about the impact of their technology and ensure we’re aligned.
Gen AI has to be in Your Start-up DNA
The recent explosion in AI has been driven by innovative thinking by researchers, model builders, ethicists, and technologists. We believe that founders who have been steeped in that world understand how to design and build People-First AI businesses. So when we meet with founders, we are looking for:
A fundamental belief in AI as an augmenting, not a replacing, force: a view of AI as a teammate or even a co-founder;
An AI-native founding team, which has worked in the academic or applied Gen AI field, or one that has a unique insertion point into the generative AI wave;
A passion for design and user experience that brings invisible AI capabilities into all human-computer interactions and workflows;
Solutions that are powered by generative AI elements such as LLMs, proprietary models and datasets, and chat-like natural language interfaces;
An overall value proposition that involves the cognitive offloading of repetitive tasks.
Trust and Safety Cannot be an Afterthought
As we already know, AI has some harmful effects. Those we have identified include hallucinations, data poisoning, lack of transparency, inequity, injustice, bias, deepfakes, IP and copyright infringement, and violations of privacy and security. We are asking founders to evaluate the trustworthiness of the models driving their innovation, and encouraging them to look at pioneering work on holistic model evaluation such as that being done at Stanford. We believe founders need to evaluate this not only at the time of model selection but across the whole lifecycle of a model, from development and testing through deployment. At the same time, compliance with the growing regime of regulations, guidelines, and frameworks for the responsible use of AI is paramount.
Data Privacy is a Human Right
We believe that privacy requires its own focus and cannot simply be subsumed under trust and safety. Fortunately, given the myriad regulations that have emerged in recent years (CCPA, DGA, DMA, DPA, GDPR, PIPA, and PDPO), companies are already working on putting data controls in place. This is especially important in the age of generative AI, when models produce new data from training sets and the unauthorized use of training data has become a significant intellectual property concern. Regulations for the ethical use of data, which provide assurance and risk management, are now emerging across the globe. Governance areas that have to be addressed include: discovery and inventory of all data; detection and classification of sensitive data; understanding of model access and entitlements by users; consent, legal basis, retention, and residency; and data quality and lineage. Paying attention to these areas is critical. We are asking founders to do so and encouraging them to build guardrails now. It will be too hard to act once the proverbial data horse has left the barn.
Superhuman Impact can be Scored
We believe that People-First AI will truly elevate humans, and we are working on a design framework to measure that potential when meeting founders. Going back to our company examples, Lyft, Poshmark, and HashiCorp elevated drivers, seller stylists, and cloud practitioners respectively, enabling them to grow into vibrant communities. These companies had to make tough decisions to stick with their commitments, but ultimately they were rewarded by the satisfaction of having achieved their missions of empowering and elevating people.