Takeaways from our wine and cheese event at Mayfield

Blog
11.2023

How Is the AI Landscape Evolving for Startups?

Last week we hosted a small wine and cheese event here at the Mayfield offices to discuss OpenAI’s slew of recent announcements. The core topic was whether this was an opportunity for startups or a death knell. The general feeling was that startup opportunities are shrinking – but there will still be segments of the market where startups have the opportunity to dominate. There was also plenty of debate – the hallmark of a still-nascent market.

Opportunities for the Big Players

  • The big players will acquire more companies – The Mosaic acquisition was very strategic; expect the same kinds of acquisitions across the market
  • There will be lipstick on every offering – Microsoft and other large players will put copilots on everything
  • OpenAI still wins the day on quality of output for now – It’s currently setting the gold standard
  • Enterprises will look for the safest bet possible – They will go for the most trusted vendors, and there’s a lot of inertia in that direction

Opportunities for Startups

  • The surface area is too large for the big players to cover entirely – There are too many features, verticals, and niche use cases for OpenAI to cover everything
  • Data is still problematic – Virtual Private Cloud deployments are the only way to keep enterprise data contained and out of the training sets of OpenAI and other proprietary models
  • In the IoT space there is a lot of fragmentation: fine-grained features for different use cases (e.g. doorbells), each with its own stack around it. The agent layer – next-gen autopilots designed for unique use cases – will resemble that, and the fragmentation is an opportunity for startups
  • Don’t assume that number one today is number one tomorrow – leaders typically fall short and cannot own the entire market
  • Don’t sleep on open source – Closed source is winning the day for the moment, but open source will out-innovate with the size and scale of user contributions. Open source model creators can move with a speed and cost-efficiency that OpenAI cannot match
  • People prototype with OpenAI, but it’s expensive – Don’t overengineer around OpenAI. Be ready to move to another platform (like Mistral). You likely won’t see a negative impact, and you’ll improve your cost basis
  • Models will continue to shrink as they become more focused (bigger/better vs. small/right data). Bigger won’t always be better – the right data in a smaller model is likely the winner in the end, and an “inferior” model is good enough for many buyers. In spite of the best-of-breed construct, OK is OK: meeting the minimum makes the juice worth the squeeze
  • OpenAI today is what MySpace was in the early internet days – In the end, who owns the GUI doesn’t matter. The stickiness is fundamentally where the data lies
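The “prototype with OpenAI, be ready to move” advice above amounts to keeping a thin abstraction between your application and any one model vendor. A minimal sketch in Python follows; the class names, the `complete` signature, and the stubbed responses are all illustrative assumptions, not any real library’s API.

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Minimal interface so application code never hard-codes a vendor."""

    def complete(self, prompt: str) -> str: ...


class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        # In real code this would call the OpenAI API; stubbed for the sketch.
        return f"[openai] {prompt}"


class MistralProvider:
    def complete(self, prompt: str) -> str:
        # Swap-in alternative: same interface, different (cheaper) backend.
        return f"[mistral] {prompt}"


def draft_email(provider: ChatProvider, topic: str) -> str:
    # Application logic depends only on the interface, not the vendor.
    return provider.complete(f"Draft a short email about {topic}.")
```

With this shape, moving from one platform to another is a one-line change at the call site – `draft_email(MistralProvider(), "pricing")` instead of `draft_email(OpenAIProvider(), "pricing")` – rather than a rewrite.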

Predictions

  • Will we have one or many hyperscalers? Just as the cloud gave rise to the multi-cloud thesis, AI will likely follow the same path. At a minimum there will be a duopoly (with Cohere and Anthropic as numbers 2 and 3). However, abstractions built around OpenAI may kill interoperability with Cohere and Anthropic
  • AGI is cool but it will always be 20 years out
  • Google has 15% market share, but half of that is e-commerce. Their true AI enterprise reach is very small. The leaders are really the leaders today
  • Data is king – If there is free access to data and OpenAI is taking all of it, we’re going to have to counter that. Reddit and Twitter won’t give OpenAI access
  • The “Year of the Agents” will be the next five years – Agents won’t be fully autonomous for years; the first instantiation will be copilot-style examples. This is where the CAMEL paper comes into play – it’s an instruction set for fine-tuning (one LLM plays the role of the assistant, another plays the role of the human; they banter back and forth and train one another – agents training agents can turn into a loop)
  • Guardrails are really problematic today – If agents are training agents, who is held accountable when it blows up? Probably the one who profits. The big opportunity will be in creating guardrails (locks on your door in a nice neighborhood) for protection. This will be everywhere across every market in every environment
  • AI will mirror cyber – There will be user failure as well as system failure. Think of the agent either as a “being” or as a “thing” – only a being can be held responsible
  • Goal-Seeking – It’s not about getting an email out, it’s about asking what the email is trying to do. This line of thought sets agents up for interesting but high-risk stuff, and that fundamentally is what AGI is all about
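The CAMEL-style role-play mentioned in the predictions can be pictured as a simple turn-taking loop between two model “roles,” with the resulting transcript becoming fine-tuning data. A toy sketch follows; the `assistant_respond` and `user_respond` functions are stand-in stubs for real model calls, and the turn structure is a simplification of the paper’s actual protocol.

```python
def assistant_respond(message: str) -> str:
    # Stand-in for an LLM playing the assistant role.
    return f"Assistant: here is a step toward '{message}'"


def user_respond(message: str) -> str:
    # Stand-in for an LLM playing the human/user role.
    return f"User: refine that – {message}"


def role_play(task: str, turns: int = 3) -> list[str]:
    """Alternate assistant/user turns; the transcript is the training data."""
    transcript = []
    message = task
    for _ in range(turns):
        message = assistant_respond(message)
        transcript.append(message)
        message = user_respond(message)
        transcript.append(message)
    return transcript


dialogue = role_play("plan a product launch")
```

The loop is also where the accountability question in the guardrails bullet bites: once agents generate each other’s training data, nothing in the loop itself checks whether the transcript drifted off-task.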