Podcast / CXO of the Future

Naveen Zutshi, CIO, Databricks

Today we welcome Naveen Zutshi, Chief Information Officer at Databricks, to The CXO Journey to the AI Future podcast. He joined Databricks in January 2022, and is responsible for their IT solutions, driving transformational programs to help the company scale its consumption-based business globally. He’s on the board of advisors for several fast-growing startups like Propelo & Torii, and more established companies including Zoom and Rubrik. He’s also an investor in many fast-growing startups and early-stage VCs.

Question 1: First Job: Could you talk a little bit about your background and how you got to the position where you are now?

I’m a computer engineer, so I love coding. However, I realized early on that I was perhaps a slightly better manager. So I moved into management, mostly engineering management, while staying technical, and that’s where I found my true calling.

In between, I’ve been at startups and in large companies. What’s been interesting is that you can sometimes take what you learned at high-tech companies and leverage it in other environments, such as retail. For example, I remember the work I did at Gap because it was such a huge opportunity: there was a lot of legacy, and there was huge improvement to be made both on the e-commerce side and on the store side. In every job I’ve done, it has been amazing to work with so many smart people, including some of the incredible startup founders I’ve had a chance to partner with.

Question 2: Generative AI: Everybody’s talking about it. Maybe you can start by sharing your own view of how highly CIOs should be prioritizing Generative AI. Is this a unique opportunity? We’ve all been through the mobile era, the social era, and the cloud era. Is this different? Could you put a little context around Generative AI?

I can tell you a little bit of my own story. Obviously, for me, the number is ’22, because that’s when ChatGPT came out. I spent hours and hours asking all sorts of questions and getting some amazing answers. The initial hype was incredible, and at least for me personally, it felt like a seminal moment similar to, or maybe stronger than, any one moment that happened for mobile or the internet.

However, over the last year we’ve started to see what it can do, and right now there’s a disillusionment that we’re going through as an industry, but all the same I still feel incredibly confident that the future is bright for GenAI. In the meantime, there are a few thorny problems that we need to solve first.

Question 3: Early learnings. What are some of the early learnings when you think about Generative AI use cases there internally at Databricks? I’m sure you talk to CIOs in your role there. What are some of the early findings about how to become excellent?

The first use case I’ve seen the most, both internally and with our customers, has been copilots, typically focused on software. We have implemented one for all our engineers in R&D.

We see that a lot of our customers have rolled out different versions of it, whether it’s GitHub Copilot, Copilot X, or others. And they’ve seen varying amounts of benefit, anywhere from 10-40%.

And I think there are other interesting use cases around software development. For example, test migration and unit-test creation, and migration of code from one language to another. That’s been a really good one that I’ve found even for applications like Salesforce. So that’s starting to become a more established, more mature use case, and companies are mandating that every developer use a copilot in their daily work, which has become the new standard, the new expectation around productivity.

The second real use case I’ve been seeing is summarization. We want to drive summarization and reduce hallucinations using RAG. And ultimately, we’re starting to see this move toward intent-based action: you take an intent and have a multi-step process that actually achieves that intent. I think that’s hopefully where the industry is moving. We’re seeing some early examples of that in the B2C space, and I’m assuming those will also translate into the B2B space.

Question 4: Gaps: So if that’s the future, where we have intent-based agents actually resulting in true productivity or workflow redesign, what are the risks in getting to that? What are some of the obstacles, issues, or maybe even gaps in the technology that you believe need to be addressed? I know this isn’t a technology-only issue. What are the headwinds overall?

Let’s start with RAG use cases: the biggest obstacle today is still reliability. You measure the reliability of the answers coming back, and in an enterprise setting your reliability needs to be pretty high, even close to perfect. With GenAI use cases, you can reduce hallucinations, but it’s hard to achieve perfection. That’s one area.
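The reliability measurement he describes can be made concrete. Below is a rough illustration of one simple approach (my own sketch, not a Databricks tool): a grounding check that flags an answer as potentially hallucinated when too few of its content words appear anywhere in the retrieved context.

```python
# Toy grounding check: score how much of an answer's content
# is actually supported by the retrieved context.

def grounding_score(answer: str, context: str) -> float:
    """Fraction of the answer's content words (length > 3) found in the context."""
    answer_words = {w for w in answer.lower().split() if len(w) > 3}
    context_words = set(context.lower().split())
    if not answer_words:
        return 1.0  # an empty answer makes no unsupported claims
    return len(answer_words & context_words) / len(answer_words)

context = "the refund policy allows returns within thirty days of purchase"
good = "returns are allowed within thirty days of purchase"
bad = "refunds are guaranteed forever with free shipping included"

print(round(grounding_score(good, context), 2))  # mostly grounded
print(round(grounding_score(bad, context), 2))   # ungrounded: likely hallucination
```

Real systems use far stronger checks (entailment models, citation verification), but even a crude score like this lets you measure reliability over a test set rather than guess at it.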

The other area, unfortunately, is still the completeness of data. A lot of customers still talk about that: “I don’t have all my data sets in a common form.” For example, we work with a large air carrier. Does a company like that have all of its data in a data lake or lakehouse paradigm? Is the access control set up? So that’s another area. For instance, if I took this unstructured data and ran it through a RAG model, and then you asked it a question and I asked it a question, the answers should be privacy-preserving for you versus me.
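The privacy-preserving point can be sketched in code. This is a hypothetical illustration (all names are my own) of retrieval that filters documents by the asking user’s access groups before ranking, so the same question yields different context for different users:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_groups: set = field(default_factory=set)  # ACL carried with the data

@dataclass
class User:
    name: str
    groups: set

def retrieve(query: str, corpus: list, user: User, k: int = 3) -> list:
    """Toy keyword retriever that enforces access control *before* ranking,
    so the RAG model never sees documents the asking user may not read."""
    visible = [d for d in corpus if d.allowed_groups & user.groups]
    terms = set(query.lower().split())
    return sorted(visible,
                  key=lambda d: len(terms & set(d.text.lower().split())),
                  reverse=True)[:k]

corpus = [
    Document("Q3 revenue forecast for the airline", {"finance"}),
    Document("Public flight schedule changes", {"finance", "support"}),
]
alice = User("alice", {"finance"})
bob = User("bob", {"support"})

print([d.text for d in retrieve("revenue forecast", corpus, alice)])
print([d.text for d in retrieve("revenue forecast", corpus, bob)])
```

The key design choice is filtering before retrieval rather than after generation: once a restricted document reaches the model’s context window, its contents can leak into the answer.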

And then, if you think about each summary, or each step of the process: if the reliability of each step is very high, can I chain them together into a common model? Can I drive an intent through this sequence of steps? For example, could I have it flawlessly plan an entire summer trip if I provide all the right parameters? Would it book the hotels, the activities, the flights, the meals?

I think that the validation step with humans in the loop is something that will still be required before it becomes truly autonomous. I think that we’re a ways from a truly autonomous state.
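The chaining-with-validation idea above can be sketched as follows. This is a minimal, hypothetical illustration (the step functions and names are my own): each step proposes a change, and a human-in-the-loop checkpoint must approve it before the chain commits and advances.

```python
# Minimal sketch of a multi-step chain with a human-in-the-loop
# checkpoint, since per-step reliability is imperfect.

def plan_flights(trip: dict) -> dict:
    trip["flights"] = f"SFO -> {trip['destination']} round trip"
    return trip

def plan_hotel(trip: dict) -> dict:
    trip["hotel"] = f"4 nights in {trip['destination']}"
    return trip

def run_chain(trip: dict, steps: list, approve) -> tuple:
    """Run steps in order; each proposal needs approval before committing."""
    for step in steps:
        proposal = step(dict(trip))            # work on a copy
        if approve(step.__name__, proposal):   # human (or policy) checkpoint
            trip = proposal                    # accepted: commit, continue
        else:
            return trip, False                 # rejected: stop with partial plan
    return trip, True

auto_approve = lambda name, proposal: True
trip, done = run_chain({"destination": "Lisbon"},
                       [plan_flights, plan_hotel], auto_approve)
print(done, trip)
```

Replacing `auto_approve` with a prompt to a real reviewer is what “human in the loop” means here; a truly autonomous agent would need each step reliable enough to drop that checkpoint entirely.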

New metrics: We’ve watched the hype slow down as people don’t see the results they want. What are the necessary success metrics to prevent people from falling into that trough of disillusionment?

One thing I would say is to think about AI as a whole. What has the success been? The success has been dramatic, right? There has been dramatic success in revenue growth and profitability.

Companies now have a great many AI use cases in production overall, and they have seen revenue growth as well as profitability improvements as a result. In fact, many companies are essentially AI-based businesses as a result.

The same thing is going to happen with GenAI use cases. I think the productivity benefit is the biggest one. The McKinsey report says 30% of all work could be automated by GenAI. I’m not sure whether that’s true yet; it’s a big number. But I would expect people to start with some core use cases where they develop some level of success at scale.

Question 5: One of the questions that CIOs often ask is “buy versus build.” The idea here is that this is a new technology we need to learn. We’ve got people and staffing issues. But there are also existing vendors, and they’re offering AI-capable functions. Copilot, for example, as you mentioned. Do I do a little of both? Do I become an expert at building? What’s the “buy versus build” thesis that you believe we should be thinking about?

The way I think about this today is that there are two vectors: one is transitory, and the other one may be more permanent.

The first vector is thinking something like this: “Hey, if the field is still nascent, my SaaS vendors or my new startups can’t achieve scale yet, or they’re not working on the problem that I’m working on. Maybe I can experiment and build something in production. Maybe that will last for a year or two. By that time my SaaS vendors will catch up, and I will leverage their products.”

The second vector is that I have proprietary data, or have proprietary algorithms that I want to use against private data. And I don’t want to share this data with my SaaS vendor. I want to build it on a framework that is my own platform.

And I think there will be continued use of that, you know. That is Bloomberg’s use case. And it’s not just tech companies. I think non-technology companies will also have some use cases like that.

And I think on the buy side, what we’ve primarily seen is newer vendors coming into play whose GenAI capabilities we can leverage, and that has been very interesting. So internally we’re using both buy and build.

So it’s a hybrid model and my expectation is that over time, with internal use cases, you will do more buy than build, but you will have some level of build done for sure.

Question 6: Responsible AI: This is a concept we’ve heard about in the press. There’s this idea around potential bias, hallucinations, privacy risk, as you noted before. How do you think about it? What advice would you give to a CIO?

I’ll mostly speak to B2B companies; I think B2C use cases may have additional privacy restrictions. For me, customer data is sacred. How are we ensuring that customer data is privacy-preserving? We talk to customers before we use their data for training. The same applies to internal employee data: there is personalization, and there is privacy on that data as well. So you have this confidential and sensitive data, and in practice you should have the right classifications on that data and preserve those classifications in your AI models as well. That’s one area.

The second area is policies. What kind of policies have you established? Make sure that you have really robust and clear policies established with legal and that you’ve educated your teams and your employees on those policies. There’s a lot of hype to use these tools, but you don’t want to inadvertently use them in the wrong manner.

Finally, having a good governance model in place is important as well. Depending on the use case, you may still have some hallucinations, and that could be acceptable, but in other use cases complete precision may be required. Understand what kind of use case you’re actually delivering. What kind of information do you want to create?

Bonus: Do you see an organizational role for responsible AI? And if so, on whose shoulders does it rest?

It’s hard for me to say right now. Today, legal, security, and IT are working together on this, but I’m assuming that in other companies, especially in B2C companies, that might become a specific role.

Naveen is currently the Chief Information Officer of Databricks. In this role he is responsible for Databricks’ information technology solutions, driving transformational programs to help the company scale its consumption-based business globally. Naveen is on the board of advisors for several fast-growing startups like Propelo & Torii, as well as established companies like Zoom and Rubrik. He’s also an investor in many high growth startups and early-stage VCs.

Naveen’s experience spans software development and infrastructure, leading organizations at companies ranging from the Fortune 500 to tech startups. Prior to Databricks, Naveen was CIO at Palo Alto Networks for six years, helping the company move to a cloud-based business and integrating over 17 acquired companies in that period. Before Palo Alto Networks, Naveen was Senior Vice President at Gap Inc., where he was responsible for the company’s infrastructure, operations, and information security organizations. Before Gap, Naveen spent time at SaaS startups and high-tech companies like Cisco.

He earned a BE in Computer Engineering from Bangalore University and an MBA from the University of Arkansas.
