Dan Elron, Managing Partner, Enabled Strategy

Today we welcome Dan Elron, Managing Partner, Enabled Strategy, to The CXO Journey to the AI Future podcast.
Until recently, Dan Elron was Managing Director of Corporate Strategy, Technology, and Innovation at Accenture, where he helped drive strategy for its technology business. He has advised many key clients, working with CEOs, corporate boards, and leading academics and policymakers across the globe.

Most of his client work has been in the high-tech, telecom, and financial services industries, and more recently, the automotive industry, as it began to integrate advanced digital technologies. He was also the Information Technology Industry Advisor for the World Economic Forum. For the past decade, he has worked on the impact of artificial intelligence across large enterprises, and anticipates significant disruption during the coming decade.

Question 1: Current Job: Could you tell us a little bit about your background and how you ended up where you are today?

I’ve been in management consulting for a long time. I recently retired from Accenture after many years of working in the tech and telecom industries, as well as with other large global enterprises. Eventually I began working on Accenture’s own business and technology strategy, its ecosystem, and its relationships with both large technology providers and startups.

Now I’m working with startups as a mentor, as an investor and advisor to investors, and with large organizations that are struggling to integrate the amazing wave of technology that’s coming to us, including today’s topic, Generative AI. But as you can imagine, that’s just the tip of the iceberg. Many other things will happen over the next 10 years, I think, not just in IT but in IT-enabled fields, including healthcare, biology, and agriculture.

So there’s a lot of interest today around leveraging technology for strategic value, and I really enjoy working with clients on that topic.

Question 2: Generative AI: You’ve had so many conversations at a strategic level with thought leaders and major corporate teams. Is there something unique about Generative AI, or AI in general, that you think warrants higher priority from a leadership standpoint? How much priority do you think we should be putting on this?

I’m reminded of Yuval Harari, who wrote the book Sapiens. There’s something he said that really struck me; he said that using Gen AI is “hacking the operating system of humans.”

Language is the operating system of humans, and if we’re hacking that, even if it’s primitive hacking today, we’re impacting a lot of the information flow among humans, a lot of the transactions between humans, and eventually a lot of the emotions between humans. Ultimately, all organizations are made up of humans, and they don’t work very well if the humans don’t work well with each other and communicate with each other.

So here we are, introducing a new agent. An automated agent that speaks our language, generates our language, and can help or hinder how an organization works. So because of that, I think it’s extremely important. I don’t think anybody understands exactly where it’s going to go. We’re early in the technology’s development; every month brings something new. And we’re fascinated by what’s going on today. But it behooves us to think about the likely evolution over the next 5 or 10 years, which is very difficult to predict. It will likely extend from language to images, and from the digital to the physical world. But it is fundamentally disruptive.

So the answer to your question is, yes, I think it’s extremely important. It doesn’t necessarily mean that this is where you need to spend the most money as an enterprise, but it probably means that this is where you need to spend the most thinking as an enterprise.

Question 3: Early Learnings: How have you executed a Gen AI strategy for your team or organization and what initial learnings or initiatives can you share?

I was shocked by the early use cases even a year ago. Very basic things, such as feeding support tickets into a model to cut the time needed to figure out those problems and to reduce the level of experience required to address them, were truly astonishing.

So very impressive gains came very quickly, and not necessarily in the areas we predicted. That requires experimentation, which I think most folks are doing right now, and it requires creativity. There are a lot of underappreciated sources of knowledge that you can feed into Gen AI.
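
To make the support-ticket example above concrete, here is a minimal sketch of feeding a ticket into a large language model for triage. It is an illustration, not a description of any specific deployment: it assumes the OpenAI Python client, an API key in the environment, and a hypothetical ticket and model name.

```python
# A minimal, hypothetical sketch of LLM-based ticket triage.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative ticket text; in practice this would come from a ticketing system.
ticket = (
    "Router model X200 drops VPN connections roughly every 30 minutes "
    "after the 4.2.1 firmware update; rolling back the firmware fixes it."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    temperature=0,  # deterministic output suits triage better than creative text
    messages=[
        {
            "role": "system",
            "content": (
                "You are a senior support engineer. Classify this ticket's "
                "likely root-cause area and suggest a first diagnostic step."
            ),
        },
        {"role": "user", "content": ticket},
    ],
)

print(response.choices[0].message.content)
```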

And it’s important to note that I make a distinction between Gen AI and the rest of AI, including machine learning. These things integrate with each other and need to work together; we can’t focus only on Gen AI. But when you integrate them, you get amazing results. Not necessarily always the most strategic results in terms of market positioning, but certainly impactful in terms of speed, cost, and, as I said, the level of talent required to execute a process.

So I think the early learnings here are that it can be transformational for many processes; that it requires experimentation; and that it requires integration, as I said, whether with machine learning and associated AI technologies or with other technologies. If you’re in biotech, that means integrating with the latest learnings in science and biotech. And that’s something we’re beginning to see.

There’s an amazing difference between the few leaders and companies that have been at this for six months or a year and those that are just getting started today. So if you’re not doing a lot of that, you’re already behind.

The other lesson learned, and something that concerns me, is that when a lot of companies think about Gen AI, they think: “What should I do? Which department does it belong to? Customer care? Chatbots?” And they rarely think about the ecosystem as a whole.

Your suppliers are going to use the same technology, and you can work much better with them. Or you can sometimes be disintermediated by your suppliers if you don’t think about how they’re going to use Gen AI. And the same thing will happen with other partners: supply chain partners, government partners, your customers, etc.

Many of the use cases I see today are internally focused. We need to start looking outside. Statistically, odds are that somebody outside our ecosystem will have better ideas than we do about what to do with Gen AI.

In some cases, if leadership teams aren’t careful, companies will get disrupted. Because somebody will build a platform, and products and services will be consumed by some kind of automated agent that they design and control. And companies will have to respond.

So the early concern I have here is that you really need a holistic point of view. Everyone is looking at their competitors today, but consider the whole value chain and how you’re delivering value to customers. Who has more power? Whose power is going to be amplified by the use of Gen AI? And, unfortunately, whose power is going to be diminished because they still have too much friction? We’re making information more easily available, for example, so the competitive dynamics in many value chains are going to be disrupted. It’s just beginning now, and it’s going to accelerate.

Because of this, the role of the Chief Strategy Officer, which maybe wasn’t that significant five years ago, is becoming much more significant.

Question 4: Metrics: So how do you move a team to begin? You mentioned the Chief Strategy Officer. How do you create metrics that set an agenda? How will you prioritize where to deploy Gen AI solutions? What areas or use cases are at the top of the list?

Before we go there: implementing Gen AI, and AI in general, is potentially strategic. But it’s also very important tactically, in terms of cost reduction and process design.

The strategy has to be driven both top-down and bottom-up.

The experimentation that you and many others talk about needs to happen, and it often should happen bottom-up. I don’t like the idea of having a Chief AI Officer; I know that’s a little controversial. I think everybody should be a Chief AI Officer. Certainly, everyone who reports to the CEO should be the Chief AI Officer for their domain. It’s not something you can delegate. It’s the same way I think about innovation more broadly, by the way.

But the impacts are going to be different, and therefore the metrics are going to be different, between the tactical bottom-up work and the more strategic work. For the tactical bottom-up work, the classic measures of productivity can be extremely significant, and indeed we are already seeing major productivity gains in some places. We certainly see that in software development, and the gains in customer service, repair, trouble ticketing, etc. are also very significant. I would focus less on ROI. One reason is that the cost side of Gen AI is still unclear, I think.

Prices are going to come down eventually, and I wouldn’t hamper the technology by allocating too much cost to it yet. When that scales, maybe, but for now I think you want to experiment and introduce it as quickly as possible and not apply the usual business case logic which slows everything down in many, many companies. I know that’s easy to say and CFOs don’t like it. But this technology is still early and I think that’s the right thing to do.

Also, for all executives: while there’s a lot of value in taking a two-hour course if you’re an executive affected by AI or Gen AI, there’s a lot more value in actually seeing it working across your organization, and hopefully working well. So you need to get close to the work in the field and not treat this as a standard ROI-based effort.

In terms of metrics, I think speed is extremely important, because, as we know, speed is strategically valuable in just about any context.

Another thing to consider is the business process: how many steps in the process are automated versus manual, and how can that change? It’s a very basic thing; it’s something we looked at in the past when we did reengineering, for those of you who remember that: efficiency and effectiveness. But it will help you understand your organization’s ability to change by applying Gen AI, agents, etc.

Also, how susceptible are you to disruption? If something that took many steps suddenly becomes much simpler with this technology, it could be that your competitors, or new entrants, can do it with a new platform and overcome the complexity that kept you successful as an incumbent.

So some of these metrics are not traditional, but because we’re early in the technology, we want to understand the strategic value.

Another set of impacts that I find very interesting concerns the work experience and training required. We all know there’s a shortage of talent and that Gen Z has different priorities and different requirements for their jobs. So what is the impact of introducing this technology on experience, training, and job satisfaction? And what does that mean for your labor demand: the kind of labor, and the amount of labor, that you’re going to need over the next few years? How do you design the new AI-enabled roles while maintaining what humans value?

To give an example of impact, one case I know of is in the automotive industry. The average experience required to solve a particular problem went down from needing an engineer with 5 to 7 years of experience to engineers with 1 or 2 years of experience, because they were enabled by Gen AI. That’s extremely powerful, and it raises the question of, for example, how to keep both levels of engineers engaged and learning.

In markets such as Europe, where many engineers are retiring, the use of Gen AI, including automating processes via Gen AI agents, will be extremely helpful. So, to your question, preserving knowledge and reducing labor demand are both metrics that should be considered.

And the last metric, which I don’t think you were really asking about, is risk.

We tend to attribute too much risk in some cases to AI because of all we read. What is the real risk here? I encourage companies to evaluate the risks specifically for each use case.

What’s the worst that can happen? And how often do we have problems? Obviously, it’s still early days, we’ll have problems. But are they show-stoppers, can they result in serious, systemic injury or damage, or are they things that can be managed and fixed?

That perspective should help management teams overcome the fear that in some cases prevents them from doing anything. Even in a law firm, there are many, many things you can do with Gen AI that are not going to get you in trouble with a judge who doesn’t like a brief written with Gen AI.

And so it will help to have a solid, simple framework for saying, for example, this is high-risk, high-reward, or this is high-reward, low-risk. That helps you with the deployment strategy, helps you measure whether or not you have a balanced portfolio of tech applications in terms of risk and reward, and prepares you for clean-sheet disruptors who will start with Gen AI as the basis for designing their processes.

Extra: You came from working at a management consulting firm for years. How do you think management consulting will evolve through this?

I think that because the industry is populated by people who, as an economist called them, are ‘symbolic manipulators’ (people who work with symbols, whether those symbols are software or words on a PowerPoint), it is going to be perhaps one of the industries most affected by this ‘hacking of language.’ And if the language is structured, as computer code is, the disruption will be much more serious.

There are still barriers to this disruption. It turns out that, you know, PowerPoint, for example, is not the best way to train a large language model. But we’re going to get there.

From my perspective (and this is the industry I spent pretty much all my career in), it needs to reinvent itself. The impacts on software are definitely already happening. I just saw various projects where companies are using Gen AI to develop complex systems, or at least pieces of complex systems, that would have cost tens of millions of dollars and can now be built much faster with less experienced talent. So it’s disruptive there.

It’s also disruptive in something that the industry does, which is to take information, aggregate it, and deliver it in a customized way to a client.

It’s an industry that really needs to rethink what it’s doing. There are huge opportunities for the industry: in redesigning ecosystems, for example for the use of agents; in rethinking how work is organized; in enabling the human transition; and in making sure the benefits are captured and the risks managed. The consulting industry did that in the process reengineering days; it is time to do that again and to be willing to let go of work that is now highly automatable, however hard that is. When a new technology is really threatening, sometimes you don’t move very fast, or you focus on small projects and proofs of concept, and I think that’s very dangerous for the industry. Technology providers would love to capture the value that went to consultants and integrators during past tech transitions. This time it could happen much more quickly, before the industry has managed to change how it works.

Question 5: Gaps and Issues: What do you think are some of the gaps, issues, or impediments for large organizations? I know this isn’t a technology-only issue. What are some of the issues, or maybe even gaps in technology, that you believe need to be addressed?

Well, I’m going to frame this for you as a venture capital firm. There’s a term we haven’t used for a while, but maybe you’ve heard of it: the ‘-ilities.’

These are attributes of technology, especially computer technology, and they include scalability, manageability, integrability, security, etc. Those are often missing for Gen AI; I think we all know that. But I don’t think enough funding is going toward these things. They may not be that sexy, but for an enterprise to use this software in a reliable, predictable way, we need a lot of investment in them.

Another gap, which relates to something I mentioned earlier, is risk mitigation and risk management. We know the technology is far from perfect (think: hallucinations), but there are very few tools at this point to tell us what is dangerous and what is not. Or to help enterprises figure out where to put the technology and let it run: where to turn up the temperature on a model, meaning where to let it be more creative, versus where to use a completely explainable model. Not everything has to be explainable; it would be terrible if we required every model to be explainable. I know a lot of companies think they should. I don’t agree with that.

There are places where you can take risks, where creativity is good, and where, frankly, you don’t care how the answer was arrived at. You can probably tell that the answer was a good one, and the risk of not explaining how you got there is acceptable. I already see big companies unnecessarily preventing those uses.
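
To illustrate the “temperature” knob mentioned above, here is a minimal sketch that runs the same prompt at a low and a high temperature. Again, this assumes the OpenAI Python client; the model name and prompt are illustrative, not from any specific enterprise use case.

```python
# A minimal sketch of the "temperature" knob: the same prompt run at a low
# and a high temperature. Low values give repeatable, conservative output;
# high values give more varied, creative output.
from openai import OpenAI

client = OpenAI()
prompt = "Suggest a name for an internal knowledge-search tool."

for temp in (0.0, 1.0):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=temp,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"temperature={temp}: {reply.choices[0].message.content}")
```
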
We are also missing good approaches to training. We know that even an hour of training on using language models has a huge impact on productivity. But what about a day of training? What about a month of training with mentors? What kind of training? We still don’t know.

Another gap is change management. People in many organizations, up to the very top, are in many cases frozen. They will say, “Yeah, let’s have a few pilots here and there. Let’s reconvene six months from now and see how these pilots have gone.” In some cases that’s fine, but in many cases it’s too slow. So what kind of change management can we implement to encourage departments and product groups that are susceptible to disruption, or that have an opportunity to do much better and generate a lot more revenue? How do we encourage them to be comfortable with the technology and to move from exploration into production? One lesson seems to be that it is better to deploy the technology to teams than just to individuals.

Even example case studies, videos, and interviews or podcasts such as the ones you’re doing in this series can give organizations the confidence to move faster. Right now I think many say they’re moving fast, but when you really look at the rate of implementation and how close they are to getting something into at-scale production, the answer is often not very good.

If you extrapolate from the things they think could get into production, you see that they’re a year or two away, which, for the more basic things, or, say, in the IT shop, is way too long.

Extra: You’ve helped people get through this change management obstacle many times in your nearly 30-year career at Accenture. How did you get people fired up?

Well, it is challenging. But yes, I remember even having to convince many telecom companies that broadband was a good idea. And here we are with something that probably is much more disruptive than broadband.

Fear and greed are very helpful here.

For example, paint the picture of an ecosystem full of agents that decide who should buy what, at what price, and from whom, and then look at your own process and see how many steps it has and how bureaucratic what you’re doing today is. That creates a degree of fear, because your competition could adopt these technologies and be much better, faster, and potentially cheaper.

The more I read about competitors to Amazon, such as the Chinese retail and e-commerce players coming into the US, and how they’re using not necessarily Gen AI but AI more broadly, or technology in general, and how quickly they’re moving, the more I think that should strike fear into all retailers, and into other companies as well: maybe even highly regulated, often inefficient, protected industries such as insurance, which may intellectually understand the situation but don’t move very fast.

About greed, I think we all know how that works. It can be very powerful with the C-suite.

The cost reductions, as we mentioned; the reduction in the talent required to execute business processes; the simplification of organizations; the integration of agents, not just today’s Gen AI but what’s coming next, etc.: when you describe those and provide real examples in a C-suite, you see eyes light up. They understand that this will have a bottom-line impact on their organizations. The revenue side may be more challenging, but the examples are coming quickly here as well, for example in simple upsell situations.

The biggest concern I have around getting people fired up is that there’s so much negativity about AI right now, especially in Europe. The concerns about the risks, the amount of regulation, and the issues around hallucinations that could get companies into trouble are all valid. But I believe that’s putting many speed bumps in front of the exploration and deployment of the technology when they’re not necessary, or when it is others, such as regulators or your suppliers, who have to take the lead. You may not be able to wait for those often slow processes for what you are doing inside your own business, despite the negativity.

We often don’t like where we ended up with the Internet, and as a society we should learn from that and address these issues quite forcefully, but we should not let the sins of the past prevent us from dealing with the opportunities and challenges of this new set of technologies.

So these are some of the challenges we need to overcome. These are early days, and competitive advantage often is created in the early days. New models are created by innovators; in some cases, being second or third doesn’t work, especially when you look at platforms and other business models, new ecosystems, etc. Every industry is different, but I think having a view of what might happen, having scenarios, and moving quickly when necessary, makes a lot of sense.

Question 6: Going Forward: What would you say to a C-level executive on how they should be thinking about going forward?

My sense in the last few months, having been in this business for a long time, is that I have never seen a technology move as quickly, or with the potential impact, that Gen AI has. This is partially because it’s built on top of the internet and a lot of prior investments. But we’re now accelerating, and I struggle to imagine what could happen, and how quickly.

So that’s the message I would leave: it requires making the effort to learn and reflect, to really think about the medium- and long-term impacts and how your business and ecosystem could change, including how you make money. As I mentioned, there are lots of lessons from early adopters, from China, and from other industries and places.

The second thing is: don’t delegate. As I said, hopefully you and your team can be your own Chief AI Officers, and you don’t need to hire one. This is something that every senior business executive needs to take seriously and think about, at least for their domain, including how it’s going to accelerate as we get into, for example, autonomous agents, as we discussed.

Yes, the change can be concerning and could well be disruptive, but I think the impact of technology on things we care about, such as healthcare, the environment, or transportation, is going to be dramatic over the next ten years, and hopefully very positive.

I am hopeful our children are going to benefit from this. We need to be careful, but also really be inquisitive and excited about what might come.

Until recently, Dan Elron was the Managing Director of Corporate Strategy, Technology, and Innovation at Accenture, where he helped define its technology vision and drive the strategy for its technology business, and where he advised key clients, working with CEOs, corporate boards, and leading academics and policymakers across the globe. He also helped lead Accenture’s Innovation Network and its relationships with large and emerging technology partners. Most of his client work has been in the high-tech, telecom, and financial services industries, as well as the automotive industry as it began to integrate advanced digital technologies. He was also the Information Technology Industry Advisor for the World Economic Forum. For the past decade, he has worked on the impact of artificial intelligence on large enterprises, anticipating significant disruption during the coming decade.

Currently, Dan works on strategic topics and advises senior leaders at large US technology players, and mentors and invests in startups in the US and Europe. He teaches at the University of Virginia, where he is associate director of the Center for the Management of Information Technology, and also at INSEAD in France.
