Risk vs. Impact Part 2: The 7 Sins of Enterprise AI Strategies
Jeremy Barnes
May 29 12 min

This post follows Part 1. It is also available as a podcast.

Today virtually all companies are being forced to innovate. The rate of change is accelerating and many companies are particularly excited about AI—although many find themselves struggling to advance their AI strategy. Since AI implementation cuts across organizational boundaries and traditional silos, a shift to an AI-driven strategy requires new thinking about managing risks, both internally and externally.

In Part 1, I introduced the four personas of AI adoption that I’ve observed in the market:

  • AI Followers - engaging with AI as traditional software, like through an email client
  • AI Consumers - buying only AI vendor point solutions and offloading risk to them
  • AI Innovators - adopting an innovation culture and creating strategic differentiation with AI
  • AI Exploiters - treating AI as a well-established part of their business model, though with relatively static innovation

I also explained why, regardless of persona, boards and CEOs need to think about how much “break-out time” they'll need to change and enable their culture to adopt AI.

In this post, I'll follow up by talking about what I call “the seven sins of enterprise AI strategies”, which are governance issues at the board and executive levels that block companies from being able to move ahead with AI. Some of these are errors of omission, but most are in fact bad choices that are easy to hide or justify, and often cover an unwillingness to engage in the difficult tradeoffs that AI entails.

These are choices I’ve seen made variously by the board, CEO, CFO, CIO and other members of senior leadership teams. There's no reason that companies can't learn from the mistakes of others and move ahead more quickly.

The 7 Sins of Enterprise AI Strategy

1- Disowning the AI strategy

This is probably the most important sin. In this case, a CEO and board will say that AI is a priority, but they lack the time to truly own the responsibility and delegate it down to another department or an innovation lab.

In some cases, innovation labs have been nine-figure write-offs for companies—money holes, useful for PR and talent attraction, but not true product or business innovation.

However, success is not based on whether or not a company uses an innovation lab—it's based on whether the company is truly invested in it. A good metric for evaluating that level of investment is how often the CEO is present there.

If you want to do a quick spot check and you already have an innovation team, ask their director: "How often do you speak meaningfully with the CEO?" “At least two days a month” would be a good indicator that a team is culturally enabled to innovate.

In general, if the CEO doesn’t own it, it’s not an AI strategy—it’s an AI dream. If companies want to see real impact from AI, it won’t work to delegate it down. More often than not, the people involved get bored of having little tangible impact and leave.

The bottom line is that the CEO and board need to lead an AI strategy, or else recognize that they will only ever, at best, be an “AI Consumer” (see my last post).

2- Ignoring the unknowns

This happens when companies say they believe in AI, but don't reach a level of proficiency in AI where it's possible to identify, characterize and model the threats that emerge with new advances. It's essentially saying, "we are okay with being here in this massive fog of war and we feel so safe and secure that we don't really care if something happens that is a surprise to us. You know, we think we'll be okay."

In this case, even if a company decides it doesn't make sense to go all in on AI innovation, it still needs a hypothesis for how to address AI—a hypothesis that can be monitored, tested and refined over time so that, if it ever needs to change, an early warning system exists and the company can actually move forward. This also lets you equip the team with a baseline of data, hardware and models, so that if and when the shift happens, you're not starting from scratch. The longer-term play is having people—the major assets of the company—actually work together on AI.

There is a risk of implementing AI in the wrong way—and there's also a risk of not doing it. CEOs and boards should reflect on their break-out time: how much time do you have before this becomes an inescapable risk, and how long will it take you to adjust?

3- Not enabling the culture

The ability to implement AI is about bringing science into the company. But science is about experiments, and experiments have to be able to fail in order to learn something from them. So that experimentation mindset—and openness to failure—needs to be adopted across the company.

Further, not enabling employees to work across departments will also limit the success of AI initiatives. What organizations need to keep in mind is that AI doesn't respect organizational boundaries. What you need in terms of data, deployments and tooling is spread across the company. Without a collaborative culture, you’ll only have innovation within a department rather than across the company; success will be siloed and you might take on much of the pain without the corresponding gain.

Most companies will default to looking for high-impact, low-risk solutions. That's where companies naturally want to live. The problem is that an early success can lead to simply optimizing what already exists, rather than advancing new value streams. You might not be willing to sacrifice the golden goose, even when doing so would secure the future of the organization. And the lack of new activity and risk-taking may cause the original innovation momentum—the team, the culture—to evaporate.

It is hard for companies to accept increased risk in exchange for impact (more on this in the fifth sin), but it will come as part of the continuous cultural enablement of an experimental mindset.

4- Starting with the solution

This is the most common sin. It’s important to be able to understand the specific problems you’re trying to solve, because AI will likely not be a solution to all of them. Imagine you start by deciding, "OK, AI is the answer." You'll buy from one vendor with an AI solution, and another with a data lake. Then you might think that you have the right formula for all of your AI initiatives. The challenge with this approach is that implementing many different tools and making them work together requires a significant amount of effort—and won't even necessarily produce value.

Have the conversation at the board level to ensure that an overarching AI strategy, and not simply quick-fix solutions, is the priority. Looking several years ahead will drastically change your outlook on the tools you need and on how replicable your end-to-end AI implementation process will be.

5- Lose risk, keep reward

As mentioned in the third sin, it is natural for companies to want to implement AI without any risk. They think that they can push all of the risk of the models not working onto the vendor. Or they undercut their AI efforts by requiring them to fit within a rigid risk model that was thought up before AI was even on the radar.

AI is still a very immature field, and there is no one-size-fits-all solution. Given the fluidity of the tooling market right now, a vendor who is motivated to minimize risk will also minimize innovation, and ultimately impact, by keeping successes small and failures non-existent.

The highly effective use of AI creates differentiation only for companies that are willing to learn from both their successes and their failures. Moreover, this attitude will help companies think ahead to account for (and integrate) emerging best-in-class tools.

A company that doesn’t effectively balance risk in AI (taking the “as little as possible” approach instead of the “right amount” approach) will ultimately increase its risk of being disrupted, because it will be unable to react to a competitor’s differentiated and strategic use of AI and the market shifts it creates. This may be a much bigger risk than the one avoided, increasing overall risk for the business.

A company that maintains risk-averse behaviour and is unwilling to lead in its industry will kill its ability to innovate.

6- Vintage Accounting

Good corporate governance generally implies good financial governance. However, attempting to fit AI into traditional financial governance structures causes problems that will often leave those efforts dead in the water.

New technology investments are often uncertain propositions. The rewards can be higher, but the risks are too. The link between what you put in and what you get out can be less tangible or predictable, which often makes it harder to square with existing plans or structures.

Your immediate instinct might be to treat AI as software, and source vendors accordingly. But if you only consider it to be purely a matter of purchase, then you'll fall into the AI Consumer persona.

Instead, I suggest modelling the rate of return on AI activities and all data-related activities. Modelling the rate of return requires that these activities affect profit (not just loss) and assets (not just liabilities). This is highly correlated with effective outcomes in data-based activities.

It is difficult to model the benefits of new technology investments—but consider modelling the potential returns against the risks of inaction (this may be aided with the help of sophisticated partners).
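
To make this concrete, here is a minimal, purely illustrative sketch of what such a rate-of-return model could look like. Every figure and the simple formula are hypothetical assumptions, not a prescribed method; the point is that the model forces you to account for the profit and asset side of AI and data activities, not just the cost and liability side.

```python
# Illustrative only: a toy rate-of-return model for an AI/data initiative.
# All numbers and the formula itself are hypothetical assumptions.

def simple_rate_of_return(investment, annual_profit_uplift, data_asset_value,
                          annual_liability_cost, years=3):
    """Toy model: (total value created - total cost) / total cost.

    investment            -- up-front spend on people, data and tooling
    annual_profit_uplift  -- expected yearly profit impact (the profit side)
    data_asset_value      -- residual value of reusable data and models (the asset side)
    annual_liability_cost -- storage, compliance and breach-exposure costs
    years                 -- evaluation horizon
    """
    total_value = annual_profit_uplift * years + data_asset_value
    total_cost = investment + annual_liability_cost * years
    return (total_value - total_cost) / total_cost

# Hypothetical example: $2M invested, $900k/year profit uplift, $500k of
# reusable data assets, $150k/year of liabilities, over a 3-year horizon.
print(f"Rate of return: {simple_rate_of_return(2_000_000, 900_000, 500_000, 150_000):.0%}")
```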

7- Treating data as a commodity

The final sin concerns data and its treatment as a commodity. Data is fundamental to AI, and if it is handled poorly, the decisions made with it will suffer.

Unlike a commodity, data has the following characteristics:

  • Duplicable - it can be duplicated at no cost, but also loses all value when it is
  • Variably valuable - it has a different value to each possible user or buyer
  • Negative storage - it has a negative value in storage
  • Criminal value - it can be worth far more to criminals than to its original owner

Data should be treated like an asset. The stronger, deeper and more accurate the dataset, the better the models you can train and the more intelligent the insights you can generate.

But, at the same time, stored data can often be a liability. The personally identifiable information typically kept about customers can be stolen, and penalties will follow. Some jurisdictions now have legislation that provides for penalties large enough that the upside of holding the data doesn’t make up for the downside risk of a breach.

The last thing to consider is that the data you need for your AI models is likely not commoditized. Even if there's lots of data out there, it may not fit your desired use case or goals. It's rare that the data that's available happens to be exactly the data that you need, and so you need to build towards data from a use case rather than invest blindly in data centralization projects.


Start simple

So, now you know what not to do. Here are some of the simple things that you can do to move ahead.

First, discuss break-out time with the board—talk to your board about the big questions. How long will it take to become an AI Innovator in your industry, starting from where you are now, and how long should it take? To help that discussion, model it out rather than simply discussing it conceptually, as in the sketch below. You can use this discussion to identify your targets and develop a plan to get there. If you're not sure where your AI efforts stand today, we’ve created a nifty 10-minute assessment to provide a snapshot.
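
If it helps to make that modelling tangible, here is a hypothetical back-of-the-envelope sketch: the maturity scale, the rate of improvement and the market-shift estimate are all assumptions invented for illustration, not benchmarks. It simply compares how long reaching an "AI Innovator" level would take at your current pace with how much time you believe you have.

```python
# Illustrative only: a back-of-the-envelope "break-out time" comparison.
# The maturity scale, rates and market-shift estimate are hypothetical.

import math

current_maturity = 2.0          # where you are today, on a made-up 1-5 scale
target_maturity = 4.0           # the level you would call "AI Innovator"
improvement_per_year = 0.5      # how fast your culture and capability actually change
years_until_market_shift = 3.0  # your estimate of when AI reshapes your industry

years_to_target = math.ceil((target_maturity - current_maturity) / improvement_per_year)

print(f"Years to reach target maturity at the current pace: {years_to_target}")
if years_to_target > years_until_market_shift:
    print("Break-out time is too long: the plan, or the pace, has to change.")
else:
    print("Break-out time fits within the window, but keep monitoring.")
```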

Second, prepare for change and put in place monitoring. AI shifts all the time, so you'll want to regularly check in to adjust and pivot your strategy. It's important to develop a basic skill set to identify when things have changed. Then you can redo planning exercises with your board, or at least adjust to what has shifted. That way, if threats or challenges do come up, you have some opportunity to react to them.

Third, model out risks on both sides of the equation. In the case of AI, there are risks to both action and inaction. But don't model them with the traditional approach, which pushes risk down to individual business units and then compensates those units for reducing risk rather than for managing tradeoffs. Instead, view those tradeoffs in terms of risks and rewards, and start to think about how you are accounting for the assets and liabilities of AI (a simple sketch follows below).
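
As a purely hypothetical illustration of modelling both sides, the sketch below compares the expected cost of acting (experiments that may fail) with the expected cost of inaction (the chance of being disrupted). Every probability and dollar figure is an assumption; the value is in making the comparison explicit instead of budgeting only the cost of action.

```python
# Illustrative only: expected cost of action vs. inaction over one planning horizon.
# All probabilities and dollar figures are hypothetical assumptions.

cost_of_program = 3_000_000           # spend on AI experiments over the horizon
prob_program_fails = 0.4              # chance the experiments produce little value
upside_if_success = 10_000_000        # value created if the experiments pay off

prob_disrupted_if_inactive = 0.25     # chance a competitor's use of AI erodes your position
cost_if_disrupted = 40_000_000        # lost revenue or market share if that happens

# A negative "cost of acting" means acting has positive expected value.
expected_cost_of_action = cost_of_program - (1 - prob_program_fails) * upside_if_success
expected_cost_of_inaction = prob_disrupted_if_inactive * cost_if_disrupted

print(f"Expected net cost of acting:     {expected_cost_of_action:>14,.0f}")
print(f"Expected cost of doing nothing:  {expected_cost_of_inaction:>14,.0f}")
```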

Ultimately, you want to start modelling the actual rate of return of all of these activities, and then benchmark it against what you see in other companies across the industry. That will give you a good picture of the current situation and of where to go.

Once you know where you want to go, and how fast, you and your team can learn more about how to move ahead from our new AI Maturity Framework. In it, you’ll find industry benchmarks and an easy-to-use blueprint to enable your organization to implement AI. For a snapshot of your organization’s current AI maturity, you can take our 10-minute industry survey.