Risk vs. Impact Part 1: The 4 Personas of AI Adoption
Jeremy Barnes
May 29 · 9 min read

This blog post is also available as a podcast.

Artificial Intelligence has become too big to ignore for most companies, and boards are urging their management teams to get out of ostrich mode as quickly as possible. But once they emerge, blinking in the sunshine, they discover a bewildering range of technologies and buzzwords, pushed by an equally baffling number of thought leaders and vendors. Management rapidly concludes that AI is a jungle and asks a technical part of their organization to identify a starting point.

This may be a mistake. Successful integration of AI is less about the technology than you would expect. It starts with understanding what kinds of problems you are facing and how leveraging AI can help you tackle them. There will always be an inherent tradeoff to consider: the strategic risks of not acting on AI vs. the challenges of adopting it.

Implementing AI is different from typical technology implementation because it is cross-cutting by nature. AI extends across an organization's boundaries and departments. Due to this, companies can get stuck if they don't invest in culturally enabling change.

What makes AI adoption successful are often basic elements: having the right level of organizational support for a project, not keeping data in silos, and being open to accepting and managing certain risks rather than insisting on none. It's not the use of AI itself, but rather an organization's openness to change, that leads to clear, differentiated business opportunities providing real leverage in the market.

Risk and Impact: The Four Personas

Implementing AI is a relatively new exercise, and there's no sure path to success. Despite the uncertainty, you can identify your organization's persona at a high level, given your current risk profile and capabilities to make an impact.

AI personas

1- AI Follower: High risk, low impact

The first persona is the one that most major companies embody today. AI Followers are using AI software, such as an email client or ad targeting, and employing data science and analytics platforms, but in a very ad hoc manner. Typically they don't have fleshed-out governance or coordination around AI, and may be surprised to discover what's already happened in the AI space. While Followers can reap some of the benefits of AI, it's a high-risk endeavour because the potential external threats of AI have not been considered. In particular, unless it has been planned for ahead of time, the time to adapt can be too long to effectively counter the use of AI by a competitor.

2- AI Consumer: Low risk, low impact

For this persona, the company mostly consumes point solutions of AI products and services, for instance a bank adding a feature for cheque deposits via a smartphone's camera. In this case, risks are lower because AI usage is tightly controlled and well-governed, and the risks are pushed onto vendors. Further, the scope of projects is limited and their impact reduced, because of a lack of coordination and leadership across departments. AI Followers can easily become AI Consumers if they put stronger risk management measures in place. However, the low risk is a trap. The culture of governance and control makes it very difficult for these companies to accept the risk necessary to transition to a high-impact use of AI until it's too late. They become complacent and unable to transition to AI Innovators, and are therefore, again, at risk of being wiped out by AI-wielding competitors.

3- AI Innovator: High risk, high impact

This is the sweet spot for the current maturity of AI technology. The AI Innovator recognizes that AI is a significant source of strategic differentiation for the company, and seeks to enable AI both top-down and across departments. Innovators also have a willingness to take calculated risks and to learn by doing. For the Innovator persona, the time to develop a novel AI solution is low, minimizing the external threat of a competitor wedging them out of the market. And the governance culture at the company enables an effective balancing of risk and reward.

What makes the AI Innovator different from the other personas is that Innovators specifically pursue ambitious future opportunities with AI, rather than existing, low-risk ones. It's possible to reach this level today from Follower, and a small number of companies have already done so. It's much, much harder to reach this level from Consumer, as it requires a 180-degree shift in risk culture, which normally necessitates a complete transition in senior leadership and a return to being an AI Follower.

4- AI Exploiter: Low risk, high impact

You might think that low risk, high impact sounds ideal. And it is a powerful persona for an organization to be: a comfortable user of AI that has it built into the business model. At this point, the organization's focus is the steady exploitation of AI capabilities to succeed. But organizations of this persona are often optimized to exploit existing opportunities rather than new ones, and that can still pose risks to the future of the business. Organizations can only reach this persona by first being an Innovator. Very few companies have reached this level today; computational advertising companies are one example.

AI Doesn't Respect Organizational Boundaries

AI is a forcing function to break down silos and ensure AI teams are working with business units.
How do different departments across a company prepare together for the risks of AI adoption?

For instance, in the Executive branch, it's necessary to engage the functions of strategy and governance. These groups can help set the roadmap for the organization, as well as ensure that risks are managed and ethics are built in from the start.

In Procurement, it's important for an organization to prepare to pay above-market price for an above-market impact. An organization will also have to consider what partnerships are necessary to get the required data and to manage the risks associated with the operation.

Within the context of Technology and IT, there are specific, technical constraints to prepare for. What amount of computing power will an AI solution require? How much storage will be required for ongoing operations? How will data privacy and retention policies be accommodated? There is also a need to add flexibility and become more adaptable to the new demands of AI technology.

And in Operations, organizations can sometimes find themselves scrambling to figure out where data will be stored, how it must be managed, who has access and so on. Sometimes, implementing an AI solution will simply come down to having the cross-functional know-how to put it all together and ensure its success.

Accepting Risk and Planning for Break-Out Time

There is always a balancing act between risk and reward when investing in new technology, and it is no different with AI. There is an urge for more certainty before making a decision, but turning a blind eye while waiting for the market to become clearer can only be done for so long. Over the next 20-30 years, virtually 100% of the economy will have to incorporate AI in a meaningful way. It's important for companies to consider: what is the risk of not investing, whether now or in the near future?

In general, companies are not ready for the cross-cutting nature of AI. That is largely due to most companies being built on a 20th-century business model to manage and reduce risk. This aging risk management structure is like kryptonite to AI strategies.

In reality, there is an imperative for a board-level, holistic decision to accept a certain level of risk and begin investing in AI.

The positive news is that it's not rocket science: it's not actually that hard to model how AI could impact a given business. Consider the disruption to supply chains seen in the coronavirus crisis. With much of the economy turned on its head, this would be the opportunity for a company with AI solutions in place to evaluate unexpected situations, adapt quickly and advance beyond the competition.

For executives and boards, it's important to address the break-out time required for change: how long it would take their company to get from its present state to the point where it can use AI to tackle a novel problem. It's also the time it would take the company to equip itself to meet the threat of an AI-wielding competitor that is using AI to redefine its business model.

Breaking out involves a company building the expertise and know-how to deploy AI in its business, as well as making a cultural shift toward learning by trying and accepting the failure of some projects as a necessary part of the work. The cultural part is the hardest.

Most companies that haven't thought much about AI are years away from being able to break out. Companies should have an idea of how long their break-out time might be, and maintain a plan to break out if market shifts require it.

While it can be difficult to manage a multi-departmental initiative, the potential benefits of AI are often cause enough to move ahead. And if you have the persona and openness for a high-risk, high-impact solution, you might realize significant benefits for your organization.

There are common challenges that boards and CEOs face in handling their AI shifts, and I will expand on these in Part 2.

Once you know where you want to go, and how fast, you and your team can learn more about how to move ahead from our new AI Maturity Framework. In it, you'll find industry benchmarks and an easy-to-use blueprint to enable your organization to implement AI. For a snapshot of your organization's current AI maturity, you can take our 10-minute industry survey.