How AI risk management is different and what to do about it
Richard Zuroff
October 15 · 6 min read

In less than 10 years, artificial intelligence has gone from being an academic pursuit to a strategic corporate investment. Leading organizations in many industries are beginning to report on successful AI deployments.

For organizations that want to put AI to use effectively, however, it’s important to do more than move first. The companies that will benefit most from AI are those that can successfully manage both its benefits and its risks.

To prepare, business leaders need to know that risk management for AI requires a different approach than it did for previous technologies. Organizations should set themselves up for success by bringing together risk, business, and technical teams, and by using five key strategies to get the most from their combined efforts.

With great powers come new risks

AI systems pose new challenges for the same reason that they create powerful new capabilities. In contrast to rule-based systems configured with step-by-step instructions, an AI system’s logic is shaped by the goals or objectives that guide its learning process. The unique risk profile of AI systems comes from the fact that they formalize the implicit tradeoffs involved in pursuing these goals, and then scale the impact of those tradeoffs through automated decision-making.

For instance, a cleaning robot might be programmed to optimize its route so that it reaches a target level of cleanliness in the shortest possible time. Specifying the goal (balancing cleanliness against speed) is much more efficient than manually mapping out rules for a specific route, especially if obstacles or dirty spots are constantly changing. The fastest possible cleaning would likely miss some dirty spots, while cleaning a room perfectly might be cost-prohibitive. The robot’s performance depends on how well it balances the competing goals of cleanliness and speed over time.
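
To make this concrete, here is a minimal sketch of how such a tradeoff might be encoded as a weighted objective. The scoring function, weights, and candidate routes are illustrative assumptions, not the logic of any real robot:

```python
# Illustrative sketch: encoding the cleanliness-vs-speed tradeoff as an
# explicit weighted objective. All weights and values are hypothetical.

def route_score(cleanliness: float, minutes: float,
                w_clean: float = 1.0, w_time: float = 0.2) -> float:
    """Score a candidate route: reward cleanliness, penalize time taken.

    cleanliness: fraction of dirt removed, in [0, 1].
    minutes: how long the route takes to complete.
    The weights make the tradeoff explicit -- raising w_time
    favors faster but less thorough routes.
    """
    return w_clean * cleanliness - w_time * (minutes / 60.0)

# The planner simply picks the route with the highest score.
candidates = [
    {"name": "thorough", "cleanliness": 0.99, "minutes": 90},
    {"name": "fast", "cleanliness": 0.80, "minutes": 30},
]
best = max(candidates, key=lambda r: route_score(r["cleanliness"], r["minutes"]))
print(best["name"])  # prints "fast" with the weights above
```

With these particular weights, the fast route narrowly wins (0.70 versus 0.69); halving w_time flips the choice to the thorough route. The business judgment lives entirely in those weight values.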

The risks of robotic cleaning are relatively low because the tradeoffs between cleanliness and speed are easy to specify and monitor. In a business setting, however, the tradeoffs that must be balanced to produce a good outcome are more complex, and monitoring an AI’s decision-making judgment is much harder. To build AI systems that reliably translate data and goals into good business decisions, organizations must be ready to codify implicit judgment into explicit choices about what and whose information, perspectives, and impacts take priority.

For example, in places where historical discrimination has shaped patterns of income, wealth, and home ownership for different racial or social groups, what role (if any) should equity and fairness play in loan or credit decisions? What is the precise mathematical tradeoff between profit and fairness that an AI should be programmed to make? How will the business assure itself, its customers, and its regulators that these rules are being followed correctly?
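
To see how concrete this question becomes, consider a sketch of a lending objective in which the tradeoff is a single explicit number. The data fields, the penalty form, and the fairness_weight value are all hypothetical illustrations, not a recommended policy:

```python
# Illustrative sketch: making the profit-vs-fairness tradeoff an explicit,
# auditable number. All names and values here are hypothetical.
import numpy as np

def lending_objective(approved: np.ndarray, expected_profit: np.ndarray,
                      group: np.ndarray, fairness_weight: float) -> float:
    """Expected profit minus a penalty on the gap in approval rates
    between two groups (a demographic-parity-style penalty).

    fairness_weight converts the approval-rate gap into the same units
    as profit, so a single number encodes the whole tradeoff.
    """
    profit = expected_profit[approved == 1].sum()
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    return profit - fairness_weight * abs(rate_a - rate_b)

# A naive profit-only policy, evaluated under the combined objective.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
expected_profit = rng.normal(100, 30, size=1000)
approved = (expected_profit > 90).astype(int)
print(lending_objective(approved, expected_profit, group, fairness_weight=5000.0))
```

In a formulation like this, fairness_weight is exactly the “precise mathematical tradeoff” the question asks about, and choosing its value is a business and ethical decision as much as a technical one.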

Responsible AI is everyone’s responsibility

Business leaders need to ensure the design and implementation of AI applications reflect choices that are aligned with the company’s values and responsibilities, even when this forces the organization to confront difficult questions. Recognizing that formalizing and scaling these tradeoffs creates risks to be managed (whether financial or reputational) is a critical first step to achieving the best outcomes with AI.

Once this first step is taken, it is important that AI risk management not be isolated to a single team or function in the business. Organizations that are beginning their AI journey sometimes rely on AI model developers, as the in-house experts, to follow best practices. A stronger approach is to “trust but verify” by having an independent team (often located in the Risk function) validate the safety and appropriateness of models before they are deployed. After deployment, this team should also continuously monitor models to check their usage for negative feedback effects, which can be especially pernicious for AI systems.
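
As an illustration of what that monitoring can look like in practice, here is a sketch of one widely used drift check, the population stability index (PSI), which compares the distribution of a model score or input at validation time against what the model sees in production. The data is synthetic, and the ~0.2 threshold is a common rule of thumb rather than a universal standard:

```python
# Sketch of a drift check a validation team might run: the population
# stability index (PSI). Synthetic data; thresholds are rules of thumb.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a production sample."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard against log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # scores at validation time
production = rng.normal(0.6, 1.0, 10_000)  # scores seen after deployment
print(f"PSI = {psi(baseline, production):.2f}")  # above ~0.2 often flags drift
```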

However, risk management should not be seen as purely the responsibility of a separate Risk team. Business and technology teams should also tailor their usage of AI to reduce risk through the everyday choices that managers, designers, and builders make about where, how, and why to develop and deploy AI. It takes a village.

Five strategies for managing AI risk

While the best practices for great enterprise AI governance are still being written, there are important steps companies can take to be responsible early movers. Organizations should take a comprehensive inventory of their risks and mitigation strategies for existing and planned AI use cases. They can then use this gap analysis to focus investments in strengthening processes (such as validating models), policies (such as keeping people in the loop), and technology (such as building models using the latest explainability techniques).

Specifically, we suggest these five strategies:

  1. Take inspiration from risk management practices in the financial sector by building a separate team to perform one-time validation of new models, as well as ongoing validation focused on heightened AI risks, like the drift and bias checks sketched above.
  2. Increase human understanding of models and their decisions through greater explainability and transparency, so that users and other stakeholders can use AI appropriately and intervene when needed (see the sketch after this list).
  3. Consider the benefits of preserving human intervention and context-sensitivity in processing loops, especially if AI is being used for surveillance and analysis, in order to avoid the overly narrow logic of end-to-end optimization.
  4. Explore business models that use non-price mechanisms and matching algorithms to let users define their own tradeoffs when incommensurable values are at stake.
  5. Take a proactive and collaborative approach to shifting risk by collecting and sharing the information carriers need to underwrite AI insurance, and by ensuring that harms from AI systems can be compensated.
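
As a sketch of what strategy 2 can look like in code, here is one simple, model-agnostic transparency technique: permutation importance, which estimates how much each input feature contributes to a model’s held-out performance. The synthetic dataset and the choice of model are purely illustrative:

```python
# Sketch for strategy 2: permutation importance as a simple,
# model-agnostic transparency technique. Data and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does randomly shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

Importance scores like these are only a starting point for explainability, but even this much makes a model’s behavior easier for validators and business users to question.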

Don’t hold yourself back

Risk management inherently focuses on the potential for negative impact, which can feel counter to the spirit of innovation surrounding AI. But great companies do not relegate risk management to a compliance function. Instead, they see it as a strategic enabler that allows them to make bold bets with more confidence.

The same is true for AI. We believe robust enterprise AI governance leads to more use of AI, which in turn creates more benefit for companies, their customers, and society. Organizations that start on AI governance now can accelerate their transition from AI experimentation to AI deployment at scale.

If you’d like to learn more about AI risk management, or if you’d like help putting it into action, contact our team!