Overcoming the C-Suite’s Distrust of AI – (Executive Summary)


Harvard Business Review
by Andy Thurai and Joe McKendrick
March 23, 2022


Executive Summary


by Joaquim Cardoso MSc.
Digital Health Revolution Institute
Digital Connected, Data Driven, AI Augmented Health
March 31, 2022

What is the context

  • AI-generated results need an associated degree of confidence, or a score for how reliable each result is.

  • For this reason, most systems cannot, will not, and should not be fully automated: humans need to stay in the decision loop for the near future (a minimal confidence-threshold sketch follows these bullets).
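
To make the confidence point concrete, below is a minimal sketch in Python of how a confidence score can gate automation and keep a human in the decision loop. The 0.85 threshold, the labels, and the review-queue shape are illustrative assumptions, not details from the article.

    # Minimal sketch: accept the model's recommendation only when it is
    # confident enough; otherwise route the case to a human reviewer.
    # The threshold and data shapes below are illustrative assumptions.

    CONFIDENCE_THRESHOLD = 0.85  # below this, a human makes the call

    def decide(label: str, confidence: float, review_queue: list) -> dict:
        """Return an automated decision only when the model is confident enough."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return {"decision": label, "confidence": confidence, "source": "model"}
        # Low confidence: keep the human in the decision loop.
        review_queue.append({"suggested": label, "confidence": confidence})
        return {"decision": None, "confidence": confidence, "source": "pending_human_review"}

    queue: list = []
    print(decide("approve", 0.93, queue))  # confident enough to automate
    print(decide("approve", 0.61, queue))  # routed to a human reviewer

The design choice here is simply that every automated decision carries its confidence with it, so reviewers and auditors can see why a case was or was not handled by the machine.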

What is the problem with AI in the C-Suite

Many executives lack a high level of trust in their organization’s data, analytics, and AI, with uncertainty about who is accountable for errors and misuse.

  • An examination of AI activities among financial and retail organizations by IMD Business School in Switzerland finds that “AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare.”

  • Sixty-seven percent of executives responding to a Deloitte survey (more than two in three) say they are “not comfortable” accessing or using data from advanced analytics systems.

  • Data scientists and analysts also see this reluctance among executives — a recent survey by SAS finds 42% of data scientists say their results are not used by business decision makers.

How to increase executive confidence in AI-assisted decisions


There are many challenges, but there are four actions that can be taken to increase executive confidence in making AI-assisted decisions:

  1. Create reliable AI models that deliver consistent insights and recommendations.
  2. Avoid data biases that skew AI recommendations (see the bias-check sketch after this list).
  3. Make sure AI provides decisions that are ethical and moral.
  4. Be able to explain the decisions made by AI, rather than accepting a black-box situation.
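
To make point 2 more tangible, here is a minimal sketch of one common data-bias check, the “four-fifths” disparate-impact ratio, written in Python with pandas. The column names, sample data, and 0.8 threshold are illustrative assumptions, not prescriptions from the article.

    # Minimal sketch of a data-bias check: compare approval rates across groups.
    # Column names, sample data, and the 0.8 threshold are illustrative assumptions.

    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame) -> float:
        """Ratio of the lowest to the highest group-level approval rate."""
        rates = df.groupby("group")["approved"].mean()
        return rates.min() / rates.max()

    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })

    ratio = disparate_impact_ratio(decisions)
    if ratio < 0.8:  # common rule-of-thumb threshold for potential disparate impact
        print(f"Potential bias: disparate impact ratio = {ratio:.2f}")

A check like this does not prove or disprove bias on its own, but running it routinely makes skewed training data or skewed outcomes visible before executives are asked to trust the recommendations.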

What are the tasks to raise executive confidence


Consider the following courses of action when seeking to increase executives’ comfort levels with AI:

  • Promote ownership of and responsibility for AI beyond the IT department, extending to anyone who touches the process. 
    A cultural change will be required to reinforce ethical decision-making and to survive in the data economy.

  • Recognize that AI (in most situations) is simply code that makes decisions based on prior data and patterns with some guesstimation of the future. 
    Every business leader — as well as employees working with them — still needs critical thinking skills to challenge AI output.

  • Target AI to areas where it is most impactful and refine these first, which will add the most business value.

  • Investigate and push for the most impactful technologies.

  • Ensure fairness in AI through greater transparency and maximum observability of the decision-delivery chain (a minimal decision-logging sketch follows this list).

  • Foster greater awareness and training for fair and actionable AI at all levels, and tie incentives to successful AI adoption.

  • Review or audit AI results on a regular, systematic basis.

  • Take responsibility for and own decisions, and course-correct if a wrong decision is ever made, without blaming it on AI.
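
As one way to picture the observability, auditing, and ownership bullets above, here is a minimal sketch of a structured decision log in Python. Every field name, the file path, and the example values are illustrative assumptions rather than details from the article.

    # Minimal sketch of an auditable record for the decision-delivery chain:
    # each AI-assisted decision is logged with enough context to review,
    # explain, and course-correct later. All names below are illustrative.

    import json
    from datetime import datetime, timezone

    def log_decision(model_name, model_version, inputs, output, confidence, reviewer=None):
        """Append one structured, reviewable record per AI-assisted decision."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model_name,
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "confidence": confidence,
            "human_reviewer": reviewer,  # the person who owns the final decision
        }
        with open("decision_audit.log", "a") as audit_file:
            audit_file.write(json.dumps(record) + "\n")
        return record

    log_decision("credit_risk", "1.4.2", {"income": 52000}, "approve", 0.91, reviewer="j.doe")

Keeping records like this is what makes regular, systematic reviews of AI results possible, and it ties each decision to a named owner rather than to “the algorithm.”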

Conclusion

  • Inevitably, more AI-assisted decision-making will be seen in the executive suite for strategic purposes.

  • For now, AI will assist humans in decision-making, acting as augmented intelligence rather than as a unicorn-style delivery of correct insights at the push of a button.

  • Ensuring that the output of these AI-assisted decisions is based on reliable, unbiased, explainable, ethical, moral, and transparent insights will help instill business leaders’ confidence in decisions based on AI for now and for years to come.

Originally published at https://hbr.org on March 23, 2022.

