Harvard Business Review
by Andy Thurai and Joe McKendrick
March 23, 2022
Executive Summary
by Joaquim Cardoso MSc.
Digital Health Revolution . Institute
Digital Connected, Data Driven, AI Augmented Health
March 31, 2022
What is the context
- Data-based decisions by AI are almost always based on probabilities (probabilistic versus deterministic), so there is always a degree of uncertainty when AI delivers a decision.
- There has to be an associated degree of confidence or scoring on the reliability of the results.
- It is for this reason most systems cannot, will not, and should not be automated. Humans need to be in the decision loop for the near future.
What is the problem with AI in the C-Suite
Many executives lack a high level of trust in their organization’s data, analytics, and AI, with uncertainty about who is accountable for errors and misuse.
- An examination of AI activities among financial and retail organizations by IMD Business School in Switzerland finds that “AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare.”
- More than two in three executives (67%) responding to a Deloitte survey say they are “not comfortable” accessing or using data from advanced analytic systems.
- In companies with strong data-driven cultures, 37% of respondents still express discomfort.
- Similarly, 67% of CEOs in a survey by KPMG indicate they often prefer to make decisions based on their own intuition and experience over insights generated through data analytics.
- Data scientists and analysts also see this reluctance among executives — a recent survey by SAS finds 42% of data scientists say their results are not used by business decision makers.
How to increase executive confidence in AI-assisted decisions
There are many challenges, but there are four actions that can be taken to increase executive confidence in making AI-assisted decisions:
- Create reliable AI models that deliver consistent insights and recommendations
- Avoid data biases that skew recommendations by AI
- Make sure AI provides decisions that are ethical and moral
- Be able to explain the decisions made by AI instead of a black box situation
What are the tasks to raise executive confidence
Consider the following courses of action when seeking to increase executives’ comfort levels in AI:
- Promote ownership and responsibility for AI beyond the IT department, to anyone who touches the process.
A cultural change will be required to boost ethical decisions to survive in the data economy.
- Recognize that AI (in most situations) is simply code that makes decisions based on prior data and patterns with some guesstimation of the future.
Every business leader — as well as employees working with them — still needs critical thinking skills to challenge AI output.
- Target AI to areas where it is most impactful and refine these first, which will add the most business value.
- Investigate and push for the most impactful technologies.
- Ensure fairness in AI through greater transparency, and maximum observability of the decision-delivery chain.
- Foster greater awareness and training for fair and actionable AI at all levels, and tie incentives to successful AI adoption.
- Review or audit AI results on a regular, systematic basis.
- Take responsibility, own decisions, and course-correct if a wrong decision is ever made, without blaming it on AI.
Conclusion
- Inevitably, more AI-assisted decision-making will be seen in the executive suite for strategic purposes.
- For now, AI will be assisting humans in decision-making to perform augmented intelligence, rather than a unicorn-style delivery of correct insights at the push of a button.
- Ensuring that the output of these AI-assisted decisions is based on reliable, unbiased, explainable, ethical, moral, and transparent insights will help instill business leaders’ confidence in decisions based on AI for now and for years to come.
Originally published at https://hbr.org on March 23, 2022.
ORIGINAL PUBLICATION (full version)
Overcoming the C-Suite’s Distrust of AI
Harvard Business Review
by Andy Thurai and Joe McKendrick
March 23, 2022
Summary
Data-based decisions by AI are almost always based on probabilities (probabilistic versus deterministic). Because of this, there is always a degree of uncertainty when AI delivers a decision.
There has to be an associated degree of confidence or scoring on the reliability of the results.
It is for this reason most systems cannot, will not, and should not be automated. Humans need to be in the decision loop for the near future.
Introduction
Despite rising investments in artificial intelligence (AI) by today’s enterprises, trust in the insights delivered by AI can be hit or miss with the C-suite.
Are executives just resisting a new, unknown, and still unproven technology, or is their hesitancy rooted in something deeper?
Executives have long resisted data analytics for higher-level decision-making, preferring to rely on gut-level decisions informed by field experience rather than on AI-assisted insights.
AI has been adopted widely for tactical, lower-level decision-making in many industries — credit scoring, upselling recommendations, chatbots, or managing machine performance are examples where it is being successfully deployed.
However, its mettle has yet to be proven for higher-level strategic decisions — such as recasting product lines, changing corporate strategies, re-allocating human resources across functions, or establishing relationships with new partners.
Whether it’s AI or high-level analytics, business leaders are not yet ready to stake their businesses on machine-made decisions in any profound way.
An examination of AI activities among financial and retail organizations by Amit Joshi and Michael Wade of IMD Business School in Switzerland finds that “AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare.”
More than two in three executives (67%) responding to a Deloitte survey say they are “not comfortable” accessing or using data from advanced analytic systems.
In companies with strong data-driven cultures, 37% of respondents still express discomfort.
Similarly, 67% of CEOs in a survey by KPMG indicate they often prefer to make decisions based on their own intuition and experience over insights generated through data analytics.
The study confirms that many executives lack a high level of trust in their organization’s data, analytics, and AI, with uncertainty about who is accountable for errors and misuse.
Data scientists and analysts also see this reluctance among executives — a recent survey by SAS finds 42% of data scientists say their results are not used by business decision makers.
When will executives be ready to take AI to the next step, and trust it enough to act on more strategic recommendations that will impact their business?
There are many challenges, but there are four actions that can be taken to increase executive confidence in making AI-assisted decisions:
- Create reliable AI models that deliver consistent insights and recommendations
- Avoid data biases that skew recommendations by AI
- Make sure AI provides decisions that are ethical and moral
- Be able to explain the decisions made by AI instead of a black box situation
1. Create reliable models
Executive hesitancy may stem from negative experiences, such as an AI system delivering misleading sales results.
Almost every failed AI project has a common denominator — a lack of data quality.
In the old enterprise model, structured data predominated; it was classified as it arrived from the source, making it relatively easy to put to immediate use.
While AI can use quality structured data, it also uses vast amounts of unstructured data to create machine learning (ML) and deep learning (DL) models.
That unstructured data, while easy to collect in its raw format, is unusable unless it is properly classified, labeled, and cleansed: videos, images, audio, text, and logs all need to be classified and labeled so that AI systems can create and train models before those models are deployed in the real world.
As a result, data fed into AI systems may be outdated, not relevant, redundant, limited, or inaccurate.
Partial data fed into AI/ML models will only provide a partial view of the enterprise.
AI models may be constructed to reflect the way business has always been done, without an ability to adjust to new opportunities or realities, such as we saw with disruptions in supply chains caused by the effects of a global pandemic.
This means data needs to be fed in real time so that models can be created or updated in real time.
It is not surprising that many data scientists spend half their time on data preparation, which remains the single most significant task in creating reliable AI models that can deliver proper results.
To gain executive confidence, context and reliability are key.
Many AI tools are available to help with data preparation, from synthetic data generation to data debiasing to data cleansing; organizations should consider using some of these tools to provide the right data at the right time for creating reliable AI models.
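As a rough illustration of what such a gate might look like in practice, the sketch below (a minimal sketch with hypothetical column names, assuming pandas) screens a training set for duplicates, missing values, and outdated records before it ever reaches a model:

```python
# Minimal sketch of a pre-training data-quality gate (hypothetical
# column names; assumes pandas). A real pipeline would add schema
# validation, label checks, and source vetting.
import pandas as pd

def quality_report(df: pd.DataFrame, max_age_days: int = 90) -> dict:
    """Summarize common data-quality problems before model training."""
    age_days = (pd.Timestamp.now() - df["record_date"]).dt.days
    return {
        "rows": len(df),
        "cells": int(df.size),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_cells": int(df.isna().sum().sum()),
        "stale_rows": int((age_days > max_age_days).sum()),  # outdated data
    }

def passes_gate(report: dict, max_missing_ratio: float = 0.01) -> bool:
    """Block training when data is duplicated, sparse, or outdated."""
    return (
        report["duplicate_rows"] == 0
        and report["missing_cells"] <= report["cells"] * max_missing_ratio
        and report["stale_rows"] == 0
    )
```

The thresholds here are placeholders; the point is that data freshness and completeness become an explicit, auditable check rather than an assumption.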
2. Avoid data biases
Executive hesitancy may be grounded in ongoing, and justifiable, concern that AI results are leading to discrimination within their organizations, or affecting customers.
Similarly, inherent AI bias may be steering corporate decisions in the wrong direction.
If proper care is not taken to cleanse the data from any biases, the resulting AI models will always be biased, resulting in a “garbage in, garbage out” situation.
If an AI model is trained using biased data, it will skew the model and produce biased recommendations.
The models and the decisions can only be as good as the data is free of bias. Bad data, knowingly or unknowingly, can contain implicit biases — racial, gender, origin, political, social, or other ideological biases.
In addition, other forms of bias that are detrimental to the business may also be inherent.
There are about 175 identified human biases that need attention; this should be addressed through analysis of incoming data for biases and other negative traits.
As mentioned above, AI teams spend an inordinate amount of time preparing data formats and quality, but little time on eliminating biased data.
Data used in higher-level decision-making needs to be thoroughly vetted to assure executives that it is proven, authoritative, authenticated, and from reliable sources.
It needs to be cleansed from known discriminatory practices that can skew algorithms.
If data is drawn from questionable or unvetted sources, it should either be eliminated altogether or should be given lower confidence scores.
Also, by controlling the classification accuracy, discrimination can be greatly reduced at a minimal incremental cost. This data pre-processing optimization should concentrate on controlling discrimination, limiting distortion in datasets, and preserving utility.
It is often assumed — erroneously — that AI’s mathematical models can eventually filter out human bias.
The risk is that such models, if run unchecked, can result in additional unforeseen biases — again, due to limited or skewed incoming data.
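One simple, illustrative check (not from the article) is a disparate-impact test on the training data: compare each group’s favorable-outcome rate against the most favored group’s, as sketched below with hypothetical column names:

```python
# Minimal sketch of a pre-training disparate-impact check (hypothetical
# column names; not the authors' method). Ratios below ~0.8 are a
# common heuristic signal that a group is being disfavored.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Each group's favorable-outcome rate relative to the best-off group."""
    rates = df.groupby(group_col)[label_col].mean()  # P(favorable) per group
    return rates / rates.max()

# Usage sketch (hypothetical data):
# ratios = disparate_impact(loans, "applicant_group", "approved")
# needs_review = bool((ratios < 0.8).any())
```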
3. Make decisions that are ethical and moral
Executive hesitancy may reflect the fact that businesses are under pressure as never before to operate morally and ethically, and AI-assisted decisions need to reflect ethical and moral values as well. This is partly because companies want to be seen as operating with integrity and ethical, moral values, and partly because of the legal liabilities that can arise from wrong decisions being challenged in court, especially given that a decision that was AI-made or AI-assisted will go through an extra layer of scrutiny.
There is ongoing work within research and educational institutions to apply human values to AI systems, converting these values into engineering terms that machines can understand. For example, Stuart Russell, professor of computer science at the University of California at Berkeley, pioneered a helpful idea known as the Value Alignment Principle that essentially “rewards” AI systems for more acceptable behavior. AI systems or robots can be trained to read stories, learn acceptable sequences of events from those stories, and better reflect successful ways to behave.
It’s critical that work such as Russell’s be imported into the business sector, as AI has enormous potential to skew decision-making in ways that impact lives and careers. Enterprises need enough checks and balances to ensure that AI-assisted decisions are ethical and moral.
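As a loose sketch of the reward-shaping idea behind value alignment (illustrative only, not Russell’s actual formulation), an agent’s reward can simply be reduced whenever an otherwise profitable action violates a declared ethical constraint:

```python
# Loose sketch of reward shaping for value alignment (illustrative
# only, not Russell's formulation): behavior that violates a declared
# ethical constraint earns a lower reward, so the agent learns to
# prefer acceptable actions.
def shaped_reward(base_reward: float, violates_constraint: bool,
                  penalty: float = 100.0) -> float:
    """Reward acceptable behavior; penalize constraint violations."""
    return base_reward - (penalty if violates_constraint else 0.0)
```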
4. Be able to explain AI decisions
Executives may be wary of acting on AI decisions when there is a lack of transparency. Most AI decisions don’t have explainability built in. When a decision is made and an action is taken that risks millions of dollars for an enterprise, or that involves people’s lives and jobs, saying “AI made this decision, so we are acting on it” is not good enough.
The results produced by AI, and the actions taken based on them, cannot be opaque. Until recently, most systems were programmed to explicitly recognize and deal with predetermined situations. However, traditional, non-cognitive systems hit a brick wall when encountering scenarios for which they were not programmed. AI systems, on the other hand, have some degree of critical-thinking capability built in, intended to more closely mimic the human brain. As new scenarios arise, these systems can learn, understand, analyze, and act on the situation without the need for additional programming.
The data used to train algorithms needs to be maintained in an accountable way — through secure storage, validation, auditability, and encryption. Emerging methods such as blockchain and other distributed ledger technologies also provide a means for immutable and auditable storage. In addition, a third-party governance framework needs to be put in place to ensure that AI decisions are not only explainable but also based on facts and data. At the end of the day, it should be possible to prove that a human expert, given the same data set, would have arrived at the same results, and that AI didn’t manipulate them.
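Short of a full blockchain, one hedged illustration of immutable, auditable storage is an append-only log in which each decision record is chained to the hash of the previous one, so any after-the-fact edit breaks verification:

```python
# Minimal sketch of a hash-chained, append-only audit log for AI
# decisions (illustrative; a production system would add signatures,
# secure storage, and access control).
import hashlib, json, time

def append_record(log: list, decision: dict) -> None:
    """Append a decision record chained to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("ts", "decision", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```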
Data-based decisions by AI are almost always based on probabilities (probabilistic versus deterministic). Because of this, there is always a degree of uncertainty when AI delivers a decision. There has to be an associated degree of confidence or scoring on the reliability of the results. It is for this reason most systems cannot, will not, and should not be automated. Humans need to be in the decision loop for the near future. This makes reliance on machine-based decisions harder in sensitive industries such as healthcare, where a 98% probability of a correct decision is not good enough.
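A minimal sketch of that human-in-the-loop gate (the 0.99 threshold is illustrative and would be domain-specific): act automatically only when the model’s confidence clears the bar, and route everything else to a person:

```python
# Minimal sketch of confidence-gated, human-in-the-loop decisioning
# (illustrative threshold). Sensitive domains such as healthcare would
# set the bar high enough that most decisions escalate to a reviewer.
def route_decision(prediction: str, confidence: float,
                   auto_threshold: float = 0.99) -> dict:
    """Automate only high-confidence results; otherwise escalate."""
    if confidence >= auto_threshold:
        return {"action": prediction, "decided_by": "model",
                "confidence": confidence}
    return {"action": "escalate_to_human", "decided_by": "human",
            "confidence": confidence}
```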
Things get complex and unpredictable as systems interact with one another. “We’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it,” according to David Weinberger, Ph.D., affiliate with the Berkman Klein Center for Internet and Society at Harvard University. No matter how sophisticated decision-making becomes, critical thinking from humans is still needed to run today’s enterprises. Executives still need to be able to override or question AI-based output, especially within an opaque process.
Tasks to raise executive confidence
Consider the following courses of action when seeking to increase executives’ comfort levels in AI:
- Promote ownership and responsibility for AI beyond the IT department, to anyone who touches the process. A cultural change will be required to boost ethical decisions to survive in the data economy.
- Recognize that AI (in most situations) is simply code that makes decisions based on prior data and patterns with some guesstimation of the future. Every business leader — as well as employees working with them — still needs critical thinking skills to challenge AI output.
- Target AI to areas where it is most impactful and refine these first, which will add the most business value.
- Investigate and push for the most impactful technologies.
- Ensure fairness in AI through greater transparency, and maximum observability of the decision-delivery chain.
- Foster greater awareness and training for fair and actionable AI at all levels, and tie incentives to successful AI adoption.
- Review or audit AI results on a regular, systematic basis.
- Take responsibility, own decisions, and course-correct if a wrong decision is ever made, without blaming it on AI.
Inevitably, more AI-assisted decision-making will be seen in the executive suite for strategic purposes.
For now, AI will be assisting humans in decision-making to perform augmented intelligence, rather than a unicorn-style delivery of correct insights at the push of a button.
Ensuring that the output of these AI-assisted decisions is based on reliable, unbiased, explainable, ethical, moral, and transparent insights will help instill business leaders’ confidence in decisions based on AI for now and for years to come.
Originally published at https://hbr.org on March 23, 2022.