Ask these four questions to tell if your AI solution is really AI.
MIT Sloan Management Review
Michael Wade, Amit Joshi, Mark J. Greeven, Robert Hooijberg, and Shlomo Ben-Hur
June 22, 2020
In a world where buzzwords come and go, artificial intelligence has been remarkably durable. Since it first emerged as a concept in the 1950s, there has been a relatively constant flow of technologies, products, services, and companies that purport to be AI.
It is quite likely that a solution you are investing in today is being referred to as AI-enabled or machine-learning-driven.
But is it really?
The reality today for most organizations is that AI and machine learning form a rather small piece of the overall analytics pie.
Indeed, research conducted by London-based investment firm MMC Ventures revealed that 40% of Europe’s artificial intelligence startups did not use any AI at all.
Furthermore, the offerings of many startups and analytics providers, even if quite advanced, fall short of even basic AI.
We define AI as any computer-based system that observes, analyzes, and learns.
The key here is that these systems are iterative — they get better and more accurate as they collect and analyze more data, without explicit intervention from humans. As the term implies, these are machines that learn, however simple the learning may be.
What Isn’t AI
Just as it is important to define what characterizes a system as AI, it’s equally important to identify what isn’t AI. Mistaking advanced analytics and computing techniques for AI and machine learning can often lead to confusion, and the following section details some of the most common AI fallacies for leaders to understand.
1. Just because a system uses an algorithm and advanced statistics, that doesn’t make it AI.
An algorithm is simply a set of predefined steps or rules to solve a problem. These can be simple (think of an if-then statement) or very complex (think of a chess-playing machine). However, most algorithms are static: They will always return the same output given the same input. That is, they do not adapt or learn.
These algorithms are often coded using standard statistical models, like correlation or regression, that are very good at identifying trend lines in well-defined data. These trend lines allow them to offer predictions of future states based on a set of past states. However, true AI is able to work with data that is not well structured, well defined, or even numeric. Some of the biggest breakthroughs in AI and machine learning have come from insights generated with natural language, image, and video data.
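The static nature of such algorithms is easy to see in code. The sketch below is a hypothetical rule-based classifier of the kind described above: the same input always produces the same output, and nothing in the system changes no matter how often it runs.

```python
# A static rule-based "algorithm": predefined if-then steps.
# The same input always yields the same output, and the rules
# never adapt, however much data flows through them.
def churn_risk(days_since_login: int, support_tickets: int) -> str:
    if days_since_login > 30 and support_tickets > 2:
        return "high"
    if days_since_login > 30:
        return "medium"
    return "low"

# Call it a thousand times with the same input: the answer never changes,
# and no learning has occurred.
print(churn_risk(45, 3))  # high
print(churn_risk(45, 3))  # high
```

Advanced statistical models like regression work the same way once fitted: useful, but not learning on their own.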
2. Just because a system answers questions, that doesn’t make it AI.
There are plenty of technologies, like conversational agents, that can answer questions posed to them. Recall the popularity of decision support systems in the 1980s and 1990s.
These tools provide automated responses for a variety of problems through digital dashboards, and versions of these systems exist even today for tasks like inventory management and sales projections.
In most cases, they do this by either matching the question with a database of prepopulated answers (think of a software “help” function) or calculating the answer based on applying an algorithm to data.
Some go further by searching the internet if nothing appropriate can be found in the database. Most of these systems do not have the ability to place the question in context, nor do they learn from the accuracy of past answers.
Therefore, they do not qualify as AI.
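The database-matching approach described above can be sketched in a few lines. This is a hypothetical "help" function, not any particular vendor's product: answers come from a prepopulated lookup, the system has no model of the question's context, and nothing improves if past answers turn out to be wrong.

```python
# A simple Q&A system that matches questions against prepopulated answers.
# It answers questions, but it neither understands context nor learns
# from the accuracy of past answers -- so it is not AI.
FAQ = {
    "reset password": "Go to Settings > Security and choose 'Reset'.",
    "cancel order": "Open your order history and select 'Cancel'.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keyword, response in FAQ.items():
        if keyword in q:
            return response
    # A real system might fall back to an internet search here.
    return "Sorry, I don't have an answer for that."

print(answer("How do I reset password?"))
```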
3. Just because a system is advertised as AI, that doesn’t make it AI.
We have encountered many startups, vendors, and “analytics” providers that promote themselves as providing cutting-edge AI/machine learning solutions. Unfortunately, we have been disappointed with most of them.
While they may indeed be good at advanced statistical methods, they are unable to build learning models from structured and unstructured data, especially the large volumes of data that are typically needed to build useful models.
What Does AI Really Do?
To assess whether the solution or approach you’re evaluating really involves artificial intelligence, let’s turn back to our definition of AI as any computer-based system that observes, analyzes, and learns.
First, it needs to observe. This means that it needs to be able to augment its database of information and insights. A rich but static data set is not enough, because it becomes stale the moment it’s created. Thus, a true AI system is able to sense its own environment and augment its base of knowledge in close to real time. Most Tesla cars have at least 21 sensors, including cameras, ultrasonic sensors, and radar. The purpose of these sensors is to observe the car’s surroundings and feed real-time information to the powerful autonomous driving system onboard. OrangeShark, an AI-based digital marketing startup, closely tracks various metrics of past advertising performance and automatically adjusts ad placement, targeting, and aspects of creative content for future ads.
Second, AI needs to analyze — that is, make sense of its environment. An AI system needs to be able to analyze information it observes and collects, even if that information is very messy. Thus, it needs to have advanced tools to find signals in very noisy data sets. A Tesla’s onboard computers analyze the images, blips, and other data it collects to make sense of its surroundings, allowing for the automation of several driving decisions. Gong.io helps salespeople in high-impact B2B environments by analyzing various aspects of sales calls, including voice sentiment and tone. Using this data, companies and sales professionals are able to arrive at many counterintuitive insights — for instance, calls with more positive sentiment are actually associated with lower closing rates than calls with less positive sentiment.
Third, an AI system needs to be able to learn. This third criterion is the most important differentiator between AI systems and plain-old data science. The ability to test, learn, and improve is only available to the most advanced machine learning systems today. These systems are able to proactively make assumptions, create and test hypotheses, and learn from them. Thus, they become more accurate over time. Tesla’s self-driving technology gets smarter with each kilometer it spends on the road. It does this by observing and analyzing the data from hundreds of thousands of Tesla cars and then learning from this data to improve the autonomous driving capabilities. It may learn to distinguish between an animal in the middle of the road and a plastic bag being blown by the wind, figuring out that it needs to stop in the first instance but can safely continue driving in the second. Several recommendation systems today, including those used by Netflix and Stitch Fix, start off making generic recommendations (when they have little knowledge about your preferences). Over time, they learn from your choices and improve to make more tailored, personalized recommendations — a capability that systems without machine learning would lack.
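The contrast with the static algorithm shown earlier comes down to one thing: the system's internal state changes as data arrives. The sketch below is a deliberately minimal illustration of that idea (an incrementally updated estimate, far simpler than any production recommendation system): each new observation refines the output, with no human re-coding the logic in between.

```python
# Minimal illustration of "learning": an estimate that improves
# as more observations arrive, without explicit human intervention.
class RunningEstimate:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def observe(self, value: float) -> None:
        # Incremental mean update: each new data point shifts the
        # estimate, so the system's output changes as it observes more.
        self.n += 1
        self.mean += (value - self.mean) / self.n

est = RunningEstimate()
for rating in [4.0, 5.0, 3.0, 5.0]:  # e.g., user ratings arriving over time
    est.observe(rating)
print(round(est.mean, 2))  # 4.25
```

Real machine learning systems update far richer models (model weights rather than a single mean), but the defining property is the same: output at time t+1 is better informed than output at time t.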
If you are not sure if the system you are using or are thinking about buying is really AI, we have developed a list of key questions to ask.
Is My ‘AI System’ Really AI?
Does it use large amounts of data across a variety of formats?
- It only needs a small amount of data — probably not AI.
- It has a hard time handling unstructured or messy data, like free-form text, images, or video — probably not AI.
- It uses large amounts of data in different formats, either through manual input or automated sensors — probably AI.
Does it update the data it uses over time?
- The data it uses is static — probably not AI.
- It doesn’t update with new data very often — probably not AI.
- It updates itself with new data in close to real time — probably AI.
Does it adapt its decision-making logic over time?
- Its underlying decision-making logic doesn’t change — probably not AI.
- Its underlying decision-making logic only changes with scheduled updates — probably not AI.
- It iteratively improves its decision-making logic in close to real time to the point that it is nearly impossible to understand how it reaches a given output — probably AI.
Does it adjust for possible biases?
- It doesn’t attempt to assess or measure potential biases — probably not AI.
- It doesn’t automatically adjust for biases, even if it sees them — probably not AI.
- It proactively measures and adjusts for potential biases — probably AI.
Of course, organizations first need to identify the right problems to solve and only then try to determine whether AI/machine learning techniques are the right solutions to those problems.
AI can be very useful for solving challenging business problems, yet the actual percentage of use cases where AI is significantly better than simple data science, or human insight, is quite low.
In most cases, the best insights can be generated using the simplest tools.
Never let the tool dictate how you will solve a problem.
But if you decide you need AI, then make sure the product you’re building or buying fits the bill.
About the authors:
Michael Wade is a professor of innovation and strategy at IMD Business School in Switzerland, where he holds the Cisco Chair in Digital Business Transformation. His most recent books are Digital Vortex and Orchestrating Transformation.
Amit Joshi is a professor of AI, analytics, and marketing strategy at IMD Business School in Switzerland. An award-winning researcher and case writer, he works extensively with companies in telecom, financial services, pharma, and manufacturing.
Mark J. Greeven is a professor of innovation and strategy at IMD Business School in Switzerland and coauthor of Pioneers, Hidden Champions, Changemakers, and Underdogs (MIT Press, 2019).
Robert Hooijberg is a professor of organizational behavior at IMD Business School in Switzerland. He is coauthor of Being There Even When You Are Not: Leading Through Strategy, Structures, and Systems and Leading Culture Change in Global Organizations: Aligning Culture and Strategy.
Shlomo Ben-Hur is a professor of leadership and organizational behavior at IMD Business School in Switzerland. He is coauthor of Leadership OS, Changing Employee Behavior, and Talent Intelligence and the author of The Business of Corporate Learning.
Originally published at https://sloanreview.mit.edu
https://sloanreview.mit.edu/article/how-intelligent-is-your-ai/