AI for Supply Chain: Debunking the Myths – Part Two:

Six Common AI Myths Explained

Abstract

We debunk myths about AI vs. human thinking, AI autonomy, the role of data, open source AI, the role of data scientists, and AI’s impact on jobs.

Article

This article is an excerpt from the report AI for Supply Chain: Debunking the Myths.
The full report can be downloaded here.

In Part One of this series, we examined previously intractable supply chain challenges that are now within reach using AI/ML. Here in Part Two, we debunk some of the common myths.

What Is AI and Machine Learning?

So, what is AI and Machine Learning? AI is an umbrella set of technologies, from robotics to analytical systems. Within AI we have various subgroups such as machine learning, deep learning, natural language processing, robotics, and so on. (See side bar. For more in-depth definitions, read AI Definitions for Supply Chain.) For supply chain planners, machine learning can be thought of as having several areas of focus:

  • The development of algorithms, a richer level of statistical mathematics that can be used to discover trends, refine forecasts and optimization, and evaluate multiple options (see the sketch after this list).
  • The analysis of data to discover trends and patterns, such as consumer preferences, as well as to improve the quality of information.
  • The ability to learn — that is, to continually evaluate recommendations so as to improve outcomes over time.
  • The development of intelligent agents. These are small programs that can autonomously perform specific tasks such as alerting users to changes, initiating a search, or performing other directed tasks.
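
To make the first bullet concrete, here is a minimal sketch of what "refining a forecast" with machine learning can look like in code. It is purely illustrative: the library choice, feature names, and figures are our assumptions, not a prescription from any particular vendor.

```python
# Illustrative sketch: refining a weekly demand forecast with a learned model.
# The features and demand figures below are hypothetical example values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row of history: [week_of_year, promotion_flag, avg_temperature_F]
X_history = np.array([
    [1, 0, 35], [2, 0, 33], [3, 1, 36], [4, 0, 40],
    [5, 1, 45], [6, 0, 50], [7, 0, 55], [8, 1, 60],
])
units_sold = np.array([120, 115, 180, 125, 200, 140, 150, 230])  # observed demand

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_history, units_sold)

# Score two versions of next week: without and with a planned promotion
next_weeks = np.array([[9, 0, 62], [9, 1, 62]])
print(model.predict(next_weeks))  # compare baseline vs. promoted demand estimates
```

The point is not the specific model; it is that the algorithm picks up the demand pattern (here, the lift that comes with promotions) from the data rather than from hand-written rules.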

Now, let’s move on and debunk some of those myths!

Myth 1 – AI Systems Emulate How Humans Think

The definitions that say AI systems think like humans are myths in themselves. AI systems are not spontaneous and cannot emote. AI systems, even connected to many sensors, are not able to absorb the environment around them as we humans can with our five senses and the billions of neurons in our brain. The press is celebrating (or deifying) systems that can recognize faces. After a great while and millions of iterations, the computer can, voilà, identify a face.1
We humans can still do that better, learning in one or two “iterations.”

Although AI and machine learning developers are developing languages, algorithms, and techniques that can be applied to create software that performs tasks that people do, these capabilities are very primitive compared to people. Even picking up bags or boxes is a tough job for those clumsy robots.
We humans don’t even have to think about it.

Conversely, AI systems are probably better at remembering past experiences and improving systematically on past performance, whereas we humans often rely on our subjective memory and personal preferences. Coupled with machine learning’s ability to sift through mountains of data that we don’t have time for, this makes AI a valuable aid to the supply chain analyst.

Will AI systems think like people? Researchers are working on what is called artificial general intelligence, whose goal is systems that can act and adapt to new environments the way people do. But no system has achieved that yet. Hollywood meets the supply chain? Not just yet.

Myth 2 – AI Is Unseen and Will Execute Without My Approval

One of the biggest fears users have is that the “system” will derive new ways of looking at things and then execute without their review or approval. Yes, we do want more aware and insightful systems, but we may not want them acting on their own — not initially, anyway.

Think of this: millions of orders are processed through our ecommerce systems with virtually no human involvement. And we trust these technologies to do this for us every day. They are trained to do that.

Just as a well-trained employee does not need someone looking over their shoulder all the time, neither does a good IT system. It’s the training!

Training consists of lots of testing and validation of results and, over time, allowing the technology to automatically perform certain tasks. Just as with traditional system implementations, it is the supply chain teams that will be directly approving the algorithms and data and applying them to the specific problem sets they want to address. In planning applications, the systems will probably present recommendations which, through learning, should get better and better.
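
To picture that earned autonomy, here is a minimal sketch with an assumed accuracy threshold and hypothetical names; in practice, the gate would be whatever metric and bar the supply chain team decides to measure and approve.

```python
# Illustrative only: gate automated execution behind a validated track record.
# The metric, threshold, and names here are assumptions, not from the report.
AUTO_APPROVE_ACCURACY = 0.97  # accuracy the team requires before granting autonomy

def route_recommendation(recommendation, validated_accuracy):
    """Auto-apply a recommendation only if its validated accuracy clears the bar."""
    if validated_accuracy >= AUTO_APPROVE_ACCURACY:
        return ("auto-applied", recommendation)
    return ("queued for planner approval", recommendation)

print(route_recommendation({"sku": "A100", "reorder_qty": 240}, validated_accuracy=0.93))
print(route_recommendation({"sku": "A100", "reorder_qty": 240}, validated_accuracy=0.98))
```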

Where ML becomes particularly useful on its own is in exploring new, big and/or unstructured data to look for associations and patterns. With mega-computing power, it can do this in background processing. As we use machine learning, the results will get better, we will trust them, and then we will allow a level of autonomy. Just like parents do with their children, users will determine the level of autonomy they are willing to permit.
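
As a hedged example of that background pattern hunting, an unsupervised method such as clustering can group orders without being told what to look for. The feature names and values below are made up for illustration.

```python
# Minimal sketch: letting an unsupervised model surface groupings in order data.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical order features: [order_size_units, days_to_deliver, return_rate]
orders = np.array([
    [10, 2, 0.01], [12, 2, 0.02], [11, 3, 0.01],
    [200, 7, 0.15], [180, 8, 0.12],
    [50, 4, 0.05], [55, 5, 0.04],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(orders)
for order, label in zip(orders, labels):
    print(f"cluster {label}: {order}")  # clusters hint at distinct ordering patterns
```

A job like this can run overnight and simply flag the groupings that look new or unusual for a planner to review.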

Myth 3 – AI Is Smart Algorithms, so If I Buy Them, I Can Just Apply Their Intelligence

Not without the data! A machine learning application is as much about the data as it is about the algorithms, if not more. Algorithms without data are like having the finest all-clad, copper-core cookware without high-quality ingredients. You won’t get the results.

Applied machine learning is really a combination of components: quality data, high-performance hardware, and the algorithms, all working together. All are extremely important and need proper focus by users and IT to get results (see side bar, AI/Machine Learning Market Offering Options).

There is a skill, an expertise, in the field of AI/ML that is the science of data: the data itself, its meaning, and its quality.2 There are machine learning modules that are trained to source, clean, curate, and categorize data so that it can be easily accessed and utilized by the user community.
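
What that curation work looks like in practice is routine, scriptable cleanup. Here is a minimal sketch using pandas, with hypothetical column names and values; a trained module would do this same kind of thing at much larger scale.

```python
# Minimal sketch of automated data curation on an item master; columns are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "sku":            ["A100", "A100", "b200", None],
    "category":       [" widgets", "Widgets ", "gadgets", "gadgets"],
    "lead_time_days": ["5", "5", "seven", "3"],
})

clean = (
    raw.dropna(subset=["sku"])                      # drop rows missing the key field
       .assign(sku=lambda d: d["sku"].str.upper(),  # standardize identifiers
               category=lambda d: d["category"].str.strip().str.lower(),
               lead_time_days=lambda d: pd.to_numeric(d["lead_time_days"],
                                                      errors="coerce"))
       .drop_duplicates(subset=["sku"])             # remove duplicate records
)
print(clean)
```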

Myth 4 — Since There Is So Much Open Source, I Can Build an AI System from Scratch

Yes. But not very fast. And not very cheaply. Essentially, it is the human labor that accounts for most of the cost in technology transformations (see side bar at right, Build Your Own AI/ML System).

Yes, there are open source algorithms, rules libraries, and even API libraries. And of course, we assume our IT staff has the technical skills to deal with them. But they also have to understand them and be able to modify that code for our use and ensure its quality and production worthiness.

Additionally, the business counterpart has to know what they want, i.e., what is the project’s objective? Bringing the understanding of their respective fields, IT and the business will need to make a lot of decisions. And they have to be prepared to arrive at a different end point than they expected.3 This may introduce uncertainty and, therefore, make organizational support challenging.

In looking at many custom AI/ML projects within an enterprise, we can see that building an AI system is a departure from traditional software development. Yes, there is the code and the data. But in building an AI/ML system, IT may need to use different programming languages with new kinds of hardware choices that process different kinds of data (analog, digital, structured or unstructured, data streams etc.), utilize micro-servers for pre-processing, or build nodes and neurons.

These kinds of choices (see side bar at right) are often part of the AI/ML development process, something the typical IT programmer, who is more used to writing APIs or developing reports using a BI package, may not be too familiar with.4

And then there is the data. What data to source? How to organize it to use in the process? It’s a bit tougher than it looks. Thus, we are seeing an increasing number of requests from the business community to their technology providers to include more curated data sources.

What might that new custom AI/ML query or routine look like to the developer? It might require three or more new programs built with all those tool sets5 and an accompanying data lake that can store vast quantities of raw data in source formats, as well as various relevant formats. (See side bar, Build Your Own AI/ML System?)

That is a lot of work. And a lot to learn. Again, that might be new territory for the traditional IT programmer. It can be learned, but it takes time.

Myth 5 — So, If AI Is So Hard to Understand, I Will Need to Hire Data Scientists

Maybe. There is a lot to learn. End-users and IT should make it a priority to understand the AI/machine learning segment — what it is, why it is needed, and the particulars of certain types of data and analytical methods so they can apply them to their specific areas. But there are still those two paths emerging — build or buy. (See side bar of AI/ML Offering Options, at right.)

Actually, for supply chain pros and the challenges we deal with every day, machine learning algorithms could look like a somewhat familiar, albeit new, home for our queries.

An example could be weather. A hurricane has been forecast at category 3 or higher. Do we stock more water a few days before, incurring extra warehouse costs, or spend more later on expedited transport?
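
To make that trade-off concrete, here is a back-of-the-envelope sketch. Every number in it is a made-up assumption; the point is only the shape of the comparison a planner (or a model) is weighing.

```python
# Back-of-the-envelope sketch; all figures below are invented for illustration.
p_hit = 0.6                   # assumed chance the category-3 storm actually hits us
extra_warehouse_cost = 8_000  # cost of pre-stocking water a few days early
expedite_cost = 25_000        # rush transport cost if we wait and the storm hits
lost_sales_penalty = 15_000   # stock-outs we still eat if expedited supply arrives late

cost_prestock = extra_warehouse_cost
cost_wait = p_hit * (expedite_cost + lost_sales_penalty)

print(f"pre-stock now:     ${cost_prestock:,.0f}")
print(f"wait and expedite: ${cost_wait:,.0f} expected")
print("decision:", "pre-stock" if cost_prestock < cost_wait else "wait")
```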

Another, almost daily, challenge in omni-channel retail is merchandise allocation. Will customers shop at the local store on a specific holiday? Complicating this decision is a weather forecast for snow, which may push customers to shop online instead. In these, and so many other scenarios, at some point we need to take an action. We want it to be based on our best choice.

These are non-linear problems.6 That is, there are continuously shifting outcomes based on lots of variables: if/then, if/then, if/then, with the “ifs” changing very frequently. We understand these types of problems quite well, so learning to apply an algorithm is not a bridge too far for many supply chainers. Or you can hire that data scientist, probably a green college grad whom you will then train in supply chain. If you are really lucky, you’ll find someone with both types of experience, but they will surely command a higher paycheck.
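
For those who do want to try applying an algorithm themselves, a small decision tree is one approachable way to learn those if/then rules from past events. Everything below (the features, the past outcomes, the thresholds) is hypothetical; it is a sketch, not a recommended model.

```python
# Sketch only: fit a small decision tree to past storm events to learn the
# pre-stock vs. wait decision. Features, labels, and values are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per past event: [storm_category, days_of_notice, on_hand_days_of_supply]
past_events = [
    [3, 4, 2], [4, 2, 1], [2, 5, 6], [5, 3, 1], [1, 6, 8], [3, 2, 3],
]
decisions = ["pre-stock", "pre-stock", "wait", "pre-stock", "wait", "pre-stock"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(past_events, decisions)
print(export_text(tree))          # the learned if/then rules, in readable form
print(tree.predict([[3, 3, 2]]))  # recommended action for a forecast category-3 storm
```

The appeal of a tree here is that its output is exactly the if/then language planners already speak, so the recommendation can be inspected before anyone decides to trust it.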

A more linear option might be to use a supply chain package with AI/ML already built in. That might be a shorter route to achieving your goals: a solution provider who has already packaged many of the supply chain questions you will be asking and has confronted the many types of data you will want to apply.7

However, this does not absolve the organization of the need to understand the technology and take responsibility for the results.

Myth 6 — Last, but Surely Not Least: AI Will Take My Job Away

Take it away? No. Change it? Yes. AI/machine learning will broaden our view. There is a lot more to understand — all those causals, that changing world we are experiencing. Hence, we do need new insights and methods so we can learn more about the market, the customers, the environment in which products are sold or used, and how needs will change over time. Learning more will drive change in everything — from our product and service offerings to our data and planning platforms, and even the nature of our enterprise structure. Already, adapting to these changes is a great challenge.

In spite of the drama, though, AI/machine learning doesn’t have to be threatening. Nor does AI/ML have to be rocket science. Think of all those repetitive, icky tasks such as data cleansing, tracking down errors or missed data feeds, and cranking out and double-checking reports across disparate systems; and planner tasks such as reviewing inventory levels, re-order points, and lead times, and finding alternative available supply. These and so many other tedious tasks are burdensome. AI/ML can autonomously review vast amounts of data, improve its quality, and update it with more accurate and dynamic content.

We could benefit from this kind of automation, don’t you think?

Machine learning can keep track of constantly changing data and analyze it to determine if it is important enough for anyone to even care about. For example, if there is a major fluctuation in demand, we care about that and want to know about that ASAP. But for forecasting standard replenishment items that barely change from week to week, you can let the system decide.
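
A minimal sketch of that triage, using an assumed 30% week-over-week threshold, might look like the following; real systems would use smarter statistics, but the division of labor is the same.

```python
# Illustrative triage: alert planners only on sharp demand swings; let the system
# quietly re-forecast the stable items. The threshold and figures are assumptions.
import numpy as np

ALERT_THRESHOLD = 0.30  # flag anything that moved more than 30% week over week

weekly_demand = {
    "bottled_water": [100, 105, 98, 310],  # sudden spike: a planner should look
    "paper_towels":  [80, 82, 79, 81],     # stable: leave it to the system
}

for item, history in weekly_demand.items():
    prior, latest = np.mean(history[:-1]), history[-1]
    change = abs(latest - prior) / prior
    status = "ALERT planner" if change > ALERT_THRESHOLD else "auto-replenish"
    print(f"{item}: {change:.0%} change -> {status}")
```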

In these types of scenarios, we can think of AI/ML as a great supporting player — not the “guy” who will take over your job. Change it — yes. Automation of this kind can free up time from the mundane and facilitate taking on deeper analytic activities. The Supply Chain Planner becomes a Supply Chain Scientist. We really are scientists already, since we can glibly spout statistics, formulas, forecast methods, and probability curves. AI/ML will add to that knowledge.

Yes, AI/ML will change things. Yet, it could provide that opportunity to finally ask some new questions and get the answers to those intractable supply chain challenges. And it just may free up some time to learn about AI/machine learning and upgrade your skills.

___________________________________________________________________________________

1 And, in fact, not always accurately or without bias.
2 Read: The Data Scientists, Software Engineers and Data Managers
3 From our perspective, that should be an objective, if you will, since often the AI/ML effort is to discover new things about the supply chain.
4 This is why the competition is on for more data scientists and AI-specific computer science grads.
5 Just to access, filter, format, and store data, or build neural nets, and test and validate the data before you can create insights. Then, perhaps, another program would be needed to see how it maps to the forecasts and sales results, and test the validity of the trend that was uncovered.
6 Nonlinear system
7 On that point, an interesting read is The Case for Buying a Business Analytics Package


