Introduction: A Digital Supply Chain?
There is data all around us: forecasts, orders, reports; customer, product, and supplier information; weather, traffic, and consumer social and mobile data; products in transformation and on the move; constantly changing regulations, customs compliance requirements, and trading partner information; social, visual, text, additive, geospatial! The list can go on and on. Much of that data may be valuable to us, but it needs to be findable, searchable, accessible, accurate in each instance, inclusive of its context (location and condition), understandable, and transformable — yes, this list can go on and on, too!
We also have systems: CRM, SRM, SCM, SCP, TMS, WMS, Accounting/ERP, and so on. However, many of the systems installed in the enterprise don’t have modern data or the tools to solve the broader challenges we now confront in an increasingly complex world.
A digital enterprise, a digital supply chain, or as we have been calling it of late, the networked enterprise, is a reality for many companies. But many of their information environments are stuck in the 1990s with enterprise-centric systems — flat-file concepts of information resident in difficult-to-integrate systems, printable reports, and millions of incongruent spreadsheets with unactionable, stale, and late data. All this generates more questions than answers. (This is not a math problem. We’ve got plenty of smart algorithms and code.)
It’s All About the Data
Changing Data Structures
Much of the data that enterprises run on, and the value derived from it, has been based on tabular (structured) data — rows and columns of numbers and letters.
There were two sources of data:
- Our own amassed tabular data or
- Tabular data transmitted from customers or trading partners,
which were used for action of some kind:
- Transaction, which is the activation of fairly linear instructions — go, add, search (an index), jump to next line and so on …
- Planning, assessing/evaluating, which often leads to the derivation of some new data, forecasts, plans — summaries or altered fields created by mathematical instructions (statistical/algorithmic applications).
Both transaction workflows and deriving a statistical forecast are based on choices we preselected. That is, a specific workflow was coded or a specific algorithm had been defined after careful thought and experience.
From a data perspective, managing in this world is still hugely challenging. Within the body of structured data are innumerable records and fields, all of which contain values and definitions that are constantly alterable — by the various business units and departments within our own enterprise, by partners, by technology upgrades that require change, by regulatory entities (standards bodies, governments), and by acquiring or being an acquired entity during M&A.
We constantly find ourselves requiring new data fields, too. And our definitions are often not in sync with the rest of the world; thus, they require techniques to translate data to be able to communicate and trade.
We already have all the appropriate data management tools, but to apply them to the above conditions of highly variable change is burdensome. In many ways, we have come to the limit of what a structured-data approach based on traditional enterprise systems can do for us. That is because our world and the things that impact our product, plant, property, equipment and people exist in a multi-dimensional, not a two-dimensional world. Welcome to the world of analog and unstructured data!1
Welcome to the World of Big Data
Things, Visual, Immersive, Augmented Reality
IoT — a smart, connected world — is rapidly growing around us. Our products, processes, conveyances, and people are becoming connected and having their intelligence “augmented” by cloud-based systems.
No doubt all the sensor data can ultimately be converted to a digital format,2 but each unique measuring device — a temperature gauge, a video camera, a weather vane and so on — has its own taxonomy and mechanisms to sense and collect data in a unique format, a method for conversion, and a method for transmission. Examples are vibrations from movement or acoustics, the chemical reactions at the core of heat gauges, or pressure (measuring movement — the inflation or deflation of a membrane — or the stress/resistance of the membrane), and so on. These are so commonplace that we don’t give them any thought. But what is not commonplace is the inclusion of all these data in our supply chain analytics. The human mind can easily think about all the types of sensing and data, but it is not as easy to collect all that data, absorb such massive volumes, and then use it.
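To make that conversion step concrete, here is a minimal Python sketch of an A/D-style quantization, plus the context (timestamp, location, unit) a reading needs to be useful downstream. The sensor ID, measurement range, bit depth, and location name are all hypothetical illustrations, not any particular device's specification.

```python
from dataclasses import dataclass
import time

@dataclass
class SensorReading:
    """A digitized sensor sample carrying the context needed downstream."""
    sensor_id: str
    kind: str        # e.g. "temperature", "vibration"
    value: int       # quantized digital value, in converter counts
    unit: str
    timestamp: float
    location: str

def quantize(analog_value: float, v_min: float, v_max: float, bits: int = 12) -> int:
    """Map a continuous analog value onto a discrete digital scale (A/D conversion)."""
    levels = (1 << bits) - 1                         # e.g. 4095 for a 12-bit converter
    clamped = min(max(analog_value, v_min), v_max)   # keep the value in range
    return round((clamped - v_min) / (v_max - v_min) * levels)

# A hypothetical 12-bit temperature converter over a 0-50 degree C range:
raw = quantize(21.7, v_min=0.0, v_max=50.0)
reading = SensorReading("temp-042", "temperature", raw, "counts",
                        time.time(), "DC-Chicago-dock-3")
```

The point of the sketch is that the raw number alone is nearly useless: the range, resolution, timestamp, and location must travel with it for the reading to be analyzable later.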
All the tech companies are touting that they have or support digital twins. It’s one of the new buzzwords. But what does it mean? (And it sounds like even more data to collect, store, and use!)
IEEE defines the digital twin this way:
A digital twin is a digital replica of a living or non-living physical entity. By bridging the physical and the virtual world, data is transmitted seamlessly, allowing the virtual entity to exist simultaneously with the physical entity.3 — IEEE
Let’s take a minute to unpack this definition.
“A digital replica of a living or non-living physical entity.”
OK. So this means I actually have to have a data definition (schema) that represents a thing, and a data vessel (database) to contain it.
Many enterprise systems don’t have a robust central repository with full definitions of things. They may have some kind of separate server for IoT, but it is not part of the enterprise operating system. It is adjunct to the core. Now put a thing in motion and the thing actually can change — people touch it; its location changes, and, therefore, its quality and condition; or even additional elements become a part of it. So that schema has to be expandable and flexible, but yet controllable, usable, and understandable.4
“Data is transmitted seamlessly — ” How is the data transmitted seamlessly? How does the physical entity stay in sync with its digital twin, the virtual entity? To accomplish this task, our data collection and integration methods have to be complete and also extendable, flexible and controllable.
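As a rough illustration of the two properties just discussed, an expandable schema and a sync mechanism, here is a minimal Python sketch: a twin record with a fixed core plus an extension map that grows as the real thing changes. The class, field, and event names are hypothetical; real digital-twin platforms are far richer than this.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class DigitalTwin:
    """Minimal digital replica: fixed identity plus an extensible attribute map."""
    entity_id: str
    core: dict                                    # agreed, shared schema fields
    extensions: dict = field(default_factory=dict)  # tenant/partner-specific additions

    def apply_event(self, event: dict) -> None:
        """Keep the virtual entity in sync: fold an observed change into the twin."""
        for key, value in event.items():
            if key in self.core:
                self.core[key] = value            # update a known field
            else:
                self.extensions[key] = value      # schema grows with the real thing

# A pallet leaves the factory; later a carrier adds vibration telemetry:
pallet = DigitalTwin("PLT-1001", core={"location": "factory", "temp_c": 4.0})
pallet.apply_event({"location": "carrier-truck-7", "vibration_g": 2.3})
```

The design choice the sketch makes visible: known fields stay controlled and understandable, while unforeseen attributes (a new sensor, a partner's custom field) land in a separate, inspectable extension area instead of breaking the schema.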
Consider AI and Machine Learning
AI, Neural Networks, Machine Learning, Complex Event Processing. These all sound so promising. We want them. But in order to leverage them and then to learn from the data, we need data — lots of data, good data. For example, in order to understand a complex event, we need lots of event data — SNEW (social media, news, event and weather data), sensor data, locations — all the modern supply-chain data. All the cool tools — best forecast algorithms, smartest neural nets and cleverest bots — are empty without great data.
Hence, it seems that our master data management technologies, programs, integration systems, Auto-ID, sensors, and visual devices may need an upgrade, to say nothing of our applications and analytics systems!
How Do We Solve the Data Problem?
Of course, covering this topic adequately could easily require a lengthy book called Solving the Modern Supply Chain Data Problem. But, in brief, here are a few essential capabilities that need to be part of our solution.
- Data Model and Master Data Management. The MDM should have:
- Network-tenant sharable and extensible data model.5 In supply chain, our systems need to be architected to support a shared, common multi-enterprise data model, while allowing for the dynamic creation of community-specific or company-specific customizations of and extensions to that data model.
- Inheritance—This is a concept of change and flexibility. In supply chain we often trade across many boundaries—industries, ecosystems, trading blocs, and large anchor partners who have their own compliance rules. So we may need to adopt, or inherit, the process, workflow, or data used in that instance, under that circumstance.
- Modern data—As we have stated, this includes a lot more than tables and rows (see side box).
- Data collection and integration:
- IoT/Auto-ID devices collect appropriate and applicable data across all supply chain processes from logistics, product integrity, build, sell, and service, and for customer insights and shopper assistance. (Read: Trends, Threats and Opportunities.)
- Open B2B and A2A platform. B2B is best served by using a third-party service that can maintain standards and data for the ecosystems in which we trade. A2A needs to be a flexible, open platform with standard libraries for the multitude of vendors we use, yet with the ability to support our own internal needs. Tools should also be included to extend or modify in a no- to low-code experience.
- Augmented Reality adds another dimension to our experiences and creates richer process and product data by collecting and providing knowledge and visuals for processes, instructions, products, environments, and so on.6
- Data Quality/Data cleansing. We can get a lot of data. But is it good? Is it relevant?
- Methods using machine learning can assess data quality and often cleanse it before it is imported into the database. (Read: Thinking Machines—Part Three/The Planning Department of the Future.)7
- Machine learning can evaluate the relevancy of data. What sources and people have demonstrated accuracy? Which events and specific data are relevant?
- Make it real-time. If we expect machines to work smarter—well, at least faster—than us, then the data has to be there in an instant. Real-time means continuously refreshing information from data sensing and planning, through execution.
- Supply Chain Network. As an old commercial said, “Leave the driving to us.” Managing supply chain data is a vast, tedious and overwhelming task. It is probably better to have a third party do this work for you. Supply Chain Network providers own the task of maintaining industry schemas, standards, data cleansing, and data availability in whatever format is required. The maintenance fee has an overwhelming ROI and allows an organization to move on to other projects they could not tackle before because the data got in the way.8
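As a stand-in for the machine-learning quality checks described in the list above, here is a minimal Python sketch that screens a batch of readings for suspect values before they are imported. A z-score cutoff is a deliberately simple proxy for a learned model, and the function name, sample values, and threshold are illustrative assumptions.

```python
import statistics

def screen_readings(values, z_cutoff=3.0):
    """Split a batch into (clean, suspect) readings before import.

    A simple z-score screen: readings far from the batch mean, in units of
    standard deviation, are held aside for review rather than loaded.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return list(values), []          # identical readings: nothing to flag
    clean, suspect = [], []
    for v in values:
        (suspect if abs(v - mean) / stdev > z_cutoff else clean).append(v)
    return clean, suspect

# A small batch of cold-chain temperatures with one implausible spike;
# a loose cutoff is used here only because the illustration batch is tiny:
clean, suspect = screen_readings([4.1, 4.0, 4.2, 3.9, 48.0, 4.1], z_cutoff=2.0)
```

In practice the interesting part is what the sketch omits: deciding whether a flagged value is a broken sensor or a real excursion is exactly where the learned relevance models discussed above earn their keep.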
Conclusion: Where We Will Go —
Many of us read the book Flatland,9 possibly prescribed by our mirthful math teacher in high school. In this book, first published in 1884, the main character, Square, presciently peers into the future (1999!), a world of three dimensions. For people living in the flat world of that time, it was hard to understand Square. We who live in this three-dimensional world today have no problem experiencing the three dimensions, but our systems and machines do.
Although to some degree it has become commonplace for certain machines and systems to be able to capture sensory data, convert it to digital format, and then ingest it into an analytical framework for humans to understand, in other ways, we are just beginning.
Most of us are intrigued by the new innovations — IoT, Augmented Reality, Machine Learning, autonomous/self-driving systems. And from a sales velocity perspective, these markets have been growing steeply.
However, there is a lot more to do than buy some technology. In fact, in survey after survey, users overwhelmingly declare that data issues are their biggest obstacle to successful implementation of these newer technologies. Think of this: beyond an ASN, the majority of suppliers aren’t using much EDI at all. So the goal of a digital supply chain, with even some of the modern data it entails, is just a dream in the mist.
Whatever the impact global supply chains and competition are having on your firm — and your career — we are only in inning two of a massive restructuring of the global economy. But it is already obvious that those who have seized each high-value technological advancement seem to always score well and move to the next round in the competition.
__ __ __ __ ___ __ __ __ __
Author’s note: I am not advocating technology adoption for tech’s sake, but so companies can receive the high value of better data and get the “inside track” with their partners and within their industry. Companies who are involved in technology and business consortia, and are part of supply chain networks,10 become preferred partners. Think of this: As ecosystems of enterprises operate their processes in multi-party networks, they have access to the rich intelligence that is amassed by their partners and by the data and services that are constantly enriched by the network providers.
Those users who are members of the user steering committees of the tech companies that are part of their portfolio are exposed to the best and the brightest ideas of the user community, as well as from the tech companies.
1 Even within an enterprise, there are large volumes of semi-structured or unstructured data. This includes emails, Word docs, PDFs, and all kinds of critical business-flow information not specifically captured within enterprise systems. As well, there are documents such as CAD files, specs, and plans of all sorts (business, marketing, project plans, etc.).
2 Note: Analog signals can be converted into digital signals, representing the continuous wave as a series of changing digital values at discrete points in time. This is known as A/D conversion. Later, the digital values can be converted back into an analog signal to drive a speaker or video display (D/A conversion).
4 We may want to know, as well, how this change of events (temperature, vibration, and so on) took place. A good example I heard was about a large retailer that was importing china. When it left the factory, the quality was good. But upon final arrival, some of the pieces were broken. So, where in the chain did the damage occur? The manufacturer and retailer agreed to put vibration sensors attached to GPS in each pallet. And voila! One of the carriers was often dropping some of the pallets. Food temperature is another example. (Read: Romaine Remains). As we can see, the condition of the real thing often changes. Modern merchandise planning systems, not just design systems or PLM, as well as catalog systems use lots of visuals — some simple artists’ renderings, and some pictures or videos. To the end-user these seem commonplace. However, getting these graphics/visuals and product data from design to build to end use is a pretty messy, multiple-system world with some unique databases that often don’t share the same data constructs.
5 A write-up on this will be featured in our multi-party supply-chain network report featured next month.
7 In this section on a New Adoption Lifecycle, we discuss the increased reliance on third-party data and services. These services curate/clean, analyze and provide information relevant to specific applications and scenarios.
8 Third-party data services are often partners with supply chain application networks and provided within the foundational environments, so users do not have to acquire and build their own libraries of data. Behind the scenes, in the cloud, there are a myriad of connections between third parties.