This article is an excerpt from the report: Supply Chain Networks Revealed — Part Two: How They Work
This is the first installment of part two of our Supply Chain Network series. Part One of this series defined what a supply chain network is, looked at the different approaches we’re seeing to solve multi-party challenges, and examined how a supply chain network can support end-to-end, demand-through-fulfillment processes. Part Two focuses on the technology and the differentiation between supply chain application networks in the market. Understanding these differences is essential to making good choices when evaluating technology.
How Do Networks Do It? What Are the Differences?
Supply chain application networks in the market today have some history, which is important to understand because it is foundational to their approach to development and architecture. As discussed, there are two fundamental approaches, Integrator Networks (INs) and Real-time SVoT Networks (RSNs).
Networks: Two Histories
INs typically got their start as exchanges or private networks (hubs): portals, industry-specific exchanges, or commerce sites for ecommerce or procurement. This means that although the tenants share a common application, each hub is a private network isolated from the other tenant networks.
To grow, INs often make acquisitions. This adds functionality and customers to the fold. However, the applications acquired were built with different architectures, data models, and security protocols. Thus, these firms face the challenge of integrating the new data, code, and customers into their infrastructure. They address this problem by “wrapping” the applications with modern technology. Applications are invoked through application-to-application software and a common, modern UI that provides a consolidated user experience. They also leverage the B2B services within the platform to facilitate integration between tenants on the platform as well as external entities.
In this scenario, there are usually several processing engines, integrated collections of applications, or communities beneath the UI. For example, the demand application will solve for accuracy and trade-offs between customer service and inventory levels, then pass a message to the logistics application to look for carriers.
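The hand-off described above can be sketched in a few lines of code. This is a simplified illustration of an application-to-application message between two engines beneath a common UI; all names (`EngineMessage`, `demand_engine`, `logistics_engine`) are hypothetical, not any vendor’s actual API, and both engines are stubbed.

```python
from dataclasses import dataclass

# Hypothetical envelope passed between the wrapped applications.
@dataclass
class EngineMessage:
    source: str    # engine that produced the result
    target: str    # engine that should act on it
    payload: dict  # result data, e.g., a demand plan

def demand_engine() -> EngineMessage:
    # Solve the customer-service vs. inventory trade-off (stubbed),
    # then hand the resulting shipment need to the logistics engine.
    plan = {"sku": "A-100", "qty": 500, "ship_by": "2024-07-01"}
    return EngineMessage(source="demand", target="logistics", payload=plan)

def logistics_engine(msg: EngineMessage) -> dict:
    # Look for a carrier to cover the planned shipment (stubbed).
    return {"carrier": "best-available", "qty_covered": msg.payload["qty"]}

booking = logistics_engine(demand_engine())
```

The point of the sketch is the seam: each engine keeps its own logic and data model, and only the message envelope is common, which is what the “wrapping” approach standardizes.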
An IN that has acquired several applications within the same functional domain may re-architect the adjacent applications into a richer, single application (for example, expanding from one transportation mode to multi-mode), rationalizing the data and code base. This provides opportunities to migrate to more modern technologies and eases the support burden and cost.
The RSNs, in contrast, have standardized on a single development architecture with a philosophy of single instance/shared data and a single processing engine/code base. In effect, they have traded the short-term acquisition of customers for the supportability and holism of a single-engine approach.
These distinctions are important, as the examples above show. As we move from single to multi—multi-mode, multi-partner—we expand our view and must include more and more parties and data. Everyday examples abound in multi-party objectives. Drop ship, for example, which requires precise coordination, involves three or more parties: the customer, the seller, and fulfillment entities such as the warehouse, the manufacturer, and the transport provider.
So, what key architectural/technical elements should the buyer look for?
The raison d’être of networks is the ability to operate an optimized set of activities between parties. That ability can take several forms, which are reflected in the underlying structure and services of a particular network and in the way it may be implemented in an instance. There are several important concepts at work here that we will explore:
- The Database and Master Data Management (MDM)—the data model, how it is created and managed
- The Processing Engine(s)—planning, executing, optimizing, and analyzing
- Integration—how it is achieved, both business to business and application to application
- Capabilities—tools, applications, and the needed data, itself
These and other attributes are summarized in Table One in Part 2B.
The Data: Our Greatest Challenge, Our Greatest Asset
Sharable and usable data across the chain is essential to communicate and operate. Yet, it remains one of the biggest challenges in modern supply chains. Foundational, then, are the techniques we use to create, access, cleanse, and understand data.
Common Master Data Management — supply chain management cannot be achieved without consistent data; yet consistent data has been the specter of the inter-enterprise world, standards notwithstanding!
Master Data Management has been difficult to achieve even within a single enterprise. It is just too hard to keep up with all the changes in day-to-day business data, to say nothing of changing regulations, standards, and the constant expansion of what we consider supply chain data.
This issue has been particularly burdensome for suppliers and service providers. Having the supply-chain network provider manage the industry data model on behalf of the whole community as well as the translation between participants is a real boon.
Importantly, modern dynamic networks accept that there may be unique requirements and changes, so extensibility of the data model and a dynamic request/consent handshake or permissions should be included in the services provided by the MDM tools.
There is so much dynamism in the chain today that formal data change management is not always practical. A transaction—order, confirmation, and so on—may have within it a unique data format. Rather than rejecting the transmission, a permission can accompany the transaction, requesting the use of or change to a specific element. This can be a one-time or an on-going exception. In this way, the business flow is not disrupted.
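A minimal sketch of this request/consent pattern follows. The field names (`permission_request`, `scope`, and so on) are illustrative assumptions, not the format of any specific network; the idea is simply that a transaction carrying a nonstandard element also carries the request to use it, so the flow is accepted rather than rejected.

```python
# A transaction carries a permission request for a nonstandard
# data element, so the business flow is not disrupted.
order = {
    "order_id": "PO-7781",
    "qty": 120,
    "pallet_spec": "EUR-6",      # element not in the shared data model
    "permission_request": {
        "element": "pallet_spec",
        "scope": "ongoing",      # or "one_time" for a single exception
        "reason": "EU plant requires pallet spec on inbound orders",
    },
}

ALLOWED_EXTENSIONS = set()       # consents granted so far

def accept(txn: dict) -> bool:
    req = txn.get("permission_request")
    if req:                      # grant the exception instead of rejecting
        ALLOWED_EXTENSIONS.add(req["element"])
    # Accept when every nonstandard element has a consent on file.
    standard = {"order_id", "qty", "permission_request"}
    extras = set(txn) - standard
    return extras <= ALLOWED_EXTENSIONS

ok = accept(order)
```

In practice the grant step would be a review or policy decision rather than an automatic add, but the shape is the same: consent travels with the transaction instead of blocking it.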
Database — This contains the actual data, built on the MDM. The data-instance options offered today are single-tenant (private data), multi-tenant (shared database), or an overarching network tenancy (shared database, plus tenants have private data). For enterprise-oriented challenges such as finance or proprietary design, the private, single-tenant option may be desired. In one-to-one communication of plans in the supply chain, we can usually do pretty well. However, add dynamism—many changes—and partners are left debating whose version is the truth.
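To make the three tenancy options concrete, here is a schematic sketch. It is illustrative only, not a real schema, and shows why private copies invite the “whose version is the truth” debate while a shared record does not.

```python
# Illustrative only: the same order record under each tenancy model.
order = {"id": "PO-1", "qty": 100}

# Single-tenant: each party keeps its own private copy.
single_tenant = {"buyer_db": dict(order), "seller_db": dict(order)}

# Multi-tenant: one shared database holding the one shared record.
multi_tenant = {"shared_db": order}

# Network tenancy: the shared record, plus private per-tenant data.
network_tenancy = {
    "shared_db": order,
    "buyer_private": {"PO-1": {"budget_code": "CAPEX-9"}},
    "seller_private": {"PO-1": {"margin": 0.18}},
}

# A change made once to the shared record is seen by every party...
order["qty"] = 90
# ...while the private copies silently diverge from it.
diverged = single_tenant["buyer_db"]["qty"] != multi_tenant["shared_db"]["qty"]
```

With a shared record there is a single version of truth by construction; with private copies, keeping versions aligned becomes an integration and reconciliation problem.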
In Part 2B of this series, we look at integration, how the networks can be extended, and a comparison of enterprise applications vs. integrator networks vs. Real-time SVoT networks.
2 The last few years have also seen huge investments in development by ERP and supply-chain firms to develop integration layers that include A2A and B2B.
3 E2open is an example of this, where the many networks use B2B.
4 The leaders in this sector use intelligent agents to act on the data, complementing the algorithms.
5 One Network and GT Nexus, now called Infor Nexus, are examples of this.
6 Supporting the standards by which an industry and/or ecosystem operates, for example, automotive, defense, retail, and so on. These can include master data, business logic, and specific rules and algorithms common to that industry.
7 For example, “please use this data format for this order,” or “every time you do this particular task, please substitute this definition,” and so on.
8 Debates on service levels, chargebacks, pricing, and costs abound in most industries.
9 INs and RSNs also provide enterprise control towers, since, in practice, most companies have on- and off-network applications, data, and partners. In practice, the major supply chain application networks will support various customer or industry requirements.
10 You may ask: if the data is in a real-time database, how is it kept up to date when sources are not part of the network? Most regular sources will, of course, update their partners when things change. Beyond that, to ensure the best possible data, practitioners apply several techniques, such as machine learning, to evaluate data quality and the regularity of feeds. When a source system is unavailable or of poor quality, a method called mean perturbation can be used: the software knows that a change in the state of the data should be occurring and assesses what the data should be (determined over time with experience, i.e., machine learning). These techniques are used to analyze the data and supply the database with improved data quality.