This article is an excerpt from the report Agile Demand-Supply Alignment — Part Two: Evaluating ADSA Solutions.
A copy of the full report can be downloaded here.
In Part 2B, we presented a framework for understanding Agile Demand-Supply Alignment solutions, with questions to ask solution providers about how they detect, contextualize, prioritize, predict, and prescribe solutions for demand-supply imbalances. Here in Part 2C, we look at the characteristics of a multi-enterprise network architecture and examine the differences between a ‘Visibility-only Control Tower’ and a ‘Supply Chain Application Network.’
Multi-Enterprise Network Architecture — Foundation for ADSA and More
The architecture and infrastructure a solution is built on determine much of what it can and cannot do. Most of the systems included in our research are architected as multi-enterprise platforms, but most enterprise software is not: virtually all major ERP systems and most best-of-breed applications are architected for use within a single enterprise. Single-enterprise and multi-enterprise architectures differ substantially in areas such as:
- Shared ‘Single-Version-of-the-Truth’ — The most central difference is where data is stored. In single-enterprise systems, each enterprise has its own copy of the data and uses messages between trading partners to keep those copies in sync. For example, a buyer’s ERP system generates a PO which is sent to the supplier and becomes a sales order in the supplier’s ERP system. Now there are two copies of the data about the same order. The process of exchanging data creates delays and errors.1 In a multi-enterprise network system, there is a single copy of the data shared by everyone on the network, with appropriate security controls so parties can only see data they are authorized to view. This means everyone is looking at the same data. When the buyer uploads a PO onto the network, there is a shared copy of that order that both the buyer and supplier use for execution. Depending on the scope of the multi-enterprise platform, a single-version-of-the-truth is also maintained for data about inventory, demand forecasts, logistics and shipment data, manufacturing status, and so forth. Since this data is being used to drive multi-enterprise execution, its accuracy and timeliness are critical. Ideally, data is near-real-time and quality-check mechanisms are in place.
- Master Data Management — A multi-enterprise network needs to accommodate a much higher velocity of changes to master data, coming from many different companies, with mechanisms that allow enterprise-specific variants while still ensuring harmonization of the master data coming from all the different enterprises on the network.
- Security — There must be mechanisms for data owners to grant access rights based not just on a person’s role, but also the trading partner relationship (does the person work for a supplier, a customer, the government, etc.). This typically includes the ability to grant administrator rights to someone within the trading partner organization, so that they can do self-service onboarding and offboarding of users within their own company.
- Process Flows — Multi-enterprise platforms embody multi-enterprise processes and workflows which may involve not only a buyer and seller, but also 3PLs, carriers, inspectors, banks, insurance companies, customs agencies, and more. This will include partner-specific sub-workflows for each participating party.
- Network of Trading Partners — Multi-enterprise platforms often have pre-connected networks of trading partners. Participants are typically onboarded on an as-needed basis, as each new customer brings their trading partners (suppliers, customers, and service providers) onto the network. Often a trading partner network is concentrated in a particular industry. Most networks rely on customers to vet their own trading partners, but some solution providers do vet, audit, and certify a subset of suppliers on the network.
- Onboarding — Network platforms need mechanisms to streamline onboarding2 of new trading partners to the maximum extent possible.
- Data Quality — Transaction flows are the lifeblood of business activities between companies. Multi-enterprise platforms should have the ability to monitor the quality, integrity, and timeliness of data flowing through the platform to immediately correct malformed or missing data. This is described in more detail in the section below, Data Management and Monitoring.
- Integration — The platform needs to integrate with existing internal applications as well as with trading partners and their applications.
- Analytics & Optimization — Most existing optimization engines focus on a single functional area such as transportation route optimization, manufacturing production optimization, inventory optimization, or warehouse labor optimization. Multi-enterprise, network-level optimization is inherently different, as it requires coming up with globally optimal answers, incorporating orders of magnitude more data. This is needed to truly understand the network-wide tradeoffs and prescribe optimal resolutions to issues at enormous scale and complexity.
This last point on cross-functional, multi-enterprise analytics and optimization is an important one in the context of ADSA. Some form of analytics or intelligence is needed for almost everything an ADSA platform does, including predicting future shortages and overages, prioritizing issues, prescribing resolutions, and comparing the impacts of different disruptions and resolutions. This has implications for the platform architecture and where functionality resides.
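The shared single-version-of-the-truth and partner-relationship-based security described in the list above can be illustrated with a minimal sketch. All names, fields, and visibility rules here are hypothetical, not drawn from any vendor's actual data model; the point is simply that one shared record exists per order, and each party's view is scoped by its relationship to that order.

```python
from dataclasses import dataclass, field
from typing import Optional

# Toy model of a multi-enterprise network: one shared copy of each order,
# with visibility scoped by trading-partner relationship (hypothetical rules).

@dataclass
class Order:
    order_id: str
    buyer: str                     # company that issued the PO
    supplier: str                  # company fulfilling it
    carrier: Optional[str] = None  # 3PL/carrier, if assigned
    status: str = "OPEN"

@dataclass
class Network:
    orders: dict = field(default_factory=dict)

    def upload_po(self, order: Order) -> None:
        # One shared record: no buyer-side and supplier-side copies to sync.
        self.orders[order.order_id] = order

    def visible_orders(self, company: str) -> list:
        # A party sees an order only if it participates in it,
        # as buyer, supplier, or assigned carrier.
        return [o for o in self.orders.values()
                if company in (o.buyer, o.supplier, o.carrier)]

net = Network()
net.upload_po(Order("PO-1001", buyer="AcmeRetail", supplier="WidgetCo"))
net.upload_po(Order("PO-1002", buyer="AcmeRetail", supplier="GizmoInc",
                    carrier="FastFreight"))

# The buyer sees both orders; the carrier sees only the one it carries.
print([o.order_id for o in net.visible_orders("AcmeRetail")])   # ['PO-1001', 'PO-1002']
print([o.order_id for o in net.visible_orders("FastFreight")])  # ['PO-1002']
```

Because both parties read and write the same record, a status change made by the supplier is immediately part of the buyer's view as well; there is no message round-trip to reconcile.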
Visibility-only Control Tower vs. Supply Chain Application Network
Initial iterations of supply chain control tower technology were created to provide ‘end-to-end’ visibility across functions and across a multi-enterprise supply chain by being a central collection point for a wide array of disparate information from many systems and sources. This data is then displayed in supply-chain-wide dashboards, providing an overview of purchase orders, production, shipment, inventory, and in some cases demand. A Visibility-Only Control Tower does not do any planning, optimization, or execution; it relies on other existing systems to provide those functions. In contrast, a Supply Chain Application Network has all the data gathering and synthesizing capabilities of a control tower, but also has built-in capabilities to perform multi-enterprise planning, optimization, and execution. Networked platforms are designed with the flexibility to allow some of the functionality they normally perform on the platform to instead be performed off-platform,3 thereby not forcing users to switch away from the planning and execution systems they already use.
| Type of Platform | Purpose | Integration | Functional Scope |
|---|---|---|---|
| Visibility-only Control Tower | Provide end-to-end visibility. | Loosely integrated enterprise applications (off-platform), trading partners, 3rd party sources. | Limited to visibility and simple alerts. |
| Supply Chain Application Network | Provide end-to-end visibility, planning, and execution. | Tightly integrated, built-in on-platform functionality and loosely integrated off-platform enterprise applications, trading partners, 3rd party sources. | Depends on the platform. Some focus on production, others logistics/GTM, demand/channel management, etc., or combinations of these. |
Table 1 – Visibility-only Control Tower vs. Supply Chain Application Network
What’s in a Name?
Rather than seeing a black-and-white distinction between visibility-only control towers and supply chain application networks, solutions can be viewed as sitting along a spectrum from less to more functionality and intelligence embedded in the platform. To understand this spectrum, it is useful to look at what data is needed, what intelligence is needed, and where that data and intelligence reside.
On-Platform vs. Off-Platform Data and Intelligence
As described in Part 2A in Figure 2 – Many Organizational Functions and Systems Required to Achieve ADSA, data needs to be ingested from many different sources. For data sources that are already built into the platform (i.e., data that is part of the built-in planning and execution functionality), the integration is inherent, saving not only a lot of upfront integration effort, but ongoing maintenance of those integrations. Reducing the number of external integrations required increases the agility of the platform to evolve over time. For data sources that are external, the amount of integration effort, cost, and time required depends on the extent and nature of pre-built connectors available.4
Once the platform has the data, it can detect disruptions and deviations, but in order to prioritize those, it needs context. For example, to prioritize disruptions to materials flowing into a factory, the platform may need to know the production schedule and deadlines at that factory, which customer orders and revenue are dependent on each production run impacted by the delay, the value of those orders and potentially the lifetime value of each customer, options for rescheduling production, and so forth. Trying to bring in all that data and build the intelligence to understand it is daunting, due to the variety of semantics and the depth of logic needed to interpret it. The platform would be practically replicating much of the intelligence and logic already embedded in the external production planning and execution systems it is pulling the data from.
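As a toy illustration of the contextual prioritization just described, the sketch below scores an inbound-material disruption by the revenue of the dependent customer orders, weighted by a customer-importance factor and by how far the delay exceeds each affected production run's remaining schedule slack. The formula and field names are hypothetical, invented for illustration rather than taken from any platform.

```python
# Hypothetical sketch: score an inbound-material disruption by the value of
# the customer orders it puts at risk and the schedule slack remaining.

def disruption_priority(delay_days, affected_runs):
    """affected_runs: list of dicts carrying the production context the text
    describes: dependent order revenue, a customer-importance weight, and
    slack_days (buffer before the run misses its deadline)."""
    score = 0.0
    for run in affected_runs:
        at_risk = max(0, delay_days - run["slack_days"])  # days past the buffer
        if at_risk == 0:
            continue  # reschedulable within existing slack: no revenue at risk
        # Weight order revenue by a (hypothetical) customer-importance factor.
        score += run["order_revenue"] * (1 + run["customer_ltv_weight"]) * at_risk
    return score

runs = [
    {"order_revenue": 50_000, "customer_ltv_weight": 0.5, "slack_days": 1},
    {"order_revenue": 10_000, "customer_ltv_weight": 0.1, "slack_days": 5},
]
print(disruption_priority(delay_days=3, affected_runs=runs))  # 150000.0
```

Note that only the first run contributes: a three-day delay overruns its one day of slack, while the second run can absorb the delay entirely. A real platform would derive all of these inputs from the planning and execution context rather than hand-supplied dictionaries.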
A visibility-only control tower, which has no planning and execution functionality, does not inherently have the built-in intelligence and context required to prioritize and optimize. That doesn’t mean such a platform cannot do any prioritization. With the right engineering effort, a solution provider can accomplish simpler forms of prioritization by pulling in the right data from the source systems, provided the platform understands the semantics of that data, normalizes the values properly, and interprets them correctly for the purpose.
However, the scope and sophistication of prioritization and optimization that a visibility-only control tower can achieve is significantly constrained by the ‘reverse engineering’ work required and by the lack of standard ways of expressing things like customer priority, customer lifetime value, or production schedules. Thus, as a generalization, the less built-in planning and execution a platform has, the more rudimentary the prioritization or optimization it can provide. In contrast, a platform with rich planning and execution capabilities built in inherently has the intelligence to serve as a foundation for contextualizing and prioritizing the issues that arise.
On-Platform vs. Off-Platform Optimization
Once issues have been detected and prioritized, they need to be resolved. The ultimate goal is for AI/ML to prescribe the optimal course of action. Several solution providers have started down that path, though we are still early on that journey. Some platforms provide useful approaches that do not involve AI/ML-generated resolutions. For example, they may gather information to show the user alternate sources of existing inventory, alternate suppliers, expediting options and costs, tools to invite suppliers to collaborate on a solution, and so forth. Saving the user the grunt work of finding, gathering, and organizing all that information, and making it easy and convenient to convene a team of co-workers and partners to resolve the issues — these are huge productivity boosters.
Still, a more powerful approach is when the platform does the optimization work required to recommend the best resolutions. This enables supply chain professionals to make smarter, faster, more strategic decisions. Organizations often already have very capable optimization engines running within best-of-breed applications such as their existing TMS, WMS, production, and inventory optimization applications. However, these engines only optimize within their own domain. ADSA requires global optimization, where multiple optimization engines5 iterate collectively to find the best overall solution, across functions and enterprises. The vast majority of best-of-breed optimization engines are not designed to iteratively cooperate with other function-specific optimization engines to mutually analyze the tradeoffs. Some solution providers are building exactly that capability into their supply chain application network — the ability to iteratively optimize across multiple different functions and enterprise nodes to discover globally optimal solutions.
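One simple pattern for this kind of iterative, cross-engine optimization is coordinate descent: each domain engine re-optimizes its own decision while holding the other domain's latest decision fixed, and the loop repeats until neither engine changes its answer. The cost model below is purely illustrative (the numbers and tradeoffs are invented); real engines are vastly more sophisticated, but the coordination loop has the same basic shape.

```python
# Sketch of cross-functional coordination by coordinate descent: each
# "engine" optimizes its own variable given the other engine's latest choice.
# Costs and candidate values are invented for illustration.

BATCHES = [100, 200, 400]   # candidate production lot sizes
SHIP_FREQS = [1, 2, 7]      # candidate shipment frequencies (per week)

def total_cost(batch, freq):
    production = 20_000 / batch + 5 * batch             # setup vs. holding
    logistics = 300 / freq + 40 * freq * (batch / 100)  # consolidation vs. speed
    return production + logistics

def production_engine(freq):
    # Production optimizes lot size, given logistics' current plan.
    return min(BATCHES, key=lambda b: total_cost(b, freq))

def logistics_engine(batch):
    # Logistics optimizes shipping frequency, given production's current plan.
    return min(SHIP_FREQS, key=lambda f: total_cost(batch, f))

batch, freq = BATCHES[0], SHIP_FREQS[0]
for _ in range(10):  # iterate until neither engine changes its answer
    new_batch = production_engine(freq)
    new_freq = logistics_engine(new_batch)
    if (new_batch, new_freq) == (batch, freq):
        break
    batch, freq = new_batch, new_freq

print(batch, freq, round(total_cost(batch, freq), 1))  # → 100 2 930.0
```

With this toy cost model the loop settles on a joint plan neither engine would reach alone, since each engine's best answer depends on the other's choice. Coordinate descent finds a point where no single engine can improve unilaterally, which is not always the global optimum; that gap is one reason network-level optimization at scale is genuinely hard.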
In Part 2D of this series, we start to look at the functionality required by an ADSA platform, including sourcing, supplier/production management, quality management, logistics, and global trade functionality.
1 The delays mean each party has an out-of-date picture of what is happening. Errors cause incorrect execution and costly disputes about what actually happened. These problems are especially acute when manual processes are used for sending transactions back and forth — e.g., PDFs or spreadsheets are sent via email, with data entry on the receiving side. Even with EDI, there can be days of delays between the time a transaction is entered in one system until it finally appears in the trading partner’s system.
2 For example, some platforms allow self-service onboarding to be integrated into an auto-generated email when the platform is sending a document or transaction to the trading partner, such as an RFQ or an invoice. The trading partner clicks a link to do self-service onboarding and the platform pre-fills as much information as possible. This way the trading partner is confirming rather than entering data and the onboarding happens in a few minutes. The trading partner is motivated to complete the process at that moment in time because they want to respond to that specific business request or transaction — i.e., they want to bid on the project or get paid.
3 Some platforms do this division of labor more elegantly, allowing more granular division of responsibility with legacy applications and easier migration to the platform.
4 We are seeing three different levels of data integration for these platforms, with many platforms providing some hybrid combination of these: 1) the data is internal because it is part of the planning and execution application functionality of the platform, 2) the platform has pre-built connectors and data models to get external data, and 3) the platform does not have pre-built connectors but has mechanisms for creating new integrations to external data. For #1, there is no integration work required as the data already exists in the system. For #2, there is some integration work required (setting up and testing connections), but it is a relatively modest effort, and you can reasonably expect the vendor to update the mappings when the destination data changes. For #3, there is a significant amount of upfront work required to get a new integration to work and the user’s IT group is on the hook for maintaining that integration, which entails a lot of ongoing investment, unless the solution provider explicitly agrees to take on that responsibility (usually for a non-trivial fee).
5 In theory, a solution provider could build a single cross-functional, multi-enterprise optimization engine that works on an entire end-to-end supply chain model, but we know of no provider taking that kind of monolithic approach.