When selecting a spend analytics solution, it is useful to understand a few things about the solution's architecture, as architecture has a big impact on what the tool can and cannot do, what types of environments and problems it is suited to, and how much time and effort it will require. This is particularly important for companies with one or more of the following goals and characteristics:
- Have already reaped the ‘low-hanging fruit’ and are looking to drive additional savings and benefits
- Are looking to deal with a broad set of categories of spend with unique attributes, such as travel, contingent labor, services, and so forth
- Want to use the analytics platform to achieve broader goals beyond basic spend reduction, including enterprise-wide objectives (e.g. risk reduction, sustainability, diversity, lowering total cost to the organization, etc.)
- Have a need to analyze real-time data, rather than only historical data
We’ll look at the architectural attributes of spend analytics as follows:
- Analytic Architecture
- Data Ingestion, Integration, Cleansing
- Data Handling Capabilities
- Pattern Matching and Recommendations
Analytic Architecture
Data Warehouse vs. Flexible OLAP — Is the spend analytics solution based on an inflexible data warehouse technology (whether OLAP or not)? If so, it requires anticipating the lines of inquiry — i.e. types of queries, slicing and dicing, and roll-ups that the users might want to do with the data — and then IT needs to configure and build the appropriate structures to enable those queries. Users cannot do ad hoc exploratory queries that fall outside that predefined set. Most analysts will quickly bump into these limitations when exploring new ways to achieve spend reduction or other corporate goals. Fortunately, many solutions today are based on in-memory OLAP technology which allows for these types of ad hoc queries without the need to pre-configure the cubes. When evaluating solutions, buyers should consider current and future data size and needs in order to ensure that they are selecting a platform that provides the responsiveness and flexibility they need both now and in the future.
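To illustrate the difference, here is a minimal sketch using pandas as a stand-in for an in-memory engine; the data and column names are hypothetical. The point is that the roll-up below is composed on the fly, with no pre-built cube anticipating it.

```python
import pandas as pd

# Hypothetical spend records; a real evaluation would use the
# platform's own in-memory store, not pandas.
spend = pd.DataFrame([
    {"supplier": "Acme",   "category": "MRO",       "region": "EMEA", "amount": 120_000},
    {"supplier": "Acme",   "category": "MRO",       "region": "APAC", "amount": 45_000},
    {"supplier": "Globex", "category": "Logistics", "region": "EMEA", "amount": 310_000},
    {"supplier": "Globex", "category": "MRO",       "region": "NA",   "amount": 98_000},
])

# An ad hoc roll-up that nobody anticipated at design time:
# spend by category and region, pivoted on the fly.
pivot = spend.pivot_table(index="category", columns="region",
                          values="amount", aggfunc="sum", fill_value=0)
print(pivot)
```

With a rigid warehouse, any pivot not designed into the cubes would require an IT change request first.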

Single vs. Multiple Cubes — As described in our earlier article on ‘Applications of Spend Analysis,’ there are many different applications of spend analytics. Different applications usually require different dimensions of data and therefore different cubes or models. Some solutions support only a single cube, making it more difficult to perform the variety of analyses that mature organizations require.
(See Figure 1)
Data Ingestion, Integration, Cleansing
Data Ingestion Process — Is the system designed so that knowledgeable end users can configure and input (map, normalize, consolidate, cleanse) new data sources themselves, or does it require the IT department (or vendor) specialist to bring in the data? The ability for end users to set up and integrate new data enables ad hoc and novel analytics.
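As a rough illustration of user-configurable ingestion, the sketch below assumes a declarative column mapping that a knowledgeable end user could edit without IT involvement; the field names and cleansing rules are hypothetical.

```python
# Hypothetical source columns and canonical fields; in practice a
# power user would maintain this mapping through the tool's UI.
COLUMN_MAP = {              # source column -> canonical field
    "Vendor Name": "supplier",
    "GL Amount":   "amount",
    "Cost Ctr":    "cost_center",
}

def ingest_row(raw: dict) -> dict:
    """Map, normalize, and lightly cleanse one record from a new source."""
    row = {canonical: raw.get(source) for source, canonical in COLUMN_MAP.items()}
    row["supplier"] = str(row["supplier"]).strip().upper()        # normalize casing
    row["amount"] = float(str(row["amount"]).replace(",", ""))    # cleanse formatting
    return row

print(ingest_row({"Vendor Name": " acme corp ", "GL Amount": "1,200.50", "Cost Ctr": "C42"}))
```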
Data Variety — Can the system incorporate a wide variety of data types from any possible source, such as spend, supplier performance, product data, geographic data, risk, tax, benchmark and/or commodity prices, P-card, invoices with line-item details, and logistics data, all at once? For example, in an engineering setting, this could include data about the product’s lifecycle stage (in development, limited release, general release, end-of-life, unsupported). Or, total cost analysis might require activity-based costing data for allocating labor and shared resource costs. Some solutions provide data enrichment as part of the analysis process, rather than requiring separate input and integration efforts; this can speed analysis and reduce costs.
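To make the idea concrete, here is a hypothetical sketch of three unrelated sources (spend, supplier risk scores, commodity price indices) combined into a single analysis set; real platforms do this at far greater scale and variety.

```python
import pandas as pd

# Three unrelated, hypothetical sources joined into one analysis set.
spend = pd.DataFrame({"supplier": ["Acme", "Globex"],
                      "commodity": ["steel", "resin"],
                      "amount": [120_000, 310_000]})
risk = pd.DataFrame({"supplier": ["Acme", "Globex"],
                     "risk_score": [0.2, 0.7]})
prices = pd.DataFrame({"commodity": ["steel", "resin"],
                       "index_price": [710.0, 1.42]})

# One merged table lets a single query weigh spend against supplier
# risk and current market prices.
combined = spend.merge(risk, on="supplier").merge(prices, on="commodity")
print(combined)
```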
Pre-integration with Spend-related Systems — Spend analytics solutions sometimes come pre-integrated with various other systems including specific vendors’ supplier information management, sourcing, contracting, procure-to-pay systems, as well as the accounting and/or procurement modules of major ERP systems. This pre-integration can help accelerate the basic initial implementation for organizations, to the extent that they are already using those specific systems that have been pre-integrated by the solution vendor. However, don’t assume these pre-integrated systems adequately encapsulate all of the activity in that area.For example, e-procurement systems typically capture only a fraction of enterprise spend; and often the fraction that is less interesting for spend analysis. Furthermore, the pre-integrated systems usually contain very little, if any, of the ancillary data required for more sophisticated analysis, such as risk, diversity, logistics, total cost, engineering data, and so forth. Pre-integration provides value by accelerating implementation for firms already using the pre-integrated solutions, but often there is considerable additional integration required to be able to analyze the full range of spend and desired dimensions of analysis.
Real-time Cleansing and Classification — Some vendors allow corrections to data and classifications to be made in real-time, on the fly, by the end users. Others require that those changes be made offline in batch mode, typically by the vendor as a service, refreshing the data on a monthly or quarterly basis. The ability to make on-the-fly corrections can significantly accelerate the upfront implementation: rather than trying to get all the data correct before going live, it can be cleaned to ‘good enough’ and then refined as it is being used. Moreover, with on-the-fly corrections, the fixes are available immediately, rather than waiting for the next refresh. One critical caveat — when users are allowed to make on-the-fly corrections to classification, parent-child, and other rules or data, there must be a mechanism (such as workflow and/or collaboration tools), along with strict policies and an enforceable acceptance process, to establish consistency, security, and transparency for the changes made. This helps get everyone on the same page and reduces the chance that changes made by one buyer or commodity manager cause issues for other buyers, commodities, or situations. Organizations that let their users make on-the-fly changes, but lack this rigor in the change process, will have problems.
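The sketch below illustrates one way such a controlled change process might look, with a hypothetical proposal-and-approval step between a user's on-the-fly correction and the shared model; actual vendor workflows differ.

```python
from dataclasses import dataclass

@dataclass
class ReclassProposal:
    """A user's on-the-fly correction, held until it passes review."""
    item_id: str
    old_category: str
    new_category: str
    proposed_by: str
    approved: bool = False

pending: list[ReclassProposal] = []

def propose(item_id: str, old: str, new: str, user: str) -> ReclassProposal:
    proposal = ReclassProposal(item_id, old, new, user)
    pending.append(proposal)   # visible to the proposer immediately...
    return proposal

def approve(proposal: ReclassProposal, reviewer: str) -> None:
    # ...but promoted to the shared model only after sign-off, so one
    # buyer's fix cannot silently break another buyer's reports.
    proposal.approved = True
    print(f"{reviewer} approved {proposal.item_id}: "
          f"{proposal.old_category} -> {proposal.new_category}")

p = propose("INV-991", "Office Supplies", "IT Hardware", "buyer.1")
approve(p, "category.manager")
```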
Embedded Cleansing and Classification — Some solutions make their classification services available as web services, allowing them to be embedded into other applications (such as procurement, catalog, or sourcing applications) so that the end user can reclassify the item in situ within the application they are using, rather than having to go back out to the spend analytics application. Some even let suppliers suggest corrections via their portal as well.
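As a sketch of the embedding pattern, the snippet below shows a host application calling a hypothetical classification web service in situ; the endpoint, payload, and field names are illustrative, not any vendor's actual API.

```python
import requests

def reclassify_in_situ(item_id: str, description: str, new_category: str) -> dict:
    """Call the (hypothetical) classification service from within a
    host application, e.g. a procurement or catalog tool."""
    response = requests.post(
        "https://analytics.example.com/api/v1/classify",  # illustrative URL
        json={"item_id": item_id,
              "description": description,
              "category": new_category},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```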
Data Handling Capabilities
Data Granularity — Depending on your objectives and applications, different levels of detail and granularity are needed. For example, you may need line-item data from the PO or invoice; if so, ensure the system supports that level of detail.
Parametric Searching — Companies that deal in technical commodities usually need parametric searching. Some solutions support attribute-value pairs, such as the number of pins on a chip or the inside diameter of a bearing (see the example in Figure 2), and automatically create an index for each parameter. Some are even able to analyze unstructured free-form text (such as that found in product data sheets, catalogs, or web pages) and identify product attributes from it. For example, given a web page describing a chair, such a solution can identify and extract the chair’s size, color, adjustability, frame material, seat material, wheeled vs. fixed base, and so forth.
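A toy illustration of parametric extraction appears below; the regular-expression patterns are hypothetical stand-ins for the far richer models commercial tools use, but the output is the same in spirit: attribute-value pairs ready to be indexed.

```python
import re

# Toy attribute patterns; commercial extractors use far richer models.
PATTERNS = {
    "pins":           re.compile(r"(\d+)\s*-?\s*pin", re.I),
    "inner_diameter": re.compile(r"(\d+(?:\.\d+)?)\s*mm\s+(?:inner|inside)\s+diameter", re.I),
    "color":          re.compile(r"\b(black|white|grey|blue|red)\b", re.I),
}

def extract_attributes(text: str) -> dict:
    """Turn free-form product text into searchable attribute-value pairs."""
    return {name: match.group(1)
            for name, pattern in PATTERNS.items()
            if (match := pattern.search(text))}

print(extract_attributes("64-pin QFP package"))                       # {'pins': '64'}
print(extract_attributes("Bearing, 25.4 mm inner diameter, black"))   # diameter + color
```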
Unstructured Data Handling — Some solutions are better at handling unstructured data and can do on-the-fly normalization: for example, finding misspellings, understanding context (to infer the most likely meaning), identifying and classifying by common terms used, and so forth.
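As a minimal sketch of on-the-fly normalization, the snippet below uses Python's standard difflib to map misspelled supplier names to a canonical list; production systems use more sophisticated matching, and the names here are illustrative.

```python
import difflib

# Canonical names are illustrative; real systems also use phonetic and
# context-aware matching, not just edit distance.
CANONICAL = ["International Business Machines", "Hewlett-Packard", "Siemens"]

def normalize(raw: str, cutoff: float = 0.6) -> str | None:
    matches = difflib.get_close_matches(raw, CANONICAL, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(normalize("Internation Busness Machines"))  # -> International Business Machines
print(normalize("Seimens"))                       # -> Siemens
```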
Pattern Matching and Recommendations
Templates, Rules, and Pattern Matching — Most solutions let you build libraries of pre-built views (dashboards and queries) to encapsulate a specific type of analysis or ‘best practice.’ Some also provide a rules engine (not just classification rules, but data analysis and alerting rules) with a pre-built library of rules, as well as the ability for users to create their own. However, there are big differences in the sophistication of these libraries: some vendors have invested dozens or hundreds of person-years developing purpose-specific rules for different types of analytics. These can provide a whole layer of added value out of the box.
Alerts and Recommendations — Some solutions go so far as to generate recommended actions based on the rules and pattern recognition, and deliver these recommendations proactively to the end user, rather than waiting for the user to discover them.
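To make both ideas concrete, here is a minimal sketch of a rules layer whose rules each pair a detection predicate with a recommended action that can be pushed proactively; the rule names, thresholds, and recommendation wording are hypothetical, not any vendor's library.

```python
# Illustrative rules: each pairs a detection predicate with a
# recommended action. Names, thresholds, and wording are hypothetical.
RULES = [
    {"name": "maverick_spend",
     "when": lambda r: r["off_contract"] and r["amount"] > 10_000,
     "recommend": "Route to the contracted supplier; notify the category manager."},
    {"name": "supplier_concentration",
     "when": lambda r: r["supplier_share"] > 0.8,
     "recommend": "Single-source risk: identify and qualify a second supplier."},
]

def evaluate(record: dict) -> list[str]:
    """Fire every matching rule and return its recommended action."""
    return [f'{rule["name"]}: {rule["recommend"]}'
            for rule in RULES if rule["when"](record)]

# A proactive sweep pushes recommendations to the user instead of
# waiting for the user to discover the issue in a dashboard.
for alert in evaluate({"off_contract": True, "amount": 25_000, "supplier_share": 0.9}):
    print(alert)
```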
Architecture Matters
When selecting a solution, it is important to understand the key architectural attributes of the system, some of which we have outlined here. These can make a great difference in the performance, ease-of-use, and flexibility of the system — and ultimately determine whether or not it fulfills your needs.