In “The Achilles’ Heel of Supply Chain Management” (Harvard Business Review, May 2001), the authors write:
Ever since retailers equipped their cash registers with bar code scanners, we’ve been promised a brave new world of supply chain management. Stores would automatically track the flow of goods and electronically transmit precise replenishment orders. Suppliers would synchronize their production schedules to real-time demand data. Fewer goods would sit around in warehouses; fewer customers would find products out of stock.
It’s a great vision, and one that may still come to pass. But to get there, retailers will have to clean up their act. In an in-depth study of 35 leading retailers, we were dismayed to discover that the data at the heart of supply chain management are often wildly inaccurate.
Thirteen years later, despite all the advancements in technology, that promise remains largely unfulfilled, or at least has not been realized to the extent you would expect by now. Take, for example, the problems Target has experienced in Canada. According to an article published by Reuters last week, data quality issues with Target’s logistics processes are partially to blame:
Goods were coming into the warehouses faster than they were going out, in part because the barcodes on many items did not match what was in the computer system [emphasis mine]. As shipments stacked up, Target flew in dozens of red-shirted staff from the United States to shore up the operation, the sources said… But the U.S. workers were used to different computer systems and stocking procedures, limiting the amount they could help.
Target does much of its own distribution in the United States, but it hired Eleven Points Logistics, a subsidiary of Pittsburgh-based Genco, to run its three warehouses in Canada…As goods arrived at the warehouses, workers found errors, 12 shirts per box when the computer system expected 24, for example, the two former Eleven Points employees said.
It is not clear whether these errors were caused by Target’s buyers entering bad data, vendors making mistakes, some glitch in Eleven Points’ warehouse computer system or all three.
Today, everyone is talking about Big Data and the brave new world it will bring, but that promise will also go unfulfilled because we still have a big, crappy data problem in supply chain management.
We still have companies, large and small, using fax machines to transmit orders, invoices, and other documents with trading partners.
We still have companies that cannot affix labels correctly, package items properly, or execute the small stuff that really matters.
We still have companies that have an incomplete map and understanding of their supply chain.
Yes, we still have a big, crappy data problem in supply chain management, and it’s only getting worse.
What’s the solution?
There are many root causes to the data quality problem, including the fact that most data “standards” like ANSI X12 or EDIFACT are not standard in practice; most companies modify them, making each customer-supplier link a custom integration with non-standard syntax, partner-specific fields, and reordered segments. The net result is a “Tower of Babel” situation where every party has to translate everyone else’s messages. If companies all along the supply chain actually adhered to common standards, many data quality issues would go away.
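To make the “Tower of Babel” concrete, here is a minimal sketch of what per-link translation looks like in practice. The segment layouts, partner names, and field positions below are illustrative assumptions, not real X12 implementation guides; the point is that the same purchase-order line arrives in two partner-specific flavors, forcing the receiver to maintain one translation map per trading partner:

```python
# Two hypothetical trading partners send the same PO line in different,
# partner-specific flavors of an X12-style PO1 segment. The receiver
# needs a dedicated translation map for each link to reach one
# canonical record. (Layouts here are made up for illustration.)

def parse_po_line(raw: str, partner: str) -> dict:
    """Translate a partner-specific PO line into one canonical record."""
    fields = raw.split("*")
    if partner == "acme":
        # Acme sends: PO1*<line>*<qty>*EA*<unit price>*<sku>
        return {"line": fields[1], "qty": int(fields[2]), "sku": fields[5]}
    if partner == "globex":
        # Globex reorders the elements and puts the SKU second:
        # PO1*<line>*<sku>*<qty>*CA*<unit price>
        return {"line": fields[1], "qty": int(fields[3]), "sku": fields[2]}
    raise ValueError(f"no translation map for partner {partner!r}")

# The same order, expressed two different ways:
acme_line = "PO1*001*24*EA*9.99*SHIRT-RED-M"
globex_line = "PO1*001*SHIRT-RED-M*24*CA*9.99"

assert parse_po_line(acme_line, "acme") == parse_po_line(globex_line, "globex")
```

Multiply this by hundreds of trading partners and dozens of document types, and every mapping is a place where quantities, units, and SKUs can silently diverge.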
But at a higher level, solving the data quality problem requires answering these two basic questions:
- Who owns data quality management?
- Do we really need all of this data and complexity?
Many operations people believe that IT is responsible for data quality, while IT points the finger back at operations and the countless trading partners (suppliers, customers, logistics service providers, and so on) that send them data. Simply put, the responsibility for data quality management is not clearly defined at most companies; either that, or data quality is assumed to be everybody’s responsibility, but the required governance and accountability structures don’t exist.
Therefore, companies need to clearly define roles and responsibilities in this area. But before that, they need to view data as a corporate asset and assign value to it, just as they do with other assets like buildings, equipment, and intellectual property. Some frameworks already exist, such as the emerging field of Infonomics, which Wikipedia defines as “the emergent discipline of quantifying, managing and leveraging information as a formal business asset. Infonomics endeavors to apply both economic and asset management principles and practices to the valuation and handling of information assets.”
At the same time, companies need to take a step back and question why they are collecting certain data, and why they continue to add complexity to their supply chains. When I started my career as an engineer, for example, I remember asking my boss why we collected certain types of manufacturing data. In many cases, she didn’t know; it seemed like we just always collected that data. I later learned that in many cases, the data collection started to better understand and fix a problem, but once the problem was solved, nobody hit the stop button on the data collection, and years of unused and unnecessary data continued to accumulate.
Finally, I believe companies, especially manufacturers and retailers, should get out of the B2B connectivity business. It’s not their core competency, and as history has proven, they’re not doing a good job at it either. Instead, companies should view B2B connectivity as a utility and outsource it to a Supply Chain Operating Network, where data quality management is central to the business model and value proposition.
Last Tuesday, Target fired Tony Fisher, the president of its operations in Canada, and replaced him with Mark Schindele, “a veteran U.S. executive with deep experience in managing supply chains,” according to the Reuters article. Let’s hope that experience includes fixing big, crappy data problems in supply chains; otherwise, it’s only a question of time before another arrow strikes that Achilles’ Heel.