December 28, 2006

Is Product Level Data Accurate Enough to Manage Categories?

By Paul Waldron, Gladson Interactive, special to GMDC

Smooth category management execution depends on accurate, timely information of all types. Without it, the seamless flow of goods onto mass-market retail shelves risks being interrupted. That risk is apparently real: much of the item dimension, size, description and even UPC/GTIN information driving retailer planograms is seriously flawed.

According to a recent study on product level data accuracy conducted by Gladson Interactive, 17 percent of all products had severe dimension errors of more than 25 percent.
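The study’s threshold can be made concrete with a small sketch. The 25 percent cutoff comes from the article; the item records, dimension names and helper functions below are hypothetical illustrations, not Gladson’s actual methodology.

```python
# Sketch: flag items whose stated dimensions deviate severely from
# measured ones, using the study's >25% threshold. All item records
# and field names here are hypothetical.

SEVERE_THRESHOLD = 0.25  # 25% relative error, per the study

def relative_error(stated: float, measured: float) -> float:
    """Relative error of a stated dimension versus the measured value."""
    return abs(stated - measured) / measured

def has_severe_error(stated_dims: dict, measured_dims: dict) -> bool:
    """True if any dimension (width/height/depth) is off by more than the threshold."""
    return any(
        relative_error(stated_dims[k], measured_dims[k]) > SEVERE_THRESHOLD
        for k in ("width", "height", "depth")
    )

# Example: stated width 4.0" vs. measured 3.0" is a 33% error, so severe.
stated = {"width": 4.0, "height": 6.0, "depth": 2.0}
measured = {"width": 3.0, "height": 6.0, "depth": 2.0}
print(has_severe_error(stated, measured))  # True
```

Applied across a product database, the same check would reproduce the study’s headline figure as the share of items for which it returns true.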

The study was conducted with product measurements used by retailers and category captains to build planograms. Beginning in October 2005, Consumer Unit Product Data were collected from retailers with more than $250 billion in annual CPG turnover.

Errors in product width have the biggest impact on in-store decisions. Crews and sales agencies resetting the categories are forced to make decisions about which product facings should be cut, expanded or eliminated altogether.

Height errors are the most costly from the reset labor perspective. When the shelf height proposed in the planogram is too short for the actual height of the product, a complete rework of the shelf is necessary or creative product placement is determined, once again, by the reset crew.

Depth measurements most often affect shelf capacity. Errors in product pack estimates cripple computer aided ordering systems and create similar havoc with many operational functions.
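The shelf math behind these three error types can be sketched in a few lines: width limits the number of facings, depth limits the units behind each facing, and capacity is their product. The shelf and item measurements below are hypothetical.

```python
import math

# Sketch of how width drives facings and depth drives units per facing,
# which is why dimension errors ripple into ordering systems.
# All measurements (inches) are hypothetical.

def shelf_capacity(shelf_w: float, shelf_d: float,
                   item_w: float, item_d: float) -> int:
    facings = math.floor(shelf_w / item_w)    # width limits facings
    rows_deep = math.floor(shelf_d / item_d)  # depth limits units per facing
    return facings * rows_deep

# A 48" x 24" shelf with a 4"-wide item:
print(shelf_capacity(48, 24, 4.0, 2.5))  # 12 facings * 9 deep = 108 units
# If the true depth is 3.0", not the stated 2.5":
print(shelf_capacity(48, 24, 4.0, 3.0))  # 12 facings * 8 deep = 96 units
```

A half-inch depth error alone overstates capacity by a full row per facing, which is exactly the kind of discrepancy that misleads computer-aided ordering.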

As to the cause, retailers point out that suppliers sometimes appear to be estimating or assuming dimensions, using outdated measurement tools, or simply providing the information late and uncertified. Suppliers respond that retailer requests may arrive via the internet or on hard copy, are not standardized, and may not be easy to comply with.

Discussion Questions: How serious is this problem? How can retailers make sure specifications are accurate? How can suppliers more effectively publicize correct dimensions? Is this issue a supplier responsibility? Is there a communication gap between retailer and supplier? What technology is available to remedy the situation? What services should be used? Will this have an impact on DataSync?

17 Comments
Mark Lilien

Well-run retailers test everything before rollout. Many retailers have prototype stores used by professional display folks and the buyers to test displays before they’re formalized. And buyers work with the brand salespeople to make sure their databases are kept up to date. Could this be done more crisply? Sure. Is it a disaster? Well, certain brands are careful to communicate their data changes in advance. And some brands just don’t. Just like some retailers take care to manage the data appropriately and others just aren’t as professional.

Dan Nelson

It falls on the suppliers’ shoulders to provide accurate item information through the new item and promotion forms required by retailers to set up those items in their respective systems. With so many “touch points,” there is ample opportunity for misinformation to creep in, so a strong universal product system to track and maintain accurate item and case data would help.

Retailers need to work collaboratively toward a standardized new item and promotion forms process, and if they could agree to support and require all information to flow from a universally accepted repository, it would greatly curtail the problems we now deal with in item/case dimension integrity.

Stephan Kouzomis

First make sure the retailer and supplier are offering the right product information. But, this should be a given…not an issue!

It is the consumer that both should be focusing on! This drives the category information, strategy and business….

Hmmmmmmmmmm

Ken Yee

Coming from a career spent entirely on the supplier side, I believe it is up to the supplier to provide accurate information. The retailer can only trust that the final product will actually match what was promised. Even with demo stores to test out planograms, suppliers can always switch things at the last moment.

There are many ways the final product dimensions can end up different from the specs. Responsibility lies with the factory spec sheets and/or the category/sales team.

Worst of all, lots of new products aren’t finalized until near shipping, but item management processes must be done months and months before.

1. Factory gives initial specs

2. Sales rep proposes new SKU and inputs into item management system at the retailer based on what they know. Buyer goes with it. Often a prototype/sample.

3. Factory adjusts specs, announces changes

4. Sales rep may miss the announcement or forget to change item dimensions in the retailer’s system, or purposely not update it in fear the new specs would cause alarm and potential delistment.

5. Final product arrives, dimensions may again be slightly adjusted regardless of previous factory announcement months earlier.

6. Since no sales rep is going to manually measure any product themselves, they then rely on shipping/logistics for final spec check on arrival. If nobody does it, sometimes due to sales pressure, it ships out the door!
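The six steps above amount to a reconciliation problem: the factory’s latest announced specs drift away from what the retailer’s item system still holds. A minimal sketch of catching that drift before shipment follows; the SKU records, field names and 5% tolerance are all hypothetical.

```python
# Sketch: compare the factory's latest announced specs against the
# dimensions the retailer's item system still holds, and flag SKUs
# that have drifted. Records and the tolerance are hypothetical.

def find_stale_items(factory_specs: dict, retailer_system: dict,
                     tol: float = 0.05) -> list:
    """Return SKUs whose retailer-held dimensions lag the factory's
    latest announcement by more than `tol` (relative error)."""
    stale = []
    for sku, latest in factory_specs.items():
        held = retailer_system.get(sku)
        if held is None:
            stale.append(sku)  # item never set up at the retailer
            continue
        for dim in ("width", "height", "depth"):
            if abs(held[dim] - latest[dim]) / latest[dim] > tol:
                stale.append(sku)
                break
    return stale

factory = {"SKU-1": {"width": 4.5, "height": 6.0, "depth": 2.5}}
retailer = {"SKU-1": {"width": 4.0, "height": 6.0, "depth": 2.5}}
# The held width lags the announcement by ~11%, beyond tolerance:
print(find_stale_items(factory, retailer))  # ['SKU-1']
```

Running a check like this at steps 3 and 5, when specs are adjusted, is where the manual process described above most often breaks down.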

Lots of places for improvement.

Factories have to announce spec changes and ship what they announce, not to find out 3 months later when the shipment arrives through import that it’s changed again.

Since sales/category management rely on systems that display dimensions, the systems must be updated with the latest info.

Sales should update the retailer’s item systems as soon as they have current info. Updating dimensions is a frequent thing; everyone knows it and accepts it as long as it’s not too drastic. Best is to keep the retailer’s replenishment/merchandising departments in the loop. (The buyer likely won’t care.) Doing so also protects the supplier from vendor fines: as long as the retailer knows about the change and accepts it, any automated fines have a good chance of being canceled… at least where I’ve worked.

For retailers, I’m not sure what the best approach is, since I’ve never worked at a retailer. One thing that always caused fear for me was vendor fines. For chronic bad apples, retailers can stick to fining until there’s improvement (no leniency). Also, stress delistment, the risk of losing valuable shelf locations, and the flak merchandising teams are getting from your product lines.

Barb Golden

My company sells to all of the top retailers in the US, and a lot of smaller ones as well. And they each have their own form(s) or vendor website or data sync company to use for item information.

The item and case dimensions and weights are on the initial item quote sheets, in most cases. Usually, the quote sheets (or even sometimes item setup forms) are required by the retailers months before production even begins on the items and before the item packaging is finalized, so there is no possible way that we can give accurate dimensions. But the retailers demand data in those form fields, so we “guesstimate”.

In a perfect world, we would re-issue the data to every retailer that we know we sent “guesstimates” to when the dimensions become reality, but we are a small company with minimal staffing. There are always other fires that need to be dealt with that take priority over correcting forms that have already been submitted.

If anyone can suggest a fix to this issue, I would love to hear it.

James Tenser

Several excellent comments here describe the challenges caused by inaccurate package dimension data. Improving this accuracy is a clear need, especially in an era when planograms are increasingly customized on an almost store-by-store basis.

I’d submit there’s another missing link here, which if addressed could help this situation to continuously improve: lack of shelf visibility. This is a key element missing from in-store execution that must be addressed for many reasons, not the least of which is planogram accuracy.

Under present methods, the kinds of shelf set problems caused by package dimension inaccuracies can only be evaluated long after the failures occur. If we substitute a set of practices that incorporate continuous shelf-status visibility, systemic errors would come rapidly to light, so that timely corrective action may be taken.

Shortening the shelf-status feedback loop to near real time would bring beneficial pressure on manufacturers to get the dimensions right in the first place, and on retailers to make sure the sets are efficiently done, since it would clarify the real cost of both parties’ mistakes.

Herb Sorensen, Ph.D.

When we study the behavior of millions of shoppers on a second by second basis, we are absolutely dependent on three pieces of information:

* The shopper – their location (x,y)/behavior/time

* The sales – item/category relationships at purchase

* The store – location of all fixtures and SKUs

It is this third item, what we refer to as Store Infrastructure, that is the most gnarly. The present discussion deals with one piece of this problem, in great detail.

As should be obvious from the discussion, those intimately involved are well aware of the lapses and complications. What is more interesting is the number of retail-involved partners who are seriously affected by these problems and yet seem virtually unaware that they exist (in contrast to how well they are known and understood by retailers themselves).

The present discussion focuses on the “planogram,” which presumably refers to the main display on the gondola. That is certainly important. But the reality is that 42% of all sales occur at locations other than the planogram, and accounting for that other huge share of sales is far more complicated than managing the planogram.

By the time you make any significant progress on the problems discussed with the planogram, passive RFID may make this irrelevant, to an extent. Meanwhile, store infrastructure remains the gnarliest issue in doing full store, every shopper, every SKU studies of any kind.

Camille P. Schuster, Ph.D.

When UPC codes were introduced and prices were taken off the products, the issue of data integrity and reliability received a lot of media attention because consumers were concerned about being charged the wrong price. With the media and consumer scrutiny, establishing, testing, and reporting accuracy was critical. Systems were not rolled out if they were going to receive bad press.

Data integrity and reliability is just as important with all the data being used for transferring information in the supply chain and the same amount of effort needs to be placed on establishing reliability before rolling out ANY system as there was when rolling out the UPC codes.

Mike Mohaupt

It really depends on the depth to which one takes category analytics. By its very premise, Category Management is about managing categories based upon knowledge of the consumer; through knowledge comes understanding, from which fact-based strategies are developed. Nowhere does that refer to managing SKUs. But herein lies the issue: when it comes time to implement these fact-based strategies, product-level data inaccuracies most certainly undermine the management of categories.

I would disagree with some of the comments above: it is all about space management. Every category (segment) has a role to play, almost on a store-by-store basis. Since the very premise of Category Management is knowledge of the consumer, it has to be done store by store, and the role a category plays should dictate how much space is allocated to it. Factor in the scale of that across categories as SKU-intense as ours, with a store-by-store approach, and you could have several different planograms. After all, our objective has to be to maximize space productivity: GMROI per cubic foot (GMROF). Then consider improving supply flow and downstocking, and coordinating planograms with product placed on skids. Accurate data has to drive these strategies as well.
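The space-productivity objective mentioned above can be sketched as a simple calculation. Definitions of GMROF vary across retailers, so this assumes GMROI (annual gross margin divided by average inventory cost) spread over the cubic feet of shelf space; the dollar and footage figures are hypothetical.

```python
# Sketch of GMROI per cubic foot (one common reading of "GMROF").
# Definitions vary in practice; figures below are hypothetical.

def gmroi_per_cubic_foot(annual_gross_margin: float,
                         avg_inventory_cost: float,
                         cubic_feet: float) -> float:
    gmroi = annual_gross_margin / avg_inventory_cost  # margin $ per inventory $
    return gmroi / cubic_feet                         # spread over shelf volume

# $12,000 annual margin on $4,000 average inventory across 10 cu ft:
print(gmroi_per_cubic_foot(12000, 4000, 10))  # 0.3
```

The cubic-feet denominator is exactly where bad dimension data bites: if item depths are overstated, the space a category appears to occupy, and hence its apparent productivity, is wrong.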

No retailer has the ability to preset all the possible variations to work the bugs out in advance. Space management software has to do this for us, which in turn requires accurate product data. Central data libraries such as UCCnet will take on a bigger strategic role in the near future.

Steven Collinsworth

This is a huge problem, as Paul has stated, especially when you consider the size of some categories and the sheer number of package sizes within them. In some categories, a dimension error of 1/8 inch multiplied by, say, only ten items results in 1-1/4 inches of error. It may not seem like much, but it can mean being unable to place an item in the set, or cutting inventory on an item that merits the space.
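The arithmetic above generalizes directly: a small per-item width error compounds across every item on the shelf. A trivial sketch, with the 1/8-inch figure from the example and hypothetical item counts:

```python
# Sketch: per-item width errors accumulate linearly across a shelf set.
# The 1/8" error is from the example above; item counts are hypothetical.

def cumulative_width_error(per_item_error_in: float, num_items: int) -> float:
    """Total shelf-width error, in inches, across num_items facings."""
    return per_item_error_in * num_items

print(cumulative_width_error(0.125, 10))   # 1.25 inches, as in the example
print(cumulative_width_error(0.125, 40))   # 5.0 inches on a fuller shelf
```

At forty facings the same small error swallows more than an entire facing of a typical package, which is how items get squeezed out of a set.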

Now extrapolate this throughout the store. Makes me dizzy just thinking about it.

I have worked for several large food and gm manufacturers over the years and it still puzzles me how they can provide inaccurate dimensions, incorrect UPCs and get their own case counts wrong.

One upper Midwest wholesaler has a person who does nothing but image, measure, and input the data into the wholesaler’s database. They say they have cut dimension errors to almost zero. UPC errors are another issue.

Finally, inaccuracies fed into a space management program have large implications. (Remember: “garbage in, garbage out.”) If the errors are large enough, most space-related measures become inaccurate and devalued.

The single-database solution is a good one. The issue is the number of companies trying to sell the service and the fees they charge to assist manufacturers in the process. Another issue is that the need was never sold hard enough for manufacturers to say “yes.”

On both sides of the desk — manufacturers and retailers — many have devalued the necessity of accurate data. The need is there for both sides of the desk to provide cleaner data, whether through IT systems or from the engineer or clerk who performs these tasks, or even both. All companies must invest in processes, systems and people in order for issues such as this to become moot.

Dennis Serbu

Product level measures, particularly with flexible packaging, are largely subjective. Even the measurement standard for GDSN is, in my opinion, ill advised. The standard states that on potato chip bags the depth, width and height measures should be taken with the bag lying flat to evenly settle the contents. That is not the reality of the shelf, and the depth measure can vary by as much as an inch. This clearly affects shelf capacity calculations and overstates the number of packages that will fit, by one unit or more in some cases. There go Days of Supply and Turn calculations. The dynamics of product settling also affect height and width. This is a nightmare in the Salty Snacks category, where most products are in flexible packaging. I would venture to say that there are not two identical measures on the same product in any database in the country. Even Gladson will show variances on the same package size: the same product, same bag, different seasoning.
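The knock-on effect described above can be sketched numerically: a one-inch depth error overstates units on the shelf, which overstates days of supply. Shelf, item and movement figures below are hypothetical.

```python
import math

# Sketch: a 1" depth error on a flexible package overstates shelf
# capacity, which in turn overstates days of supply. All figures
# (inches, units/day) are hypothetical.

def units_on_shelf(shelf_depth_in: float, item_depth_in: float,
                   facings: int) -> int:
    return facings * math.floor(shelf_depth_in / item_depth_in)

def days_of_supply(units: int, daily_movement: float) -> float:
    return units / daily_movement

facings, daily_movement = 4, 6.0
stated = units_on_shelf(24, 3.0, facings)  # measured flat: 4 * 8 = 32 units
actual = units_on_shelf(24, 4.0, facings)  # settled on shelf: 4 * 6 = 24 units
print(days_of_supply(stated, daily_movement))  # ~5.33 days (overstated)
print(days_of_supply(actual, daily_movement))  # 4.0 days (reality)
```

A day-and-a-third gap between paper and shelf is enough to throw off both replenishment timing and turn calculations for the item.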

On hard siding product packaging, I cannot understand or accept errors in measures. It is what it is.

Unscrupulous Category Partners will use measure variations to their advantage and show slower turns, higher Days of Supply for their competitors. Retailers using partner data should be vigilant and spot check against actual shelf capacity.

Race Cowgill

This is an excellent example of the power of organizational Master Systems. Generally, organizations have weak processing systems (I’m not speaking only of IT or materials processes here): they tend to require huge amounts of human support and interaction, they handle relatively few of the overall processing needs, and they generate huge numbers of errors; the rest of the processing needs are handled by ad-hoc flows. Let us not fool ourselves: organizations of all kinds are generally not created or managed by systems experts; they are created and run by industry insiders who know little about system design. The system that manages all of this, then, is the Master System, and trying to view this system when you are not a systems expert, much less measure it and change it, is nearly impossible. The result is that this entire subject becomes very complicated very quickly and has few practical answers. Of course, shifting the Master System is the answer, but the players here are not really able to do that.

Mario Vellandi

The problem is multivariate, for the many reasons cited above. But let’s face it: the greatest number of errors originate with manufacturers or their reps. Having worked in sales for a manufacturer with a large variety of SKUs and a quite enviable roster of retailers, I know that maintaining accuracy can be a daunting task. Based on my experience:

1) I learned slowly to go from LxWxH to WxHxD;

2) I relied on Product Design & Development to provide me with item specs;

3) I relied on the Logistics dept. to provide me with case dimensions;

4) Although the concept of inner case and master case seemed logical to me, when working with trays/PDQs, the issue got a little confusing;

5) The issue was compounded by the variety of forms we dealt with from different retailers;

6) Product Design & Development may have updated the bottle size but failed to update their internal system specs or notify us of this update;

7) In the absence of such data, we often had to wing it to our best judgment;

8) Although I used a variety of sales agents, I can only imagine they never want to hear about such problems. After all, such data accuracy is beyond their reach of influence, nor do they generally care. If they enter inaccurate data on retailer forms, the fault is either theirs or the manufacturer’s;

9) Lastly, our staff was trained to push SALES. Day in, Day out. The little intricacies of data formats, industry practice, and so on were never really taught. It’s just something we learned along the way. And if data was inaccurate, oh well. There were plenty of potential factors that could’ve contributed to the problem; it’s not our job to figure it out. I’m going to lunch now, then the meeting afterward, then calling my rep, checking the CRM system, printing out the brochures, sending out the samples…………

Shaun Bossons

I don’t think that there can be any doubt that data integrity is a large problem within the retail community. Issues with product dimensions, case pack information and product segmenting have caused headaches for retailers for many years.

Over the last 12 months, I have seen a number of the largest retailers in the US start to address this problem by investing in the improvement of data quality. Improving the process and accuracy of new product development, and then its introduction into the business, has become a focal area. As retailers plan on a far more granular basis than ever before, accurate data becomes critical, since planograms simply can no longer be tested given the number of store-specific versions.

Moving forward, it will be essential to ensure the accuracy of data and the ability to ensure that it is stored in a consistent, controlled and centralized manner. Most retailers face a daily battle to consolidate, massage and change data from many isolated locations across the business; this simply results in sub-optimal knowledge. Working with one version of the truth, with customer insight and rest-of-market overlays will allow retailers to truly plan to meet the demanding needs of their customer base.

Ryan Mathews

Not only is data integrity a problem, it’s an ongoing problem. There isn’t a simple, one time fix for any of these issues. “Solving” the data integrity issue requires a higher level of cooperation (and cost sharing) between manufacturers and retailers than we’ve seen up to this point.

Dan Raftery

I am repeatedly amazed that the issue of product data accuracy and integrity continues to be treated as news. Some of us have been proposing the elements of data synchronization for 20 years now. It is the great enabler of supply chain analytics and efficiency. Without a universally accepted repository as the source, individual databases have no chance of containing the same data, much less accurate data.

It would be nice if we could place the mantle of responsibility on manufacturers, but that’s not realistic. Heck, even with the database advances from enterprise-wide systems, most manufacturers still maintain more than one data repository. In many cases it’s because they need to satisfy individual retailers’ proprietary system requirements if they want to do business with them.

This issue goes beyond space management. Any information system based technology needs correct data to function correctly. Outsourcing this responsibility to a reliable resource is the only way to have a chance at keeping the databases clean.

Bill Akins

The responsibility for item attribute maintenance has traditionally fallen under the umbrella of Category Management at a number of large big-box retailers. Category advisers have been the ones to maintain vital dimension data during the modular (planogram) review process, but they found the task overwhelming when there were thousands of items on a set and only a few days to complete the analysis. In my experience, if the dimensions were off, it was because the manufacturer and the retailer were not communicating appropriately about package changes, new sizes, or differences between the packaging on promotional SKUs versus the “parent” shelf item.

With the advent of UCCnet, an attempt was made at global synchronization of item information, but adoption was slow. The Global Data Synchronization Network (GDSN) is proving to be much more widely adopted and should ease the strain on category advisers who get stuck with digital tape measures and calipers auditing product heights, widths, and depths.

17 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments
Mark Lilien
Mark Lilien

Well-run retailers test everything before rollout. Many retailers have prototype stores used by professional display folks and the buyers to test displays before they’re formalized. And buyers work with the brand salespeople to make sure their databases are kept up to date. Could this be done more crisply? Sure. Is it a disaster? Well, certain brands are careful to communicate their data changes in advance. And some brands just don’t. Just like some retailers take care to manage the data appropriately and others just aren’t as professional.

Dan Nelson
Dan Nelson

It falls on the suppliers’ shoulders to provide accurate item information through the new item and promotion forms required by retailers to set up those items in their respective systems. There is probability for misinformation to occur with so many “touch points,” so a strong universal product system to track and maintain accurate item and case data would help.

The retailers need to work collaboratively with regard to a standardized new item and promotion forms process to assist, and “if” they could agree to support and require all information to flow from a universally accepted depository of information, it would greatly curtail the problems we now deal with in item/case dimension integrity.

Stephan Kouzomis
Stephan Kouzomis

First make sure the retailer and supplier are offering the right product information. But, this should be a given…not an issue!

It is the consumer that both should be focusing on! This drives the category information, strategy and business….

Hmmmmmmmmmm

Ken Yee
Ken Yee

Coming from a career always on the supplier side, it is up to the supplier to provide accurate information. The retailer can only trust the final product will actually match what was promised. Even with demo stores to test out planograms, suppliers can always switch it at the last moment.

There are so many ways the final product dimensions are different. Responsibility comes from both factory spec sheets and/or the category/sales team.

Worst of all, lots of new products aren’t finalized until near shipping, but item management processes must be done months and months before.

1. Factory gives initial specs

2. Sales rep proposes new SKU and inputs into item management system at the retailer based on what they know. Buyer goes with it. Often a prototype/sample.

3. Factory adjusts specs, announces changes

4. Sales rep may miss the announcement or forget to change item dimensions in the retailer’s system, or purposely not update it in fear the new specs would cause alarm and potential delistment.

5. Final product arrives, dimensions may again be slightly adjusted regardless of previous factory announcement months earlier.

6. Since no sales rep is going to manually measure any product themselves, they then rely on shipping/logistics for final spec check on arrival. If nobody does it, sometimes due to sales pressure, it ships out the door!

Lots of places for improvement.

Factories have to announce spec changes and ship what they announce, not to find out 3 months later when the shipment arrives through import that it’s changed again.

Since sales/category management rely on systems that display dimensions, the systems must be updated with the latest info.

Sales should update retailer’s item systems when they know current info. Updating dimensions is a frequent thing, everyone knows it and accepts it as longs as it’s not too drastic. Best is to keep retailer’s replenishment/merchandising depts in the loop. (The buyer likely won’t care.) Also, by doing so, it protects the supplier from vendor fines. As long as the retailer knows about it and accepts it, any automated fines have a good chance at being canceled… at least where I’ve worked.

For retailers, I’m not sure what the best way is, since I’ve never worked at a retailer. One thing that always caused fear for me was vendor fines. For chronic bad apples, retailers can stick to fining until there’s improvement (no leniency). Also, stress delistment, risk of losing valuable shelf location and flack merchandising teams are getting from your product lines.

Barb Golden
Barb Golden

My company sells to all of the top retailers in the US, and a lot of smaller ones as well. And they each have their own form(s) or vendor website or data sync company to use for item information.

The item and case dimensions and weights are on the initial item quote sheets, in most cases. Usually, the quote sheets (or even sometimes item setup forms) are required by the retailers months before production even begins on the items and before the item packaging is finalized, so there is no possible way that we can give accurate dimensions. But the retailers demand data in those form fields, so we “guesstimate”.

In a perfect world, we would re-issue the data to every retailer that we know we sent “guesstimates” to when the dimensions become reality, but we are a small company with minimal staffing. There are always other fires that need to be dealt with that take priority over correcting forms that have already been submitted.

If anyone can suggest a fix to this issue, I would love to hear it.

James Tenser

Several excellent comments here that describe the challenges caused by lack of accuracy in package dimension data. Improving this accuracy is a clear need, especially in an era when planograms are increasingly customized at almost a store-by-store basis.

I’d submit there’s another missing link here, which if addressed could help this situation to continuously improve: lack of shelf visibility. This is a key element missing from in-store execution that must be addressed for many reasons, not the least of which is planogram accuracy.

Under present methods, the kinds of shelf set problems caused by package dimension inaccuracies can only be evaluated long after the failures occur. If we substitute a set of practices that incorporate continuous shelf-status visibility, systemic errors would come rapidly to light, so that timely corrective action may be taken.

Shortening the shelf-status feedback loop to near real time would bring beneficial pressure on manufacturers to get the dimensions right in the first place, and on retailers to make sure the sets are efficiently done, since it would clarify the real cost of both parties’ mistakes.

Herb Sorensen, Ph.D.
Herb Sorensen, Ph.D.

When we study the behavior of millions of shoppers on a second by second basis, we are absolutely dependent on three pieces of information:

* The shopper – their location (x,y)/behavior/time

* The sales – item/category relationships at purchase

* The store – location of all fixtures and SKUs

It is this third item, what we refer to as Store Infrastructure, that is the most gnarly. And this present discussion here is dealing with one piece of this problem, in great detail.

As should be obvious from the discussion, those intimately involved are well aware of the lapses and complications. What is more interesting is the number of retail involved partners who are seriously affected by these problems, and yet seem to be virtually unaware that they exist (in contrast to how well they are known and understood by retailers themselves.)

The present discussion focuses on the “planogram,” which presumably refers to the main display on the gondola. And that is for sure important. But the reality is that 42% of all sales occur at locations other than from the planogram. The problem of accounting for that other huge share of sales is far more complicated than the management of the planogram.

By the time you make any significant progress on the problems discussed with the planogram, passive RFID may make this irrelevant, to an extent. Meanwhile, store infrastructure remains the gnarliest issue in doing full store, every shopper, every SKU studies of any kind.

Camille P. Schuster, Ph.D.
Camille P. Schuster, Ph.D.

When UPC codes were introduced and prices were taken off the products, the issue of data integrity and reliability received a lot of media attention because consumers were concerned about being charged the wrong price. With the media and consumer scrutiny, establishing, testing, and reporting accuracy was critical. Systems were not rolled out if they were going to receive bad press.

Data integrity and reliability is just as important with all the data being used for transferring information in the supply chain and the same amount of effort needs to be placed on establishing reliability before rolling out ANY system as there was when rolling out the UPC codes.

Mike Mohaupt
Mike Mohaupt

It really depends on the depth one takes Category Analytics. By its very premise Category Management is about managing categories based upon gaining knowledge of the consumer and through knowledge comes understanding from which fact based strategies are developed. Nowhere in here does it refer to managing skus. But herein lies the issue: when it comes time to implement these fact based strategies, product level data inaccuracies most certainly negatively impact managing categories.

I would disagree with some of the comments above. It is all about Space Management. Every category (segment) has a role to play, almost on a store by store basis. Since the very premise of Category Management is about knowledge of the consumer, it has to be done on a store by store basis. And based on the role the category plays, it should dictate how much space is allocated to that category. Factoring in the scale of that on categories that are as sku-intense as ours with a store by store approach, you could have several different planograms. After all, our objective has to be to maximize space productivity GMROI per Cubic foot (GMROF). Then when you consider improving supply flow and downstocking. Coordinated planograms to product placed on skids. Accurate data has to drive these strategies as well.

There is no way a retailer would have the ability to pretest all the possible variations to work the bugs out in advance. Space management software has to do this for us, which in turn requires accurate product data. Central data libraries such as UCCnet will take on a bigger strategic role in the near future.

Steven Collinsworth

This is a huge problem, as Paul has stated, especially when you consider the size of some categories and the disparate number of package sizes within some of these categories. In some categories, a dimension error of 1/8 of an inch, multiplied by, say, only ten items, results in 1-1/4 inches of error. It may not seem like much, but it can mean not being able to place an item in the set, or cutting inventory on an item that merits the space.

Now extrapolate this throughout the store. Makes me dizzy just thinking about it.
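The arithmetic in the comment extrapolates readily. Here is a minimal sketch; the store-wide item count is purely an assumption for illustration:

```python
def accumulated_width_error(per_item_error_in, item_count):
    """Total horizontal drift when each item's stated width is off by the same amount."""
    return per_item_error_in * item_count

# The example from the comment: a 1/8" error across ten items
print(accumulated_width_error(0.125, 10))  # 1.25 inches

# Extrapolated: if 5,000 items store-wide carry the same 1/8" error,
# the planogram is off by roughly 52 feet of shelf in total
print(accumulated_width_error(0.125, 5000) / 12)
```

Even a small per-item error, repeated across a full store of facings, becomes whole bays' worth of phantom or missing space.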

I have worked for several large food and gm manufacturers over the years and it still puzzles me how they can provide inaccurate dimensions, incorrect UPCs and get their own case counts wrong.

One upper Midwest wholesaler has their own person who does nothing but image, measure, and input the data into the wholesaler’s database. They say they have cut down on dimension errors to almost zero. UPC errors are another issue.

Finally, inaccuracies fed into a space management software program have large implications. (Remember: "garbage in, garbage out.") If the errors are large enough, most space-related measures become inaccurate and devalued.

The single solution of one database is a good one. The issue is the number of companies trying to sell the service and the fees they wanted to charge manufacturers to assist in this process. Another issue was that the need was never sold hard enough to manufacturers for them to say "yes."

On both sides of the desk — manufacturers and retailers — many have devalued the necessity of accurate data. The need is there for both sides of the desk to provide cleaner data, whether through IT systems or from the engineer or clerk who performs these tasks, or even both. All companies must invest in processes, systems and people in order for issues such as this to become moot.

Dennis Serbu

Product-level measures, particularly for flexible packaging, are largely subjective. Even the measurement standard for GDSN is, in my opinion, ill-advised. The standard states that on potato chip bags, the depth, width and height should be measured with the bag lying flat so the contents settle evenly. That is not the reality of the shelf, and the depth measure can vary by as much as an inch. This clearly affects shelf capacity calculations and overstates the number of packages that will fit, by one unit or more in some cases. There go the days-of-supply and turn calculations. The dynamics of product settling also affect height and width. This is a nightmare in the salty snacks category, where most products are in flexible packaging. I would venture to say that there are no two identical measures of the same product in any database in the country. Even Gladson will show variances on the same package size: the same product, same bag, different seasoning.
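The capacity overstatement described here is easy to see in a small sketch. The shelf depth, bag dimensions, and sales rate below are hypothetical numbers chosen for illustration, not figures from the discussion:

```python
import math

def units_deep(shelf_depth_in, product_depth_in):
    """How many units fit front-to-back on a shelf of the given depth."""
    return math.floor(shelf_depth_in / product_depth_in)

def days_of_supply(shelf_capacity_units, daily_unit_sales):
    """Days the on-shelf inventory lasts at the given rate of sale."""
    return shelf_capacity_units / daily_unit_sales

shelf_depth = 24.0
facings = 5
stated_depth = 3.0  # bag measured lying flat, per the standard
actual_depth = 4.0  # bag as it actually settles on the shelf

stated_capacity = facings * units_deep(shelf_depth, stated_depth)
actual_capacity = facings * units_deep(shelf_depth, actual_depth)

print(stated_capacity, actual_capacity)     # 40 vs. 30 units
print(days_of_supply(stated_capacity, 10))  # 4.0 days planned
print(days_of_supply(actual_capacity, 10))  # 3.0 days in reality
```

In this sketch a one-inch settle removes a quarter of the planned capacity, which is exactly the kind of days-of-supply and turn distortion the comment describes.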

With hard-sided product packaging, I cannot understand or accept errors in measures. It is what it is.

Unscrupulous category partners will use measurement variations to their advantage, showing slower turns and higher days of supply for their competitors. Retailers using partner data should be vigilant and spot-check against actual shelf capacity.

Race Cowgill

This is an excellent example of the power of organizational Master Systems. Generally, organizations have weak processing systems (I'm not speaking only of IT or materials processes here): they require huge amounts of human support and interaction, they handle relatively few of the overall processing needs, they generate huge numbers of errors, and the rest of the processing needs are handled by ad-hoc flows. Let us not fool ourselves: organizations of all kinds are generally not created or managed by systems experts; they are created and run by industry insiders who know little about system design. (Again, I am not speaking only of IT systems here.) The system that manages all of this, then, is the Master System, and trying to view that system when you are not a systems expert, much less measure it and change it, is literally impossible. The result is that this entire subject becomes very complicated very quickly and has few or no practical answers. Of course, shifting the Master System is the answer, but the players here are not really able to do that.

Mario Vellandi

The problem is multivariate for the many reasons cited above. But let's face it: the greatest number of errors originate with manufacturers or their reps. Having worked in sales for a manufacturer with a large variety of SKUs and a quite enviable number of retailers, I know that maintaining accuracy can be a daunting task. Based on my experience:

1) I learned slowly to go from LxWxH to WxHxD;

2) I relied on Product Design & Development to provide me with item specs;

3) I relied on the Logistics dept. to provide me with case dimensions;

4) Although the concept of inner case and master case seemed logical to me, when working with trays/PDQs, the issue got a little confusing;

5) The issue was compounded by the variety of forms we dealt with from different retailers;

6) Product Design & Development may have updated the bottle size but failed to update their internal system specs or notify us of this update;

7) In the absence of such data, we often had to wing it to our best judgment;

8) Although I used a variety of sales agents, I can only imagine that they never want to hear about such problems. After all, such data accuracy is beyond their reach of influence, and they generally don't care. If they enter inaccurate data in retailer forms, it's either their fault or the manufacturer's;

9) Lastly, our staff was trained to push SALES. Day in, day out. The little intricacies of data formats, industry practice, and so on were never really taught. It's just something we learned along the way. And if data was inaccurate, oh well. There were plenty of potential factors that could've contributed to the problem; it's not our job to figure it out. I'm going to lunch now, then the meeting afterward, then calling my rep, checking the CRM system, printing out the brochures, sending out the samples...

Shaun Bossons

I don’t think that there can be any doubt that data integrity is a large problem within the retail community. Issues with product dimensions, case pack information and product segmenting have caused headaches for retailers for many years.

Over the last 12 months, I have seen a number of the largest retailers in the US start to address this problem by investing in the improvement of data quality. Improving the process and accuracy of new product development, and its introduction into the business, has become a focal area. As retailers plan on a far more granular basis than ever before, accurate data becomes critical, since planograms simply can no longer all be tested, given the number of store-specific versions.

Moving forward, it will be essential to ensure the accuracy of data and the ability to ensure that it is stored in a consistent, controlled and centralized manner. Most retailers face a daily battle to consolidate, massage and change data from many isolated locations across the business; this simply results in sub-optimal knowledge. Working with one version of the truth, with customer insight and rest-of-market overlays will allow retailers to truly plan to meet the demanding needs of their customer base.

Ryan Mathews

Not only is data integrity a problem, it’s an ongoing problem. There isn’t a simple, one time fix for any of these issues. “Solving” the data integrity issue requires a higher level of cooperation (and cost sharing) between manufacturers and retailers than we’ve seen up to this point.

Dan Raftery

I am repeatedly amazed that the issue of product data accuracy and integrity continues to be treated as news. Some of us have been proposing the elements of data synchronization for 20 years now. It is the great enabler of supply chain analytics and efficiency. Without a universally accepted repository as the source, individual databases have no chance of containing the same data, much less accurate data.

It would be nice if we could place the mantle of responsibility on manufacturers, but that's not realistic. Heck, even with the database advances from enterprise-wide systems, most manufacturers still maintain more than one data repository. In many cases, it's because they need to satisfy individual retailers' proprietary system requirements if they want to do business with them.

This issue goes beyond space management. Any technology built on an information system needs correct data to function correctly. Outsourcing this responsibility to a reliable resource is the only way to have a chance at keeping the databases clean.

Bill Akins

The responsibility for item attribute maintenance has traditionally fallen under the umbrella of Category Management at a number of large big box retailers. Category advisers have been the ones to maintain vital dimension statistics during the modular (planogram) review process, but found the task overwhelming when there were thousands of items on a set and only a few days to complete the analysis. In my experience, if the dimensions were off, it was because the manufacturer and the retailer were not communicating appropriately on package changes, new sizes, or differences between packaging on promotional skus versus the “parent” shelf item.

With the advent of UCCnet, an attempt was made at global synchronization of item information, but adoption was slow. The Global Data Synchronization Network (GDSN) is proving to be much more widely adopted and should ease the strain on category advisers stuck with digital tape measures and calipers auditing product heights, widths, and depths.
