Benchmarking is too hard, so we don’t do it

Benchmarking is hard work

There is a misconception that benchmarking is a very involved and arduous process. I’m not going to tackle that issue in this post; I’ll cover it in my next one. But it is one of the largest barriers we hear about from organizations wanting to use benchmarking to examine a real business problem.

The Logic Falls Apart

The thing that has always struck me about this objection is that it doesn’t hold up, logically, on any level.

  • It is too hard, so I won’t do it – Do you only do easy things? Is that what made your organization successful?
  • We have a real problem, but don’t want to go learn about it – Then are you going to ignore the problem and hope it goes away?

I think most organizations are rewarded for staying with the status quo. They’ve already gotten permission to perform at their current level through an approved budget for the year. They don’t have any real impetus to change, so they would rather live with the problems and trade-offs they currently have than find out what might be required to fix them.

What is your CONK?

We call this CONK (the Cost of Not Knowing), and we find most organizations don’t consider what they might be able to achieve by improving their processes or performance. Organizations using tools like Lean or Six Sigma focus on reducing waste and defects and on improving performance, but even those organizations pursue theoretical (and usually incremental) improvements or look only internally for new ideas.

It is scary to look externally, and, to be honest, most of the organizations not looking externally don’t really know where to start. Taking that first step is the hardest, so I’ll tackle that in my next post. In the meantime, how does your organization view CONK and where do you look for ideas?


Categories: Benchmarking, Continuous Improvement, Process Management


10 Comments on “Benchmarking is too hard, so we don’t do it”

  1. Rob Mian
    March 1, 2013 at 6:57 am #

    Great topic! I have led and participated in numerous benchmarking studies with large, credible, external agencies. While the studies can be useful, they are inherently difficult to execute for a number of reasons:

    1) A meaningful benchmark study requires an apples-to-apples comparison. But not all companies are organized the same way. Many have departments with proprietary names and complex functions comprised of employees with unconventional titles and roles with varying degrees of responsibility. The processes and infrastructure being managed often have a similar degree of variability and complexity.

    2) This variability requires that a common framework be applied to normalize the results – to make everything look like an apple. The framework is typically provided by the consultancy, and if your company does not buy into the validity of the framework, the study begins to lose credibility.

    3) The process of mapping a company to the benchmark framework is an incredibly tedious and subjective piece of work. If you don’t get the mapping right, the study loses more credibility. People begin to question the motives behind commissioning the study.

    4) Collecting data can in many cases be subjective. If you’re simply pulling data off of servers, it’s a walk in the park. But in many cases the study requires that humans enter information into databases provided by the consultancy. How many people in your department perform function x? Hmmm… if I allocate too many people to this category I may look inefficient and will lose people. If I allocate too few, my department may be consolidated into another. I could lose my job. What to do…

    5) Once the preliminary results come in the games begin. If you don’t like the results and you have the institutional power to change them, you revise your submission on the grounds that you didn’t fully understand the question. Since the framework questions are often necessarily ambiguous, it can be difficult to challenge the proposed revision.

    6) Once the results are finalized people wait to see what’s done with them. If nothing comes of it they know not to waste their time on the next study. If the company puts blind faith in the results and makes irresponsible decisions they will be ready to game the next study. However, if positive change comes of it then you will have established trust and brought everyone a little closer together.

    Benchmarking can be an incredibly effective tool; however, it must be done properly, which does indeed require a lot of hard work. Complex organizations should carefully consider whether the potential benefits are worth the risks outlined above. In many cases, a more focused series of internal studies will achieve better results.

  2. March 1, 2013 at 7:08 am #

    Rob, great response and well thought out. I feel your pain because you’ve described many of the same issues I’ve dealt with in conducting benchmarking projects for our members.

    I can’t agree more about needing a framework, or a common language. That is why we worked with our members in the early 1990s to develop the APQC Process Classification Framework. (It is the most downloaded document on our site, and we want to keep it that way; we want as many people using it as possible so they can have a solid benchmarking experience.) We also have a benchmarking code of conduct to go along with it – another really useful tool.

    Your description of the process was spot on, as well. The things that sideline a good project are seldom related to the information collected, but to the personalities collecting it. It is easier to blame the data than to admit your performance might not be up to par.

    I’m going to post a series on benchmarking over the coming weeks and I’d love your input. We’ll agree on some things, I can tell. There are probably a few good debates we’ll have, too, though. I look forward to keeping the conversation going!

    Thanks again!

  3. Rob Mian
    March 1, 2013 at 7:11 am #

    Thanks, Ron. I can see we share the same passion and look forward to reading your upcoming posts.

  4. March 1, 2013 at 10:16 am #

    The cost of not knowing can be staggeringly large in most companies. Unfortunately, the medicine as prescribed is too strong. Benchmarking does not have to be the onerous, “incredibly tedious and subjective piece of work” Rob describes. I don’t disagree that it often is… I just think it can be done differently so that it isn’t.

    The ultimate benchmark in any company is revenue and margin. These values are derived from the value your company provides to its customers. When we help our customers ‘benchmark’ their operation, we assess their current throughput. That throughput has an associated cost, and applying the cost to the throughput yields a measurement. While it may seem overly simple, it is like horseshoes and hand grenades: close enough to count, and certainly instructive enough to identify opportunities for immediate, internal improvement that can also build or sustain a competitive advantage.
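    The cost-applied-to-throughput measurement described above can be sketched in a few lines. This is a hypothetical illustration with invented figures, not a real assessment method from any firm:

```python
# Hypothetical cost-per-unit-of-throughput comparison; all figures are invented.

def cost_per_unit(annual_operating_cost: float, annual_throughput: float) -> float:
    """Cost to produce one unit of throughput."""
    return annual_operating_cost / annual_throughput

# Our operation vs. a (made-up) peer's operation.
ours = cost_per_unit(annual_operating_cost=2_400_000, annual_throughput=60_000)
peer = cost_per_unit(annual_operating_cost=2_100_000, annual_throughput=70_000)

gap_pct = (ours - peer) / peer * 100
print(f"Our cost per unit: ${ours:.2f} vs. peer ${peer:.2f} ({gap_pct:.0f}% higher)")
```

    Horseshoes-and-hand-grenades precision: a gap this size won’t tell you which process to fix, but it is enough to justify looking.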

    I look forward to your follow-on post to learn more.

    • March 1, 2013 at 11:28 am #

      The complexity of any benchmarking exercise is proportional to the complexity of the organization and the objectives of the study. A high level study will be painless but it won’t necessarily provide the detail necessary to take specific actions.

      For example, a high level benchmark may reveal that global IT operational expense is higher than peer groups. The data for this benchmark can come straight from the accounting team. Painless. This leads to a decision to cut IT expenses by 20%. Easy. Managers are forced to react and make it happen. Uh oh. Is this necessarily the right strategic approach?

      A more comprehensive albeit painful benchmark would capture the details driving the higher expenses which might in turn lead to a better strategic solution. Maybe the 20% needs to come from one specific area. Perhaps the savings can come through consolidation of resources and standardization on best practices within the org.

      It comes down to one’s expectation of a benchmark study. If you simply want to find out where you stand in the world it can be relatively painless. If you want to get detailed, actionable recommendations from the study then there is no way to get around the effort.

      I hope this makes sense.

  5. March 1, 2013 at 11:20 am #

    Steve, thanks for the reply. In my experience, the “incredibly tedious…” aspect of benchmarking comes down to the approach taken and the type of information needed.

    There are times you need very precise metric information. Getting to the “apples-to-apples” comparison is tedious by nature. I will assert that you will never get that precise a comparison, no matter how much work you put into it. We’ve had multiple groups from the same company provide data to the same benchmarking survey and give different answers to key questions (like “What was your annual sales revenue for FY 20xx?”). These issues are inherent in humans and can never be fully removed.

    There are some core, basic things you can do (use a framework, have strong definitions, use strong validation steps, etc.) to help with that, but at the end of the day, it is a fruitless (darn fruit references!) task. There is another way to achieve this goal of comparison.

    I think organizations get into this paradigm because they are looking for “the number” to compare themselves against. I’ll assert there isn’t “a” number; you should look for “the” numbers that are most relevant to the process you are examining. Then, you (the person employed by the company to make this decision) have to understand the inherent biases in each number and make the best decision for your organization.

    That goes back to the title. Organizations go into a metric benchmarking exercise looking for data from their competitors that directly lines up to their data. That makes benchmarking hard. So they don’t do it. But, if you take a logical approach to it, you end up with numerous relevant data points, can understand each, and make a better decision. Before the benchmarking activity, you had nothing to use for decision-making.
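    One way to picture the “numerous relevant data points” approach: instead of hunting for a single competitor number, place your own metric in the spread of several peer values. The peer figures below are invented purely for illustration:

```python
# Illustrative only: invented peer data points for one process metric.
from statistics import median, quantiles

peer_cycle_times_days = [12, 9, 15, 11, 20, 8, 14]  # hypothetical peer values
our_cycle_time_days = 16

# Quartile cut points of the peer spread, rather than one "magic number".
q1, _, q3 = quantiles(peer_cycle_times_days, n=4)
position = "above" if our_cycle_time_days > q3 else "within or below"
print(f"Peer median: {median(peer_cycle_times_days)} days; "
      f"middle half: {q1:.0f}-{q3:.0f} days; we are {position} that range")
```

    Each peer value still carries its own biases (different scopes, definitions, fiscal years), so the spread informs the decision; it doesn’t make it for you.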

    My thought is, at the end of the day, a company usually doesn’t see great value in an employee who needs 100% relevant data to make decisions. They value someone who can take inputs, understand them, apply context and experience, make the best informed decision possible, and adjust.

    • March 1, 2013 at 11:33 am #

      Very well said. I agree 100%

  6. Al Burns
    March 2, 2013 at 6:33 am #

    Hi everyone,

    I am new to the forum but wanted to share a recent experience. I just finished a six-month project that was, on the surface, a matter of transferring best practices in higher education. The benchmarking we did was not particularly formal, but we did compare ourselves informally against some published survey data and scholarly sources.

    I agree especially with the last paragraph above, about a 100% relevant data comparison. For us, the value was in conceptualizing principles from specific, successful practices within our organization, then comparing them to principles we found in the published research. Later, we developed a phased implementation plan that enables us to learn and evolve.

    If it’s easy, then everyone can do it and it is not particularly valuable. Organizations need to balance the short and long term, by devoting some time to difficult initiatives. Easy benchmarking is uninteresting. Critical thinking is by nature difficult, necessary and absolutely valuable.

  7. March 3, 2013 at 10:59 am #

    Al, that sounds like a very interesting benchmarking project. I should introduce you to my colleagues at APQC Education. They are doing a lot of really great work transforming K12 education in the US.

  8. March 3, 2013 at 11:52 am #

    Thanks, Ron. I just started following them on Twitter and would love to make some connections. I manage several programs (MBA, DBA, Org Leadership, MPA, MPH…), so I have the privilege of working across many industry segments, including health care.

    I enjoyed an APQC conference a few years back in Chicago, back in the days of document management and CoPs. Their CMM was just beginning. Now, I see that social KM, analytics, and SNA are providing some new energy to the field. Great to see. SNA shows particular promise, especially for Org Development and Talent Management.

    Which brings to mind a question, what’s the next big thing? ;^)
