Metrics & data & charts, oh my!

Stan Garfield
7 min read · Mar 12, 2018

Originally published July 13, 2015

My father, Sol L. Garfield, was one of the preeminent clinical psychologists of the 20th century. When someone made a claim of some kind, he would demand, "Give me data!" He knew his way around basic statistics such as mean, mode, median, standard deviation, and correlation.

My sister, Joan Garfield, is a leader in improving student learning of statistics. I am not at her PhD level of statistics, but I have long been interested in collecting, compiling, and interpreting statistics. When I was young and my family drove across the country on vacations, I would doggedly record the number of gas stations of each brand as we drove by. There was no particular value in gathering this data; I did it to pass the time.

In high school, I was the statistician for the boys' basketball team. As a junior, I invented a new stat, which I dubbed OPERA (Overall Player Efficiency Rating Average). Although advanced analytics are now commonplace in sports, this was an early basketball stat that went beyond traditional ones such as points, rebounds, and shooting percentages. OPERA was computed with this formula: Points + Rebounds + (3 × Assists) + (3 × Steals) + Blocks − (0.5 × Missed Shots) − Missed Free Throws − Fouls − (3 × Turnovers). There were both positive and negative terms, and some stats (assists, steals, and turnovers) were weighted more heavily than others. The goal was to give the coach information he could use to decide who should play more and who should play less, and to highlight stats that tended to be overlooked at the time. I went on to keep statistics at every level of basketball, including every grade from 4 through 12, CYO, AAU, AYBT, college (Washington University in St. Louis), and pro (the Spirits of St. Louis). During the two years of the Spirits' existence, I kept minutes played for each player.
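The formula translates directly into a few lines of code. Here is a minimal sketch in Python; the function name and the sample box score are mine, for illustration only:

```python
def opera(points, rebounds, assists, steals, blocks,
          missed_shots, missed_free_throws, fouls, turnovers):
    """Overall Player Efficiency Rating Average (OPERA).

    Positive contributions are added, negative ones subtracted;
    assists, steals, and turnovers are weighted 3x.
    """
    return (points + rebounds + 3 * assists + 3 * steals + blocks
            - 0.5 * missed_shots - missed_free_throws
            - fouls - 3 * turnovers)

# Hypothetical box score for one game
print(opera(points=18, rebounds=7, assists=5, steals=2, blocks=1,
            missed_shots=8, missed_free_throws=2, fouls=3, turnovers=4))
# 18 + 7 + 15 + 6 + 1 - 4 - 2 - 3 - 12 = 26.0
```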

Analytics, big data, visualization, and infographics are in vogue today. (Aside: Although Google offers Google Analytics, Google Chrome’s spell-checker doesn’t recognize “analytics.”) These can all be valuable, but there is also the potential for abuse, misuse, and mindless use. As both a fan and a critic of statistics, metrics, and analytics, I offer a few suggestions.

The main reasons to collect metrics are to:

1. Take action based on what the numbers indicate. For example, if you are leading a communities initiative, report on the health of each community every month, and retire the inactive ones using a community health report (a sketch appears after this list).

2. Track and communicate progress against goals. For example, if you are leading a knowledge management initiative, identify the top 3 objectives, track and report on how the organization is doing against them each month, and inspect and discuss progress (or the lack thereof) in management team meetings.

3. Persuade others, answer typical questions, and refute baseless assertions. For example, I sometimes receive comments such as “no one uses our enterprise social network (ESN).” I refute these by pointing out that the ESN currently has:

  • 118,652 Total Members
  • 1,256,806 Total Messages
  • 144,432 Total Files

On the other hand, be wary of publicizing numbers which reflect actions you don’t want to encourage. For example, if you don’t want lots of groups being created in your ESN, don’t promote these metrics:

  • 30 New Groups Created
  • 1,148 Total Public Groups
  • 1,186 Total Private Groups
  • 2,334 Total Groups
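
To make the first reason concrete, here is a minimal sketch of a community health report in Python. The activity thresholds, field names, and sample communities are assumptions for illustration; an actual report would pull live data from the community platform:

```python
# Flag communities for retirement when activity falls below a threshold.
# The 90-day / 10-post threshold is an illustrative assumption, not a standard.
from datetime import date

communities = [
    # name, posts in the last 90 days, date of last post (sample data)
    {"name": "Cloud Architecture", "posts_90d": 142, "last_post": date(2018, 3, 1)},
    {"name": "Lotus Notes Tips",   "posts_90d": 2,   "last_post": date(2017, 9, 14)},
]

def health_report(communities, today, min_posts=10, max_idle_days=90):
    for c in communities:
        idle = (today - c["last_post"]).days
        healthy = c["posts_90d"] >= min_posts and idle <= max_idle_days
        status = "healthy" if healthy else "retire"
        print(f'{c["name"]:<20} posts: {c["posts_90d"]:>4}  idle days: {idle:>4}  -> {status}')

health_report(communities, today=date(2018, 3, 12))
```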

Here is an example of metrics which don't necessarily adhere to any of these reasons: graphs depicting data collected as part of a survey.

The problem is: what will be done with this data? Are there any important conclusions to be drawn, key insights to be realized, or actions to be taken? If not, then the charts don’t offer any useful information and need not be produced.

Many people express a desire for data that doesn't drive any action or insight: collecting data for data's sake. Each metric captured and reported should have an associated action or insight it is expected to drive. Avoid collecting every random data point, sliced and diced every possible way, that someone might want to know once but has no intention of acting on beyond saying, "Oh, that's interesting."

Also watch out for chartjunk: decoration and visual clutter in a chart that conveys no information and distracts from the data.

Metrics and Reporting

Metrics and reporting include capturing operational indicators and producing reports to communicate performance against goals, areas for improvement, and progress toward the desired state.

There is a wide spectrum of opinion about the importance of measuring a major initiative’s activities. Some believe that it is essential, and want to collect data and create reports on a long list of detailed metrics. Others believe that this is a waste of time, and that effort is better spent on taking action to implement the initiative.

Three different kinds of metrics are typically captured and reported:

  • Goal-oriented measurements directly relate to employee goals and allow assessment against those goals.
  • Operational metrics are based on data captured by the systems used by the initiative. For example, a knowledge sharing initiative would capture details such as web page hits, uploads, and downloads; threaded discussion subscribers, posts, and replies; and repository submissions, searches, and retrievals.
  • Business impact metrics attempt to determine the return on investment (ROI) of initiatives, and include costs saved, costs avoided, incremental revenue, improved quality, increased customer satisfaction and retention, new business attracted, increased market share, revenue from innovation, etc.

Collecting and reporting on goal-oriented measurements ensures that the organization is aware of how it is performing and that individuals can be held accountable for achieving their goals. Reports should be produced and distributed every month to track progress, reinforce good performance, and encourage improvements where needed. Reporting metrics by group within the organization (for example, regions of the world or countries within a region) allows each group to compare its performance against other groups, creating a friendly competition to excel. Reporting metrics by individual may be limited by data privacy laws; where allowed, such metrics should be transmitted confidentially to each person's manager for use in performance coaching and appraisals.

Operational metrics can be helpful in analyzing how an initiative's infrastructure is being used and who is using it, and in identifying areas for improvement. However, there is only so much which can be inferred from data such as page hits, uploads, and downloads. These metrics don't indicate the value of any of these activities. If a user visits a web page, they may not find what they need there. If a document is uploaded, it may not be of any use to anyone. If a document is downloaded, it may not be reused. Follow these rules when deciding which operational metrics to collect and report:

  • Keep the time and effort required to a minimum, automating as much of the collection, extraction, and production as possible (see the sketch below).
  • Ask your team which metrics will help them the most.
  • Focus on a few key metrics which relate to your overall objectives.
  • Use the metrics to improve the environment, and test for this in user surveys.
  • Communicate the metrics regularly so that they influence behavior.
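As a sketch of the automation rule above, the summary below assumes activity events are available as simple records with a type and a user; the event schema and metric names are illustrative, not any particular platform's API:

```python
# Summarize a handful of key operational metrics from raw activity events.
# The event schema (type, user) is an illustrative assumption.
from collections import Counter

events = [
    {"type": "page_hit", "user": "alice"},
    {"type": "download", "user": "bob"},
    {"type": "upload",   "user": "alice"},
    {"type": "page_hit", "user": "carol"},
]

def monthly_summary(events, tracked=("page_hit", "upload", "download")):
    counts = Counter(e["type"] for e in events if e["type"] in tracked)
    users = len({e["user"] for e in events})
    return {**{t: counts[t] for t in tracked}, "active_users": users}

print(monthly_summary(events))
# {'page_hit': 2, 'upload': 1, 'download': 1, 'active_users': 3}
```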

Business impact metrics are potentially useful in justifying the expense of a program, garnering management support, and communicating the value of spending time on recommended activities. Anecdotes and success stories can be collected and converted into numerical values. Data about the value of an initiative can be captured in incentive points systems. Processes can be created or modified to ask participants about the business impact of initiative tasks. But there are few definitive ways to prove that a particular business indicator was influenced solely by the initiative; there are usually multiple reasons for a specific business result, and the initiative in question may be just one of them.
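For reference, ROI is conventionally computed as net benefit divided by cost, expressed as a percentage. A minimal sketch with hypothetical figures (a 200% result means the documented benefits were three times the cost):

```python
# Conventional ROI: net benefit as a percentage of cost.
# The benefit and cost figures below are hypothetical, not from any actual program.
def roi_percent(benefits, costs):
    return (benefits - costs) / costs * 100

# e.g., a community costing $100k that yields $300k in documented savings
print(roi_percent(benefits=300_000, costs=100_000))  # 200.0
```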

Some firms have conducted one-time surveys to prove the case for an initiative. For example, Caterpillar commissioned a one-time study by an independent consulting firm to identify the benefits and ROI for two established communities of practice: Joints and Fasteners and Dealer Service Training. The results were:

  • Qualitative ROI: Productivity (up 40%), Cost (reduced 25%), Speed (up 15%), Quality (up 4%)
  • Tangible ROI: 200% for internal CoPs; 700% for external CoPs

Based on these results, the Caterpillar KM program was justified, and has been supported ever since. There is no need for ongoing collection and reporting of ROI, since it has been done once.

If there is a way for you to collect business impact metrics, then do so. They have more significance than operational metrics. But follow the same guidelines about limiting the effort involved to a reasonable amount.

Collecting and reporting on the measurements used in your program will help you to communicate progress, motivate people to improve their performance, and reassure management of the value of the initiative. Keep the effort required to do so in the right balance with other projects, look for ways to continue to streamline the process, and review the reporting process annually to keep it relevant to the current state.

See also: Goals, Measurements, Metrics, Analytics, and Reports

How have you seen metrics used successfully? And how have you seen them misused?


Stan Garfield

Knowledge Management Author and Speaker, Founder of SIKM Leaders Community, Community Evangelist, Knowledge Manager https://sites.google.com/site/stangarfield/