This is a true story. Back in my market research days, 20-some years ago now, I watched one of my friends and colleagues (call him Fred) present a market forecast to a committee of IBM executives. They objected to his numbers, which were far from what they expected and far from what they'd received from competing market research firms.
The pause that followed seemed to me to take forever. I watched with excruciating pain as the IBM executives in the room exchanged glances. I was sure they thought we were idiots, and expensive idiots at that.
It was probably only a second, maybe two. Fred, thank goodness, was a seasoned veteran of this kind of moment, an alumnus of Stanford Research Institute with a lot of degrees and a lot of experience. He responded immediately. "Oh, that's because we define the market differently," he said, quickly and confidently. Then he started asking questions. "How do you define PCs? Does the processor make a difference?" Our clients seemed bewildered at first, then apologetic. Fred resumed his presentation. We were home free.
What I knew, but kept to myself, was that Fred had changed definitions as quickly as a carnival con man changes peas underneath walnut shells. There's sleight of hand, and there's sleight of definitions. Definitions are a market research consultant's best friend.
Which brings me to the subject of metrics. By that I mean numbers, measurement, a scorecard. People want to see how they’re doing and we’re used to scores in numbers. I’ve posted on this blog before about the importance of metrics, which drive accountability into planning and management. Metrics are to management what mortar is to a brick wall. How can you have accountability if you can’t measure performance?
In "Three Ways to Make Your Employees Miserable" I quoted author Patrick Lencioni saying that employees need, deserve, and want metrics. Earlier I had posted "The Magic of Metrics," about how much I like metrics in my own work life.
My story here is about another side of metrics: their pliability. There's an old quote, variously attributed to Mark Twain, Alfred Marshall, or Benjamin Disraeli: "There are three kinds of lies: lies, damned lies, and statistics."
Sometimes we manufacture truth to fit our purposes. It's not necessarily bad. When our Palo Alto Software soccer team went 8-1 in the city league last year, we called a local trophy shop and ordered our own trophy, which most of the players assumed was given by the league for finishing in first place (and if this is you reading this, sorry, we thought you deserved it). It gets particularly interesting when we do the same thing with the metrics built into our management.
That comes to mind today because of the juxtaposition of a story in the New York Times and a post by Seth Godin. In "How Many Site Hits? Depends Who's Counting," Louise Story reports on traffic counts for the same websites that vary widely depending on who's doing the measuring.
…big media companies — including Time Warner, The Financial Times and The New York Times — are equally frustrated that their counts of Web visitors keep coming in vastly higher than those of the tracking companies. There are many reasons for the differences (such as how people who use the Web at home and at the office are counted), but the upshot is the same: the growth of online advertising is being stunted, industry executives say, because nobody can get the basic visitor counts straight.
I do remember that in the old days of the dot-com boom we could manipulate definitions to change Web traffic numbers. For example, there were visits, visitors, unique visitors, unique visits, page views, and hits: lots of different but related numbers. Of course the true measure was sales, but even with sales, did we count returns? Did the month end at midnight of its last day, when the orders were placed, or when the orders were tallied and posted? There's always some wiggle room.
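The wiggle room is easy to see in code. Here's a minimal sketch; the log entries, the visitor names, and the 30-minute session cutoff are all my own illustrative assumptions, not any standard. The same six requests produce three very different headline numbers depending on which definition you report.

```python
# Hypothetical access log: (visitor_id, minutes since midnight).
log = [
    ("alice", 0), ("alice", 5),    # alice's first session
    ("bob", 10), ("bob", 12),      # bob's only session
    ("alice", 50),                 # alice returns after a 45-minute gap
    ("alice", 200),                # alice returns again much later
]

# "Page views" (or "hits"): every request counts.
page_views = len(log)

# "Unique visitors": distinct visitor ids, however often they return.
unique_visitors = len({vid for vid, _ in log})

# "Visits": sessions, where a gap of 30+ minutes starts a new session.
SESSION_GAP = 30
visits = 0
last_seen = {}
for vid, t in sorted(log, key=lambda e: e[1]):
    if vid not in last_seen or t - last_seen[vid] >= SESSION_GAP:
        visits += 1
    last_seen[vid] = t

print(page_views, unique_visitors, visits)  # 6 page views, 2 visitors, 4 visits
```

Same data, three answers. A consultant who needs a bigger number quotes page views; one who needs a smaller one quotes unique visitors; and the session cutoff itself is one more definitional knob to turn.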
In his post “The New York Times Bestseller List”, Seth Godin points out that the New York Times is manipulating the list to serve various purposes. But, he adds,
The best part… it doesn’t matter. Cumulative advantage is so powerful that even though the accurate reports of book sales often completely contradict the Times list, authors and others still obsess over it. We’re always looking for clues, especially in crowded markets.
The easy answer to this puzzle is consistency. Pick one measure and apply it consistently; it matters less which measure you choose than that you stick with it, so you can see change over time. In the real world, however, even those consistent measures are subject to change.
Fred, the market researcher in my opening story, was in his early fifties back when I worked at Creative Strategies; I was in my early thirties, and he taught me a thing or two. One of them was his use of the term SWAG as a data source. SWAG stands for "scientific wild-ass guess."