Let’s start by saying that “average” gets a bad rap. Look it up in a thesaurus and you’ll find it associated with words like mediocre, moderate and ordinary.
That’s a lot of bad press for a statistical measurement that identifies “the middle”. Is average such a bad place to be?
The truth is that it’s an important number, but it doesn’t tell you much in itself. If I have one hand in an ice bucket and the other in boiling water then, on average, the temperature of my hands is comfortable.
The problem is that average doesn’t cope well with extremes. So when we start using it in our web metric calculations, we risk missing the full picture.
Measuring difference in performance
What we should be interested in is the “standard deviation”. Without getting bogged down in statistical definitions, this measures how widely individual results are spread around the average.
Let’s take a real-life example with the measure “Average time on page”. This is an important metric for understanding how well audiences are engaging with your content.
But what if half of your blog posts perform well and the other half poorly? The “Average time on page” could imply that users are reading half of the content on all of the pages.
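To make this concrete, here’s a minimal sketch using hypothetical per-page times in seconds: half the posts hold readers, half lose them almost immediately. The figures are invented purely for illustration.

```python
from statistics import mean, stdev

# Hypothetical "time on page" readings (seconds) for eight blog posts:
# four perform well, four perform poorly.
times_on_page = [180, 195, 170, 185, 20, 15, 25, 10]

avg = mean(times_on_page)      # 100s: looks moderately healthy on its own
spread = stdev(times_on_page)  # ~89s: almost as large as the average itself

print(f"average: {avg:.0f}s, standard deviation: {spread:.0f}s")
```

The average of 100 seconds suggests acceptable engagement everywhere, but a standard deviation nearly as large as the mean reveals two very different clusters hiding behind that single number.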
By aggregating measurements, we miss the opportunity to identify which articles perform well (or not).
That prevents us from learning and using that information to improve the “cold” content so that it works equally hard.
Accurately benchmarking our content performance against other organisations is almost impossible: the data just isn’t available.
Internally though, we can easily benchmark similar content to identify the characteristics of high performance.
Define best practice
Armed with that knowledge we can then analyse why some content works better than others. Equally important, we can identify why some seemingly “good” content performs poorly.
Improving it could be as simple as fine-tuning the title or summary text to make it work better in search results. Maybe some authors have a more engaging writing style than others, or particular topics are simply more popular.
The resulting best practice means that new content can be created in a style, and with a purpose, that is proven to be effective.
Improving the average
Of course, some content will always perform better than others: every site has its star performers. The trick is to work out how to narrow the gap between the highest and lowest performing content.
Systematic benchmarking, applied to each type of content on your site, allows you to improve poor performance and identify gaps where new content is required.
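One way this benchmarking could be sketched in code: group pages by content type, compute each type’s own average and spread, and flag pages that fall well below their peers. The content types, URLs and timings here are all hypothetical.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical pages: (content type, URL, time on page in seconds)
pages = [
    ("how-to",  "/blog/setup-guide",   190),
    ("how-to",  "/blog/install-tips",  175),
    ("how-to",  "/blog/quick-fix",      40),
    ("opinion", "/blog/hot-take",       60),
    ("opinion", "/blog/industry-view",  55),
    ("opinion", "/blog/predictions",    12),
]

by_type = defaultdict(list)
for content_type, url, seconds in pages:
    by_type[content_type].append((url, seconds))

flagged = []
for content_type, entries in by_type.items():
    times = [s for _, s in entries]
    benchmark, spread = mean(times), stdev(times)
    for url, seconds in entries:
        # More than one standard deviation below the type's own average:
        # a candidate for review against the type's best performers.
        if seconds < benchmark - spread:
            flagged.append(url)
            print(f"{content_type}: {url} underperforms "
                  f"({seconds}s vs ~{benchmark:.0f}s average)")
```

Benchmarking each type against itself matters: a 60-second read might be excellent for a short opinion piece but a failure for a long how-to guide, so a single site-wide average would mislead in both directions.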
That informs and improves the editorial process so that new content is created, safe in the knowledge that it is more likely to be “hot” than “cold”.
In turn, this improves the average in a consistent and repeatable way. That’s a far from mediocre outcome and makes “average” a valuable metric for our content marketing reporting.