Before Isaac Newton, words like mass and force were general descriptors, as James Gleick writes in The Information:
“the new discipline of physics could not proceed until Isaac Newton appropriated words that were ancient and vague—force, mass, motion, and even time—and gave them new meanings. Newton made these terms into quantities, suitable for use in mathematical formulas.”
The term information was similarly amorphous until Claude Shannon, while working at Bell Labs, quantified the concept in bits.
* * *
The journalism goals and business goals for news organizations are out of sync.
Pageviews. Unique visitors. Time on site.
Some journalism might be best quantified, partly or wholly, by one or more of those measures, but we need to dig deeper than these fairly simplistic metrics.
We know how these terms are defined, but what do they really mean? What do they help us achieve?
In creating a theory of information and quantifying information in bits, Shannon aimed to remove meaning. “Shannon had utterly abstracted the message from its physical details,” Gleick says.
For journalism, the goal should be to add more meaning to the information we use to measure our work. Granted, our current metrics aren’t meaningless. We use them because they do have meaning: views, comments, shares and the like each capture something real, along a single dimension. Those numbers rise because the works of journalism they describe are meaningful. Or, put another way, impactful.
So, what if we measured journalism by its impact?
Impact is something journalists have long valued. Whether it’s to inform, to hold accountable or to entertain, this more qualitative effect is an oft-cited goal.
Impact also addresses, at least in part, a fundamental disconnect between our main modern metrics and our journalistic values. Setting goals in terms of pageviews, for example, offers the wrong incentives. Yes, sometimes accountability journalism can drive as many pageviews as the latest celebrity gossip, and then the two goals align, but that’s far from a perfect model for success.
More specifically, these widely used one-dimensional metrics value quantity over quality. True, we do have two-dimensional metrics that focus more on quality (time on site, pages per visit, etc.), but quantity still seems to reign because the one-dimensional measures sit at the heart of our models. There’s a danger in narrowly focusing on the one-dimensional “how much?” while ignoring the “so what?” of the varied, complex, nuanced results of journalistic work. But, really, the problem is that we’re looking at the wrong kind of “how much.”
Modern technology enables better quantitative and analytical tools, conceivably offering better ways to evaluate the results of journalism. Specifically, by weighing various measures that were not available in the past, it’s possible to devise a way to more concretely — albeit still imperfectly — define what impact means.
Yes, this concept of impact — as currently defined — is subjective and complex, as information once was. Measuring it won’t be easy, perfect or precise, but it deserves experimentation and effort. And it won’t improve if no one starts somewhere.
* * *
Currently, works of journalism (articles, videos, galleries, graphics, etc.), no matter the subject (news, sports, entertainment, business, features, investigations, etc.), are quantitatively measured the same. An investigative piece that might be nowhere near as popular in pageviews across a mass audience (yes, sometimes it can be) is quantitatively measured the same way a celebrity death story is. Either story could make a sensational splash, truly connect emotionally with readers, or both. Each has value, but there are different kinds of values across the different subjects journalists cover.
If we value impactful accountability journalism, why are we quantitatively equating it one-to-one with entertainingly impactful news? When an investigation saves taxpayer money or even human lives, we should measure it in a more multi-dimensional way — not merely with the simplistic one-dimensional measures — and measure it differently from journalism works that have different goals. We should do this not just because the quantification would be more accurate (again, still imperfect), but because it would be a better model of the complex real-world response.
Yes, this is already done in general, but informally and mostly qualitatively.
Along with weighing disparate works differently, we should consider that the immediate impact at a wide scale may be minimal while the immediate impact on a small scale may be huge, which in turn can lead to wider impact. Moreover, impact also needs to be measured over time — whether it grows, diminishes or stays the same — and perhaps even at what rate.
* * *
If metric goals are defined by executives, the business department and top editors, how can we build a bridge between the business reality and journalism reality? Where’s the middle where the two meet?
“How can we possibly meet these goals?” some in the newsroom might ask. “What does this have to do with good journalism?”
An ad-supported model traditionally sustained most newspapers. New models for selling ads online — based on CPM, for example — evolved to where we are today.
Relying on the one-dimensional metrics that underpin many of these online ad models can also be misleading.
We should no longer passively accept this status quo.
Define your own metrics. That was the key lesson of an unconference session Alexis Madrigal led at the first SparkCamp in New York City last June. It resonated and stuck with me, although mostly as a good idea that I had no idea how to implement.
He dubbed his example metric something along the lines of “awesomeness,” calculated by an equation weighting different metrics against each other. Really, what he’s talking about is a multi-dimensional metric:
(Pageviews / 1,000) + (video views / 100) + … etc
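A composite metric along those lines is easy to prototype. The sketch below is a minimal Python illustration of the idea — the divisors (weights) and the set of inputs are entirely hypothetical placeholders, not recommendations; each newsroom would choose and tune its own:

```python
# A hypothetical weighted "awesomeness" score, in the spirit of the
# formula above. The weights here are placeholders for illustration.

def awesomeness(pageviews, video_views, comments, shares):
    """Combine several one-dimensional metrics into one composite score."""
    return (
        pageviews / 1_000     # normalize raw traffic
        + video_views / 100   # weight video views more heavily per unit
        + comments / 10       # engagement signals count more still
        + shares / 5
    )

score = awesomeness(pageviews=12_000, video_views=800, comments=45, shares=30)
print(score)  # 12.0 + 8.0 + 4.5 + 6.0 = 30.5
```

The point is not these particular numbers but the shape: once the weights are explicit, they can be debated, adjusted and made transparent.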
Tools like ChartBeat and its niche product NewsBeat offer a distinction between active and non-active users, as well as how many are merely reading or writing on the site. They are also developing a new metric called “engaged minutes.” These measures are important and helpful, but I think — invoking Madrigal again — we need to define our own metrics.
Impact makes sense because different organizations can define it in different ways. It’s not a set formula — it’s a framework for thinking about how we measure journalism.
If Madrigal’s session was the unlit match, reading Melissa Rach’s second post on the value of content provided the spark when she said, “the baseline will help you measure the impact of your future content work.” Besides that inspiration, I found this nugget particularly insightful:
“most scientists, mathematicians, and statisticians say exact measurement is a myth. To them, the goal of measurement is to reduce uncertainty. Get this: it’s impossible to eliminate uncertainty all together — all measurement is based on assumptions. That means, when measuring content value, you don’t have to come up with precise numbers.”
We should consider the same in measuring impact.
And, to be clear, impact would not replace all other metrics. Instead, it could be an umbrella, a supplement to current measures or a step to something better than where we are now.
I envision a tool that allows different organizations, their departments and other levels (education under local, finance under business, etc.) to customize how they define impact and have that transparently communicated to journalists, readers, executives, advertisers, sponsors, etc.
You could factor in all the usual metrics of pageviews, pages per visit and time on site along with others such as comments, social mentions of a story (and by what kind of people) and links. You could track the larger conversation around a story (hat tip to Marc Lavallee for his influence in my thinking on that). You could also account for actions taken by governments, non-profits, community groups, registered voters, parents and others.
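To make the customization described above concrete, here is one possible shape for such a tool: each desk supplies its own weights over whichever signals it values. This is a sketch under assumptions — the desk names, signal names and weights are all invented for illustration, not a proposed standard:

```python
# Hypothetical per-desk impact scoring. Each desk defines its own
# weights over the signals it cares about; names and numbers below
# are illustrative only.

DESK_WEIGHTS = {
    "investigations": {
        "pageviews": 0.0005,       # traffic matters, but little per unit
        "official_actions": 50.0,  # e.g. a government response counts heavily
        "inbound_links": 2.0,
    },
    "entertainment": {
        "pageviews": 0.001,
        "social_mentions": 0.5,
    },
}

def impact_score(desk, signals):
    """Weighted sum of only the signals a desk has chosen to value."""
    weights = DESK_WEIGHTS[desk]
    return sum(weights[name] * signals.get(name, 0) for name in weights)

score = impact_score(
    "investigations",
    {"pageviews": 20_000, "official_actions": 2, "inbound_links": 15},
)
print(score)  # 10.0 + 100.0 + 30.0 = 140.0
```

Publishing a weight table like this is also what makes the definition transparent to journalists, readers and advertisers alike.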
* * *
Upon further reflection separate from my initial writing, two words that have been mentioned several times deserve particular attention: weight and value.
In one sense, the current standard measuring tools are like different variations of the same ruler when instead we should really be using different kinds of scales to measure the weight of our work. Weight and impact are not the same, but they’re related.
At the first NewsFoo in December 2010, Tim O’Reilly led a discussion on philosophy and remarked on building a product to reinforce a value. He later shared how he built his business around values rather than business models and emphasized the need to ask, “Are those the right values?”
So, are pageviews, pages-per-visit, time on site, even engagement (which is moving in a more quantitative direction) conveying the right values?
* * *
Many of these core ideas formed in my mind before I fully explored whether this specific idea had been discussed in a journalism context — and, as I found, it has. Even though they didn’t influence the thoughts above because of timing, here are a few interesting pieces I found and thought I should share:
- Metrics for civic impacts of journalism by Ethan Zuckerman
Another interesting thread I learned about after the main ideas above formed involves new ways to measure science and altmetrics (thanks to Jonathan Stray for the heads-up on the altmetrics manifesto).
A direct inspiration, as is obvious above, was my reading and partial re-reading of Gleick’s The Information. Less direct inspiration came from many of Stray’s excellent posts on rethinking journalism, specifically on designing journalism to be used. Journalism is “used” because it has impact, meaning and a purpose. Some journalists might be uncomfortable thinking about their work this way, but it’s true — whether the impact is direct or indirect. By observing and reporting, you’re on the field (to borrow a Jay Rosen metaphor).
Maybe you could even compare journalism to quantum mechanics, in a limited way. By your mere presence and observation, you can affect the outcome. And then by publishing journalism — from citing facts to quoting opinion to offering analysis — you are having an impact.
That impact is what we should measure.
* * *
I plan to follow up soon with some ideas on how we could measure impact. I have some initial thoughts and aim to get more from a Spark Camp Austin session I’ve proposed on measuring journalism.
But, overall, my goal is not necessarily to specifically outline the framework for how we could measure impact — though I do plan to try. Instead, the main goal is to advance the conversation and reality of how we measure journalism. What do those metrics mean? What do we value? How do we weigh different types of journalism in ways that are appropriate to their purposes? And, fundamentally, are our journalistic goals and business goals in line?
Update 1: This post has been updated with minor tweaks and two specific quotes from Melissa Rach’s post. Also, thanks to several people who I explained the idea to before writing this, including Albert Sun and Max Cutler for their feedback as I worked on finishing the draft en route to Spark Camp in Austin.
Update 2: Read notes from a BarCamp News Innovation Philadelphia session I led on defining new metrics for journalism.
Update 3: In Finding the Right Metric for News, Aron Pilhofer explains the goal of The New York Times’ future Knight-Mozilla fellow.
Update 4: Metrics, metrics everywhere: How do we measure the impact of journalism is an excellent piece by Jonathan Stray.