Before you read this, I encourage you to visit the thoughtful post by the Washington Post’s Greg Linch on the same subject. It gets to the heart of an issue that, at least in my opinion, is going to be extremely important to the success of public service journalism – and particularly the non-profit variety – going forward.

It’s also an issue I happen to be passionate about, as anyone who endured my Ignite talk at NewsFoo (or spoke to me thereafter) can attest. Journalism’s standard metrics – pageviews, uniques, what have you – are designed around a CPM-based economy that values quantity over quality.

That’s fine when you’re running a CPM-based business, but without some kind of counterbalancing force, those metrics create a set of incentives that fundamentally discourages some of our best work. I’m a huge advocate of journalism’s operations becoming more data-driven – but it has to be the right data. That’s where measuring impact comes in.

It’s important to note that when we say “measuring impact,” the measuring has to be quantitative. Qualitative assessment is great, but until you have some kind of standard for comparison, some kind of yardstick we can use to say Story X had more impact than Story Y, it will be hard to capture both the insights and cultural change we need to counterbalance the oversimplified metrics in place today.

Put another way, bosses like to see graphs go up. How do we put impact on a graph?

I’ll offer up one model, which Greg alluded to in his post: tracking the conversation around a story. To that end, I’ve been playing around for a while now with two conceptual metrics, which I’ve taken to calling splash and sustain.

Big, impactful stories usually make a splash. News organizations make a big deal out of them with promotions, placement, etc. And if they catch on, they usually travel well on blogs and social media. Maybe they make the rounds among the commentariat. Splash is an indicator that people are listening, and that even at the most superficial level, they are engaged with what you’re saying.

I think of splash as the High Striker game at the carnival. If you come on strong enough, and hit hard enough, you’re going to see your splash metric shoot upward. Unfortunately, Kim Kardashian probably hits the High Striker harder than even the best investigations. That’s where sustain comes in.

The most impactful work we do creates its own self-sustaining narratives. Say we write a story about seismic safety in California’s schools. If it makes a big splash, other news organizations might quickly follow up on it, giving us some credit (“as reported by California Watch …”). A few months later, there might be a legislative hearing. Reporters from a few organizations will be sent to cover it, and this time they’re less likely to credit us because even though our reporting put the subject of seismic safety on the radar, it’s not just our story anymore – it’s everyone’s story. The narrative survives without us because it’s part of the public consciousness.
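To make the two metrics concrete, here’s a minimal sketch of how splash and sustain might be counted, assuming we already have timestamped mentions of a story gathered from links, follow-ups and social media. The seven-day window and the sample data are hypothetical, not anything we actually use:

```python
# A minimal sketch of splash vs. sustain, assuming we already have a list of
# timestamped "mentions" of a story (links, follow-ups, social shares, etc.).
# The seven-day window and the sample data below are hypothetical.
from datetime import datetime, timedelta

def splash_and_sustain(published, mention_dates, splash_window_days=7):
    """Split a story's mentions into an initial burst (splash) and
    everything that arrives after the window closes (sustain)."""
    cutoff = published + timedelta(days=splash_window_days)
    splash = sum(1 for d in mention_dates if d <= cutoff)
    sustain = sum(1 for d in mention_dates if d > cutoff)
    return splash, sustain

# Hypothetical story: a big first week, then a long tail of pickups
published = datetime(2011, 4, 1)
mentions = [published + timedelta(days=n) for n in (0, 1, 1, 2, 6, 30, 75, 120)]
print(splash_and_sustain(published, mentions))  # -> (5, 3)
```

A story that spikes and dies would post a big splash number and a near-zero sustain; the work we care most about keeps the second number climbing for months.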

One of the most fascinating projects I’ve seen along these lines is the Memetracker project, built in part by Stanford computer science professor Jure Leskovec. I had the chance to talk with Dr. Leskovec in his office last fall, and he explained the difficulty of tracking the mutations of a narrative as it moves through time and across sources. Still, major narratives tend to have signatures that can be traced – certain phrases, or unique words, or quotations – from place to place.
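As a toy illustration of that signature idea – emphatically not Memetracker’s actual algorithm, which handles mutating phrases far more rigorously – one could index the quoted phrases in a pile of articles and look for overlap. The regex and sample articles here are made up:

```python
# A toy version of the phrase-signature idea. Assumes articles are plain
# strings and that direct quotations make a usable signature.
import re
from collections import defaultdict

QUOTE_RE = re.compile(r'"([^"]{15,200})"')  # quoted spans of plausible length

def phrase_index(articles):
    """Map each normalized quoted phrase to the articles containing it."""
    index = defaultdict(set)
    for article_id, text in articles.items():
        for phrase in QUOTE_RE.findall(text):
            index[" ".join(phrase.lower().split())].add(article_id)
    return index

articles = {
    "ours": 'Inspectors said the gym "does not meet state seismic standards."',
    "pickup": 'The district admits the gym "does not meet state seismic standards."',
}
# Phrases shared by more than one article are candidates for the same narrative
shared = {p: ids for p, ids in phrase_index(articles).items() if len(ids) > 1}
print(shared)
```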

Doing that at scale, of course, is where the hard part comes in. Tracking a narrative involves working with massive amounts of data, using techniques that brilliant PhDs are still refining. That’s not to say it’s impossible, but we probably won’t figure it out any time soon.

That got me thinking: What if there were a way to cheat?

As it turns out, journalists are decent judges of their own impact. If we do a story that resonates, we tend to write follow-ups about it – sometimes ad nauseam – either because we want to provoke action or because our editors want to ride that wave as long as they can. On the other hand, if people stop responding, we tend to stop writing and move on to the next thing.

In that way, we can tell on a very superficial level which narratives have sustain, because we are the ones who sustain them. A few months back, I used document similarity algorithms to cluster narratives within our California Watch stories. We found that the biggest clusters – that is, the stories with the largest number of follow-ups – also tended to include some of our most qualitatively impactful work: the seismic safety series, our work on DUI checkpoints, our coverage of Prime Healthcare, etc.
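For the curious, here’s a rough sketch of that kind of clustering, assuming stories are plain-text strings. TF-IDF vectors and a cosine similarity cutoff stand in here for the document similarity algorithms I actually used, and the threshold and sample stories are made up:

```python
# A rough sketch of clustering stories into narratives by text similarity.
# TF-IDF plus a greedy cosine-similarity threshold is one simple approach;
# the 0.3 cutoff and the sample stories below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cluster_stories(stories, threshold=0.3):
    """Greedily group stories whose pairwise similarity clears a threshold."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(stories)
    sims = cosine_similarity(tfidf)
    clusters, assigned = [], set()
    for i in range(len(stories)):
        if i in assigned:
            continue
        cluster = [i] + [j for j in range(i + 1, len(stories))
                         if j not in assigned and sims[i, j] >= threshold]
        assigned.update(cluster)
        clusters.append(cluster)
    return clusters  # the biggest clusters mark the most-sustained narratives

stories = [
    "Seismic safety flaws put California school buildings at risk.",
    "A follow-up on the seismic safety flaws in California school buildings.",
    "DUI checkpoints generated millions in towing fees from unlicensed drivers.",
]
print(cluster_stories(stories))  # -> [[0, 1], [2]]
```

Counting the size of each cluster gives you a crude but graphable proxy for sustain: the longer we keep a narrative alive, the bigger its cluster grows.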

My measurements were crude, no doubt, but they showed enough promise to make me optimistic.

There are serious technical questions to answer in the study of impact, but technical problems are almost always solvable. What we need first is a broader discussion of potential methodologies. I’m glad Greg Linch opened the discussion. Here’s hoping it can turn into a self-sustaining narrative all its own.


Chase Davis is the director of technology for California Watch and its parent organization, the Center for Investigative Reporting. He also writes about money and politics issues for California Watch. Chase previously worked as an investigative reporter at The Des Moines Register and the Houston Chronicle and is a founding partner of the media-technology firm Hot Type Consulting. He is a graduate of the Missouri School of Journalism.