UPDATE, Sept. 22, 2015: This story has been updated to name the organizations that partnered with CIR to co-produce the TechRaking event.
In an age of digital photography, the old adage that “a picture is worth a thousand words” has become something of an undersell. In fact, most pictures we see on the Web tell thousands of stories, simultaneously, across two perceptual planes: There are the shapes, shades and colors in the image that the eye quickly discerns, and then there are the thousands of data points that generate a wealth of additional detail in the digital world. The eye also can see the latter, of course, but it helps to know where to look.
Understanding this interplay – it’s not just what we’re seeing; it’s also how we’re seeing it – was the backdrop for “Verifying the News,” The Center for Investigative Reporting’s 12th TechRaking event on Sept. 12. The event series, in partnership with the Google News Lab, aims to bring technical solutions to challenges in contemporary journalism.
This particular event was co-produced with the Future of News initiative at the MIT Media Lab, Knight Foundation, First Draft and Bloomberg.
At the MIT Media Lab in Cambridge, Massachusetts, approximately 50 journalists, technologists and entrepreneurs gathered for a day of conversation and constructive brainstorming. The goal was to underscore the importance of carefully validating the sources of images – even in an ever-churning landscape of Web content.
Journalists generally don’t make a habit of willfully discarding useful information. But according to Dartmouth College professor Hany Farid, who kicked off the event, that’s exactly what happens each time we download, resize or otherwise change a digital photo. The bundles of numeric metadata contained within image files such as JPEGs can provide clues about a picture’s authenticity, as well as when and where it was taken.
When we augment a photo, what we’re actually doing is making a sort of microscopic trade-off. We’re “throwing away information in order to save memory,” Farid said. “You degrade the image a little bit so that the footprint is a little less.”
The upshot? Forgeries, even Pulitzer Prize-winning ones, can be easy to detect through a close examination of metadata.
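To make the idea concrete: the metadata Farid described lives in discrete marker segments inside a JPEG file. The sketch below, a minimal illustration rather than a forensic tool, walks those segments with Python's standard library and reports which ones are present; the EXIF data that re-saving or resizing often discards typically sits in the APP1 segment.

```python
import struct

def list_jpeg_segments(data: bytes):
    """Walk the marker segments of a JPEG byte stream and return
    (segment name, length) pairs. APP1 (0xFFE1) typically carries EXIF
    metadata; downloading, resizing or re-saving an image often strips
    these segments, discarding clues about when and where it was taken."""
    if data[:2] != b"\xff\xd8":  # every JPEG begins with the SOI marker
        raise ValueError("not a JPEG")
    segments = []
    i = 2
    while i + 2 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows
            segments.append(("SOS", None))
            break
        if i + 4 > len(data):
            break
        # segment length is a big-endian 16-bit value that includes itself
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        name = f"APP{marker - 0xE0}" if 0xE0 <= marker <= 0xEF else f"0x{marker:02X}"
        segments.append((name, length))
        i += 2 + length
    return segments
```

Running this over a freshly shot photo versus the same photo after a pass through a social network will often show the APP1 segment shrinking or vanishing, which is exactly the "throwing away information" Farid described.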
Still, journalists don’t always have time to sort through lines of metadata. Following Farid’s comments, Eliza Mackintosh, duty editor at Storyful, provided a case study in quickly cross-checking sources as a news event unfolds. The day Dylann Roof allegedly killed nine people in a South Carolina church, Mackintosh and her team combed through their roughly 1,000 Twitter lists for real-time conversations about the incident. Pulling in details – a Facebook photo here, a mugshot there – on the fly, they built a detailed picture of Roof almost immediately after the shooting occurred.
Data does have its limits. When Aric Toler, a contributor at the citizen journalism site Bellingcat, posited that shadows in a photo can help put a timestamp on a news event, Farid was quick to interject. Toler’s example – he’d shown how a flagpole’s shadow, cast across a house, was used by Web sleuths to determine the approximate time a missile allegedly was launched nearby – contained too many opportunities for error, Farid said. He added that it’s very challenging to pinpoint a precise time of day based only on the position of a grainy shadow in a digitally enhanced photo.
Before attendees broke for lunch, Andy Carvin, editor-in-chief of Reported.ly, explained how images shared between friends on closed messaging services like WhatsApp can migrate onto social media and create viral misconceptions. He displayed a photo from a highway between Iraq and Kuwait, taken during the Gulf War, which had gained traction online a few months ago. Someone had claimed it portrayed a Yemeni battlefield.
“It becomes a case of what you might call footage laundering,” he said of the photo. “If you’re not very, very careful, you’ll think it’s a different incident.”
For this reason, he added, journalists ought to take information they see on social media with a grain of salt. A social network like Twitter, Carvin said, should not be treated as a standalone source; instead, it’s most useful to his news team as a “seismograph” to help track emerging events and conversations in real time.
After lunch, attendees broke into groups for a design sprint – an intensive, four-hour brainstorm – with the aim of creating solutions based on the questions posed in the morning’s panels. As with all TechRaking events, each group presented its original concept to a panel of judges, who would decide a winner.
“The Reliable Imagery Project” proposed a set of best practices for social media companies such as Twitter to guarantee undoctored images for their users. “Sway,” hoping to make it easier to determine the source of a video or photo, suggested a tool that unites a user’s social media accounts into one viewable platform. “Check It” aimed to create an accredited online course incentivizing learning around verification techniques. Over the course of the afternoon, each team refined its idea with input from the judges.
The winner was a concept called “Suddenly,” a proposed tool for collating and sorting outbursts of expletives and emojis across Twitter. The tool would aim to geolocate bursts of activity to a central event, making it easier to identify and contact sources near the action. Studies have shown that when a noteworthy news event occurs, many witnesses’ first online impulse is to swear.
The tool, which CIR will help develop, will “focus on social instability and volatility around the world,” said Reported.ly’s Carvin, the group’s spokesman. “The core point of this is to help news organizations cultivate sources who have experienced whatever it is that just happened.”
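The burst-detection idea at the heart of “Suddenly” might be sketched as follows. Everything here is an assumption made for illustration – the marker list, window size and threshold are invented, not details from the team’s proposal – but it shows the basic mechanic: bucket marker-bearing tweets into fixed time windows and flag windows that spike far above the baseline.

```python
from collections import Counter

# Hypothetical reaction markers for illustration; a real tool would need a
# curated, multilingual list of expletives and emojis.
MARKERS = {"omg", "wtf", "\U0001F631", "\U0001F4A5"}

def detect_bursts(tweets, window=60, ratio=3.0):
    """tweets: iterable of (unix_timestamp, text) pairs.

    Buckets tweets containing a marker into fixed `window`-second bins and
    flags bins whose count exceeds `ratio` times the average of the other
    bins. Returns a sorted list of (window_start_timestamp, count)."""
    counts = Counter()
    for ts, text in tweets:
        if any(m in text.lower() for m in MARKERS):
            counts[ts // window] += 1
    if len(counts) < 2:
        return []  # no baseline to compare against
    bursts = []
    for bucket, n in counts.items():
        others = [c for b, c in counts.items() if b != bucket]
        baseline = sum(others) / len(others)
        if n > ratio * baseline:
            bursts.append((bucket * window, n))
    return sorted(bursts)
```

A production version would also need geolocation and, as the judges’ question suggests, some way to separate genuine emergencies from reality-TV outrage – perhaps by cross-referencing bursts against other signals before alerting a newsroom.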
A question, posed by one of the judges, and not yet answered: Using “Suddenly,” how might one distinguish reactions to real-time news events – a missile attack, a blown pipeline, a declaration of war – from reactions to less pressing issues, like an unexpected revelation on “The Bachelor?”
The Suddenly team members did not have an immediate answer. But they’ll have ample opportunity to devise one as we develop the tool together in the coming months.