Tag Archives: nytimes

How they made Snow Fall

 

I had been waiting for this for some time now: the step-by-step explanation of the development of the NY Times' most successful and attention-grabbing multimedia narrative, Snow Fall. I wrote about the resources that went into it (in Portuguese), and how they may seem excessive both in the number of people involved and in production time: assets most newsrooms don't have.

Though I believe there is a risk this type of narrative will only happen sporadically, and only in digitally minded newsrooms with huge resources – smaller teams need to learn how to produce multimedia interactive stories consistently, using their own scarce resources (when they come around to this mindset, I mean) – this is a great walk-through into Oz, i.e., the process of creating Snow Fall at the NY Times newsroom, from which we can draw our own conclusions about what modern news reporting is all about. Or should be.

Q. There’s a ton of audio and moving-image work in Snow Fall, and you used a lot of techniques from filmmaking, but within a very reading-centric experience. What kind of challenges did those elements present?

Catherine Spangler, Video Journalist: The challenges of crafting multimedia to complement a text-based story were the same challenges faced in any storytelling endeavor. We focused on the pacing, narrative tension and story arc—all while ensuring that each element gave the user a different experience of the story. The moving images provided a much-needed pause at critical moments in the text, adding a subtle atmospheric quality. The team often asked whether a video or piece of audio was adding value to the project, and we edited elements out that felt duplicative. Having a tight edit that slowly built the tension of the narrative was the overall goal.

How we made Snow Fall, Source

Worth checking this Storify page to get an insight into this project.

 

Snow Fall: the future and the means of production

 

Snowfall, New York Times, 2012

 

There has been a lot of talk about the New York Times feature "Snowfall", often from the least interesting perspective. Some say it is the future of journalism, others say it is not, and some think that only that newsroom could produce a piece like this. Everyone is exaggerating.

First we have to look at the scale of the work:

- it is a feature that probably would not fit in a non-digital publication, even in a magazine (so much so that an ebook edition was made);

- it took six months to produce;

- it occupied a team of 17 people;

- it uses video, audio and interactive applications to recreate the event.

Let's take it step by step, starting with the aspects that are most shocking for people who work in journalism.

Resources

Six months and 17 people are resources that most newsrooms cannot allocate to a single feature.

The issue is not the 17 people but what each of them does: 12 are credited for the design and production of the project, 3 for video, 1 for additional research, plus the journalist who coordinated the project.

Many journalists said this is an aberration, and I agree that it seems to be, especially for those who come from the individualistic world of print. But if we look at how many people are needed to produce an in-depth television report and put it on air, the ratio between journalists and the technical staff who get it broadcast is probably not that different, and even print relied on entire printing plants with dozens of people, plus distributors, to ship its product. It is a kind of hypocrisy shaped by production habits, and a reductive view of the process of producing information.

Most of the team is tied to the technical and visual aspects of the feature, and if we want stories suited to the digital medium, we need people with the skills required to produce them.

I agree that six months is a long time, but if we consider the number of sources and the amount of data needed to tackle this subject, it is clearly a complex piece of work. The collaboration of several scientific research institutions, which live outside the daily production pressure of journalism, may also have made the process longer.

Not every story works for this kind of feature. It needs a certain degree of timelessness, and the possibility of being followed up in the future with new content.

One of the examples I used in my training sessions was the Star Tribune feature "13 seconds in August", produced in 2007, which also took several months and a large team, and which still has a place in the publication today, with updates on the survivors. Why? Because the event justifies it.

In Portugal, in a similar situation, only one publication went to the trouble of doing something like this, and the difference in investment shows.

It is also worth noting that one of the people involved is Xaquin Gonzalez Vieira, one of the leading names in digital narrative production, who was busy doing other things while the feature was being produced. You probably know his infographic of the plane that went down in the Hudson River better.

The result of this effort?

one week

Technology

This may well be the first widely publicized major piece produced by a newsroom that uses simply HTML5, CSS3 and Javascript for a large feature, instead of Flash, which, given its poor support on tablets, has been losing appeal among creators of digital narratives. Newsrooms need people who not only know how to code but who also know how to work with geographic and statistical data. Yet production inside newsrooms is still very much centred on the isolated effort of the journalist-writer.
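
Since the point here is the move from Flash to open web technologies, here is a minimal sketch, in TypeScript, of the kind of scroll-triggered playback that features like Snow Fall rely on. It is not the Times' actual code: the data-autoplay attribute is my own assumption, and it uses today's IntersectionObserver API rather than whatever the 2012 build used.

```typescript
// A minimal sketch (not the Times' code) of scroll-driven playback done with
// plain web APIs, no Flash plugin required. Assumes ambient clips are marked
// with a data-autoplay attribute in the page markup.
const clips = document.querySelectorAll<HTMLVideoElement>("video[data-autoplay]");

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const video = entry.target as HTMLVideoElement;
      // Play the clip only while the reader has scrolled it into view,
      // pause it again as soon as it leaves the viewport.
      if (entry.isIntersecting) {
        video.play();
      } else {
        video.pause();
      }
    }
  },
  { threshold: 0.5 } // at least half of the clip must be visible
);

clips.forEach((clip) => observer.observe(clip));
```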

I agree with those who said this is not the future of journalism, though not in the same way. It is the present, and it is not science fiction. It is not an unreasonable effort, especially when compared with other kinds of investment, and when we look at the qualitative and quantitative results. And, basically, it is a feature with a traditional format, plus a few extras.

It is, above all, the future of in-depth digital narratives. The trend is there, and so is the demand. The digital market keeps expanding across platforms with specific visualization and interaction needs that have to be met. But that implies that, wherever you want to produce content for digital media, there must be the right skills, market strategy and means of production.

Shock and exaggeration are not the best way to face yet another fantastic way of telling stories. What worried me about the criticism was that it did not take into account the reality of consumption, but rather the poor production conditions that exist in most newsrooms. Despite the efforts made by the same players for years (Rádio Renascença, Público and JN), the multimedia production landscape in Portugal has been basically the same for the last five years, with a degree of evolution very close to zero.

And what comes after this avalanche?

So what’s next? The design team behind the Times project told The Atlantic Wire last week that no specific new stories had presented themselves yet as affording the “luxury” of the six months it took to report and design “Snow Fall.” But Abramson’s memo cites one-third of traffic to the avalanche story as first-time web visitors, and that can be more appealing than raw numbers. (We’ve reached out to the Times for comment, and will update when we hear back.)

The Times, of course, does long, reported features all the time, but as The Atlantic’s Derek Thompson pointed out, “There is no feasible way to make six-month sixteen-person multimedia projects the day-to-day future of journalism, nor is there a need to.” But it’s been a great year for the “long read” community, and while there were few ads on the full-screen layout for “Snow Fall,” the fact that its traffic has dwarfed entire sites might not make single-story advertising too far-fetched of an experiment.

 
So What if Tons of People Read That ‘Snow Fall’ Story on the Times Website?
 

The upward spiral and the tornado effect: monitoring the spreading of news

I should start a category in this blog called “the spilt milk department”. It would include all the great ideas I had in recent years that were never developed or written about. I could make up excuses and say I’ve been busy (I have) or that I’m lazy (I am), but the fact is things evolve quite rapidly on the interwebz: it’s easy to blurt out a few ideas that will quickly dissolve into cyber-oblivion, and you need time to build a solid concept. But the rule of thumb is “just do it”, not “procrastinate until it’s perfect in your mind”. Enough crying. Here goes:

Two years ago I thought about how we could trace back to the original source of information, mostly news, on the web: the first tweet, the igniting blog post, the seminal article that got shared, sliced into quotes, linked to, built upon, changed, remixed, archived. Not only is it a good way to understand the current online ecosystem, but, if analyzed correctly, this flow could provide new insights for news distribution strategies, and a real assessment of the impact of a specific event on social media (all media is social these days, by the way). I presented my thoughts on this last year to a class of MA students in their Cyberjournalism class at Porto University, and the plan was to build a tool that would track that flow from the very start.

Of course, that's easier said than done, except at the NYTimes, where they built a tool quite similar to my original concept, called Cascade. They did it in 3D, like I wanted to, and it's beautiful to look at. Check their video, but read on afterwards to dive into my own concept, which is similar in its basics but somewhat different.

The way information about a news event is distributed has changed dramatically in recent years. The so-called traditional media are no longer the driving force in this process; they have been replaced, in part, by the active participation of users, who became creators, distributors and sharers of news content, using tools like Twitter, Youtube, Facebook, blogs and other social networks.

We already know this, but what I'm looking for with this proposal is a model that provides a clearer view of this change and its consequences, and of how media should rework their strategy around this decentralized logic and make the most of it. Understanding that logic allows a better, more profitable management of their resources, in gathering and then distributing information: giving up their central role as the source of all journalistic content, but intervening at different points and in different ways along the flow. It's the end of the mainstream media concept and their transformation into stream media.

The ideas presented here were the basis for a prototype that could be validated and improved with a more scientific and empirical approach. My goal was to create a three-dimensional, dynamic visualization that corresponded to reality; the NYTimes' Cascade, presented above, is a good example of that. But for now, I'll use my drafts, which are quite simple, to give a basic perspective on the current and future changes in the news paradigm.

In the age of pyramids

The old paradigm

For decades, news was built and distributed in the same way, and we have to understand the classic news model that dominated from the industrialization of journalism onward, especially in the 20th century, and that remained relatively unchanged even with the arrival of new media like radio and television. The breaking point came with the internet and the Digital Revolution.

Media as source of the information flow

In a pretty simple way, this is how it worked:

  1. Event;
  2. Journalists gathered information in the field from the people involved in the story, witnesses and official entities;
  3. Information was edited, published;
  4. The audience had access to the information in print the next day or in the following news segments on TV or Radio;
  5. After consuming the news, all the information could be discussed by the readers/listeners/viewers in a more or less private way, in a direct, interpersonal relationship. The information could be recovered in new news pieces if new developments occurred;

And its useful life ended here, not long after it was created. Unless audience members created their own archives, there weren't many chances to recover or reuse that information, since all content (or most of it) was archived in the possession of media companies. The main feature of this half-life of news was its perishability. Only the more relevant events would become part of the audience's collective memory, and in most cases in a scattered, individualized and disorganized way.

The very ritualization of the journalistic process, with news at fixed hours and an institutional standing alongside communities and sovereign bodies, helped keep it closed and limited to a small number of people who determined the importance of a specific event, following rules and guides almost clerically. They were the defenders of the public interest, the makers of public opinion, and the power to reach the masses granted them the status of the Fourth Estate. News consumers took part in the information routine simultaneously as actors and audience, with very little interference in the practice of news professionals, who held absolute control over what should be published and how.

The media were the creative hub of all news content, and all their work flowed from top to bottom in a pyramidal structure, whether inside news organizations, in content distribution, or in content building (the inverted pyramid), and they were its keepers. But with the new technologies it all collapsed, and we watched the horizontalization of the news process, which now also happens beyond the borders of traditional journalistic structures.

From carpet bombing communication to relational communication

Relational

  1. Of or arising from kinship.
  2. Indicating or constituting relation.

So if in the previous paradigm information was static, closed, finite and short-lived, the current situation is pretty much the opposite. Where the audience was indiscriminately bombarded with information, the internet provided room for niches, with specialized information. And links and the link economy changed everything: we can comment on and quote a specific piece of information while providing immediate access to it. We share it and point the way to it. Add the social media/aggregation/recommendation/distribution tools and we are no longer passive elements in the news flow, but active characters in the creation and distribution of information.

With the 24-hour news cycle and the permanent breaking news status (news is no longer “breaking” these days, it’s just “happening”; news can be “exclusive”, but basing its importance on a time factor is, to say the least, irrelevant), the pressure on media to keep the information flowing has become intense: for example, the traditional news cycles of newspapers no longer meet the needs of a permanently connected audience, and neither does ritualized information published at a fixed moment when it is made with “breaking news” in mind.

Another thing that doesn’t work is closed content, or content that cannot be shared or distributed on different platforms. Facebook has become an important place to access information, where individual articles from brands chosen by users are shared and commented on by personal networks immediately. Twitter was probably the first place where this happened on a massive scale. Recommendation became a simple process that took news from media platforms to individual platforms, which are built on a sharing logic. Check this Pew Research Center report on how people navigate news online.

What we are watching today is the “user curation of content”, where news spreads faster and farther than it ever did, simply because users are no longer mere recipients of indiscriminate information but active participants in its distribution. This new ecosystem broke the previous model, where the media stood on a platform bombarding the audience with information in a one-way relationship, and moved to a situation where exchange is the rule.

And the media are no longer the sole source of news; just remember how many events were first transmitted by users through social networks and online tools, only to be picked up by news pros afterwards: earthquakes, the Iran election and, more recently, Bin Laden’s death are great examples of the role of users and social media in the distribution of news.

With these factors in mind, I thought about how we could visualize the flow of information, from the very first tweet, post, video, etc.

The upward spiral

a not as cool looking representation of the news flow

The main difference between my idea and the NY Times visualization is that I thought about a spiral instead of branching ramifications of content. This seemed the most effective way because I had a few parameters in mind: time, range and audience attention. In my draft Facebook wasn’t considered because it was 2009 and it wasn’t as important in the flow as Twitter was, and blogs had more relevance in this ecosystem.

So, this is how it works: at the epicentre there’s the event, then the first tweets and posts, picked up afterwards by media as breaking news, then retweets/shares on social networks, comments, more media articles, new user-generated content based on new information or built upon the existing content, and so on. Audience attention is highest at the beginning, and it fades as the flow widens. This is not a process closed in time, because information can be recovered and reused days, weeks, months, years later, which is another feature of digital content: it’s perennial, and database or archive journalism is something that has been growing recently.

Side view. The arrows mean "time"

These are measurable parameters, if only there were a tool to compile and calculate them…
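
To show how they could be compiled, here is a toy sketch of such a tool. The ContentItem record and both functions are hypothetical names of my own, not part of Cascade or any existing product; they only exist to make time, range and audience attention concrete.

```typescript
// A hypothetical data model for the spiral: every piece of content in the flow,
// with the parameters the text describes.
type ContentItem = {
  id: string;
  kind: "tweet" | "post" | "article" | "video" | "comment";
  publishedAt: number;   // timestamp in milliseconds: everything online has one
  sharedFrom?: string;   // id of the item this one quotes, links to or builds on
  audience: number;      // rough reach of the account or outlet that published it
};

// Range: how many items appeared in each slice of time since the flow started.
function flowWidth(items: ContentItem[], bucketMs: number): Map<number, number> {
  const start = Math.min(...items.map((i) => i.publishedAt));
  const buckets = new Map<number, number>();
  for (const item of items) {
    const bucket = Math.floor((item.publishedAt - start) / bucketMs);
    buckets.set(bucket, (buckets.get(bucket) ?? 0) + 1);
  }
  return buckets;
}

// Audience attention: total potential reach in each slice, which should peak
// near the epicentre and fade as the flow widens.
function attention(items: ContentItem[], bucketMs: number): Map<number, number> {
  const start = Math.min(...items.map((i) => i.publishedAt));
  const buckets = new Map<number, number>();
  for (const item of items) {
    const bucket = Math.floor((item.publishedAt - start) / bucketMs);
    buckets.set(bucket, (buckets.get(bucket) ?? 0) + item.audience);
  }
  return buckets;
}
```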

Of course, in reality things might differ and the elements may vary, because media companies can be the creators of the first tweet, or other tools can be used – the Iran elections had a huge impact also because of the YouTube videos made available by the protesters – but the core idea is this: there is a root (or roots) that generates more content, linking opportunities through sharing, recommendation or referral, and the construction of new content based on pre-existing information.

What I had in mind, and what Cascade does superbly, is to track the various connections and evaluate the ripple effect caused by a single piece of content. Mapping the origins of information and assessing its impact can be useful not only to validate that information but also to develop new strategies for content distribution.
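
Continuing with the hypothetical ContentItem records from the sketch above, the ripple effect of a single root item could be estimated by walking its sharedFrom links downwards. Again, this is an illustration of the idea, not how Cascade actually works.

```typescript
// Count everything that descends from one root item by following sharedFrom links.
function rippleSize(items: ContentItem[], rootId: string): number {
  // Index items by the item they quote, link to or build on.
  const children = new Map<string, ContentItem[]>();
  for (const item of items) {
    if (item.sharedFrom !== undefined) {
      const siblings = children.get(item.sharedFrom) ?? [];
      siblings.push(item);
      children.set(item.sharedFrom, siblings);
    }
  }
  // Breadth-first walk from the root, counting each descendant once.
  const visited = new Set<string>([rootId]);
  const queue = [rootId];
  let count = 0;
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const child of children.get(current) ?? []) {
      if (!visited.has(child.id)) {
        visited.add(child.id);
        count += 1;
        queue.push(child.id);
      }
    }
  }
  return count;
}
```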

 

The Tornado Effect

The development of this flow, represented by a spiral, would create what I call the Tornado Effect. Imagine a horizontal axis, a timeline representing the event developing chronologically, and a vertical axis where the information spirals upwards in time and connections. The more numerous the connections and links, the higher the spiral; the longer the interest lasts, the lengthier the timeline. Most events would briefly touch down and dissolve away; others would be powerful enough to drag a significant number of users and platforms into them, or even generate new tornadoes.

There is a scale to measure tornado power, the Fujita scale, which goes from F0 to F5, and I was thinking about having one applicable to these phenomena, let’s say from G1 to G10 (the Gamela scale – I meant it as a joke, ok?). Events like Michael Jackson’s death, the revolutions in Northern Africa, or William and Kate’s wedding would be high up on the scale; blog posts and articles shared on Twitter by a small group of people within a short period of time would be close to zero.
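
Still using the invented ContentItem records from above, the two tornado dimensions and the joke scale could be reduced to something like this. The log scaling and the factors are made up purely for illustration; they are not a real metric.

```typescript
// A toy version of the tornado dimensions and of the (joking) G1–G10 Gamela scale.
function tornado(items: ContentItem[]) {
  // Height of the spiral: how many pieces of content link back to another one.
  const links = items.filter((i) => i.sharedFrom !== undefined).length;
  // Length of the timeline: how long the interest lasted, in days.
  const times = items.map((i) => i.publishedAt);
  const durationDays = (Math.max(...times) - Math.min(...times)) / 86_400_000;
  // A global event with millions of links over weeks lands near G10;
  // a post shared by a handful of people in an afternoon stays near G1.
  const score = Math.round(Math.log10(links + 1) * 1.5 + Math.log10(durationDays + 1) * 2);
  return { links, durationDays, gamela: Math.min(10, Math.max(1, score)) };
}
```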

And if we selected paths of information, from an original source to its various ramifications, as we can observe in Cascade, we would have something like lightning inside the tornado, connecting the related content dots along the spiral. Sounds fun, doesn’t it?

Implications

Leaving visual metaphors aside, what’s the purpose? First, we could analyze where the information comes from; then how it is shared and used; and finally, who contributed the most. A thorough analysis of the path of information could represent a shift in strategy for media companies, which could reconsider how to find, distribute, create and aggregate content, making it more viral and rethinking it to extend its longevity and usefulness.

The role of users in this process would be better observed and weighed, and a more complete view of the platforms and mechanisms they use to access and share information would be possible.

And above all, new kinds of journalism practices would have to be applied. Curation has become a huge concern for media pros, and there are many tools to curate content, professionally produced or not. Archive journalism would become more important than it is now, since curation is, in a way, creating archives in almost real time; but with these tornadoes mapped out we could infer who tweeted first and what the relationships were between events, content and related content, down to the millisecond, since everything online has a timestamp. Information is now perennial, and should be used with that feature in mind.

Of course, this is not a fully developed idea, and I lack the skills to build such an analytic tool. But as a concept, I believe it would be useful for understanding the spread of news and creating new strategies from that understanding, while mapping out and monitoring the evolution of information. And the Cascade project is quite close to that.

Leave your disagreements, ideas, praises, this week’s lottery numbers in the comment box. Thank you.

 

picture shamefully stolen from a place I can't remember now