Manohla Dargis just posted a fine article about the onslaught of digitally acquired films[1]. She writes:
Some of this year’s most acclaimed and talked-over movies, for starters, including “The Social Network,” “Black Swan” and “Tiny Furniture,” were either partly or wholly shot in digital. It’s no wonder that more than a third of my 30 favorites this year — because, really, why stop at 10? — were a combination of the two. Does it matter?
Does it matter?
It matters to producers, as digital, used properly, can lower the cost of a shoot.
It matters to telecine and processing facilities (telecine? What’s that?), which are quickly diversifying the services they offer, or are closing.
It matters to assistant editors, who must either cope with the constantly changing workflows designed around different digital formats, or handle transfers from film and keep track of the relationship between the digital clips in the editing system and the actual pieces of film.
It surely matters if you’re a Director of Photography with exacting standards, and you’ve spent your entire career building your knowledge and ability to achieve the images you want on film. That’s probably the person to whom this debate matters the most, from the aesthetic side of the argument. But should it? Would adapting one’s style of shooting to a particular sensor be any different than adapting to a particular film stock? Can one choose particular cameras and sensors based on the needs of a project, as one did film stocks? I think this is already happening. But I can’t speak for DOPs.
But for audiences and cinema enthusiasts, “Does it matter?” was a good question a year ago. Now it’s an afterthought.
Celluloid film is an amazing technology, and it’s dying, quickly. I’m no purist[2], but I’m a little sad about this, and hope I can work on a feature film that’s shot on film before it completely disappears.
But as much as I love film, it’s exciting to see digital sensors catch up to film, and quickly. It’s a repeat of what happened in the world of print. I think we’re past the time in which the film vs. digital battle was spirited and relevant. That ongoing argument helped shape the new breed of digital cinema cameras, and it shows. It’s fast nearing the time to rephrase last year’s question, “Does it matter?”, as, “Can we stop counting now, or pretending we can tell the difference?” At this point I’d venture that most[3] of the more ardent celluloid enthusiasts and digital bashers have seen and unknowingly appreciated a digitally acquired film. It’s the cinematic equivalent of the Turing Test, and digital started to pass a couple of years ago.
Dargis continues, and my mood darkens:
Digital images still don’t look as rich and sumptuous as film, which was developed to reproduce the way our eyes see the visible spectrum.
I can’t let either of these sorry old canards[4] go unexamined.
First canard first: It is a bit unfair to compare the best examples from 100 years of celluloid film against the limited number of existing digitally acquired films, especially considering that many of the films one thinks of (and notices) as “digital” were shot earlier in the decade[5]; digital has improved in quality by leaps and bounds in the past few years. One has to compare to something, of course, but if you restrict your view to the best examples of recent digital, it becomes instantly clear that competently shot digital images can look plenty rich and sumptuous. Even digitally acquired films from the past two years that don’t feature cinematography designed to take center stage, like “Cyrus”, look “rich and sumptuous” to me.
Back to print. Has National Geographic stopped looking sumptuous since it started publishing photos shot on digital cameras? Have people been writing complaints in to Playboy? The shift of the field of still photography from film to digital is relevant to this discussion, as digital sensors for moving imagery and for stills are as closely related as their celluloid counterparts are. The complaints and worries about digital for stills seem to have vanished over the past decade as the technology improved to the point that only an engineer could tell the difference.
Regarding Canard #2: So much of the film vs. digital argument has for so long been couched in pseudoscientific, mystical pronouncements, and the idea that film is somehow better suited than digital to replicating the way the human eye sees the world is one such. Both film and sensors designed for photography are designed to create pleasing images for people to stare at. Neither technology really reproduces “the way our eyes see the visible spectrum”, nor would anyone really want that.[6] What film and sensors can do to reproduce the qualities of the visible spectrum that our eyes can sense, they do. Digital sensors follow film note for note in this regard. Both have the same design goal. The statement that film was designed to reproduce the way our eyes work, and the implication that digital sensors were not, is simply wrong from any angle.
Nitpicking aside, the article’s worth a read.
The battle that made the question “Does it matter?” relevant is all but over. But as for “Will it blend?”, I think celluloid’s got a lock on that one.
1. I nearly put the word “films” in quotes, but if I did that I’d have to start calling the “bins” in Final Cut Pro what they look like: “folders”. [↩]
2. That should be obvious, given this blog post. [↩]
3. It’ll be all of them by the end of 2011. [↩]
4. Well, if they haven’t achieved canard status, it’s not for lack of trying. [↩]
5. It’s also worth noting that many of the digital films shot until recently went digital for reasons of flexibility or cost containment, and didn’t place a priority on picture quality. The results speak for themselves, and should not be compared to the visual impact of films that put a priority on appearance, such as Lawrence of Arabia, but instead to similarly oriented uses of celluloid, such as ’60s man-on-the-street news footage or Monty Python’s outdoor segments. [↩]
6. Human vision is fantastic, but that’s mostly the brain’s doing; in many ways our eyes are surprisingly crappy. The brain creates fantastic imagery out of surprisingly little information, and a lot of what we see is the result of pattern recognition and quick deduction in the visual cortex. Our impressions of color and light are relative to neighboring colors, rather than absolute measurements, and the way we see images is in a sense hallucinatory.
Though neuroscientists and ophthalmologists would love to get together and watch a sequence that actually reproduces the way our eyes see, it’d be jarring for an audience to watch footage that features a tiny, darker circle of very high detail and color saturation in the center, with blues tending toward violet, a larger empty black spot nearby, and all of it surrounded by an increasingly blurry and warped bright green-blue miasma in which motion is detectable but details are not. And of course the view would continuously jitter around at high speed, never holding still, so that the tiny spot of clarity could flit from detail to detail and take in enough for the brain to make its assumptions and create what we see. [↩]