Variety has a terrific interview with James Cameron about the current state (and possible futures) of 3-D filmmaking. A couple of things that stood out for me:
Godard got it exactly backwards. Cinema is not truth 24 times a second, it is lies 24 times a second. Actors are pretending to be people they’re not, in situations and settings which are completely illusory. Day for night, dry for wet, Vancouver for New York, potato shavings for snow. The building is a thin-walled set, the sunlight is a xenon, and the traffic noise is supplied by the sound designers. It’s all illusion, but the prize goes to those who make the fantasy the most real, the most visceral, the most involving. This sensation of truthfulness is vastly enhanced by the stereoscopic illusion…
When you see a scene in 3-D, that sense of reality is supercharged. The visual cortex is being cued, at a subliminal but pervasive level, that what is being seen is real.
Seeing U2:3D last month, I agree: the best thing about 3-D is not that it makes things look cool. It’s that it makes things look more real. My favorite shots in the movie are when the cameras look out over the crowd, because you really feel each individual person. Not only are you there, you have permission to stare.
On “Avatar,” I have not consciously composed my shots differently for 3-D. I am just using the same style I always do. In fact, after the first couple of weeks, I stopped looking at the shots in 3-D while I was working, even though the digital cameras allow real-time stereo viewing.
Of course, most directors aren’t James Cameron, who helped invent the technology and can trust his instincts on all of this. But we have to trust someone’s instincts, because the alternative is paralysis. One of the pitfalls of adding new technology to film production is that the director drifts further and further from the action (and the actors) toward a Den of Experts, often in a dark tent, making decisions around monitors. In most cases, you’re better served by having a d.p. you trust.
We all see the world in 3-D. The difference between really being witness to an event vs. seeing it as a stereo image is that when you’re really there, your eye can adjust its convergence as it roves over subjects at different distances…In a filmed image, the convergence was baked in at the moment of photography, so you can’t adjust it.
In order to cut naturally and rapidly from one subject to another, it’s necessary for the filmmaker (actually his/her camera team) to put the convergence at the place in the shot where the audience is most likely to look. This sounds complicated but in fact we do it all the time, in every shot, and have since the beginning of cinema. It’s called focus. We focus where we think people are most likely to look.
Cameron is slaving convergence to focus, even pulling it as necessary throughout a scene. This makes sense, but I’d never heard it explained so clearly.
The new cameras allow complete control over the stereospace. You should think of interocular like volume. You can turn the 3-D up or down, and do it smoothly on the fly during a shot. So if you know you’re in a scene which will require very fast cuts, you turn the stereo down (reduce the interocular distance) and you can cut fast and smoothly. The point here is that just because you’re making a stereo movie doesn’t mean that stereo is the most important thing in every shot or sequence. If you choose to do rapid cutting, then the motion of the subject from shot to shot to shot is more important than the perception of stereospace at that moment in the film. So sacrifice the stereospace and enjoy the fast cutting.
In front of U2:3D, there was a 3-D trailer for Journey to the Center of The Earth 3D, which I’m sad to say looked like ass. Actually, it kind of looked like nothing, because it was blurry in a way I can’t describe, like my eyes didn’t know how to process it.
I think this is exactly what Cameron is talking about. The 3-D shots in the Journey 3D trailer were probably composed for the movie, where they play much longer. But cut into a conventional trailer, it just didn’t work.
You don’t need to be in 3-D at every step of the way. And as long as your work will be viewed in 2-D as well as 3-D, whether in a hybrid theatrical release or later on DVD, it is probably healthy to do a lot of the work in 2-D along the way. I cut on a normal Avid, and only when the scene is fine-cut do we output left and right eye video tracks to the server in the screening room and check the cut for stereo. Nine times out of 10 we don’t change anything for 3-D.
I spoke with a writer-director during the strike who had the opposite experience. To get the cutting to work right in 3-D, he and his editor were constantly checking the “deep version.” And that’s not a newbie predilection: for Zodiac, David Fincher cut in HD with a giant screen.
No matter how advanced the technology gets, while you’re in the editing room, you’re still working with a rough approximation of what the final film will look and sound like. Just as with color timing, music and FX, anticipating the depth effect is something you’ll need to remember and forget while cutting.
For three-fourths of a century of 2-D cinema, we have grown accustomed to the strobing effect produced by the 24 frame per second display rate. When we see the same thing in 3-D, it stands out more, not because it is intrinsically worse, but because all other things have gotten better. Suddenly the image looks so real it’s like you’re standing there in the room with the characters, but when the camera pans, there is this strange motion artifact. It’s like you never saw it before, when in fact it’s been hiding in plain sight the whole time.
[P]eople have been asking the wrong question for years. They have been so focused on resolution, and counting pixels and lines, that they have forgotten about frame rate. Perceived resolution = pixels x replacement rate. A 2K image at 48 frames per second looks as sharp as a 4K image at 24 frames per second … with one fundamental difference: the 4K/24 image will judder miserably during a panning shot, and the 2K/48 won’t. Higher pixel counts only preserve motion artifacts like strobing with greater fidelity. They don’t solve them at all.
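Taken literally, Cameron’s rule of thumb checks out. If you read “pixels” as the horizontal frame width (2048 for 2K, 4096 for 4K, per the DCI container sizes — my interpretation, not anything stated in the interview), the two combinations he compares come out identical. A minimal sketch:

```python
# Back-of-the-envelope check of "perceived resolution = pixels x replacement rate."
# Interpreting "pixels" as horizontal frame width is an assumption on my part.

def perceived_resolution(h_pixels: int, fps: int) -> int:
    """Pixels times replacement rate, per Cameron's rule of thumb."""
    return h_pixels * fps

# Standard DCI horizontal widths: 2K = 2048 px, 4K = 4096 px.
res_2k_48 = perceived_resolution(2048, 48)
res_4k_24 = perceived_resolution(4096, 24)

print(res_2k_48, res_4k_24)  # both come out to 98,304
```

By this measure the 2K/48 and 4K/24 images tie — and, as Cameron notes, the 4K/24 image still judders on a pan, which the formula doesn’t even capture.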
An example of why James Cameron is the Steve Jobs of filmmakers: he understands that what matters is the user experience, not the hard numbers. He also sees how important it is to control the entire process, from shooting through exhibition. The best camera technology is worthless if you can’t get the results you want in a theater.
The good news is that the next generation of moviegoers seems ready to forget that 24fps is how movies are “supposed to” look. And changes within a digital delivery system should be much less painful than the switchover from our current, analog system.
I know it seems like I’ve quoted a lot here, but the interview is long, and there’s a lot more in it about other aspects of the technology which will be interesting to anyone geeky enough to click through.