The majority of 3D releases, both in cinemas and on Blu-ray, are animated films. Indeed, every CG toon is now released in 3D as a matter of course, and plenty of older ones are being reformatted and re-released, particularly the back catalogue of Pixar. The results have often been outstanding, and you’ll also hear people talk readily about 3D being suited to this style and these genres, so it’s hardly surprising to find this bias in the marketplace.
But what about live action stuff? It’s pretty thin on the ground, and once you pick out the natural history flotsam and jetsam and the eye-gouging horror pictures, there just aren’t many titles out there at all. Not yet.
20th Century Fox noticed this open niche too, and they decided they were going to do something about it. They’ve just released Titanic and Prometheus, and starting with I, Robot, the studio have decided to convert some of their back catalogue specifically into Blu-ray 3D releases, targeted directly at the home cinema market. This suits my appetite for new 3D content just fine – I want to see the catalogue expand so more people start using 3D, and I want more people using 3D so that more and more 3D films are made.
To make Fox’s re-release plan viable the studio has been working with JVC on a new conversion workstation, allowing for the effective conversion of 2D titles into new 3D releases.
There are several obvious challenges, both from a consumer and studio point of view. On the one hand, Fox can’t just start shovelling out badly adapted, headache-inducing rubbish made of flat, off-putting cardboard cut-out figures, or this plan really isn’t going to take them very far at all. I, Robot will have to win the audience over if Fox want this to be a really successful, ongoing series of re-releases.
On the other hand, the conversion can’t be too expensive or time consuming if there’s no theatrical release planned, denying the studio the extra wave of income that a big screen release would bring.
But surely the cheaper something like this gets, the worse it gets? Well, that would be true if we were all trapped in time and technology didn’t progress. Compare the computational power of a $500 laptop made just yesterday to its twice-as-costly cousin from just three or four years ago. The success of Fox’s new scheme rests on the power of JVC’s emerging 2D-to-3D kit. JVC need to have developed a way to make conversions work better than they ever did, for less than it would have cost before.
Let’s stop a minute to cover the basics of how a 2D film is converted into 3D.
You start, of course, with a single 2D image, the original film. Your first choice is whether you’ll treat this as one “eye”, as it were, and then go on to create a “second eye”, or whether you’ll take the data from your original 2D piece and create two new images, meaning neither the “left eye” nor the “right eye” of the new version matches exactly what there was before.
There are pros and cons to each option, particularly in the case of home cinema releases. With a Blu-ray 3D title, keeping the original 2D image as the “left eye” means that the same, single disc can be played in both 3D and 2D players and the 2D player will give the same video as the original film.* This is a big pro for Fox, so this is the way they’ve gone. But we’ll come back to one of the cons of that particular choice in a moment…
Anybody who has ever seen Wayne’s World knows the “camera one, camera two” effect of looking at the world first with one eye, and then the other. Hold your hand out in front of you, close one eye and then the other, and you’ll see that from each eye’s own vantage point, your hand obscures a different portion of the background.
So, assuming we’re keeping the original 2D film as the “left eye” image, we need to create only the right eye image. This is done by taking the “left eye” and repositioning elements in the frame in accordance with the parallax displacement of this new viewpoint. That is to say, the things in the foreground get moved the most, and the things in the background get displaced the least, if at all.
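The mechanics of that shift can be sketched in a few lines of code. This toy example is my own illustration, not anything from the JVC system: it works on a single scanline, shifting each layer of the image by its own disparity, back-to-front, and the pixels nothing lands on are exactly the gaps the next step has to deal with.

```python
def build_right_eye(row, layers):
    """Compose a new 'right eye' scanline from (mask, disparity) layers.

    `row` is a list of pixel values; `layers` is ordered back-to-front,
    each a (mask, disparity) pair where the mask flags that layer's
    pixels. Foreground layers carry a larger disparity (bigger shift).
    Pixels nothing lands on stay None - the gaps that must be painted.
    """
    out = [None] * len(row)
    for mask, disparity in layers:          # back-to-front compositing
        for x, owned in enumerate(mask):
            if owned:
                nx = x + disparity
                if 0 <= nx < len(row):
                    out[nx] = row[x]        # nearer layers overwrite
    return out
```

Run on a six-pixel row with a foreground figure ('F') in the middle, the figure slides right and leaves a hole where the never-photographed background should be.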
But when you move part of an image, you don’t just reveal what was ‘behind’ it. Say that your shot is of Kate Winslet in a corridor of the Titanic. Slip her over to the side a little and you don’t suddenly get more corridor appearing behind her – it was never photographed, you don’t have that information. You get a gap. So a new bit of image has to be created to fill the gap, and it has to match the ‘real’ image.
These are the two key steps in creating basic 2D to 3D: ‘rotoscoping’ and ‘painting.’ First, you rotoscope. This is where you “cut out” Kate from the corridor. She’s outlined exactly, and identified as being a separate and distinct object in 3D space. Then you paint. This is where you fill in the gap you made by moving her.
The cut out of Kate is called her ‘mask’. Once upon a time, she’d have to be perfectly outlined, frame by frame, thereby creating her moving ‘mask’ for the whole shot manually. One of the benefits of modern automated systems is that they’re pretty good at tracking an object once it’s been identified in a single frame. Cut out Kate once, and the computer is likely to be able to keep tabs on her ‘mask’ precisely throughout the rest of the shot. Or near enough – and manual operators can step in to correct the few remaining little glitches on a frame-by-frame run-through.
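A crude version of that automatic tracking can be sketched as template matching: take the strip of pixels inside the mask in one frame and slide it along the next frame, looking for the best match. This one-dimensional toy is my own illustration, not JVC’s actual method; it scores candidate positions by sum of absolute differences.

```python
def track_mask(prev_frame, next_frame, x0, width):
    """Find where a cut-out strip reappears in the next frame.

    `prev_frame` and `next_frame` are lists of pixel intensities.
    The template is the strip starting at x0 in the previous frame;
    we slide it across the next frame and return the position with
    the lowest sum-of-absolute-differences score.
    """
    template = prev_frame[x0:x0 + width]
    best_x, best_score = x0, float('inf')
    for x in range(len(next_frame) - width + 1):
        window = next_frame[x:x + width]
        score = sum(abs(a - b) for a, b in zip(template, window))
        if score < best_score:
            best_x, best_score = x, score
    return best_x
```

When the tracker drifts, that is exactly where the human operators step in.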
Fox say that their new JVC kit makes this process very efficient, and that the time used in rotoscoping has been greatly reduced. That’s the first thing that made I, Robot viable.
And the second thing is that the system also helps a lot with filling in the empty space, with ‘painting’ at the ‘mask edges.’
This is where we get to a downside of keeping the original 2D image for one of the ‘eyes.’ This choice means that the displacement of the images has to be twice as extreme as if you were sharing the shift across new left and right eye video. This means that there’s a lot of mask edge to be filled in any given place, an awful lot of guessing, calculating or inventing what was hidden from the camera.
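The arithmetic behind that trade-off is simple enough to write down. A hypothetical sketch, with invented numbers rather than anything measured from the JVC system:

```python
def gap_per_eye(total_disparity, keep_original_left=True):
    """How much shift - and therefore how much gap to paint - each
    new eye carries. Keeping the original 2D frame as the left eye
    pushes the whole displacement into the single new right eye;
    splitting it builds two new eyes with half the shift each.
    """
    if keep_original_left:
        return {"left": 0, "right": total_disparity}
    half = total_disparity / 2
    return {"left": half, "right": half}
```

So for a given stereo effect, Fox’s chosen route means the one new eye needs twice the painting that either of two freshly built eyes would.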
There are several different algorithms that the JVC system uses to project what will be needed to fill the empty space at the mask edges. Again, it doesn’t always get it right, but a quick manual check will reveal the curious stretches and distortions that sometimes occur, and then it’s back to the human, hands-on way of doing things, with the mask edges painted in by operators.
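One of the simplest possible fills is to stretch the nearest surviving pixel into the gap, which is just the kind of automatic guess that produces those curious stretches and distortions. This toy version is my own illustration; the real system reportedly chooses among several smarter algorithms before a human tidies up.

```python
def paint_gaps(row):
    """Fill the None gaps left by the parallax shift by replicating
    the nearest real pixel into them - a crude automatic 'paint' that
    a human operator would then correct by hand where it looks wrong.
    """
    out = list(row)
    for i, px in enumerate(out):
        if px is None:
            # look left, then right, for the nearest available pixel
            for d in range(1, len(out)):
                if i - d >= 0 and out[i - d] is not None:
                    out[i] = out[i - d]
                    break
                if i + d < len(out) and out[i + d] is not None:
                    out[i] = out[i + d]
                    break
    return out
```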
In short, the JVC system is making these processes as automated and speedy as we’ve ever seen, but there’s still a human there to step up when the computer gets it wrong. It’s a ‘best of both worlds’ approach. Robots on the front line, humans keeping watch from behind.
But what I’ve described so far would leave a rather unconvincing image. Cut out Kate as a single mask and what you get is flat Kate, and what you want is… well, curvy Kate. You really don’t want Kate Winslet to look like she’s just been printed on cardboard.
I spoke to Fox’s Ian Harvey, the studio’s senior VP of advanced technology, about the way the new system allows for the adding of much more real volume and dimension to the image. Here’s what he had to say about the system’s crucial ‘Emboss’ tool:
Our tool will emboss the layers and give them curvature or shape. There are several ways to do this – we can stretch it, we can bend it – but it does require manual intervention.
Typically, and you’ll have seen this in a number of movies that were released in theatres, somebody has done roto on an item but they haven’t gone to the lengths of providing roto for all of the features, and you get the cardboard cutout effect. There’s depth, but each layer looks flat. It doesn’t look natural. Our process looks natural, and when you look at Will Smith’s face, it doesn’t look flat.
I preferred it when we were talking about Kate, but anyway…
To create this effect normally you’d have to do a separate roto for everything. Each of his eyes, his lips, his ears, everything. But for a typical bust shot, we just do three masks. Just three. Those scenes where you see a character from the waist up, we just need three masks.
That’s a big deal for us. The emboss tool can then be used on a mask by mask basis. There are fifty different adjustable parameters that give us the ability, keyed to intensity, colour or other things, to set the emboss effects.
Typically, if you look at an image where you have light and dark, dark is further back than light; if you look at colour, typically red is forward, blue is back. There are fifty of these fundamental algorithms you can use to give a mask shape. You use the right ones for the mask that you have.
It’s even better when working on a film like I, Robot, a picture that required the creation of a lot of CG elements. Those 3D models can be used to emboss the masks too, creating a 3D effect that follows the original CG element’s shaping.
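Two of the cues Harvey mentions, brightness and warmth, are easy to sketch as a depth-hint function. The weights below are invented purely for illustration; the real system mixes some fifty adjustable parameters per mask.

```python
def depth_hint(r, g, b):
    """Score a pixel's likely depth from two of the cues described:
    brighter pixels tend to read as nearer, and red reads nearer than
    blue. Returns a value in [0, 1]; higher means nearer the viewer.
    The 50/50 weighting is an invented placeholder.
    """
    brightness = (r + g + b) / (3 * 255)   # light forward, dark back
    warmth = (r - b) / 255                 # red forward, blue back
    hint = 0.5 * brightness + 0.5 * (warmth + 1) / 2
    return max(0.0, min(1.0, hint))
```

Applied across a mask, a function like this gives each pixel its own push forward or back, which is how a single cut-out gains curvature instead of staying a flat card.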
This emboss tool is, I think, the killer part of the JVC kit.
I watched several scenes from I, Robot, including a number of close-ups of Will Smith’s face. It looked like Will Smith’s face. It didn’t look like it had been printed on cardboard and his eyes or ears didn’t seem to be floating, disconnected from the rest of his multi-million dollar mug. In short, it didn’t look anything like Clash of the Titans.
There are several problems you’d expect from a bad 2D to 3D conversion, and the cardboard cut-out effect is just one. You might also expect there to be shots where the new volume draws attention to the wrong part of frame; you might expect there to be shots where the very thing you want to look at is out of focus; you might expect there to be a cross-talk ‘ghosting effect’; and that’s just a handful of the things I was looking out for.
I can honestly say that I didn’t see any of these problems with the I, Robot scenes I saw. Not at all.
But there is one big catch here, for purists at least. I, Robot has been converted without the contributions of the original filmmakers. Fox told me that they reached out to the film’s director, Alex Proyas, but he declined to be involved.
So none of the choices about the 3D have been made by the people who created the film in the first place. That makes the film into a new adaptation, in a sense. This is I, Robot 3D, adapted from I, Robot 2D.
Many of the choices made in converting from 2D to 3D are storytelling ones. It’s not just about the software and hardware, about the new JVC tool that prevents there from being big, screaming issues with the image. It’s also about using the 3D to make the experience better for the audience – a better piece of storytelling, too, not just an amped-up 3D thrill-ride.
Not having seen the whole film, I can’t really comment on the overall success of the conversion on this front, but Fox did underline two key ideas.
First of all, Ian Harvey made it clear that the I, Robot process started with a depth script. That is, the film wasn’t just chopped up into pieces and given to 3D technicians to adapt as they would wish; the overall sequence of stereo effects for the film was pre-planned. A depth script specifies where different 3D volumes are going to be used, and why, with an eye on telling the story as well as possible.
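A depth script can be thought of as a simple lookup from scenes to planned stereo treatments. The structure and every value below are invented to illustrate the idea, not taken from Fox’s actual document.

```python
# Hypothetical depth script: scene names, depth budgets and the
# storytelling reasoning are all made up for illustration.
depth_script = [
    {"scene": "opening dream", "depth_budget": "shallow",
     "note": "keep the audience close to the character, nothing pops"},
    {"scene": "robot swarm attack", "depth_budget": "deep",
     "note": "maximum separation between hero and swarm"},
]

def depth_for(scene_name, script):
    """Look up the planned depth treatment for a scene."""
    for entry in script:
        if entry["scene"] == scene_name:
            return entry["depth_budget"]
    return "default"
```

The point of writing it down up front is that every technician adapting a shot inherits the same plan, instead of inventing depth shot by shot.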
I asked who was ultimately responsible for all of these creative decisions, and Harvey told me:
We had a stereographer, and I obviously looked at it all. What we creatively started out with, from a stereographer’s output, we tweaked as we figured out what the tool was capable of. And I looked at every single frame, so I suppose the buck stops with me on this movie. But we don’t have one person overseeing the 3D on this project; it was very much a first time through.
That, I suggested, is where they need to make the next improvement.
Some of the new 3D ideas in I, Robot go beyond adding depth to the image as it was shot. In a few, select places there have been new VFX added.
The best example I saw was in a scene of Chi McBride shooting robots with a shotgun. He was letting rip through a plate glass window, and the new 3D version has a few pieces of broken glass come out of the frame. It’s pretty subtle, not a huge, poke-you-in-the-eye moment, but it was designed to add some juice. Fox’s philosophy is very much to keep things behind the screen, in positive space, but this glass is an example of where they pull things out into negative space and let them pop, a little more, just for the money shots.
These particular shards of glass are not there to be seen in the 2D version at all, but they have been created from elements of the original image to keep the colour and texture right.
The shot still looked natural, or every bit as natural as you’d expect from an FX-heavy film with an army of robots clambering all over a CG cityscape. And I think the purists who would be upset by these added FX beats almost certainly wouldn’t go near a 2D-to-3D conversion in the first place.
Should this first release be a success, Fox say they have a “handful” of other titles in consideration, which could well mean early stages of conversion. Harvey told me:
We tweaked the whole system as we were doing I, Robot. When we do another movie there will be shots that challenge the system, but not so many.
Personally, I’d bet you good money that Die Hard is going to come down the pipe soon, and perhaps Braveheart. They started with I, Robot because it’s been a big, big seller ever since it came out in 2004, but they also assessed the film creatively – that is to say, it just looked like an obvious candidate for some 3D. I agree with Harvey’s assessment that if Proyas were making the film today, he’d be making it in 3D. I’d even go so far as to say he’d stage most of the shots in the film just the same way too.
I, Robot is released on Blu-ray 3D on 22nd October in the UK, 23rd October in the US. That’s next week. There’s also a collector’s item release, packaged in a replica of Sonny’s head. This was shown off on the floor at Comic-Con this summer, and folk seemed to love it. I’m very excited to see the whole film…
*Well, in this case there’s some added FX work in a small number of shots.