Reading cinema camera images


As part of The Dark Room software project, I've been thinking about how to read image data from the digital cinema camera inside the animation package. One of the crucial parts of my system is camera tracking and live action deep composites, so at some point the animation needs to connect with the live footage from the cinema camera.

Now, to deal with this, most production places would just export the raw video into another format that the animation package can read. But I didn't want to do that. For starters, it means creating another copy of your image data, whether that's converting to another video format or exporting all the frames to an image sequence. Secondly, it means using more storage space. For small shots, like a 5 second clip, this may not be such a big deal. But once you've got many more clips, the storage needed to hold all of that data starts to add up. A 40-odd second video clip off the cinema camera comes in at about 11GB. Yep, that's a G for gigabyte.

If I want to make a 3 minute short film, that's around 50GB of file storage. Imagine then having to make copies of this in another format, or creating an image sequence for all the individual frames. And that's only the 3 minutes' worth that might be used in the film. There might be another few minutes of footage that isn't used, or that's been cut out.
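In case the arithmetic isn't obvious, the 50GB figure is just the data rate I've observed from my own clips scaled up to 3 minutes; the exact numbers will depend on resolution, frame rate and compression settings, and the "copy" line assumes the export is roughly the same size as the original, which it often isn't.

```cpp
#include <iostream>

int main()
{
    // Observed from my own clips: roughly 11GB for a 40-odd second take.
    const double clipGigabytes = 11.0;
    const double clipSeconds   = 40.0;
    const double gbPerSecond   = clipGigabytes / clipSeconds;   // ~0.275 GB/s

    // A 3 minute (180 second) short film at that rate.
    const double filmSeconds   = 3.0 * 60.0;
    const double filmGigabytes = gbPerSecond * filmSeconds;     // ~49.5 GB

    std::cout << "Raw footage for the cut:      ~" << filmGigabytes << " GB\n";
    // One exported copy roughly doubles the footprint (the exact factor
    // depends on the export format chosen).
    std::cout << "Plus one exported copy:       ~" << filmGigabytes * 2.0 << " GB\n";
    return 0;
}
```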

So, to circumvent this, I decided to try my hand at reading the raw image files myself. That way, the editing, the compositing, and my previsualiser software can all work off the same single raw source file.

To do this, I turned to the RED website and found their R3D SDK. The SDK lets software developers like me use RED's own code to read the camera's raw image files. To me it's fantastic, because now I can read files from my camera inside my own software, without the expensive exporting into other formats (expensive in both time and storage). I get to work with the original raw footage.
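For the technically curious, the basic decode path isn't much code. Here's a rough sketch of the pattern the SDK's own samples follow, written from my memory of the headers rather than lifted from my plugin, so treat the exact enum names, the buffer alignment and the library path as things to check against the R3D SDK documentation for your version:

```cpp
#include <cstdio>
#include <cstdlib>
#include "R3DSDK.h"

int main(int argc, char *argv[])
{
    if (argc < 2)
        return 1;

    // Load RED's dynamic libraries (assumed to sit in the working directory here).
    if (R3DSDK::InitializeSdk(".", OPTION_RED_NONE) != R3DSDK::ISInitializeOK)
        return 1;

    // Open the .R3D file straight off the camera media - no transcode, no copy.
    R3DSDK::Clip clip(argv[1]);
    if (clip.Status() != R3DSDK::LSClipLoaded)
        return 1;

    // Half-resolution decode into 16-bit interleaved RGB.
    const size_t width       = clip.Width() / 2;
    const size_t height      = clip.Height() / 2;
    const size_t bytesPerRow = width * 3 * sizeof(unsigned short);
    const size_t bufferSize  = bytesPerRow * height;

    // The SDK wants an aligned output buffer; over-aligning keeps the sketch simple.
    const size_t alignment = 512;
    void *buffer = std::aligned_alloc(alignment,
                                      (bufferSize + alignment - 1) / alignment * alignment);

    R3DSDK::VideoDecodeJob job;
    job.Mode             = R3DSDK::DECODE_HALF_RES_GOOD;
    job.PixelType        = R3DSDK::PixelType_16Bit_RGB_Interleaved;
    job.BytesPerRow      = bytesPerRow;
    job.OutputBufferSize = bufferSize;
    job.OutputBuffer     = buffer;

    // Pull the first frame; in the plugin this happens per frame as the timeline plays.
    if (clip.DecodeVideoFrame(0, job) == R3DSDK::DSDecodeOK)
        std::printf("Decoded frame 0 of %zu at %zux%zu\n",
                    clip.VideoFrameCount(), width, height);

    std::free(buffer);
    R3DSDK::FinalizeSdk();
    return 0;
}
```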

As proof that it's working, the image below is a frame from a clip captured on my RED camera. It's displayed in the regular Cinema4D picture viewer, so the viewer itself isn't mine, but it's my programming (alongside RED's SDK) that's made this happen in the background. The image looks 'washed out' because it's in the original raw colour form.

A frame from my RED digital cinema camera, displayed in the picture viewer

And here's a screen grab of the same clip on one of my YouTube videos:

Screen grab of the YouTube video with the RED camera clip

Now, at this stage, I've only managed to get it working. That is, I can only read the raw file and get the frames. I haven't dug into the devilish details yet. And it's slow to get the frame data: about 3-4 seconds for a half-resolution clip, and about 12 seconds at full resolution. That's because I'm processing everything on the CPU at the moment. The RED SDK does have GPU integration, which I'll look at in time. That should speed things up. For now though, so long as I can get an image, I have something to work with.
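For what it's worth, those timings are just wall-clock measurements taken around the decode call. A minimal sketch of that kind of measurement, with a stand-in lambda where the DecodeVideoFrame call from the earlier sketch would go:

```cpp
#include <chrono>
#include <cstdio>
#include <functional>

// Wall-clock time for a single decode. 'decode' is a stand-in for whatever
// does the work, e.g. a lambda wrapping Clip::DecodeVideoFrame at a chosen
// decode mode (half resolution, full resolution, and so on).
static double DecodeSeconds(const std::function<bool()> &decode)
{
    const auto start = std::chrono::steady_clock::now();
    const bool ok = decode();
    const auto stop = std::chrono::steady_clock::now();
    return ok ? std::chrono::duration<double>(stop - start).count() : -1.0;
}

int main()
{
    // Dummy decode so the sketch compiles on its own; swap in the real call.
    const double seconds = DecodeSeconds([] { return true; });
    std::printf("decode took %.3f s\n", seconds);
    return 0;
}
```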

I'm still a little way off being able to track the footage. But I don't think I'm too far away. I've got a tracker, and I'm working on a camera pose solver. Once I've got these in working order, I can go back into The Dark Room project to put it all together. This should really help solve the problems with connecting the animation to the image. I'm pretty happy with this. Things keep moving forward.

For reference, the (boring!) video clip is below if you want to see it. It's just a few static shots of a morning sunrise over the Hobart waterfront. I was really just seeing what the camera would do in those conditions. I'm pleased to say it worked quite well.