Yessss!!!!!!
In Checkerboard detection - part 1 I covered the start of detecting checkerboard corners/patterns in an image. We're doing this so we can correct any distortion the camera footage might have. The Dark Room needs an engine to do this.
And I'm pleased to say I've now got one! Well, most of one. Along with the camera pose engine, that puts me the best part of two-thirds of the way through The Dark Room now. I'm so excited I can barely think of what to write!
Let's talk about checkerboards...
Here is a photo from my phone of a checkerboard pattern (printed out on A4 paper):
What we want to do is find the corners of each pattern tile. We succeeded with this in part 1. Now what we want to do is tell the points how they make up a checkerboard object. That is, how do they link together to form edges and lines?
As I understand it (in my head), we do this by searching through the points and finding each point's closest matches. We then sort these matches to see if they can form a single checker (four corner points). If we can do this with a bunch of points, say we find four tiles, we then test whether they share corners/sides. We iterate through these until we've built a board full of points, ordered in rows and columns. Another way of putting it: a matrix of points.
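To make that a bit more concrete, here's a minimal sketch of sorting detected corners into rows and columns. It makes a big simplifying assumption my real engine can't: that the board is roughly axis-aligned in the image, so corners can be binned into rows by their y value and then ordered left-to-right. The corner coordinates and the `row_tol` threshold are made up for illustration; a proper engine walks point-to-point via nearest neighbours so it can handle rotated and distorted boards.

```python
def sort_corners_into_grid(points, row_tol=10.0):
    """Sort corner points (x, y) into a list of rows.

    Simplified sketch: assumes the board is roughly axis-aligned,
    so points whose y values are within row_tol belong to one row.
    """
    # Walk the points from top to bottom, starting a new row
    # whenever the vertical gap exceeds the tolerance.
    pts = sorted(points, key=lambda p: p[1])
    rows, current = [], [pts[0]]
    for p in pts[1:]:
        if abs(p[1] - current[-1][1]) <= row_tol:
            current.append(p)
        else:
            rows.append(current)
            current = [p]
    rows.append(current)
    # Order each row left-to-right to get the final matrix of points.
    return [sorted(r, key=lambda p: p[0]) for r in rows]

# A hypothetical 3x3 set of detected corners, slightly jittered:
corners = [(12, 11), (50, 10), (91, 12),
           (11, 52), (52, 50), (90, 51),
           (10, 90), (51, 91), (92, 90)]
grid = sort_corners_into_grid(corners)
```

The result is a 3x3 matrix of points: `grid[0]` is the top row, left-to-right, and so on down the board.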
If you're wanting a more technical read, try something like this (PDF): Automatic Camera and Range Sensor Calibration using a Single Shot.
And here's my engine doing just that. Use the slider below to compare the original image to the found corners and board:
The next (and final) step at this point is to calculate the distortion parameters. I'm not sure what that entails yet, but I need to sort it out.
And why are we doing this? It's all to do with making the rendered animation look like it's in the real camera footage. I need to create a workflow that allows me to composite animation on top of my film camera footage. I haven't got an off-hand example of my own here (still putting these things together!), but here's a good example of what the workflow might look like:
So, why do we need to do this?
Well, lenses and cameras don't provide a perfect image. Straight lines in our videos/photos aren't always straight lines. Have you ever seen GoPro footage of something like the horizon, or the corner of a building that you know is straight, but looks curved? Well, this is optical lens distortion. And if we're going to do tracking on an image sequence, like footage from my film camera, we need to correct these lens distortions. In essence, this is what we're trying to do:
And it's important we correct distortion, because if we track markers in the footage and then try to compute our camera pose solve without correcting the tracked marker positions, we may (probably will) get an incorrect camera pose solve. As the saying goes: garbage in, garbage out.
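For a sense of what those "distortion parameters" actually look like, here's the textbook radial model (the radial terms of the Brown-Conrady model). The `k1` and `k2` coefficients are exactly the kind of parameters the next step needs to estimate; I don't yet know which parameterisation my own engine will end up using, so treat this as a sketch of the standard approach rather than my implementation.

```python
def distort_point(x, y, k1, k2):
    """Apply simple radial lens distortion to a normalised
    image point (x, y).

    k1 and k2 are the radial distortion coefficients: negative k1
    gives barrel distortion (the GoPro look), positive k1 gives
    pincushion distortion.
    """
    r2 = x * x + y * y                      # squared distance from centre
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # radial scaling factor
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls points in toward the image centre:
bx, by = distort_point(0.5, 0.25, -0.2, 0.0)
# With zero coefficients, the point is unchanged:
ux, uy = distort_point(0.5, 0.25, 0.0, 0.0)
```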
Just for note's sake, I'm not using OpenCV (as in the image above). But for anyone wanting a library with this stuff already built in, I suggest you take a look at it. Much of what I'm trying to do is already done for you. I will speak no more of this.
Back to our correction engine. We need to make it look like there's a seamless connection between our rendered animation and the real world camera. That's the end goal here. To create a seamless camera tracking composite.
I won't go too much further down the rabbit hole on this one, but there are a couple of ways you can use distortion correction in your compositing software. I'm not sure which way I'll do it yet, though I am hoping to render out the animation with distortion applied. It's not usually the way you'd do it, but it's how I'm going to try it initially while testing my software out.
With this engine complete, I now only have to calculate what these distortion parameters are. Then I should be able to use those parameters to undistort the original camera footage, or to distort the animation. Then I should be able to plug all my engines together, and hopefully use The Dark Room for its first simple camera tracking animation test!
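On that "undistort the footage" direction: the radial model is easy to apply forwards but has no neat closed-form inverse, so undistortion is usually done iteratively. Here's a sketch assuming the same simple k1/k2 radial model as before (again, not necessarily what my engine will use), with the forward model included so the round trip can be checked.

```python
def distort_point(x, y, k1, k2):
    # Forward radial model, same simple k1/k2 form as before.
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def undistort_point(xd, yd, k1, k2, iterations=20):
    """Invert the radial model by fixed-point iteration.

    Sketch only: assumes the distortion is mild enough for the
    iteration to converge, which holds for typical camera lenses.
    """
    x, y = xd, yd  # start from the distorted position
    for _ in range(iterations):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        # Re-estimate the undistorted point using the current radius.
        x, y = xd / s, yd / s
    return x, y

# Round trip: distort a point, then recover the original position.
xd, yd = distort_point(0.4, 0.3, -0.1, 0.01)
x, y = undistort_point(xd, yd, -0.1, 0.01)
```

The same idea scales up to whole images: for each output pixel, work out where it came from in the distorted source and sample there.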
So close now...