I wanted to calibrate my Kinect V2 Depth Camera in order to obtain its intrinsics. With those I can then create point clouds, normals, etc. It’s the starting place for a lot of interesting computer vision things. We can calibrate the Kinect V2 Depth Camera in a similar way to how we would calibrate the Colour Camera.
There’s an excellent program called GML C++ Camera Calibration Toolbox which is just what you need to do the calibration. It’s produced by the Graphics and Media Lab at Lomonosov Moscow State University.
Once you’ve downloaded and installed the toolkit, it needs a sequence of images of a checkerboard pattern taken from varying positions. There’s a link on their website that explains how best to capture these sequences. I used the “Infrared-WPF” Kinect Sample to generate my images. You only need to click the “Screenshot” button in that sample to take the pictures. They’re stored in your user’s Pictures folder as PNGs, and the toolkit has no problem loading them.
Once you’ve run the calibration, click on the Results tab to see your Camera Intrinsics:
You can see in that picture the camera intrinsics I obtained. These are:
Focal Length (x,y) : 391.096, 463.098
Principal Point (x,y): 243.892, 208.922
In a good calibration the Principal Point would normally be near the middle of the image. The Depth Image we get from the IR camera has dimensions 512 x 424, so we would expect a Principal Point of 256.0, 212.0. Ah well! The Toolkit website recommends taking lots and lots of images to improve the calibration. I’m sure you could get better results than I did 🙂 But this is good enough for the things I’m planning next.
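To show why these intrinsics are useful, here’s a minimal sketch (in Python, for illustration) of the standard pinhole-camera back-projection: given a depth pixel and its depth value, the focal length and principal point let you recover a 3D point in camera space. The function name and the example inputs are my own, not part of the toolkit.

```python
# Intrinsics from the calibration above
FX, FY = 391.096, 463.098   # focal length (x, y)
CX, CY = 243.892, 208.922   # principal point (x, y)

def depth_to_point(u, v, depth_m):
    """Back-project pixel (u, v) with depth in metres to a
    camera-space (X, Y, Z) point using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel at the principal point maps straight onto the optical axis:
print(depth_to_point(243.892, 208.922, 1.0))  # (0.0, 0.0, 1.0)
```

Run this over every pixel of the 512 x 424 depth image and you have a point cloud, which is exactly where the next post is heading.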
In my next post I’ll show you how to generate a point cloud and render it. I’ve already implemented this and achieved 30 FPS without using shaders. I just need to put together the post to show you how I did it.