
Kinect Point Cloud Normals Rendering – Part 2

Having shown you how to create a 3D point cloud from a Kinect depth map in Part 1, I will now explain how to calculate the normals for that point cloud and then render them in 2D by mapping the (x,y,z) normal vector components to R, G and B channel values.

To calculate the normal for a given 3D point in the point cloud we will use the point directly to its left (i.e. at (x-1,y)) and the point directly above it (i.e. at (x,y-1)). We will form a triangle from these 3 points and then use the cross product to calculate the normal of the triangle’s face. Here’s a picture I created to help explain what we’ll be doing:

Triangle face normal


Using left-handed Cartesian coordinates (because I will be using DirectX, which is left-handed; OpenGL uses right-handed Cartesian coordinates), we create our triangle vertices in clockwise order. This means p0 = (x,y,z0), p1 = (x-1,y,z1), p2 = (x,y-1,z2). These are the 3 red dots in my picture. Using DirectX, here’s my function to calculate the normal of this triangle:
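As a dependency-free sketch of that calculation using plain structs (a DirectXMath version would do the same with XMVector3Cross and XMVector3Normalize on XMVECTOR values; TriangleNormal is my name for it):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Normal of the triangle (p0, p1, p2), wound clockwise
// in a left-handed coordinate system.
Vec3 TriangleNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2)
{
    // Two edges sharing the vertex p0.
    Vec3 u = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 v = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };

    // The cross product u x v is perpendicular to the triangle's face.
    Vec3 n = { u.y * v.z - u.z * v.y,
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };

    // Scale to unit length.
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```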

DirectX is now included as part of the Windows 8.1 SDK. To use DirectX in your C++ programs simply include the DirectX header as follows:
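A minimal example, assuming the DirectXMath header that ships with the Windows 8.1 SDK:

```cpp
#include <DirectXMath.h>

using namespace DirectX;
```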

It’s important to remember the “using” directive. If you forget it, you will need to prefix every DirectX function with “DirectX::”, which gets very tedious after a while!

We just need one more thing to enable us to create our normal image. That’s another parallel processing template function that will take (x,y) parameters. Here it is:
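A portable sketch of what such a template can look like (the name parallel_foreach_pixel_xy is my assumption, and std::thread stands in for the PPL machinery used in the earlier articles):

```cpp
#include <thread>
#include <vector>

// Run "func" once for every pixel, passing the (x, y) coordinates.
// Rows are interleaved across hardware threads to spread the work evenly.
template <typename Func>
void parallel_foreach_pixel_xy(int width, int height, Func func)
{
    int numThreads = static_cast<int>(std::thread::hardware_concurrency());
    if (numThreads < 1) numThreads = 1;

    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t)
    {
        workers.emplace_back([=]
        {
            for (int y = t; y < height; y += numThreads)
                for (int x = 0; x < width; ++x)
                    func(x, y);
        });
    }
    for (auto& w : workers) w.join();
}
```

Calling it with a lambda that writes one output pixel per (x,y) keeps things race-free, since every invocation touches a distinct element.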

If you look at this closely you will see that the parameters for this template are exactly the same as those of the “parallel_foreach_pixel” template we created before (see Parallel Processing and Lambda expressions in C++11 – Part 2), except that the “func” lambda expression it expects must take (x,y) parameters.

We must start by converting our depth values in millimetres to values in metres:
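A small helper is enough for this step (DepthToMetres is a hypothetical name; Kinect depth frames hold 16-bit millimetre values):

```cpp
// Kinect depth frames store millimetres in 16-bit values;
// divide by 1000 to get metres as a float.
inline float DepthToMetres(unsigned short depthMillimetres)
{
    return static_cast<float>(depthMillimetres) / 1000.0f;
}
```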

We introduce a structure to define our camera intrinsics and create an instance using the values we found previously:
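A sketch of what such a structure could look like, assuming a pinhole camera model; the field names and the example numbers here are placeholders rather than the values found in Part 1:

```cpp
// Pinhole-camera intrinsics for back-projecting depth pixels into 3D.
struct CameraIntrinsics
{
    float focalLengthX;    // fx, in pixels
    float focalLengthY;    // fy, in pixels
    float principalPointX; // cx, in pixels
    float principalPointY; // cy, in pixels
};

// Placeholder numbers -- substitute the intrinsics measured in Part 1.
const CameraIntrinsics g_intrinsics = { 366.0f, 366.0f, 256.0f, 212.0f };
```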

We introduce some functions to enable us to do the rendering of the point cloud:
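Two helpers cover the core of the rendering: back-projecting a depth pixel into camera space, and mapping a unit normal to a colour. The names and signatures below are my assumptions, not the blog’s own code:

```cpp
#include <cstdint>

struct Point3 { float x, y, z; };

// Back-project a depth pixel (x, y, depth in metres) into a camera-space
// 3D point using the pinhole model; fx, fy, cx, cy are the intrinsics.
Point3 BackProject(int x, int y, float depthMetres,
                   float fx, float fy, float cx, float cy)
{
    Point3 p;
    p.x = (x - cx) * depthMetres / fx;
    p.y = (y - cy) * depthMetres / fy;
    p.z = depthMetres;
    return p;
}

// Map a unit normal's (x, y, z) components from [-1, 1] into 0-255
// R, G, B channel values, packed as 0x00RRGGBB.
uint32_t NormalToRGB(float nx, float ny, float nz)
{
    uint32_t r = static_cast<uint32_t>((nx * 0.5f + 0.5f) * 255.0f);
    uint32_t g = static_cast<uint32_t>((ny * 0.5f + 0.5f) * 255.0f);
    uint32_t b = static_cast<uint32_t>((nz * 0.5f + 0.5f) * 255.0f);
    return (r << 16) | (g << 8) | b;
}
```

The scale-and-bias in NormalToRGB is what turns a signed component into a displayable channel: -1 maps to 0, 0 to the mid-grey 127, and +1 to 255, so a surface facing the camera renders as a strong blue.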

The final step is to call all of this in our main loop:
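Putting the pieces together, a self-contained sketch of the per-frame work might look like this (names and intrinsics are placeholders, and a plain double loop stands in for the parallel template):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Turn a raw depth frame (millimetres) into a packed 0x00RRGGBB
// normal image. Pixels with a missing depth reading stay black.
void RenderNormals(const std::vector<uint16_t>& depthMm,
                   int width, int height,
                   std::vector<uint32_t>& rgbOut)
{
    const float fx = 366.0f, fy = 366.0f;            // placeholder intrinsics
    const float cx = width * 0.5f, cy = height * 0.5f;
    rgbOut.assign(static_cast<size_t>(width) * height, 0);

    struct V3 { float x, y, z; };
    auto pointAt = [&](int x, int y)
    {
        float z = depthMm[y * width + x] / 1000.0f;  // millimetres -> metres
        return V3{ (x - cx) * z / fx, (y - cy) * z / fy, z };
    };

    for (int y = 1; y < height; ++y)
    {
        for (int x = 1; x < width; ++x)
        {
            // Skip pixels where any of the 3 depth readings is missing.
            if (depthMm[y * width + x] == 0 ||
                depthMm[y * width + x - 1] == 0 ||
                depthMm[(y - 1) * width + x] == 0)
                continue;

            V3 p0 = pointAt(x, y);
            V3 p1 = pointAt(x - 1, y);               // point to the left
            V3 p2 = pointAt(x, y - 1);               // point above
            V3 u = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
            V3 v = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
            V3 n = { u.y * v.z - u.z * v.y,
                     u.z * v.x - u.x * v.z,
                     u.x * v.y - u.y * v.x };
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len <= 0.0f) continue;

            // Map each normal component from [-1, 1] to a 0-255 channel.
            uint32_t r = static_cast<uint32_t>((n.x / len * 0.5f + 0.5f) * 255.0f);
            uint32_t g = static_cast<uint32_t>((n.y / len * 0.5f + 0.5f) * 255.0f);
            uint32_t b = static_cast<uint32_t>((n.z / len * 0.5f + 0.5f) * 255.0f);
            rgbOut[y * width + x] = (r << 16) | (g << 8) | b;
        }
    }
}
```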

When you’ve put all of this into the DepthBasics.cpp file of the Depth-D2D sample project and run it, you should see something like this:


And it should run really quickly (mine runs at 30 FPS).

In a future article I will show you how to create smoother normals and I’ll put it all together in a standalone application. I’ll even provide all the source code for you!