
Kinect Point Cloud Normals Rendering – Part 4

In this post I’ll create a WPF front-end to the C++ template code I’ve been using so far. I’ll use a CLR wrapper to achieve this.

The first step is to create a WPF project. I like to use the MVVM Light Toolkit for my WPF projects. I’ve installed the Visual Studio plugin so it comes up as an option when I’m creating a new project. I started by creating a project called DepthViewer2D:

Create New Project

Once I created this project I added a new project to the solution called DepthImageRenderer. This is a C++/CLR class library project. You’ll find it under “Other Languages” -> “Visual C++” -> “CLR”; choose the “Class Library” option:

Add New Project

Within the C++/CLR class library we’re going to create a wrapper between the CLR and native C++, using the pimpl idiom to keep the native implementation details out of the managed code.

Add a C++ class called DepthImageProcessor to the DepthImageRenderer project. When adding this class make sure that you uncheck the “Managed” checkbox:

Add New Class

There are a few build settings we need to change for this file. If you right-click on the DepthImageProcessor.cpp file and choose “Properties” you can change the build settings for just this file. We want to change the following:

  • Make sure that the file doesn’t use pre-compiled headers (using them is the default for the project). This is because the project’s pre-compiled header is built as managed code.
    No PCH
  • Remove the line that includes “StdAfx.h” at the top of the source file.
  • Ensure that the file isn’t built with /CLR support.
    No /CLR

Once we’ve done this we can define our implementation class – DepthImageProcessorImpl:
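
Here’s a minimal sketch of what that class looks like – the member names are illustrative rather than taken verbatim:

    // DepthImageProcessor.cpp - the private implementation class (sketch)
    class DepthImageProcessorImpl
    {
    public:
        DepthImageProcessorImpl()  { }
        ~DepthImageProcessorImpl() { }

        // The concurrent queues and the depth-processing code are added to
        // this class as we work through the rest of the post.
    };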

And finally we can create an instance of this class when we initialise the DepthImageProcessor class:
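
Roughly like this; the matching delete in the destructor is the natural counterpart, though your tidy-up may differ:

    // DepthImageProcessor.cpp - creating and destroying the pimpl (sketch)
    DepthImageProcessor::DepthImageProcessor()
        : _impl(new DepthImageProcessorImpl())
    {
    }

    DepthImageProcessor::~DepthImageProcessor()
    {
        delete _impl;
        _impl = nullptr;
    }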

Where the data member _impl has been defined in the header file of the class:
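
The header ends up looking something like this. The important point is that only a forward declaration and a pointer appear here, so the native-only headers used by the implementation never leak into the managed parts of the project:

    // DepthImageProcessor.h - sketch of the public class
    class DepthImageProcessorImpl;          // forward declaration - defined in the .cpp

    class DepthImageProcessor
    {
    public:
        DepthImageProcessor();
        ~DepthImageProcessor();

        // Methods for pushing depth frames in and pulling processed images
        // out are added later in the post.

    private:
        DepthImageProcessorImpl* _impl;     // the pimpl pointer
    };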

We can put our C++ image processing code in the DepthImageProcessor.cpp file. So now we need the depth image! To get this we’ll grab a depth frame from within WPF and then send it through to our back-end processor. We’ll use a concurrent container to do this: concurrent_queue. It gives us thread-safe push and pop operations without any explicit locking on our part, which helps us avoid synchronization bottlenecks. You can find it in the concurrency namespace.
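
If you haven’t used concurrent_queue before, the basic pattern is a push from one thread and a try_pop from another. This fragment isn’t from the project – it’s just to show the shape of the API:

    #include <concurrent_queue.h>

    void Example()
    {
        concurrency::concurrent_queue<int> numbers;

        numbers.push(42);               // safe to call from any thread

        int value;
        if (numbers.try_pop(value))     // non-blocking; returns false if the queue is empty
        {
            // value now holds 42
        }
    }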

If you look at the WPF depth-processing sample in the Kinect SDK, DepthBasics-WPF, you’ll see how to pick up the depth frame and get at the underlying pixel data. You’ll need to change the build options for DepthViewer2D to allow unsafe code. Once we’ve got the data we’ll pass it through our CLR class to DepthImageProcessor:
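
On the managed side of the class library something like the following does the hand-over. The ref class name, method name and signature here are my own shorthand rather than the exact code in the download – the idea is simply to pin the managed depth array and pass a raw pointer down to the native class:

    // Managed (/clr) side of the DepthImageRenderer project (sketch)
    #include "DepthImageProcessor.h"

    public ref class DepthRendererWrapper
    {
    public:
        void OnDepthFrame(cli::array<unsigned short>^ depthPixels, int width, int height)
        {
            // Pin the managed array so its address is stable while the native code copies it.
            pin_ptr<unsigned short> pinned = &depthPixels[0];
            _processor->ProcessDepthFrame(pinned, width, height);
        }

    private:
        DepthImageProcessor* _processor;    // created in the constructor, deleted in the destructor
    };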

We’ll create 2 queues – one for input and one for output. Here are the 2 structures for the values in the queues. Add these to the DepthImageProcessor class (in the header file):
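
They look something like this – one carries a raw depth frame in, the other carries a processed image out (the field names are illustrative):

    // DepthImageProcessor.h - the two queue item types (sketch)
    #include <vector>

    struct DepthFrameItem                    // input: a raw depth frame from WPF
    {
        std::vector<unsigned short> depthPixels;
        int width;
        int height;
    };

    struct RenderedFrameItem                 // output: a processed BGRA image ready for WPF
    {
        std::vector<unsigned char> colourPixels;
        int width;
        int height;
    };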

With these defined we can now define the 2 queues. Put these in the DepthImageProcessorImpl class:
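
That’s just a pair of member declarations (again, the names are mine):

    // Members of DepthImageProcessorImpl (sketch)
    concurrency::concurrent_queue<DepthFrameItem>    _inputQueue;     // depth frames waiting to be processed
    concurrency::concurrent_queue<RenderedFrameItem> _outputQueue;    // processed images waiting to go back to WPF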

You’ll also need to include the “concurrent_queue.h” header file at the top of the DepthImageProcessor.cpp file:
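
That’s just:

    #include <concurrent_queue.h>   // ConcRT/PPL - native code only, so it stays in the .cpp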

Everything is now set up for us to receive the depth data, push it onto the input queue and flag our background thread that there’s something to be processed. We also check whether there’s anything in the output queue. If there is, we pop the processed data from the queue, create an image that WPF can display and fire off an event delegate with the image.
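
The input half of that is the OnDepthFrame sketch above. For the output half, a second method on the same hypothetical wrapper class looks roughly like this – the method on the native class, the event and the particular BitmapSource overload are my assumptions about the shape of the code rather than a copy of it:

    // Managed side - checking the output queue and handing the image to WPF (sketch)
    void DepthRendererWrapper::CheckForRenderedFrame()
    {
        RenderedFrameItem rendered;
        if (!_processor->TryGetRenderedFrame(rendered))
            return;                                      // nothing processed yet

        using namespace System::Windows::Media;
        using namespace System::Windows::Media::Imaging;

        // Wrap the BGRA pixels in a BitmapSource that WPF can display directly.
        BitmapSource^ image = BitmapSource::Create(
            rendered.width, rendered.height,
            96.0, 96.0,                                  // DPI
            PixelFormats::Bgr32, nullptr,
            System::IntPtr(rendered.colourPixels.data()),
            static_cast<int>(rendered.colourPixels.size()),
            rendered.width * 4);                         // stride: 4 bytes per pixel

        ImageReady(image);                               // fire the event delegate with the new image
    }

It’s the BitmapSource and ImageSource types used here that need the assembly references described next.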

There’s one more thing we need to add to our CLR class library: references to the .NET assemblies PresentationCore and WindowsBase. To do this, right-click on the DepthImageRenderer project and choose Properties. Expand “Common Properties” on the left-hand side and select “References”. From here you can click on “Add New Reference” and select the two .NET assemblies:

Add New Reference

I’ve zipped this all up and you can download the source here.

I have created a video showing the application running on my desktop. I click through the three different modes and you can clearly see the difference in the normals; each mode applies more smoothing to the rendering. It renders at the same speed as the MFC app in the previous post.