The Kinect does not use color to determine the shape of the part, so you don't need to put it on a white surface. However, if the mat is black, it might not return enough light (the Kinect uses near-IR light to read shapes), but in general it works.
The easiest approach would be to measure the shape of your mat from a single Kinect position, or by moving the Kinect just a little in the X-Y directions (taking data from different points improves resolution, but not necessarily accuracy).
You can also add objects on the floor next to the mat to help the Kinect track its position if you decide to move it to improve resolution.
Finally, if you have a Kinect, you can try the Haro3D library for free: it comes with a 30-day free trial by default.
Two example programs for the Hololens are provided with the installation of the Haro3D library.
Note that a new release of the Haro3D library will be available at the beginning of 2018 that includes cloning of 3D objects for large numbers of holograms. A completely new example program demonstrating the new features will be made available at the same time. Check the 3D Vision group regularly!
Thank you for the code. I would like to track an apple or a banana with a Kinect and then get the X, Y, and Z coordinates. I have your examples, but the Bodies example only tracks human bodies. Do you have any idea how I should do this? I was thinking of using cloud detection, maybe. Do you have any example code?
The Kinect can intrinsically track only joints and bodies. To track any other type of objects, you have to develop your own code.
You can use the Haro3D library to acquire clouds of points from the Kinect (see the Cloud of Points example) for that purpose. You can use both shape and color information. Spheres are the easiest to track because LabVIEW provides a VI to fit 3D points on a sphere (Fitting on a Sphere.vi).
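In case it helps to see the math behind that kind of sphere fit, here is a minimal sketch in Python/NumPy of the standard algebraic least-squares fit (this is our own illustration of the technique, not the code inside LabVIEW's Fitting on a Sphere.vi):

```python
import numpy as np

def fit_sphere(points):
    """Fit a sphere to an (N, 3) array of XYZ points.

    Uses the linearization  x^2 + y^2 + z^2 = 2ax + 2by + 2cz + k,
    where (a, b, c) is the sphere center and k = r^2 - a^2 - b^2 - c^2,
    so the fit reduces to a linear least-squares problem.
    Returns (center, radius).
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + np.dot(center, center))
    return center, radius
```

With cloud-of-points data from the Kinect, you would first crop the cloud to the region (and color, if you use it) of the ball, then feed the remaining points to the fit; the recovered center is the X-Y-Z position you are after.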
An example of such tracking is provided here.
Please, share your progress with us!
I can get the X and Y coordinates from a picture, but now I want to get the Z coordinate. I'm trying with the HARO Depth now. The HARO Depth VI has status, depth data, and error out as outputs, and I'm trying to use the depth data. From the depth data you can make a 2D array, and I thought that if you index the array at those coordinates, the output would be the Z value. When I do that, the coordinates don't match. See the picture in the annex. I hope you understand me; my English is not so good.
I do not know how you created your IMAQ image but if you used the ArrayToImage function, the image should correspond to the array.
I have modified the Depth Example to display the depth array using both an IMAQ display and an array control. As you can see below, the values of the two controls match (I have tested several locations), once the Transpose 2D Array function is taken into account. I have also included the code in LabVIEW 2016.
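The row/column versus X/Y mix-up is easy to reproduce in text form. Here is a minimal NumPy sketch (the array shape is illustrative, not the exact Haro3D layout) showing why a transpose is needed before comparing array indices to image pixel coordinates:

```python
import numpy as np

# Illustrative depth frame: rows are image lines (Y), columns are pixels (X).
depth = np.arange(424 * 512, dtype=np.uint16).reshape(424, 512)

row, col = 100, 200   # array indexing order: depth[row, col]
x, y = col, row       # image coordinate order: X is the column, Y is the row

# The image pixel (x, y) corresponds to the array element depth[y, x];
# transposing the array swaps the two indices, which is why the example
# inserts a Transpose 2D Array function before the IMAQ display.
assert depth[y, x] == depth[row, col]
assert depth.T[x, y] == depth[row, col]
```

In other words, if you read a pixel at (X, Y) on the image, look up depth[Y, X] in the array (or transpose once and index in the same order as the image).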
Just in case, notice that the row and column of the image do not correspond to the X-Y coordinates of the object in the real space coordinate system. With the depth, you only get the Z values. If you want the X-Y-Z coordinates in the real space, you have to use the Cloud function.
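For intuition about what such a depth-to-real-space conversion involves, the usual approach is a pinhole-camera back-projection. The sketch below is a generic illustration of that idea, not the Haro3D Cloud function's internal code, and the fx/fy/cx/cy intrinsics are assumed placeholder values (roughly in the range of a Kinect v2 depth camera), not calibrated constants:

```python
def depth_to_xyz(u, v, z, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project a depth pixel into camera-space coordinates.

    u, v : pixel column and row in the depth image
    z    : depth value at that pixel (same length unit as the output)
    fx, fy, cx, cy : pinhole intrinsics -- illustrative values only;
                     real conversion needs the camera's calibration.
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```

A pixel at the principal point (cx, cy) maps to X = Y = 0 at distance Z, and pixels farther from the image center map to proportionally larger X-Y offsets. In practice, just use the Cloud function, which gives you the X-Y-Z points directly.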