If you use the Kinect cloud, you should be able to convert the 32-bit color array into an IMAQ image. The example below should work (I cannot test it right now). The points array gives you the X, Y, Z coordinates at index 3*i corresponding to image pixel i. You might have to adjust the resolution to whatever you need (and the order of the indices for the Reshape Array function). Remember that the camera space (depth camera) has only 512 x 424 points, so if you use a higher-resolution RGB image, the same camera-space coordinate will be repeated for more than one pixel. Let me know if this is what you wanted or not.
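As an aside, the indexing scheme above (flat [x0, y0, z0, x1, y1, z1, ...] array, one triplet per image pixel) can be sketched in a few lines of Python. This is only an illustration of the index arithmetic, not the library's actual API; the 512 x 424 resolution and the stand-in data are assumptions.

```python
import numpy as np

# Assumed camera-space resolution of the Kinect depth camera.
W, H = 512, 424

# Stand-in for the real cloud data: a flat array [x0, y0, z0, x1, y1, z1, ...].
points = np.arange(W * H * 3, dtype=np.float32)

def xyz_of_pixel(i):
    """The X, Y, Z coordinate of image pixel i sits at flat indices 3*i .. 3*i+2."""
    return points[3 * i], points[3 * i + 1], points[3 * i + 2]

# Equivalent reshape: one (X, Y, Z) triplet per pixel. You may need to swap
# the H/W order depending on how your image rows are laid out.
xyz = points.reshape(H, W, 3)
```

For example, pixel 1 maps to flat indices 3, 4, 5, which is the same triplet you get from `xyz[0, 1]` after the reshape.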
I think it should be compatible with Windows 10 but I have not verified. I would hope that with over 1000 downloads, at least one person would have tried it and mentioned something if it was not.
I want to extract the location of each joint and then manipulate this data to produce other quantities (the distance between bodies, for example). Do you know how to isolate joint information using the Haro3d library?
Thank you in advance.
Have a look at the second comment of the current thread, in response to the first comment. Details are given on how to extract the joint positions. Let me know if you need more information.
Hello Marc! Excellent work, it helped me a lot!
I am using this library to extract the joint positions, but I can't find how to get the timestamp of when a frame was created. Is it possible to get this timestamp with these VIs? I need this information to synchronize the joint positions with other data.
I also noticed that the "Is tracked" output turns from True to False many times when I run the program below. Does the instant when the "Is tracked" output is True mean the frame was just created? Or is it just an error in my program?
First, I apologize for the delay. I distinctly remember responding some time ago but I just saw that my response never made it to the board.
Second, thank you for your comment. I appreciate it.
Finally, here is your answer:
Concerning the timestamp: there is no timestamp provided by the Kinect. However, I think you can create your own. The maximum rate the Kinect can run is 30 Hz, so you should not expect an accuracy better than about 33 ms.
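A minimal sketch of the idea, in Python rather than LabVIEW (the frame contents and the `stamp_frame` helper are hypothetical): record a host-side timestamp the moment each new frame arrives, keeping in mind the ~33 ms frame period as the floor on accuracy.

```python
import time

# At the Kinect's maximum rate of 30 Hz, one frame period is ~33 ms; any
# host-side timestamp is uncertain by roughly this amount.
FRAME_PERIOD = 1.0 / 30.0

def stamp_frame(frame):
    """Attach a host-side timestamp (monotonic clock) to a freshly received frame."""
    return {"data": frame, "t": time.monotonic()}

stamped = stamp_frame([0.0, 1.0, 2.0])
```

In LabVIEW the equivalent would be wiring a Get Date/Time or Tick Count value into a cluster alongside the joint data each time a new frame is read.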
Concerning the tracking: this is related to the point above. The loop in your code iterates much faster than the Kinect's 30 Hz. When data are requested from the Kinect and none are available, the default cluster is used, which has a False value for the "Is tracked" variable.
I would recommend that you look at the status value out of Kinect_Body_API.VI. Look at the "Is tracked" cluster value only if the status is 0 (no error). This should give you a much better idea of the actual tracking state of a particular body. Finally, I would recommend that you slow down your loop. You can use the Wait function with a value of 30-40 ms, but it is better to use a UI event loop with a timeout of the same value. This way, your UI remains responsive even while you wait (a basic LabVIEW Core 2 course topic).
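The filtering logic above can be sketched as follows. This is a Python stand-in, not the VI itself: the `(status, is_tracked)` pairs mimic what Kinect_Body_API.VI would return on successive loop iterations, where a nonzero status means no new frame was available and the default cluster (with "Is tracked" = False) was returned.

```python
def poll_tracking(readings):
    """Only update the tracking state on frames whose status is 0 (no error).

    readings: iterable of (status, is_tracked) pairs, one per loop iteration.
    Stale default clusters (nonzero status) are ignored instead of being
    mistaken for "tracking lost".
    """
    last_tracked = None
    for status, is_tracked in readings:
        if status == 0:
            last_tracked = is_tracked
        # In the real loop, wait ~33-40 ms here (or use a UI event timeout)
        # so you don't poll faster than the Kinect's 30 Hz.
    return last_tracked

# Two stale reads around one valid frame: the Falses are correctly ignored.
result = poll_tracking([(1, False), (0, True), (1, False)])
```

Without the status check, the two stale readings would make the body appear to flicker between tracked and untracked, which is exactly the symptom described above.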
Thanks for the great library, it makes programming the Kinect in LabVIEW easy.
But I have a problem creating the exe together with the Kinect driver. Somehow I can't get the exe running without some intervention.
Windows 10 (64-bit OS)
LabVIEW 2015 (32-bit version)
I did include the Kinect driver in the build's data directory, but somehow the exe can't locate it.
I searched the web and found a suggestion, which is to remove the DLL from the directory and let LabVIEW search for the path itself.
But it still doesn't work (it can't find the DLL automatically). I have to manually browse for the DLL when prompted.
After selecting the DLL, the exe runs without any issues.
I have checked the Call Library Function Nodes in the Kinect VIs; they all point to the right path.
Just wondering what I have missed or done wrong.
Hope I can get some help or advice here.
Thank you and have a great day!