LabVIEW Robotics Documents


Using the Xbox Kinect with the LabVIEW Robotics Starter Kit


I put an Xbox Kinect on the LabVIEW Robotics Starter Kit to perform better obstacle avoidance.

(Embedded demo video)

Hardware Overview

NI Robotics Starter Kit (includes mobile platform, sensors, and FPGA/Real-Time processing targets): National Instruments, part number 781222-01

Xbox Kinect: Microsoft

I also recommend using connectors when hooking up power to the Kinect and Fit-PC.  I prefer Molex connectors: 70107-0002 and 50-57-9403.

See [link] for info on how to hack into the power on the Starter Kit.

For the other hardware modifications: I used standoffs to hold a piece of foam poster board over the Starter Kit, then used Velcro to attach the FitPC and Kinect.  I wrapped up and secured the wires with zip ties.  That's about it!

Software setup and requirements

NI Robotics Starter Kit software, which includes the LabVIEW Robotics, LabVIEW Real-Time, and LabVIEW FPGA software modules

The hacked LabVIEW driver for the Xbox Kinect

Windows Embedded Standard 7 on the FitPC (takes up less than 2 GB!) plus at least the LabVIEW Run-Time Engine.  I installed the full development system.

I set up the FitPC to connect to my wireless network and then used remote desktop to control it.

The code

Okay, so I kind of made this example using some unreleased software from the LabVIEW Robotics Module (whoops).  So I can't post all of the code--but I can post the important pieces and tell you where they plug in.

From a high level, for the code on the Starter Kit, I simply modified the existing roaming example.  Here's the main modification to the high-level roaming example:


I also modified the "Calculate Driving" VI to fuse the sonar and Kinect data.  Unfortunately, that VI is missing, so I can't post it.  Essentially, it used the Kinect data when available but defaulted to the sonar data when there was no Kinect data.  Also, if the sonar detected an obstacle that was really close (panic range), the robot relied on the sonar data to steer clear of the nearby obstacle.  Within Calculate Driving, there is a VI that reads an element from the network stream to get the Kinect data.  I set the timeout to 1 ms so that we weren't waiting around forever for Kinect data while running into an obstacle.
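Since the block diagram itself can't be posted, here is a minimal Python sketch of the fusion rule described above. The function name, units, and the panic-range value are assumptions for illustration, not values taken from the original VI:

```python
def fuse_headings(kinect_heading, sonar_heading, sonar_min_dist, panic_range=0.3):
    """Pick a steering heading from Kinect and sonar data.

    kinect_heading -- heading from the Kinect, or None if the
                      network-stream read timed out (the 1 ms timeout)
    sonar_heading  -- heading suggested by the sonar avoidance code
    sonar_min_dist -- closest obstacle distance reported by the sonar
    panic_range    -- distance below which sonar always wins (assumed value)
    """
    # An obstacle inside the panic range overrides everything: steer by sonar.
    if sonar_min_dist < panic_range:
        return sonar_heading
    # Otherwise prefer the richer Kinect data when it arrived in time...
    if kinect_heading is not None:
        return kinect_heading
    # ...and fall back to the sonar data when the stream read timed out.
    return sonar_heading
```

The short stream-read timeout is what makes the fallback safe: a missed Kinect frame costs at most 1 ms before the sonar path takes over.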

The main VI is Kinect Obstacle Avoidance.  A network stream is created to the sbRIO to transfer Kinect heading data.  First, the depth image is acquired and converted to a depth in inches.  Then the data is converted from pixel space to 3D coordinates.  Next, a small slice of the data is used for obstacle avoidance (after all, we don't need to avoid the ceiling).  Finally, we find the largest gap and drive toward it; if there is no gap, we turn around.


Converting to 3D coordinates from pixel space: on the first run of the VI, we calculate the matrix that the depth image is multiplied by to get the x, y, and z values.  The matrix is stored in a shift register so we don't have to recalculate it every time.
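The idea behind the precomputed matrix: in a pinhole camera model, each pixel's x and y coordinates are just its depth multiplied by a fixed per-pixel factor, so those factors only need to be computed once. Here is a rough pure-Python sketch; the intrinsics (fx, fy, cx, cy) and function names are illustrative, not the Kinect's actual calibration:

```python
def make_multipliers(width, height, fx, fy, cx, cy):
    """Per-pixel multipliers, computed once (the shift-register step in the VI).

    For each pixel (u, v): x = z * (u - cx) / fx and y = z * (v - cy) / fy.
    """
    xs = [(u - cx) / fx for u in range(width)]
    ys = [(v - cy) / fy for v in range(height)]
    return xs, ys

def depth_to_3d(depth, xs, ys):
    """Multiply each depth value by its pixel's multipliers to get (x, y, z)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # zero depth means no return from the sensor
                points.append((xs[u] * z, ys[v] * z, z))
    return points
```

In the VI the same multiply happens as one matrix operation over the whole image, which is why caching the matrix in a shift register pays off.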


Processing the data can be tough when you're dealing with ~300,000 data points for every acquisition.  To make computation faster and more efficient, we take a 3D slice of the data.  Notice how I preallocate memory for the sliced points and then replace the elements rather than growing an array.  Why slice?  Because the next subVI does a sort, and I'd rather sort 10,000 points than 300,000.
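The preallocate-and-replace pattern looks like this in Python terms (axis convention and names here are illustrative; in the VI this corresponds to initializing an array once and using Replace Array Subset instead of Build Array):

```python
def slice_band(points, h_min, h_max):
    """Keep only points whose vertical coordinate falls in [h_min, h_max].

    Output storage is preallocated and elements are replaced in place
    instead of appended, which avoids repeated reallocation as the
    result grows.  Points are assumed to be (x, height, depth) tuples.
    """
    out = [None] * len(points)   # preallocate for the worst case
    n = 0
    for p in points:
        if h_min <= p[1] <= h_max:
            out[n] = p           # replace, don't append
            n += 1
    return out[:n]               # trim to the elements actually used
```

The band keeps only obstacles at robot height, so the ceiling and floor never reach the (more expensive) sort that follows.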


Finally, we find a heading.  It's a little tricky with 3D data.  I essentially just looked at the "Y data", which runs from the robot's left to its right, looking for gaps.  If there is a gap big enough for the robot to fit through, we head there.  To find the gaps, I sort the data and then take the difference between neighbors; the largest differences are the gaps.
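The sort-and-diff gap search can be sketched like this (the function name and return convention are assumptions; the real VI operates on the sliced 3D point data):

```python
def find_gap_heading(lateral, robot_width):
    """Find the widest lateral gap between detected obstacle points.

    lateral     -- left-to-right coordinates of obstacle points
    robot_width -- minimum gap the robot can fit through

    Returns the gap's midpoint coordinate, or None if no gap is
    wide enough (i.e., the robot should turn around).
    """
    pts = sorted(lateral)              # sort once...
    best_gap, best_mid = 0.0, None
    for a, b in zip(pts, pts[1:]):     # ...then difference neighbors
        if b - a > best_gap:
            best_gap, best_mid = b - a, (a + b) / 2.0
    if best_gap >= robot_width:
        return best_mid                # steer toward the gap's center
    return None                        # no gap wide enough: turn around
```

Sorting dominates the cost here, which is exactly why the previous step shrinks the point set before this subVI runs.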


Update (12-20-2012)

After a couple of inquiries, here is a screenshot of the modified "Calculate Heading" VI.  I also added a zip file containing all the files and the project file, which you should be able to open and run (in theory; I haven't tested it).  Hopefully this helps!  Also, consider using the driver found at [link], which uses the Microsoft SDK and a .NET interface.

Calculate heading VI.png

Member anfedres86

Good job!!!  It's almost the same thing that I'm doing with a friend, but we are using another platform and another kind of navigation.

At the beginning I used VFH combined with the Kinect, but the rover just avoided obstacles, so we decided to make our platform more intelligent by combining an occupancy grid map with AD*.  Because of this, I decided to implement the real code of the occupancy grid map and ray casting for our car.

Keep me in the loop; I would like to know more about this project.  Also, I have been making some improvements that could be helpful for everyone who is working with the Kinect and LabVIEW.

"Everything that has a beginning has an end"
Member RoboticsME

Did you account for the non-linearities in the camera, or did you properly correct for them?  I would think it could be hard to build a map without doing so.  How did you do it?

Member anfedres86

At least for now, I'm just ignoring the last 8 pixels, so my field of view becomes a little bit smaller than before.  But yes, you're right, it's difficult to implement the map without correcting for this.  I would like to solve that problem too, and as you can see in the other post, I'm trying to correct it, but I can't follow the math they have on the website.  So if you know how, we can work together and solve it.

"Everything that has a beginning has an end"
Member eek

Nice work, Karl. As usual.

Member duvanG

Hello, fellow community members.  I'm from Colombia, working with the Robotics Starter Kit 2.0 platform and the Xbox Kinect, but I've had trouble getting the Kinect working in LabVIEW.  When I try to run the Kinect examples, they always look for the file microsoft.kinect.dll and I don't know what to do.  I have already installed the Kinect software, namely SDK 1.5 and the toolkit for LabVIEW.  I would appreciate your help in understanding how you connected the Kinect to LabVIEW and what is needed to do it.  Thanks for any help you can give me.

Member JuanPadillaViva

Hi Duvan, try kinecthesia from the University of Leeds; it works wonderfully.  Regards.


Member claude456

Hello, I am a student working on a project involving obstacle detection in LabVIEW with the Kinect.  I am looking for a program for obstacle avoidance.  Thank you for your reply.

Member martinnobis

Does this support the Xbox One Kinect?

Member 泡汤

Hello, could you tell me how I can connect the Kinect, which has a USB port, to the Starter Kit?  Which kind of connectors should I use?