I swear I have read forum posts with questions similar to mine, but I cannot locate them anymore.
My system is as follows:
>> 2 Kollmorgen BLDC motors with their respective servo drives
>> A NI USB-6003 DAQ to collect analog shaft position data and write analog torque commands back to the servo drives
>> LabVIEW to process the shaft positions of each motor and send the proper control signal to the USB-6003
Right now, if I set my DAQmx blocks to collect at above 50 Hz, the control loop period becomes relatively inconsistent.
I am trying to nail down the bottleneck in my system, and I have a hunch from the forum posts [I can no longer locate] that the USB DAQ itself may have some limitations. I'm a PhD student, so I have to get rather specific about what the exact issue is...
However, it may very well be that I programmed my DAQmx tasks suboptimally as well.
I have attached my project. I have 2 subVIs for configuring Analog Input and Output. Saved in LV2015 so hopefully more people can open it. I am using LV2020.
Any first-hand experience or documentation people have that addresses some of these issues would be greatly appreciated. I have already read the User Manual for the USB-6003 and read as much as I could locate about the DAQmx blocks. I can't find a document that discusses using them in reference to closed-loop motion control.
Looking at your SubVI_configAnalogInput.vi, I don't know why you aren't getting an error. You are creating two tasks using the same DAQ (Dev1), and even repeating channels (ai0 and ai1). But even assuming 2 different DAQs, running two tasks over USB can be problematic. I would invest in something like a USB-6212 (I have been using these) and put all of your AI channels into a single task.
As far as your analog output, you should not make it a Continuous task. It can be an on-demand task, since you just write to it at some general rate.
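If it helps to see the suggested task layout in textual form, here is roughly what it looks like in the nidaqmx Python API. This is a sketch only: the channel strings, sample rate, and `controller()` are assumptions on my part, not taken from your attached project, and it needs NI hardware plus the NI-DAQmx driver to actually run.

```python
# Sketch of the suggested layout: ONE hardware-timed AI task holding
# all input channels, and an on-demand (unclocked) AO task.
# "Dev1", the channel ranges, the 1 kHz rate, and controller() are
# assumed placeholders -- substitute your own.
import nidaqmx
from nidaqmx.constants import AcquisitionType

def controller(positions):
    # Hypothetical stand-in for the real control law
    return [0.0, 0.0]

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    # Both shaft-position channels in a single task, so only one
    # AI task is competing for the USB device
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")
    ai_task.timing.cfg_samp_clk_timing(
        1000.0, sample_mode=AcquisitionType.CONTINUOUS)

    # AO left on-demand: no cfg_samp_clk_timing call, so each
    # write() updates the outputs immediately
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0:1")

    ai_task.start()
    for _ in range(100):
        positions = ai_task.read(number_of_samples_per_channel=1)
        ao_task.write(controller(positions))
```

The same structure applies in G: one DAQmx Create Channel + Timing call for all AI, and an AO task with no Timing VI wired at all.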
Now if I were doing this, I would be using a cRIO, since the RT operating system can better guarantee a desired rate. But I am a fan of putting control loops inside the FPGA, which really locks in the loop rate.
I tried disabling the 2nd task, which did seem to make the loop timing more consistent.
However, after several attempts to optimize the program for speed and running the VI profiler, I could not get the loop to run any faster than 1 kHz, and it was very jittery at that speed. Oddly, it was most jittery at 500 Hz...
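For what it's worth, one way to separate Windows/software loop jitter from USB-DAQ latency is to time the bare loop with the DAQmx calls stubbed out; if the empty loop already jitters, the DAQ isn't the (only) culprit. A rough Python sketch of that measurement (the 1 kHz target and iteration count are arbitrary choices of mine):

```python
# Measure how consistently a desktop OS can hold a software-timed
# loop period, with no DAQ I/O in the loop at all.
import time
import statistics

def measure_loop_jitter(n_iters=2000, target_period_s=0.001):
    """Run an empty timed loop and report mean period and 1-sigma jitter."""
    periods = []
    last = time.perf_counter()
    next_deadline = last + target_period_s
    for _ in range(n_iters):
        # Busy-wait to the deadline; time.sleep() granularity on
        # Windows can be 1-15 ms, which would add its own jitter
        while time.perf_counter() < next_deadline:
            pass
        now = time.perf_counter()
        periods.append(now - last)
        last = now
        next_deadline += target_period_s
    return statistics.mean(periods), statistics.stdev(periods)

mean_p, jitter = measure_loop_jitter()
print(f"mean period: {mean_p*1e6:.1f} us, jitter (1 sigma): {jitter*1e6:.1f} us")
```

Running the same idea in LabVIEW (an empty Timed Loop, logging its actual period) before and after adding the DAQmx reads would show how much of the jitter each layer contributes.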
I am lucky enough to have been able to borrow a cRIO. I do not have the FPGA module (and can't buy it), but I do have the RT module and am switching over to that target this week.
Thanks for the help.