We have an application using cRIO. We're trying to run a servo loop at 20 kHz. We measured the speed of the servo loop, and it seems that it can reliably run at no more than 1 or 2 kHz without missing timer ticks.
Is there a tool or a paper somewhere that will tell me the time it takes to execute fixed-point math instructions on the FPGA? I find it hard to believe that the loop is too slow due to the math, but it's not clear where else it could be slowing down. We have quite a bit of spare space in the FPGA, so we could change the configuration of something to help with pipelining, etc.
The way we measured the timing of the loop was to put a digital I/O line on a shift register and invert the bit each time through the loop. We can reliably run the loop at the timing we specify only if we keep the speed to 1 or 2 kHz.
Thanks for any enlightenment as to what the problem might be!
This is a good resource for doing fixed point math:
Click on the user guide for some advanced information.
It doesn't specify timing, though, which may make it irrelevant.
I'm not specifically aware of a resource that does.
The best solution may be simply to run your fixed-point VIs in a benchmarking loop.
In our application, we have used the standard math functions from the LabVIEW palette instead of those in the library you mentioned. It's not clear to me how much improvement we would see if we used the special library, but when we find where the bottlenecks are, we may try it. Is it true of the LabVIEW FPGA system, as in other FPGA math, that multiplication and division are very expensive and should be avoided? I think much of our code could use simple powers-of-2 division and multiplication.
We are compiling a version of the application with some debugging outputs to see how long each loop takes to run, and will go from there.
we run PID loops at up to 400 kHz on R series cards. we use the fixed point functions.
cRIO is typically limited by hardware module speeds.
when using the high throughput fixed point functions, the time to execute is shown in the configuration.
with analog input, the value read is pipelined into the PID and the PID is broken up into several single cycle timed loops.
if you isolate the algorithm in a loop, display the delta ticks in the loop to get an idea of current performance.
Thanks. I'll try putting a tick counter in each loop.
We found, using a digital I/O point on each loop hooked to a scope, that something in the system is causing all of the loops to run at the same speed, about 3.78 kHz I think, but they are all exactly the same frequency, as measured by an Agilent scope. Not sure what it is, but something is not right, and it seems to delay all of the loops by the same amount.
Right now we have an encoder we read, then we calculate with the new encoder reading while the encoder goes back for the next reading.
We've figured out that it is the I/O that is blocking our tasks from running at speed. We have a bunch of analog and digital I/O that we're updating quite fast (although not as fast as we thought!). Is there any guru-approved method of extracting maximum speed out of the I/O subsystem of cRIO? Like maybe putting all the I/O in one loop, or something? Or is it just a hardware limitation of the modules?
each of the cRIO modules has its own idiosyncrasies. the module-specific examples are sometimes useful, but without knowing what you are really trying to do, i don't think there is generic info to be provided.
however, you mentioned encoder input to pid. this is a case where i would have one loop for encoder input and another loop for pid using the encoder value. what module are you using for encoder?
Hi Stu. We're using the SEA Endat encoder module and their driver VI. It all seems to work great in the demo app that they provide. We can read the 2 absolute encoders at a 30 microseconds rate with no problem.
I don't remember which analog modules we are using off-hand, but they are high-density, with lots of channels on each module.
Our plan is to try to split off the I/O updates that are for housekeeping and debugging and put them where they won't interfere with the time-critical loop. Or maybe a dedicated analog module for the servo loop.
I am thinking of using the occurrence functions to synchronize it all once it's working.
in this case, the EnDat interface could be put into your closed loop without the issues involved with incremental encoders. if your output is analog output, i would expect your loop to be limited to the SEA module rates.
We found that our calculations only take about 13 microseconds. The majority of the time was (is) spent in CRIO I/O time. We were originally writing to the analog outputs at random times during the calculation loop. We consolidated all the I/O into one I/O call at the end of the calculation loop, and it reduced the time for I/O to a reasonable ~30 microseconds.
We had an analog input module in our system that was taking more than 2 milliseconds to read. I'm sure there's something wrong with it, as it is a 2901 module rated for 500 kS/s. We just removed it temporarily.
We can close our loop at 19 kHz at this point. We still hope to get that extra kHz...
Thanks for the help so far in this endeavor!