
Real-Time Measurement and Control


Scan Engine Rate

Hi,

 

I am studying for the CLED and I have a question about the Scan Engine rate.  The cRIO Developer's Guide still indicates that a maximum scan rate of 1 kHz is possible with the Scan Engine, and this also seems to be reinforced in this post.  However, I spoke to an NI rep two years ago and, after she consulted with NI's R&D, I was told that this was a myth; here is her response:

 

"Regarding Scan Engine loop rates, I received this information from R&D: the 1 kHz loop rate is actually a (very popular) myth.
There's no such restriction (a 1 kHz max update rate) in the Scan Engine itself on any platform; the limit is the resolution of the timer we use, which is in microseconds (so a 1 MHz loop rate). How fast that max rate can actually be is usually up to the user's I/O, and on a faster platform the rate could, and probably should, be faster than on a slower platform. But when asked for a general ballpark number for how fast RSI will go, we give the following answer:

1 msec - This is a super-simplistic, hardware-agnostic answer, but it's the best ballpark number for how fast we can go on just about all platforms.  Historical note: this number used to be statically enforced, and you'd get a deploy error for exceeding it.  That enforcement was removed some time ago, but it is part of the reason the number persists in people's minds.

There is a big list of things it depends on, and some of them are sometimes unexpected to the asker of the question:
* Amount of data being scanned (optimal case is a single, simple parallel DI module)
* Processing power - We have found that this matters a little less than you might think because...
* Bus DMA performance - There is a limit here to how fast we can go based on the PCI bus plus the driver overhead to make DMA happen on that bus.
* Synchronization time - We have to spend some time keeping the RT side (the nanosecond engine) in sync with the clock on the FPGA.

So what we do now is keep track of whether or not we had enough time to do everything in the amount of time the scan engine has allowed us.  If we didn't, we output errors on all our variables to say that we can't get all the data coherent in the time allowed.  We also publish a variable called "PercentScanUtilization" or something to that effect that tells the customer how much of the scan time is being used.  So our recommendation to somebody looking to optimize for speed is to watch that number."
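The overrun-tracking behavior R&D describes can be sketched in pseudocode form. This is a minimal Python illustration, not NI's actual RT implementation; the function names are made up, and only the "PercentScanUtilization" variable name comes from the quote above:

```python
import time

def run_scan_loop(scan_fn, period_s, n_iterations):
    """Simulate a timed scan loop that tracks how much of each period is used.

    Returns per-iteration utilization percentages (analogous to
    "PercentScanUtilization"); a value over 100% means the scan work
    overran its period, i.e. the "finished late" error condition.
    """
    utilizations = []
    for _ in range(n_iterations):
        start = time.perf_counter()
        scan_fn()                      # stand-in for reading/writing all scanned I/O
        elapsed = time.perf_counter() - start
        utilizations.append(100.0 * elapsed / period_s)
        remaining = period_s - elapsed
        if remaining > 0:
            time.sleep(remaining)      # wait out the rest of the scan period
    return utilizations

# Example: a 1 ms scan period (the classic 1 kHz rate) with a trivial "scan".
util = run_scan_loop(lambda: None, period_s=0.001, n_iterations=10)
```

Watching that utilization number as you add I/O is exactly the optimization workflow the quote recommends.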

 

I would like confirmation that the above is in fact the case.  I am also curious as to why the Developer's Guide has not been updated if this is indeed true.

 

More importantly, I would like to know how to answer this if it comes up on the CLED.

 

Regards,

 

Jordan

Jordan McBain, PhD
LabVIEW Controls Engineer
Revolutionary Engineering
Message 1 of 3

Hey Jordan,

 

The rep you spoke to is correct: it is a myth. We intend the 1 kHz figure more as a guideline to help customers decide whether they need to program the FPGA for a more optimized bitfile.

 

Here is good documentation on how the Scan Engine works. Consider a full 8-slot chassis, and it becomes very clear how collecting one sample from every channel at a rate faster than 1 kHz can overload the Real-Time portion of the controller; that is why we have the FPGA. If you intend to do control at a rate around 1 kHz, we recommend you go with the FPGA. Also, if you need to analyze the signal, be sure to think about Nyquist's theorem: we need multiple samples of a signal to get an accurate representation of it. Depending on what information you want from the signal, you may need more or fewer samples. A value at an instant in time is 1 sample, a frequency needs at minimum 2, and so on. We typically recommend 6-10 samples per cycle to get an accurate representation of a signal.

RSI Under the hood

http://www.ni.com/white-paper/7693/en/
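The sampling arithmetic behind Kyle's advice is easy to check. A short sketch (the 6-10 samples-per-cycle figure is the rule of thumb from the post above, not a formal NI specification):

```python
def required_scan_rate_hz(signal_hz, samples_per_cycle):
    """Minimum scan rate needed to capture `samples_per_cycle` points
    per cycle of a signal at `signal_hz`."""
    return signal_hz * samples_per_cycle

# Nyquist minimum: 2 samples per cycle just to detect the frequency.
assert required_scan_rate_hz(200, 2) == 400

# For a usable waveform shape, 6-10 samples per cycle: a 200 Hz signal
# already needs a 1.2-2 kHz scan rate, at or past the ~1 kHz guideline,
# which is why FPGA is recommended for signals in that range.
print(required_scan_rate_hz(200, 6))   # 1200
print(required_scan_rate_hz(200, 10))  # 2000
```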

 

Here are some performance benchmarks that back up why we don't recommend the RIO Scan Interface (Scan Engine) for loop rates faster than 1 kHz. You'll notice that at lower channel counts we can maintain the 1 kHz rate, but past 16 channels it drops quickly and CPU usage rises. I know the document says max rate, but depending on what information you need from the signal, you may already be at the boundary for a 200 Hz signal you are trying to monitor.
RSI Performance Benchmarks

http://www.ni.com/white-paper/7792/en/
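A rough model shows why utilization climbs with channel count the way the benchmarks describe. The per-channel and overhead costs below are invented illustrative numbers, not measured NI figures; only the shape of the relationship matters:

```python
def scan_utilization_pct(n_channels, per_channel_us, fixed_overhead_us, period_us):
    """Percentage of the scan period consumed, assuming per-scan work
    grows linearly with channel count on top of a fixed overhead
    (sync, DMA setup, driver time)."""
    busy_us = fixed_overhead_us + n_channels * per_channel_us
    return 100.0 * busy_us / period_us

# Hypothetical costs: 50 us fixed overhead, 30 us per analog channel,
# at a 1 kHz scan (1000 us period). Utilization passes 100% (overrun)
# somewhere past 16 channels, mirroring the benchmark trend.
for ch in (4, 16, 32, 64):
    print(ch, scan_utilization_pct(ch, 30, 50, 1000))
```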

 

If you were only using DIO modules, it is possible to achieve faster rates; on certain modules the Scan Engine reads the pins directly from the FPGA. For an analog module, we have to request the current value from a register on the module and send it back, which takes significantly more time for a multichannel module that isn't simultaneously sampling.

 

Hope this helps. Treat it as a guideline rather than an absolute. Good luck on the CLED.

Kyle Hartley
Senior Embedded Software Engineer

Message 2 of 3

Thanks - I too thought it was a constraint rather than a guideline. Now I know better.

In what version of LabVIEW did that change?

Consultant Control Engineer
www-isc-ltd.com
Message 3 of 3