12-16-2020 03:29 AM
Hello,
struggling with optimisation here. I run data acquisition on a cRIO-9057 with five NI-9232 modules at the maximum sample rate (15 channels @ 102,400 S/s), and with this setup I get strange behaviour from LabVIEW.
The processing steps, implemented as separate subVIs, are:
The times are taken from the Profile Performance and Memory tool. The data arrives from the first loop every 10 ms, so there should be plenty of time left. But when I measure the execution time of the series of VIs, I get 15-30 ms, and therefore the RT FIFO overflows.
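As a sanity check on the timing budget, here is the arithmetic behind the numbers in the post (a Python sketch; the 8-byte DBL sample size is an assumption, the rest comes from the post above):

```python
# Timing budget for 15 channels @ 102,400 S/s with data delivered every 10 ms.
channels = 15
sample_rate = 102_400          # S/s per channel
read_period_s = 0.010          # acquisition loop delivers data every 10 ms

samples_per_read = round(sample_rate * read_period_s)  # samples per channel per read
bytes_per_read = channels * samples_per_read * 8       # assuming DBL, 8 bytes/sample

print(samples_per_read)   # 1024
print(bytes_per_read)     # 122880, i.e. 120 KiB every 10 ms

# The consumer loop must finish in under 10 ms on average; a measured
# 15-30 ms per iteration guarantees the RT FIFO eventually overflows.
```

So the question is not the data volume itself but why subVIs that individually "take no time" sum to 15-30 ms per iteration.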
I tried:
without success. Upper VI:
Nothing special in the subVIs. They really take no time to execute.
Any ideas why this is happening?
T
Solved!
12-16-2020 03:31 AM
One more thing: I noticed this behaviour earlier when I had custom probes running, but I got rid of them and also minimized the number of generic ones.
12-16-2020 03:46 AM
Probes of ANY kind are a great way to slow your code to a crawl.
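A textual analogy for why probes (and debugging in general) are so costly (a Python sketch, since a LabVIEW diagram cannot be shown inline): installing a trace hook adds per-operation overhead even when the hook displays nothing, which is roughly what probes and the "Allow debugging" VI setting do.

```python
import sys
import time

def work():
    """Stand-in for a tight processing loop."""
    total = 0
    for i in range(200_000):
        total += i
    return total

def bench():
    t0 = time.perf_counter()
    work()
    return time.perf_counter() - t0

plain = bench()

def tracer(frame, event, arg):
    # Do-nothing hook, analogous to a probe that is attached but idle.
    return tracer

sys.settrace(tracer)       # enable per-line tracing
traced = bench()
sys.settrace(None)         # disable tracing again

print(traced > plain)      # the same code is measurably slower while traced
```

The instrumented run is dramatically slower although the hook does nothing, which mirrors the resolution later in this thread: disabling debugging removed the overhead.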
12-16-2020 05:11 AM
One thing to notice: your timing is not forced to run in sequence with the execution of the VIs.
You need to make sure that the time is taken at the correct place in the execution. Force it by using the data flow of the wires.
Save to file will also have a different timing, depending on the size of the file.
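In textual code, the sequencing point dkfire is making would look like this (a Python sketch; in LabVIEW the error wire plays the role that explicit statement order plays here):

```python
import time

def process(duration_s):
    """Stand-in for one of the processing subVIs."""
    time.sleep(duration_s)

# Correct benchmark: the timestamps are strictly ordered around the work,
# equivalent to chaining the error wire through both Tick Count VIs.
t0 = time.perf_counter()   # first Tick Count, guaranteed to run first
process(0.02)
t1 = time.perf_counter()   # second Tick Count, guaranteed to run after
elapsed_ms = (t1 - t0) * 1000.0
print(elapsed_ms >= 19)

# In a dataflow language there is no implicit statement order: a Tick Count
# node that is not wired in series with the subVI may fire before, after,
# or in parallel with it, making the measurement meaningless.
```

Text languages get this ordering for free; in LabVIEW you must wire it explicitly.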
12-16-2020 05:18 AM
Sorry, I'm missing the point of what you are writing. Can you be more specific? How is my timing not forced by the VIs in the loop? The loop does not simply pause for a while; the execution time of an operation is the sum of its serial operations' times, right? If not, how do I force it to happen?
12-16-2020 07:29 AM
The first two VIs will run in parallel.
Make sure that the Tick Count VI runs before the VI you want to measure.
For the last Tick Count VI, I did not look at it correctly. You have a wire running under it from right to left, which can hide the data flow.
12-16-2020 08:12 AM - edited 12-16-2020 08:18 AM
@dkfire wrote:
The first two VIs will run in parallel.
Make sure that the Tick Count VI runs before the VI you want to measure.
For the last Tick Count VI, I did not look at it correctly. You have a wire running under it from right to left, which can hide the data flow.
EDIT: Nice catch on the backward wire into 'error in'.
And that is only one problem... I think the OpenG functionality is to do NOTHING when 'error in' is TRUE. (Somebody please check me on that!)
Essentially, your benchmarking is the problem.
Try using Stall Dataflow.vim, and turn off Allow Debugging on the VI.
12-16-2020 08:23 AM
@JÞB wrote:
And, that is only 1 problem... I think that the OpenG functionality is to do NOTHING with error in TRUE. (Somebody please check me on that!)
In the OpenG VI, 'Error In' and 'Error Out' are directly connected; it is just a feed-through, with no Case Structure. The VI will always return a valid tick count no matter what value 'Error In' carries.
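That feed-through behaviour can be sketched like this (a hypothetical Python stand-in for the OpenG Tick Count VI; the names are illustrative, the real VI is a LabVIEW diagram):

```python
import time

def tick_count_ms(error_in=None):
    """'error in' is wired straight to 'error out' and never gates the
    measurement: the tick count is produced unconditionally."""
    tick = time.perf_counter_ns() // 1_000_000  # millisecond tick count
    return tick, error_in                        # error is a pure feed-through

tick, err = tick_count_ms(error_in="upstream error")
print(err)                    # the error passes through unchanged
print(isinstance(tick, int))  # a valid tick is returned regardless
```

Because the error cluster never branches the logic, the VI is safe to use purely as a sequencing point in a benchmark.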
12-17-2020 06:19 AM
Allow Debugging was the culprit here: turning it off made the execution time drop dramatically. Thank you for your advice. The Tick Count VIs were just quickly placed without cosmetics, so they might be confusing to others. Thanks again.