10-14-2021 03:50 PM
Hi All,
I am using LabVIEW to control a CCD camera (Blackfly BFS-U3-04S2C-CS). The goal is to track a particle. I have put the image-acquisition subVIs inside a while loop. The camera is triggered internally by software and is set to 348 frames per second (fps). I expected each iteration of the while loop to take a roughly constant time, on average 1000/348 ≈ 2.87 milliseconds. To my surprise, I see significant variability in the execution time of some iterations. Please look at the attached image.
There are a significant number of instances where the time recorded between two frames is greater than 8 ms. That is not good for our application, since we want the duration between frames to be as constant as possible. If the time between two frames becomes as large as 20 ms, we are in a bad situation.
Is there a way in which we can make sure that each iteration of the while loop takes the same time to execute?
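[Editor's note: a minimal, hedged sketch of how the inter-frame jitter above can be quantified. This is plain Python, not LabVIEW; `time.sleep` stands in for the actual camera read (a hypothetical `acquire_frame()`), and the numbers depend entirely on the OS scheduler.]

```python
# Sketch: measure loop-period jitter with a monotonic clock.
# time.sleep() is a stand-in for a hypothetical acquire_frame() call.
import time
import statistics

def measure_jitter(n_frames=200, target_period_s=1 / 348):
    """Record the interval (ms) between successive loop iterations."""
    intervals = []
    last = time.perf_counter()
    for _ in range(n_frames):
        time.sleep(target_period_s)  # stand-in for the camera read
        now = time.perf_counter()
        intervals.append((now - last) * 1000.0)
        last = now
    return {
        "mean_ms": statistics.mean(intervals),
        "max_ms": max(intervals),
        "stdev_ms": statistics.stdev(intervals),
        "over_8ms": sum(1 for dt in intervals if dt > 8.0),
    }

stats = measure_jitter()
print(stats)
```

On a desktop OS the `max_ms` and `over_8ms` figures will typically be well above the 2.87 ms target, which is exactly the jitter the replies below attribute to Windows not being a real-time OS.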
Thank you,
Avinash Kumar
10-14-2021 04:43 PM - edited 10-14-2021 04:44 PM
I think you're bumping right up against the timing accuracy of the operating system (Windows). You are going to see significant jitter, especially if you are doing something that requires some significant CPU usage.
10-14-2021 05:01 PM
Windows is NOT a real time operating system, so execution times can vary due to interrupts, multitasking, and time slicing to name a few.
10-14-2021 06:19 PM
Your loop is doing multiple things at once: trying to acquire images, save images, run DAQmx Tasks, run a Math Script node, and maybe some other things that I cannot see as the diagram is bigger than my monitor.
You will need to break your loop into parallel tasks. For one thing, saving data should be in its own loop. Also, MathScript is most likely slower than native LabVIEW code.
10-15-2021 08:01 AM
First, please save it as LV 2012 or 2017, or something everyone can open.
8 ms is tough in Windows.
What others said about multiple loops. If you use timed loops you have a chance as then you can assign each loop to its own processor and bump up the priority for that loop.
The producer/consumer model is something to look into.
Make sure that your critical loop has NOTHING that deals with the front panel directly, as all of those things end up in the same (UI) thread. In practice this means you have to pass controls, etc. into the loop using a notifier or a queue. Alternatively, set up a handshake using notifiers or queues, where you pass a command (with no front panel indicator) to the critical loop so that you can change camera settings, etc.
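[Editor's note: the producer/consumer split and the command-queue handshake described above can be sketched in plain Python threads as a textual analogue of two parallel LabVIEW loops. All names here (`acquisition_loop`, `save_loop`, the `exposure` setting) are hypothetical illustrations, not the poster's actual VI.]

```python
# Sketch: acquisition "loop" only grabs frames and enqueues them;
# a separate consumer "loop" does the slow saving. A second queue
# carries settings changes into the critical loop so it never
# touches UI state directly.
import queue
import threading

frame_q = queue.Queue()    # producer -> consumer (data)
command_q = queue.Queue()  # UI -> producer (settings changes)

def acquisition_loop(n_frames):
    exposure = 1.0  # hypothetical camera setting
    for i in range(n_frames):
        try:
            # Non-blocking check: acquisition is never stalled by the UI.
            cmd, value = command_q.get_nowait()
            if cmd == "exposure":
                exposure = value
        except queue.Empty:
            pass
        frame = {"index": i, "exposure": exposure}  # stand-in for a camera read
        frame_q.put(frame)
    frame_q.put(None)  # sentinel: tell the consumer to stop

def save_loop(results):
    while True:
        frame = frame_q.get()
        if frame is None:
            break
        results.append(frame)  # stand-in for the slow disk write

saved = []
producer = threading.Thread(target=acquisition_loop, args=(100,))
consumer = threading.Thread(target=save_loop, args=(saved,))
command_q.put(("exposure", 2.5))  # change a setting without touching the critical loop's UI
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(saved))  # 100
```

The key design point mirrors the advice above: the queue absorbs bursts from the slow consumer, so the producer's period is set only by the camera, not by disk writes or front-panel updates.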
10-15-2021 11:29 AM
@Tom_Powers wrote:
What others said about multiple loops. If you use timed loops you have a chance as then you can assign each loop to its own processor and bump up the priority for that loop.
Don't remember where I read it, but it is my understanding that timed loops are bad on Windows systems, they are only meant for RT systems, and can make performance worse on a Win Box.
mcduff
10-15-2021 11:54 AM - edited 10-15-2021 12:04 PM
Obviously you are not telling us the whole story. Can you explain the hardware and software environment in detail? Since you are using LabVIEW RT/FPGA timing functions, I assume this is not Windows, right? What's the hardware? CPU? # of cores? Memory? etc.
As others have said, you need to separate the time critical parts (Image acquisition) from the non-critical parts (saving, displaying, etc.) and LabVIEW has all the tools to do just that. I am sure we can point you in the right direction and give more specific advice once we have all the details.
And yes, there is absolutely no need for any MathScript in the inner loop!
What does "mapping v2" do? Why does it need these huge reshaped arrays (one never changes!) to output a few scalars?
10-15-2021 12:02 PM
@mcduff wrote:
Don't remember where I read it, but it is my understanding that timed loops are bad on Windows systems, they are only meant for RT systems, and can make performance worse on a Win Box.
The biggest thing about Timed Loops is they serialize everything in them. It is all a single thread. What this means is that code in your loop cannot run in parallel with other code inside of that loop. With a normal loop, the compiler makes "clumps" and each clump is able to run in parallel with other clumps that do not have a data dependency, which typically improves performance.
But Timed Loops also force themselves into a time-critical priority. This means it is easy for one to swamp out any other process that could be running, not giving it time on the CPU. And with Windows, this could mean the display, making it appear like nothing is happening.
So Timed Loops are really only for time-critical, deterministic parts of your application. And let's face it, if you have a time-critical, deterministic piece of code, it should not be on a Windows OS; it should be on an RT system or an FPGA.