LabVIEW


Multithreaded VIs

Hi,

 

I'm running a program to acquire data from a PMT (variable, but ≈ 1+ MB/s) and to display an image. The problem is that when I increase the frame rate or the samples per frame beyond a certain point (4 fps and 200 samples/line), my CPU maxes out, my buffer begins to fill, the image becomes distorted, and then I get a buffer overflow error.

 

I have a quad-core machine (1.83 GHz Xeon Clovertown), but only one core is being used, and that core is maxed out. I would like my program to multithread to get around this issue, but I have not figured out how. I've attached a screenshot of where the slowdown is taking place, along with the full VI.

 

What's happening in the screenshot: the 2D double DAQmx VI is sending a triangle waveform to two motorized mirrors, and the data returns through the 1D VI shown and is output for display as an image. I would like to run these two in parallel, with a different core handling each VI, but I have not been successful in my attempts.

 

I'm using LabVIEW 8.6 and DAQmx 8.9 on Windows XP.

 

Thanks for any help. 

 

 

Message 1 of 6

Hi evang,

 

"would like to do these two in parallel"

 

At the moment you have programmed it sequentially. If you want it to run in parallel, you have to program it that way...

 

 

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 6

Hi evan,

 

sorry - I misread your message...

Best regards,
GerdW


Message 3 of 6

Hi Gerd,

 

Thanks for the reply! No problem. I have tried programming it in parallel (as in this picture; sorry the wires are a bit overlaid), but then the program does not work at all.

 

I am not sure why I have to wire the error out of the Analog 2D OUTPUT into the Analog 1D INPUT for the software to run properly. My code might be wrong, though. My other theory is that perhaps the hardware I am using (NI's BNC-2110) does not allow simultaneous output and input operations.

 

 

Message 4 of 6

Hi evangw,

 

In your screenshot you are initializing and appending arrays inside a loop that iterates once per line of the frame. The more lines you have, the slower the program becomes, because the Initialize Array and Build Array functions take a lot of time: each append copies the data from both arrays to a new memory location. If you can, preinitialize as much as possible outside the loop instead of inside it.
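
LabVIEW is graphical, so the pattern can't be pasted here as a snippet, but the cost difference is language-neutral. Here is a minimal Python sketch of the same idea; the names and sizes are made up for illustration, and the LabVIEW analogues would be Initialize Array before the loop with Replace Array Subset inside it, rather than Build Array inside it:

```python
def build_by_append(lines_per_frame, samples_per_line):
    """Grow the frame by appending each new line -- the analogue of
    Build Array inside the loop. Each concatenation copies everything
    accumulated so far, giving O(n^2) total work."""
    frame = []
    for i in range(lines_per_frame):
        line = [float(i)] * samples_per_line
        frame = frame + [line]  # copies the whole frame every iteration
    return frame

def build_preallocated(lines_per_frame, samples_per_line):
    """Allocate the full frame once, then overwrite lines in place --
    the analogue of Initialize Array outside the loop plus
    Replace Array Subset inside it. No per-iteration copies."""
    frame = [[0.0] * samples_per_line for _ in range(lines_per_frame)]
    for i in range(lines_per_frame):
        frame[i] = [float(i)] * samples_per_line  # in-place replace
    return frame
```

Both functions produce identical frames; only the allocation pattern differs, and at high line counts that difference is exactly the slowdown described above.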

 

 

Also, to utilize another core, you could move the reading of the 1D data into a second For Loop. The data doesn't seem to have any dependencies, though timing could be an issue.

 

 

Displaying data takes a lot of CPU as well. Using another loop to render the Intensity Graph may help; you would need a notifier or queue to pass the data between the loops.
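
This is the classic producer/consumer pattern (in LabVIEW, the Queue Operations functions). As a language-neutral sketch, here is the same structure in Python threads; the frame contents and the "rendering" step are stand-ins for the DAQmx read and the Intensity Graph update:

```python
import queue
import threading

def acquire(q, n_frames):
    """Producer loop: stands in for the DAQmx read."""
    for i in range(n_frames):
        q.put([i] * 4)   # pretend each frame is 4 samples
    q.put(None)          # sentinel: acquisition finished

def display(q, rendered):
    """Consumer loop: stands in for the Intensity Graph update."""
    while True:
        frame = q.get()
        if frame is None:
            break
        rendered.append(sum(frame))  # stand-in for rendering work

q = queue.Queue()
rendered = []
t_acq = threading.Thread(target=acquire, args=(q, 5))
t_disp = threading.Thread(target=display, args=(q, rendered))
t_acq.start(); t_disp.start()
t_acq.join(); t_disp.join()
```

The key point is that the acquisition loop never waits on rendering: it just enqueues and moves on, so a slow display can no longer back up the hardware buffer.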

 

Finally, you can raise the execution priority of the VI in VI Properties to squeeze out some more performance.

 

 Hope that helps

 

Message 5 of 6

Also, put a 2 ms (or even 1 ms) delay in the loops.

 

Even a very basic loop, such as generating a random number with no delay inside it, will almost always send CPU usage through the roof. Instead of doing a little work in a lot of iterations, try to think in terms of doing more work in fewer iterations (which may not be achievable in your particular application).
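
The "more work per iteration" idea, sketched in Python with made-up sizes (in DAQmx terms it means requesting a block of samples per read call instead of one sample at a time):

```python
def process_per_sample(samples):
    """One loop iteration per sample: maximum loop overhead,
    minimum work per iteration."""
    total = 0.0
    for s in samples:
        total += s * s   # tiny amount of work each time around
    return total

def process_in_chunks(samples, chunk=100):
    """Fewer iterations, each handling a whole block -- e.g. reading
    100 samples per DAQmx Read call instead of 1."""
    total = 0.0
    for i in range(0, len(samples), chunk):
        block = samples[i:i + chunk]
        total += sum(s * s for s in block)
    return total
```

Both produce the same result; the chunked version simply pays the loop overhead far less often, which is the point of the advice above.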

 

craigc

LabVIEW 2012
Message 6 of 6