LabVIEW


multicore and queue

Hi, I am experiencing LabVIEW crashes since I started using Timed Loops for each of several port data acquisitions, assigning different loops to different processors (4 cores). I am using Queues for data communication between the loops. Could this be the cause of the crash, since all 4 cores are connected to the same queue? If so, I don't think a shared variable can act like a queue, so what is recommended?

 

Message 1 of 6

Siamak wrote:

Hi, I am experiencing LabVIEW crashes since I started using Timed Loops for each of several port data acquisitions, assigning different loops to different processors (4 cores). I am using Queues for data communication between the loops. Could this be the cause of the crash, since all 4 cores are connected to the same queue? If so, I don't think a shared variable can act like a queue, so what is recommended?

 


Unless you have some more evidence to suspect the Queue, I would think not. I have used more than 400 queues to link loops on 32-core machines with no issue. Although multi-core machines have only come into common use in PCs in the last couple of years, the technology that allows multiple processors to share memory has been around since the very first Star Wars movie.
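LabVIEW diagrams can't be pasted as text, but the point that a single queue safely accepts data from producers running on different cores can be illustrated with an analogous sketch in Python, where `queue.Queue` plays the role of a LabVIEW queue refnum (the thread count, item count, and tagging scheme here are all made up for the demo):

```python
import queue
import threading

def producer(q, loop_id, n_items):
    # Each "acquisition loop" enqueues tagged samples into the shared queue.
    for i in range(n_items):
        q.put((loop_id, i))

def consume_all(q, expected):
    # A single consumer drains the queue; arrival order may interleave
    # across producers, but no element is lost or corrupted.
    return [q.get() for _ in range(expected)]

shared_q = queue.Queue()          # thread-safe, like a LabVIEW queue refnum
threads = [threading.Thread(target=producer, args=(shared_q, c, 100))
           for c in range(4)]     # four "loops", as on a 4-core machine
for t in threads:
    t.start()
for t in threads:
    t.join()

items = consume_all(shared_q, 4 * 100)
print(len(items))  # 400: every enqueued element arrives exactly once
```

The design point is the same in both environments: the queue primitive itself does the locking, so concurrent enqueues from multiple cores are not, by themselves, a source of crashes.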

 

I recommend you post the code that experiences this issue, along with more info on the type of crash.

 

I have heard rumors of issues with timed loops, though, so if you aren't going to follow up on this post, try looking closely at or eliminating the timed loops (put a normal loop inside a timed sequence to control the core it runs on).

 

Ben

Message Edited by Ben on 05-28-2009 03:12 PM
Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 2 of 6

Thanks Ben,

 

I converted the normal While Loops to Timed Loops very recently, to see if I can monitor and actually control how the several loops execute on different CPU cores.

The code is unfortunately too big to be sent over! However, I will explain a little, hoping it sheds light on the issue:

All separate data acquisition loops put their data on a single queue for a TCP transmitter. Each loop has its own delay time, adjustable via a shared variable on the control panel. At an average of 200 ms, the speed has been fairly OK so far.

 

The slowdown started when I added an algorithm in a separate loop to work on the acquired data; it waits until the acquired data reaches a certain size before dequeuing. For this, the data acquisition loops enqueue their data into individual queues as well as that "TCP" queue, so the "algo" loop needs to see the required size on all queues before it starts dequeuing. By slowness, I actually mean that some data is acquired poorly or not at all; those loops don't get a chance to run. If I disable this "algo" loop so it does nothing (the individual queues are still there), it doesn't make much difference. To control this issue, I converted the loops to Timed Loops and distributed the cores among them. CPU usage does not show high at all! That is why I thought the queues between the loops were very resource hungry.
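The pattern described above (several acquisition loops feeding per-channel queues, and an "algo" loop that waits until every queue holds a full block before dequeuing) can be sketched in Python. This is only a rough analogy, not the poster's actual code: the block size, channel count, and timing values are invented, and checking `qsize()` in a tight loop stands in for polling LabVIEW's Get Queue Status, which can starve the producer loops unless the checker yields between polls:

```python
import queue
import threading
import time

BLOCK = 16                                     # hypothetical block size
channels = [queue.Queue() for _ in range(3)]   # per-channel queues

def acquire(q, n):
    # Stand-in for a DAQ loop: enqueue n samples with a small delay.
    for i in range(n):
        q.put(i)
        time.sleep(0.001)

def algo_take_block(chans, block):
    # Polling queue sizes in a tight loop would hog a core; sleeping
    # between checks yields the CPU so the acquisition loops can run.
    while any(q.qsize() < block for q in chans):
        time.sleep(0.01)
    # Only dequeue once every channel has a full block available.
    return [[q.get() for _ in range(block)] for q in chans]

threads = [threading.Thread(target=acquire, args=(q, BLOCK)) for q in channels]
for t in threads:
    t.start()
blocks = algo_take_block(channels, BLOCK)
for t in threads:
    t.join()
print([len(b) for b in blocks])  # [16, 16, 16]
```

If the checker loop spins without yielding, the symptom matches what is described: the acquisition loops barely run even though no single loop looks busy.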

 

Siamak

Message 3 of 6

Start simplifying your code to the minimum required to represent the work being done and still demonstrate the issue (please clearly state the issue).

 

Just trying to help,

 

Ben

Message 4 of 6

Hi, what I can ask about for the time being concerns the Timed Loop, with its Processor attribute under control as well as its Period and Offset. The attached picture illustrates the loop, within which array values can be changed on the control panel at run time to set the period, offset, and processor of the other loops as well as of its own loop. The question is: can it cause a large overhead if a loop sets its processor on every iteration?

 tl00.PNG
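As a rough analogy for the overhead question: on a desktop OS, assigning a thread to a processor is a system call, so re-applying the assignment on every iteration adds a fixed cost per pass compared with setting it once before the loop. A hypothetical Python timing sketch (using the Linux-only `os.sched_setaffinity`, guarded so it degrades to a plain loop elsewhere; the iteration count is arbitrary):

```python
import os
import time

def timed_loop(n_iters, set_every_iteration):
    # Compare setting the processor assignment once before the loop
    # versus re-applying it on every iteration (one syscall per pass).
    have_api = hasattr(os, "sched_setaffinity")    # Linux-only API
    cpus = os.sched_getaffinity(0) if have_api else set()
    if have_api and not set_every_iteration:
        os.sched_setaffinity(0, cpus)              # set once, outside the loop
    t0 = time.perf_counter()
    for _ in range(n_iters):
        if have_api and set_every_iteration:
            os.sched_setaffinity(0, cpus)          # set on every iteration
    return time.perf_counter() - t0

once = timed_loop(10_000, set_every_iteration=False)
every = timed_loop(10_000, set_every_iteration=True)
print(f"set once: {once:.4f}s, set every iteration: {every:.4f}s")
```

Whether the same cost model applies to the Timed Loop's Processor input would need to be measured in LabVIEW itself; the general principle is just that per-iteration reassignment pays a per-pass cost that a one-time assignment does not.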

Message 5 of 6
One other point I am trying to understand is about Timed Sequences. Since I need my data acquisition loops to run separately and simultaneously, each at its own assigned sampling rate (Period attribute), I can't imagine I can use a Timed Sequence with a normal loop inside it. A Timed Sequence seems to run the DAQ loops in sequence, one after the other (even on different cores). Am I correct?
Message 6 of 6