
LabVIEW


Question about producer consumer loops timing/data acquisition rate

Hi all.

 

To make my VI better, I'd like to change it into a producer/consumer architecture. Right now I collect data and graph/write to Excel in the same loop.

 

When I change to a producer/consumer, how do I continue to collect data and write it at a rate of, say, once every 5 or 10 seconds? Isn't the idea that the consumer loop will run slower than the producer? I guess I'm a little confused about the timing between them. Should I measure data continuously in the producer and just plot/write every 5 or 10 seconds in the consumer? I'm not dealing with a ton of data at one time, basically about 20 different scalars every couple of seconds, but I may need to run it for a week and I don't want anything to slow down over time. I'd like to keep that every-couple-of-seconds rate constant for as many days as I need.

 

Any thoughts or links are appreciated.

0 Kudos
Message 1 of 14
(2,305 Views)

Let's assume you are acquiring data at some fixed rate continuously (to make it concrete, let's assume you acquire data at 1 kHz, and let's say you collect it 1000 points at a time).  As you acquire the data, you also want to process it, maybe by displaying it, maybe by computing its mean, maybe by plotting it, maybe by saving it to disk.

 

The important thing is that the data keep coming in, 1000 points at a time, and you don't want to do anything that prevents the acquisition loop from finishing its loop in time to read the next batch of data.  The secret is to "get the data out of the Producer Loop" (say by putting it on a Queue) and transfer it to another simultaneously-running "parallel" loop, the Consumer, that can use the time to process, save, plot, etc. the data.  The "transfer mechanism" (such as a Queue) has a little "elasticity" -- it can hold several sets of data if it needs a "little more time" (for example, extra time to open a file for saving the data).  Generally, the Consumer Loop should run, on average, at the same rate as the Producer (i.e. it should process 1000 points/second, on average).  

 

The Producer Loop, because it is typically a "clocked" loop that "sleeps" for most of its cycle, waking up only when data are available, then getting rid of the data and going back to sleep, leaves almost all of the processing time to the Consumer.  Because the Consumer is running in its own loop, if it needs a little extra time, it doesn't impede the running of the Producer.
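Since LabVIEW is graphical, the pattern above can only be sketched in text as an analogy. Here is a minimal Python version of the same idea; the queue, batch contents, timing values, and the `STOP` sentinel are all illustrative, not part of any actual diagram:

```python
import queue
import threading
import time

data_queue = queue.Queue()   # plays the role of the LabVIEW Queue
STOP = object()              # sentinel telling the consumer to quit

def producer(n_batches=5):
    """Acquire a 'batch' of data at a fixed rate and enqueue it."""
    for i in range(n_batches):
        batch = [i] * 10          # stand-in for 1000 samples from hardware
        data_queue.put(batch)     # get the data out of the producer quickly
        time.sleep(0.01)          # the acquisition clock paces this loop
    data_queue.put(STOP)

def consumer(results):
    """Dequeue one batch per iteration and do the slow work here."""
    while True:
        batch = data_queue.get()  # blocks (sleeps) until data arrives
        if batch is STOP:
            break
        results.append(sum(batch) / len(batch))  # e.g. compute the mean

results = []
t = threading.Thread(target=producer)
t.start()
consumer(results)
t.join()
print(results)  # one mean per batch
```

Note that `data_queue.get()` blocks, so the consumer "sleeps" until data is available, just as described above, and the queue's elasticity absorbs any iteration where the consumer is momentarily slow.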

 

Bob Schor

Message 2 of 14
(2,281 Views)

Thanks for the reply, Bob; that definitely fills in some gaps in my knowledge about this.

 

So, since I'm collecting data from some instruments via serial commands, my sample rate is set using the count or timer VI (can't remember what it's called and don't have LV in front of me at the moment) that lets you wire in an input (in seconds), counts up in seconds, and returns True when its count equals the wired input. That True value is wired to a case structure, which then reads the instruments via serial commands. This is how I envision controlling the sample rate of the producer loop.

 

Do I need to control the consumer loop the same way? Or should I just let it be and not add any kind of wait/iteration-speed control?

 

0 Kudos
Message 3 of 14
(2,270 Views)

Hi Nadweb,

 

If you use a Queue to send your data, then with the simplest arrangement the Consumer will Dequeue one element per iteration of the loop, and will run at the 'same' rate (more or less) as the Producer (automatically, so to speak).

 

If you want to concatenate multiple chunks of data, you might use either shift registers or dequeue multiple elements (e.g. via a nested For Loop around the Dequeue). Note that the second approach can cause a problem if you're not careful and the number of samples sent doesn't divide evenly (e.g. it could fail to handle the last 4 elements if it waits for 5 before processing).

 

How long is the wait in your Producer loop? If it's not so long that you'd be annoyed waiting up to twice that time for the program to stop, you can use the probably-simpler "Wait (ms)" node (I think you're probably using the "Elapsed Time" Express VI right now, by the sounds of it).

As far as I can tell, the "Elapsed Time" VI doesn't actually contain any waiting functions, so your CPU usage is probably high unless you also have some sort of Wait node (with a small constant wired, e.g. 50 ms or 100 ms).
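The "Elapsed Time plus a small Wait" combination can be sketched in Python as follows; the `run` function, its timing values, and the stop condition are all made up for illustration (a real VI would check a stop button rather than a fixed horizon):

```python
import time

def run(sample_period=0.2, wait=0.01, stop_after=0.5):
    """Poll a software timer each iteration, but sleep briefly so the
    loop doesn't spin at 100% CPU while waiting for the next sample."""
    samples = 0
    start = time.monotonic()
    last_sample = start
    while time.monotonic() - start < stop_after:      # stop check every iteration
        if time.monotonic() - last_sample >= sample_period:
            samples += 1                              # "read the instruments" here
            last_sample += sample_period              # keep the period drift-free
        time.sleep(wait)   # small wait: low CPU, and stopping stays responsive
    return samples

print(run())
```

The point is the separation of concerns: the short `sleep` controls CPU usage and stop responsiveness, while the elapsed-time comparison controls the sample rate, so a user could set an hour-long sample period and the loop would still stop within tens of milliseconds.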


GCentral
0 Kudos
Message 4 of 14
(2,240 Views)

@Nadweb wrote:

Hi all.

 

To make my VI better, I'd like to change it into a producer/consumer architecture. Right now I collect data and graph/write to Excel in the same loop.

 

When I change to a producer/consumer, how do I continue to collect data and write it at a rate of, say, once every 5 or 10 seconds? Isn't the idea that the consumer loop will run slower than the producer? I guess I'm a little confused about the timing between them. Should I measure data continuously in the producer and just plot/write every 5 or 10 seconds in the consumer? I'm not dealing with a ton of data at one time, basically about 20 different scalars every couple of seconds, but I may need to run it for a week and I don't want anything to slow down over time. I'd like to keep that every-couple-of-seconds rate constant for as many days as I need.

 

Any thoughts or links are appreciated.


I like Producer/Consumer, but it may be overkill in this case. Are you having problems with your current architecture? Are there other reasons to convert this code to Producer/Consumer? Two seconds is an eternity in computer time, so handling that small amount of data in the data acquisition loop should not be a problem. If you're worried about the file getting too large and slowing down the process then you can close the file and open a new one after some amount of time. You might also consider the Asynchronous TDMS logger package which already has functionality to increment files based on either time or file size.

0 Kudos
Message 5 of 14
(2,231 Views)

Hi cbutcher,

 

Thanks for the reply.

 

OK, that makes sense; it sounds like, by using a queue, the producer loop will essentially govern the speed.

 

At the moment I don't really want to send chunks of data, but that's good to know.

 

Yes, the Elapsed Time VI is what I'm using, and it doesn't contain any waiting. I have not added an extra wait to the loop; perhaps I should add a 50 ms wait. I use Elapsed Time in case the user ever wants a long sample period of an hour or something; that way, stopping wouldn't take an hour.

 

0 Kudos
Message 6 of 14
(2,222 Views)

Hi johntrich1971,

 

Thanks for the reply.

 

My current architecture seems to work well, but recently, after about 24 hours of running, I noticed the VI running a little slowly. I had a 5-second sample rate set but was actually getting data every 11-ish seconds. Not the end of the world for my application, but I figured the architecture needed to be fixed, since right now I'm doing everything in one loop.

 

Will a large Excel file slow down the VI? I know it has to open and write to it each time. I'm using the Express Write to Measurement File VI and writing at the sample rate (generally every 10 seconds).

 


@johntrich1971 wrote:

@Nadweb wrote:

Hi all.

 

To make my VI better, I'd like to change it into a producer/consumer architecture. Right now I collect data and graph/write to Excel in the same loop.

 

When I change to a producer/consumer, how do I continue to collect data and write it at a rate of, say, once every 5 or 10 seconds? Isn't the idea that the consumer loop will run slower than the producer? I guess I'm a little confused about the timing between them. Should I measure data continuously in the producer and just plot/write every 5 or 10 seconds in the consumer? I'm not dealing with a ton of data at one time, basically about 20 different scalars every couple of seconds, but I may need to run it for a week and I don't want anything to slow down over time. I'd like to keep that every-couple-of-seconds rate constant for as many days as I need.

 

Any thoughts or links are appreciated.


I like Producer/Consumer, but it may be overkill in this case. Are you having problems with your current architecture? Are there other reasons to convert this code to Producer/Consumer? Two seconds is an eternity in computer time, so handling that small amount of data in the data acquisition loop should not be a problem. If you're worried about the file getting too large and slowing down the process then you can close the file and open a new one after some amount of time. You might also consider the Asynchronous TDMS logger package which already has functionality to increment files based on either time or file size.



0 Kudos
Message 7 of 14
(2,217 Views)

@Nadweb wrote:

Hi johntrich1971,

 

Will a large Excel file slow down the VI? I know it has to open and write to it each time. I'm using the Express Write to Measurement File VI and writing at the sample rate (generally every 10 seconds).



Step away from the Express VI. There is no reason to open and close the file each time, and as the file size increases, that overhead will increase. With that said, if you want to continue using the Express VI, perhaps you could break the data up into multiple files: add an index to the filename and increment that index once a given number of data points has been logged. Converting to Producer/Consumer is not going to help this problem. The main purpose of Producer/Consumer is to absorb the occasional write that takes longer than usual, as long as the consumer keeps up on average. If it continuously falls behind, the queue grows without bound (effectively a memory leak) and will ultimately crash.
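The index-in-the-filename idea can be sketched in Python like this; the `RotatingLogger` class, the file-name pattern, and the 3-rows-per-file limit are all invented for the example:

```python
import os
import tempfile

class RotatingLogger:
    """Keep one file handle open; roll over to log_0001.csv, log_0002.csv,
    ... after a fixed number of rows, so no single file grows forever."""

    def __init__(self, directory, rows_per_file=3):
        self.directory = directory
        self.rows_per_file = rows_per_file
        self.index = 0
        self.rows = 0
        self.fh = None
        self._open_next()

    def _open_next(self):
        if self.fh:
            self.fh.close()
        path = os.path.join(self.directory, f"log_{self.index:04d}.csv")
        self.fh = open(path, "w")   # opened once per file, not once per row
        self.index += 1
        self.rows = 0

    def write_row(self, values):
        self.fh.write(",".join(str(v) for v in values) + "\n")
        self.rows += 1
        if self.rows >= self.rows_per_file:
            self._open_next()       # rotate instead of growing forever

    def close(self):
        self.fh.close()

d = tempfile.mkdtemp()
log = RotatingLogger(d)
for i in range(7):
    log.write_row([i, i * 2])
log.close()
print(sorted(os.listdir(d)))  # 7 rows at 3 per file -> three files
```

Rotating by elapsed time instead of row count is the same structure with a timestamp comparison in place of the row counter.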

 

I still think that using the Asynchronous TDMS Logger would be beneficial. It is simple to use and since it runs asynchronously it accomplishes the same thing as your producer/consumer. There is a TDMS extension for Excel that allows you to open TDMS files in Excel. 

Message 8 of 14
(2,203 Views)

@Nadweb wrote:

Hi cbutcher,

...

Yes, the Elapsed Time VI is what I'm using, and it doesn't contain any waiting. I have not added an extra wait to the loop; perhaps I should add a 50 ms wait. I use Elapsed Time in case the user ever wants a long sample period of an hour or something; that way, stopping wouldn't take an hour.

 


So adding some wait will definitely help, but if you think someone might choose a "long" time (e.g. minutes, hours...), then you should break it into multiple chunks and wait a shorter time on each iteration (which is the pattern the Elapsed Time VI supports).

I'd definitely suggest adding a short wait (~100ms) to reduce the CPU usage (there's no need to spin a loop as fast as possible if it's not going to do anything most of the iterations).

 

As John said, you should work on opening the file before your loop, and then closing it afterwards. This will be much more effective, especially once the file becomes larger. Then you only need a Write Text File or similar inside the loop.

If you're opening an existing file before the loop, make sure to use Set File Position to the end if you want to append (otherwise you'll overwrite previous data and potentially have a mangled file).
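The open-once/append/close-once shape, including the seek-to-end step, might look like this in Python (file names and row contents are made up; Python's `seek(0, os.SEEK_END)` stands in for LabVIEW's Set File Position):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.csv")

# Pretend an earlier run already left some data in the file.
with open(path, "w") as f:
    f.write("old_row\n")

f = open(path, "r+")        # open ONCE, before the acquisition loop
f.seek(0, os.SEEK_END)      # move to the end: append rather than clobber
for i in range(3):          # stand-in for the acquisition loop
    f.write(f"row_{i}\n")   # only a plain write happens per iteration
f.close()                   # close ONCE, after the loop

with open(path) as f:
    print(f.read().splitlines())
# → ['old_row', 'row_0', 'row_1', 'row_2']
```

Without the seek, the first in-loop write would land at position 0 and overwrite `old_row`, which is exactly the "mangled file" risk described above.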

 

Something like this allows both a simple write method and a rotating file based on size/number of iterations, etc. (this is taken from a cRIO, so it may not be optimal for a desktop application, but it should show the idea):

cbutcher_0-1598630164090.png

Here I have a loop running twice a second that increments a counter when it doesn't change file, and then closes a File Reference and creates a new File Reference when it does.

I've taken the image with the different cases visible between two loops, but they have similar behaviour (i.e. on some value, take some action, otherwise, increment the counter).

This produces a set of log files continuously that I can check if I run into problems (e.g., did my cRIO reach ~100% CPU at the time that something went wrong, or is the memory being leaked?).

 


GCentral
Message 9 of 14
(2,196 Views)

I've been trying more and more not to use the Express VIs in more complicated block diagrams. After reading all this, I'd like to get rid of it to make the code as efficient as possible. I definitely don't need to be opening and closing the file every 10 seconds for days on end.

 

I've never used the TDMS logger, so I will have to look into it some more. At first glance it looks pretty straightforward.

 

 


@johntrich1971 wrote:

@Nadweb wrote:

Hi johntrich1971,

 

Will a large Excel file slow down the VI? I know it has to open and write to it each time. I'm using the Express Write to Measurement File VI and writing at the sample rate (generally every 10 seconds).



Step away from the Express VI. There is no reason to open and close the file each time, and as the file size increases, that overhead will increase. With that said, if you want to continue using the Express VI, perhaps you could break the data up into multiple files: add an index to the filename and increment that index once a given number of data points has been logged. Converting to Producer/Consumer is not going to help this problem. The main purpose of Producer/Consumer is to absorb the occasional write that takes longer than usual, as long as the consumer keeps up on average. If it continuously falls behind, the queue grows without bound (effectively a memory leak) and will ultimately crash.

 

I still think that using the Asynchronous TDMS Logger would be beneficial. It is simple to use and since it runs asynchronously it accomplishes the same thing as your producer/consumer. There is a TDMS extension for Excel that allows you to open TDMS files in Excel. 


 

0 Kudos
Message 10 of 14
(2,180 Views)