LabVIEW

In a Producer/Consumer structure utilizing a Queue, the Producer loop is producing faster than the Consumer loop can consume.

The main VI among the attached files is "Main_counter.vi," while the rest are encapsulated subVIs. I am using a DAQ board to perform edge counting and analog voltage measurements. I use edge counting to count the number of photons, which accumulates over time. I calculate the photon count per unit time in the subVI called "cal count." The case structure within the consumer loop exists to update the XY graph.

I placed the queue release inside the consumer loop because I want the VI to stop only after consuming all the data remaining in the queue when I stop it.

 

The problem is that I set the sample clock's sample rate to 1000 and the number of samples to 100, which means I'm measuring 100 samples every 0.1 seconds. (The problem also occurs with other combinations of sample rate and number of samples.)

However, when the VI runs for over 60 seconds, the data doesn't get consumed immediately as soon as it's produced; instead, it starts to accumulate.

What could be the issue?

0 Kudos
Message 1 of 13
(887 Views)

I would recommend Measuring Execution Time of Code Using LabVIEW to check the actual execution times of the producer and consumer loops. Then you can use the VI Profiler to locate the source of the delay.

-------------------------------------------------------
Control Lead | Intelline Inc
Message 2 of 13

I can see many issues that could impact performance and are even dangerous.

 

  • Why do you need to hammer the DO 1000x per second even if none of the inputs have changed?
  • Value properties (e.g. "stop") are much more expensive because they require a thread switch and synchronous execution. Use local variables instead.
  • Reading local variables (e.g. sample rate and # of samples) in the lower loop is dangerous, because these values are fixed once the upper loop starts; if the user changes the controls during execution, things will go haywire. Just wire from the controls outside the loop!
  • You are building many ever-growing arrays in shift registers. Since arrays must be contiguous in memory, new memory constantly needs to be allocated and all the data copied there. This is expensive and causes memory fragmentation: you'll run out of contiguous free memory long before you run out of memory. Do you know the final size of the data?
  • Aren't the x values equally spaced? Maybe all you need is a 2D array of final size and one waveform graph for all traces? They seem to share the x-axis. Why all that song and dance with the x scaling?
  • Your subVIs in the lower loop should probably be inlined.
  • Having only one chance to save the data, after all loops are complete, is questionable.
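The third bullet (ever-growing arrays vs. preallocation) can be sketched in text form. LabVIEW is graphical, so this is a Python analogue, not the actual fix: `grow_append` mimics Build Array in a shift register, `preallocate` mimics Initialize Array once plus Replace Array Subset per chunk.

```python
from array import array

def grow_append(chunks):
    # Anti-pattern analogue: build a brand-new contiguous buffer on every
    # chunk, like Build Array feeding a shift register in LabVIEW.
    data = array('d')
    for c in chunks:
        data = array('d', data.tolist() + list(c))  # full copy each time
    return data

def preallocate(chunks, total):
    # Preferred analogue: allocate the final-size buffer once, then fill
    # it in place (Initialize Array + Replace Array Subset).
    data = array('d', [0.0] * total)
    i = 0
    for c in chunks:
        data[i:i + len(c)] = array('d', c)
        i += len(c)
    return data
```

Both return the same data; only the allocation pattern differs, and that difference is what drives the fragmentation and copying cost described above.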

 

Can you explain the experiment, or even replace the instrument with a realistic simulation (same data sizes, etc.) so we can experiment a little? What are the sizes of the various data structures?

Message 3 of 13

I was not able to open your Main_Counter VI (you don't need to use underscores in naming VIs -- spaces are perfectly legal, and more "human-friendly"), so I can't understand your problem.

 

Here's a suggestion:  Look at the "Producer-Consumer Template (Data)" that ships with LabVIEW.  To find it, open LabVIEW 2015, click on File, New ... (the three dots are required -- it is the second entry in the Drop-down).  Navigate to "From Template", go down several levels until you find "Producer/Consumer Design Pattern (Data)" and click on it.

 

This design, in fact, is flawed, as you noted, but your solution is just as bad.  When LabVIEW 2016 added Asynchronous Channel Wires and the Stream Channel Wire replaced Producer/Consumer Queues, the responsibility was "fixed" -- the Producer knows when the data ends, and can set "last element?" as True, and once it sends that (which I'll call the "Sentinel" in the next paragraph), it can exit.  The Consumer just looks for the "last element?" to know when it is safe to exit.

 

Here's how to fix this with Queues:  The Producer "knows" when all the data has been sent (it is usually the loop with the Stop button).  When it exits, it does not release the Queue -- it sends one last element, something that is "unique".  With Consumer/Producers based on DAQmx Reads, the data being enqueued is usually an Array, so a "unique" value is an empty Array (a constant Array of the same type as your "data", but with no elements).  Once the Producer sends it, it simply stops.  Now, in the Consumer, you dequeue the data (array) and test if it is empty.  If it is not, then it contains data, and you process it.  But if it is empty, there is nothing to process, and no more coming, so you exit the Consumer loop and the Consumer releases the Queue.  No problem, the Queue is guaranteed to exist until both Producer and Consumer have finished with it.
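The sentinel pattern above translates directly to any queue-based producer/consumer. As a rough Python sketch of the same logic (the queue, the empty-array sentinel, and the consumer-side test are the pieces that correspond to the LabVIEW diagram):

```python
import queue
import threading

SENTINEL = []  # the "unique" value: an empty array of the data type

def producer(q, n_chunks):
    for i in range(n_chunks):
        q.put([i] * 100)   # stand-in for a DAQmx Read returning 100 samples
    q.put(SENTINEL)        # last element: signal "no more data", then stop

def consumer(q, results):
    while True:
        chunk = q.get()
        if not chunk:      # empty array -> nothing to process, nothing coming
            break          # safe to exit and release the queue here
        results.append(sum(chunk))  # stand-in for processing

q = queue.Queue()
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()
producer(q, 5)
t.join()
```

Exactly as Bob describes: the producer never touches the queue's lifetime, and the consumer only releases it after the sentinel arrives, so no data is lost at shutdown.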

 

Here's a very short lesson on "common DAQmx Read Loops", and what you might be doing "wrong" (as I stated, I can't "see" your code).  Your Producer While Loop should include only a DAQmx Read, potentially specifying N channels/N samples, with the output being an "Array of something" (for ease of Sentinels, see preceding paragraph).

 

Suppose you are sampling 1000 points at 1 kHz.  When you enter the loop and do the first DAQmx Read, it will "wait" 1.000 seconds and then dump out an array of 1000 samples (this probably takes less than 1 millisecond, but certainly less than 0.1 second).  You enqueue these data, and "boom", it's out of the Producer Loop and the loop starts its next iteration.  But if you specified "Continuous Sampling" (did I mention this?), the DAQmx Read didn't wait for the While Loop to finish, it was busy taking the next 1000 points.  The thing that "clocks" the Producer Loop, automatically, is the DAQmx Read -- it has to take 1000 samples / (1000 samples/sec) = 1.000 second.
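The pacing argument can also be shown in miniature. Below, a simulated blocking read stands in for DAQmx Read in continuous-sampling mode (scaled down to 100 samples at 1 kHz so it runs quickly); the point is that the read call, not anything else in the loop, is what clocks the producer:

```python
import time
import queue

SAMPLES = 100
RATE = 1000.0  # Hz

def daqmx_read_sim(start, iteration):
    # Simulated continuous-sampling read: blocks until the next batch of
    # SAMPLES points "exists" on the (simulated) hardware, which keeps
    # acquiring in the background regardless of the software loop.
    ready_at = start + (iteration + 1) * SAMPLES / RATE
    time.sleep(max(0.0, ready_at - time.time()))
    return [0.0] * SAMPLES

q = queue.Queue()
start = time.time()
for i in range(3):
    data = daqmx_read_sim(start, i)  # the read is what clocks the loop
    q.put(data)
elapsed = time.time() - start
# elapsed is ~0.3 s: three reads of 100 samples at 1 kHz, no matter how
# fast the rest of the loop body executes.
```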

 

How about timing the Consumer Loop?   Well, you want whatever you do there to take, on average, less than 1 second to process the 1000 points it will be getting.  Most LabVIEW operations (including taking FFTs, or plotting) should easily accomplish this, so both Producer and Consumer will run at the same speed unless something bogs down the Consumer.  What can do this?  Do you have any "timing functions" in there?  You don't want/need them -- the Consumer is being "clocked" by the 1 data packet / second coming from the Producer, so the Consumer is also running at 1 Hz.

 

So try rewriting your code using the Producer/Consumer Template, implement a proper Sentinel, and you'll have code that runs with many fewer problems (partly because it has become much simpler).

 

Bob Schor

Message 4 of 13

@altenbach

The main purpose of this code is to calculate and store the number of photons measured by an SPCM (Single Photon Counting Module) through edge counting using a DAQ board.

Additionally, there is a setup involving a photodetector, and its output is also intended to be stored as an analog input via the DAQ board.

Lastly, it needs to record the measurements from a wavemeter that measures the laser wavelength. If the wavemeter is connected, it should save its measurement value at regular intervals, and if it's not connected, it should save 0.

 

I created timing pulses generated through CO pulses to coincide with the Sample Rate's period. Each rising edge of these pulses is meant to trigger precise periodic measurements from the SPCM.

The reason for setting the sample rate to 1000 is to check the SPCM measurement every 1 ms.

The Number of Samples is set to 100 to ensure the code runs smoothly for approximately 1-2 minutes without issues.

When the Number of Samples is set to around 10, the code starts to slow down in less than a minute, making data collection difficult.

 

I would like to collect three types of data (SPCM, photodetector, wavemeter) at 1-millisecond intervals for approximately 5-10 minutes. While it's a bit inconvenient that the wavemeter measurements do not sync with the timing pulses of the other two data sources, resulting in measurements every 100 milliseconds, it's not a major concern.

 

The reason for using three independent XY graphs is for convenience in visualization. If it significantly affects performance, they can be combined as they share the same X-axis.

The periodic updating of the X scale's maximum and minimum values in the consumer loop is there simply because it is the only updating approach I am familiar with.

I also used subVIs for convenience in code visualization. I will follow your suggestion.

Message 5 of 13

For the 1 ms, I was talking about the small independent loop that just takes the 8 Boolean controls and sends their values to external hardware. There is no reason to spin this faster than the user can possibly operate the switches (use an event structure!). The output only needs to update when one of the switches changes, not be hammered 1000x per second.

 

[attached image: altenbach_0-1705945519562.png]

 

 

You are obviously thrashing the memory. To lower the load, maybe all you need is to chart the last 1k points and stream the data directly to a binary file for post-analysis.
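The "bounded chart plus streamed file" idea, sketched in Python as an analogue of the LabVIEW diagram (a fixed-length deque plays the role of a Waveform Chart's history buffer; a byte stream stands in for an open binary file):

```python
import struct
from collections import deque
from io import BytesIO

CHART_LEN = 1000
chart = deque(maxlen=CHART_LEN)   # bounded display history, never grows
log = BytesIO()                   # stands in for an open binary log file

def consume(samples):
    chart.extend(samples)         # display keeps only the most recent tail
    log.write(struct.pack(f"<{len(samples)}d", *samples))  # full record kept on "disk"

# Simulate 50 chunks of 100 samples (5000 points total).
for chunk_start in range(0, 5000, 100):
    consume([float(x) for x in range(chunk_start, chunk_start + 100)])
```

The in-memory footprint stays constant while every sample still lands in the file, which is exactly what breaks the ever-growing-array cycle described earlier in the thread.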

Message 6 of 13

@altenbach wrote:

...(use an event structure!).


Here's how that could look:

 

[attached image: altenbach_0-1705948006356.png]
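In case the screenshot doesn't render, the same idea in rough Python pseudocode: an update handler that only pushes the Boolean pattern to hardware on a value change, like a Value Change event case in an Event Structure (the `write_fn` hardware call is purely illustrative):

```python
def make_do_updater(write_fn):
    """Only pushes a new Boolean pattern to hardware when it changes,
    analogous to a Value Change event in a LabVIEW Event Structure."""
    last = {"pattern": None}
    def on_value_change(pattern):
        if pattern != last["pattern"]:
            write_fn(pattern)          # hypothetical hardware write
            last["pattern"] = pattern
    return on_value_change

writes = []                            # record of "hardware" writes
update = make_do_updater(writes.append)
update([True] * 8)    # first change: written
update([True] * 8)    # unchanged: skipped, no hardware traffic
update([False] * 8)   # change: written
```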

 

 

 

Message 7 of 13

[EDIT: since I started this reply altenbach already illustrated a much better way to improve the DO loop than my quick-n-dirty method at the end of my post below]

 

Additionally...

 

1. Your AI task isn't synced to your Ctr edge-counting task -- not even if you've physically wired a connection between /Dev1/PFI38 and /Dev2/PFI0, which you *need* to do.  Your app depends on the two tasks sharing a clock (which might be the case if you've done the right physical wiring) and starting at the same time (which is NOT the case).

    Assuming both tasks and PFI pins are driven by the 1000 Hz "CounterClockOutput", you just need to add a data dependency such that the AI task is started before the CounterClockOutput task.  You did this for your edge count task but would also need to do it for AI.

 

2. I have no idea what goes on with the wavemeter dll call, but my suspicions would steer me toward wondering if it might slow down the producer loop, which is not your main problem.  So not a main suspect at the moment.  (Though the way you replace just one value out of 100 doesn't make any sense.  Just send a scalar in your queue cluster!)

 

3. Your producer loop probably should stop on a DAQmx error from either your AI task or your edge count task.  And if it *does* stop, you need to let your consumer loop know.  Right now it'll just get stuck.

 

4. In your consumer loop, there are better ways available to track time and "counts/sec" without needing to run that inner loop 100 times for every chunk of 100 samples.  Granted, I doubt this is where you're being slowed down, but a fixed sample rate doesn't require you to go through such gyrations.
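A sketch of what point 4 means, with Python standing in for the block diagram (names are illustrative): with a fixed sample rate, the per-sample timestamps follow directly from the chunk index, and counts/sec is just a first difference of the cumulative edge count, so the whole chunk can be handled in one pass.

```python
def chunk_times_and_rates(cum_counts, prev_count, chunk_index, n, dt):
    """Per-chunk timestamps and counts/sec from a cumulative edge-count
    read. One pass over the chunk instead of 100 per-sample iterations."""
    t0 = chunk_index * n * dt              # start time of this chunk
    times = [t0 + i * dt for i in range(n)]
    rates = []
    last = prev_count                      # carried over (shift register)
    for c in cum_counts:
        rates.append((c - last) / dt)      # first difference -> counts/sec
        last = c
    return times, rates, last              # 'last' feeds the next chunk
```

In LabVIEW this would be whole-array operations (Ramp, subtract a shifted copy) with the previous chunk's final count carried in a shift register.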

 

Otherwise I'd echo what others have been saying.  Change your live display from XY graphs to Waveform Charts.  Then you can get rid of several other inefficient constructs in your consumer loop (property nodes for XY scales, unbounded growing arrays, repeated redraws of entire dataset in your XY graphs).

   And as altenbach pointed out, the attempt to blindly hammer the DO loop at 1000 Hz is certainly not helping any.  The use of the DAQ Assistant there may also be a significant source of inefficiency.  As a very quick experiment, try running that DO loop with a 500-1000 msec wait instead of 1 msec.  That shouldn't be your long-term solution, but is at least very easy to try.

 

 

-Kevin P

 

 

P.S.  Despite all the critiques you've gotten, I also see efforts to do right things, such as using a producer/consumer architecture, starting your edge count task before its sample clock, trying to share termination conditions for parallel loops (this needs more work, but I see the attempt...)

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 8 of 13

@Kevin_Price wrote:

I have no idea what goes on with the wavemeter dll call, but my suspicions would steer me toward wondering if it might slow down the producer loop, which is not your main problem.


That dll call raised my suspicion too. Does it really need to run in the UI thread? How often does that wavelength actually change? (constantly? never during a measurement set? When certain buttons are pressed by the user? etc.)

Message 9 of 13

@altenbach

I understand what you're saying. The DO loop doesn't need to be updated every 1ms, so using an event structure as you suggested to reduce the load would be helpful.

Message 10 of 13