LabVIEW


The basics of creating continuous acquisition

You can also look into channel wires.

http://www.ni.com/product-documentation/53423/en/

https://forums.ni.com/t5/LabVIEW-Channel-Wires/Getting-Started-With-Channel-Wires/gpm-p/3505658

 

This is similar in concept to using the producer/consumer architecture (or Queued message handler - I get them mixed up) and using a Queue to send data between the loops.

 

Stream data from your DAQ acquisition loop to the processing/save loop, thus allowing data acquisition to continue uninterrupted. 

Message 11 of 18

I just want to focus on one specific point.

 


@KRICLA wrote:

...The producer loop will then be "time consistent" if I don't slow it down and I can time and load my consumer at whichever rate I may choose?


You shouldn't aim to "time" your consumer loop by any manual method.  A producer will enqueue data and the consumer will dequeue it.  The Dequeue operation itself will serve to time your consumer loop *automatically*.  When the consumer is keeping up with the producer, it simply waits at the Dequeue until the producer gets around to Enqueuing new data.  The consumer can then immediately receive the data and deal with it.   If a backlog of data builds up in the queue when the consumer gets busy with something, the consumer can later iterate much *faster* than the producer to catch up.

In the long run, though, the consumer can't just run at any old rate: a stable system must consume and process at the same *average* rate that the data is produced.  The big advantage of queues in a producer-consumer scheme is that they provide *temporary* buffering while isolating the DAQ producer loop from occasional time-consuming operations (such as attempts to access a file on a network drive when your wifi adapter first has to wake up from power-saving mode).

 

The reason it's important to isolate the DAQ producer into a clean, tight loop is that if your DAQ Reads don't keep up with the acquisition rate, you'll get a buffer overflow error, and as soon as that happens even once, your DAQ task is dead and will no longer give you any data. 

    By adopting the P-C pattern, the DAQ loop can keep servicing the task and enqueuing data while the consumer is temporarily slowed down.  During this time, the queue builds up a backlog of data, which the consumer should soon get a chance to start consuming.  By *avoiding* having your own manual timing in the consumer loop, a consumer can typically chew through a big backlog really fast and catch back up.
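Here's a minimal Python sketch of the pattern Kevin describes (LabVIEW is graphical, so this is only a text-language analogy; the names, rates, and data are made up). The point to notice is that the blocking dequeue is what times the consumer -- there is no manual wait anywhere in the consumer loop:

```python
# Minimal Python analogy of the LabVIEW producer/consumer pattern described
# above. Names, rates, and data are illustrative, not from the original VI.
import queue
import threading
import time

data_q = queue.Queue()          # unbounded, like an unsized LabVIEW queue
SENTINEL = None                 # tells the consumer to exit

def producer(n_chunks):
    for i in range(n_chunks):
        chunk = [i] * 100       # stand-in for 100 samples from a DAQ read
        data_q.put(chunk)       # "Enqueue Element"
        time.sleep(0.01)        # stand-in for the hardware-timed read
    data_q.put(SENTINEL)

def consumer(results):
    while True:
        chunk = data_q.get()    # "Dequeue Element": blocks until data arrives,
        if chunk is SENTINEL:   # so no manual Wait is needed in this loop
            break
        results.append(sum(chunk) / len(chunk))  # process (here: average)

results = []
p = threading.Thread(target=producer, args=(5,))
c = threading.Thread(target=consumer, args=(results,))
p.start(); c.start()
p.join(); c.join()
print(results)
```

If the consumer falls behind, `data_q` simply grows for a while; when the consumer frees up, `get()` returns immediately on every iteration and it chews through the backlog at full speed -- exactly the catch-up behavior described above.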

 

 

-Kevin P

 

Edit: Bob Schor's recommendation for using a Stream Channel rather than a queue came in while I was typing.  I endorse it.

   I haven't personally jumped on the Channel Wire bandwagon as we have a fair investment in libraries and reuse code built around standard queues, but he's convinced me that it's likely the better choice for new code most of the time.

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 12 of 18

I second the idea of using Channel Wires.  A Stream Channel is designed for the Producer/Consumer Design Pattern.  It's like a Queue except:

  • There's no need to "Obtain Queue" or "Release Queue" -- only the logical equivalent of "Enqueue" (pass to Channel Writer) and "Dequeue" (get from Channel Reader).
  • Stopping the Channel (including letting the Consumer "know" there are no more data coming, so it should exit) is trivial -- the Producer just sets "Last Element?" to True (and exits its Producer Loop).
  • The "Pipe" carrying the data moves "in the logical direction", namely the Writer's output is an indicator, and the Reader's input is a control.
  • The asynchronous nature of the data flow is indicated by the "Pipe" metaphor, which goes over the boundaries of Structures rather than trying to "tunnel" through them.
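For readers who think better in text, here's a rough Python analogy of the "Last Element?" stopping behavior Bob describes (the function names are invented for illustration; real Channel Wires are a LabVIEW-only construct):

```python
# Rough Python analogy of a Stream Channel's "Last Element?" terminal:
# each write carries the data plus a flag, and the reader exits when the
# flag is True -- no separate Obtain/Release step is needed.
import queue

stream = queue.Queue()

def channel_write(data, last=False):
    stream.put((data, last))      # Writer: data plus "Last Element?" flag

def channel_read():
    return stream.get()           # Reader: returns (data, last) pair

# Producer side: mark the final element as the last one
for i in range(3):
    channel_write(i, last=(i == 2))

# Consumer side: exits cleanly when "Last Element?" is True
received = []
while True:
    data, last = channel_read()
    received.append(data)
    if last:
        break
print(received)
```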

Bob "Channel Wire Enthusiast" Schor

 

P.S. -- I put an example of my favorite Channel, the Messenger Channel, here, in case you are interested.

Message 13 of 18

I've run into some other problems which have obstructed my progress in this matter. Now I'm back.

 

My answers to some point-outs:

- No manual timing of my consumer, check.

- Stream channel vs queuing, I'll go for queuing because I've already started with that.

 

I've attached the current frankenstein('s monster) VI that I'm working on, if someone is keen on having a look to help me sort out the issues. The VI won't run because it needs several support files and sub-VIs etc. I've implemented the producer/consumer scheme and some controls to start/stop everything. The queue data format should be an array of waveforms since I sample multiple channels.

 

Currently I have three problems:

-After running a minute or two the program stops due to memory overflow... I presume that the consumer loop empties the queue each iteration and processes the data, so why is there data left, what is filling up the memory, and how do I prevent this? Or is my consumer only picking up one value at a time, thus not consuming enough? Should I use "Flush Queue" instead of "Dequeue"?

 

-All my charts, even the waveform chart I've wired directly to the Dequeue, display only gibberish. Is this simply related to my signal problems, or have I done something completely wrong?

 

-I now want to make a 10 Hz signal from the processed signals I have, and save it in a suitable manner to .lvm. Is there once again a convenient way of achieving this? I don't trust the "sample compression" as it is implemented at the moment. Is it perhaps better to buffer some of the data and save at certain intervals instead of writing data to file at high frequency? Use a queue again to achieve this?
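As an aside for later readers: the 1000 Hz to 10 Hz reduction asked about here amounts to averaging each consecutive block of 100 samples into one output point. A plain-Python sketch (a hypothetical helper, not anything from the attached VI):

```python
# Block-averaging decimation: 1000 Hz in, 10 Hz out when factor=100.
def decimate_by_mean(samples, factor=100):
    # Drop any trailing partial block, then average each full block
    n_blocks = len(samples) // factor
    return [sum(samples[i*factor:(i+1)*factor]) / factor
            for i in range(n_blocks)]

one_second = list(range(1000))         # 1 s of 1000 Hz data: 0, 1, ..., 999
ten_hz = decimate_by_mean(one_second)  # 10 output points for that second
print(len(ten_hz), ten_hz[0])          # first block averages (0+...+99)/100
```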

 

A few stupid questions... I wasn't expecting this little task to become this complicated; I have never worked with LabVIEW before and have absolutely no time to spend on this since I desperately need to get going with the experiments.

Message 14 of 18

I've run into some other issues, but now I'm back at the task of sorting out my LabVIEW VI.

 

Some answers related to posts:

-No timing of my consumer, check, I'll avoid that.

-I'll focus on queue rather than stream channel because I've already started using that, but if and when I give up on queues then I can give it a go.

 

I've attached the current frankenstein('s monster) VI I'm working on for interested LabVIEWers to view.

 

The problem I have now is that when I run, I fill up the memory buffer and the program crashes. I was under the impression that dequeuing automatically sends chunks of data as "pallen" described it, but from reading it appears that Dequeue only sends one point at a time? That would make the whole concept fail, since the idea was to have a fast-running acquisition and a slower processing loop. I've understood that there's a "Flush Queue" to drain the queue; is there another alternative where I can send multiple points in one loop iteration, so that my process/consumer loop will keep up with the acquisition?
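For reference, the difference between a "Dequeue Element"-style call (one element per call) and a "Flush Queue"-style drain (everything currently pending, at once) can be mimicked in Python like this (illustrative only):

```python
# "Flush Queue" analog: drain every currently-pending element in one call,
# rather than one element per Dequeue.
import queue

q = queue.Queue()
for i in range(5):
    q.put(i)

def flush(q):
    items = []
    while True:
        try:
            items.append(q.get_nowait())  # take without blocking
        except queue.Empty:
            return items                  # queue drained: return the batch

flushed = flush(q)
print(flushed)   # the whole backlog in one go; q is now empty
```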

 

I'm also thinking about timing my consumer to chew through e.g. 100 samples of my 1000 Hz data and average them to create my desired 10 Hz signal, if that would be a possible solution: when the producer has run 100 iterations, run the consumer, process the data, and start over...

Message 15 of 18

- In your producer loop, you should specify the # of samples to read in your call to DAQmx Read.vi.  With your sample rate of 1000, I'd recommend reading 100 samples at a time.  Right now, you've left the input unwired and the default behavior is to read "all available samples".

    The problem now is that the loop might run *so* fast that after reading all available samples one iteration, it loops around to the next iteration *so quickly* that there are 0 available on the next iteration.  So DAQmx Read dutifully returns no samples -- you get an *empty* array of waveforms.   That's why the consumer code can't split the array at index 1.

 

- Dequeue retrieves 1 queue element at a time.  In your case, once you make the mod I suggested, each single "element" will be an array of waveforms which each contain 100 samples.
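To make Kevin's point concrete in Python terms (placeholder data, not the real VI): one Dequeue returns one *element*, and after the suggested mod each element is itself a whole multi-channel, 100-sample block -- not a single point.

```python
# One queue element after the suggested mod: a 2-channel x 100-sample block.
import queue

q = queue.Queue()

# Producer: each DAQ read (100 samples on 2 channels) becomes ONE element
block = [[0.0] * 100, [1.0] * 100]   # [channel][sample], placeholder data
q.put(block)

# Consumer: a single Dequeue yields the whole block, not one point
element = q.get()
print(len(element), len(element[0]))  # channels, samples per channel
```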

 

- no comment on the memory fillup and crash.  The code is just too frankenstein-ish to deal with in detail.

 

 

-Kevin P

Message 16 of 18

@Kevin_Price wrote:

- In your producer loop, you should specify the # of samples to read in your call to DAQmx Read.vi.  With your sample rate of 1000, I'd recommend reading 100 samples at a time.  Right now, you've left the input unwired and the default behavior is to read "all available samples".

    The problem now is that the loop might run *so* fast that after reading all available samples one iteration, it loops around to the next iteration *so quickly* that there are 0 available on the next iteration.  So DAQmx Read dutifully returns no samples -- you get an *empty* array of waveforms.   That's why the consumer code can't split the array at index 1.

 

- Dequeue retrieves 1 queue element at a time.  In your case, once you make the mod I suggested, each single "element" will be an array of waveforms which each contain 100 samples.

 

- no comment on the memory fillup and crash.  The code is just too frankenstein-ish to deal with in detail.

 

 

-Kevin P


Thanks Kevin, tried it and the program now runs without overloading the memory.

The instant readings on the waveform charts are still not displaying anything (except the one I've added before all computations), but running averages are displayed. I assume it has to do with either the sample compression or the non-orthodox signal splitting and transformation to dynamic data.
 
I'd also like to make two secondary queues for output data: one for the displays/charts so they don't have to run full speed, and one for file export. If it's a really stupid idea I'd appreciate it if someone stops me; otherwise I'll give it a go to get the program going.
Message 17 of 18

Most of us LabVIEW old-timers specifically avoid dynamic data wires, the DAQ Assistant, and many other "Express VIs".  The behavior of such things tends to be much more opaque and variable -- it can change based on double-clicking and changing properties that aren't immediately visible on the diagram.

 

Whether or not to add more queues and loops is a question without a universal definitive answer.  I usually make a dedicated loop and queue for file writing.  I often find I don't need to dedicate a loop & queue for GUI chart updates.  Your 1000 Hz sample rate isn't pushing any limits, so you should be able to make things work out either way.
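The "secondary queues" idea under discussion is a simple fan-out: the processing loop pushes each result into every downstream queue, and each of those queues is drained by its own independent loop. A hedged Python sketch (queue names invented for illustration):

```python
# Fan-out to multiple consumer queues: GUI updates and file logging each
# get their own queue, drained by their own loop at their own pace.
import queue

display_q = queue.Queue()   # a GUI loop would dequeue from this
file_q = queue.Queue()      # a file loop would batch these to disk

def fan_out(result):
    display_q.put(result)   # same result goes to every downstream queue
    file_q.put(result)

for r in [1.0, 2.0, 3.0]:   # stand-ins for processed 10 Hz points
    fan_out(r)

print(display_q.qsize(), file_q.qsize())  # each queue received every result
```

A design note consistent with Kevin's advice: the cost of fan-out is one extra loop per queue, so it's worth it for file writing (slow, bursty) but often unnecessary for chart updates at these rates.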

 

 

-Kevin P
Message 18 of 18