
Real-time display at high data acquisition rate with continuous saving


Hello everyone,

 

I have encountered a problem and could use some help.

 

I am collecting voltages and corresponding currents via a PCI-6221 card. While acquiring data, I would like to see the values on an XY graph, so that I can also check current vs. voltage, not only voltage/current vs. time. In addition, the data should be saved during the acquisition.

First of all, I create analog input channels with DAQmx Create Virtual Channel, then I define the sampling rate and mode and start the tasks. The DAQmx Read is placed in a while loop. Due to the high noise of the signal, I want to average, for example, every 200 points of current and plot this versus the average acquisition time or the average acquired voltage. The saving of the data should also be included in the while loop.

 

My first thought was to run the data acquisition in Continuous Mode and use e.g. 10 kS/s as the sampling rate. The DAQmx Read is configured to 1D Wfm N Chan N Samp (there are 4 channels in total) and the number of samples per channel is, for example, 1000 to prevent buffer over/underwrite errors. Each of these packages of 1000 samples should then be separated (I am using Index Array at the moment). After getting the separate waveforms out of the 1D array of waveforms, I extract the Y values with Get Waveform Components. The resulting arrays then need to be processed to get average values.
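
Since I cannot attach the VI, here is a rough text sketch of that loop in Python with NI's nidaqmx package, just to make the idea concrete; the device name "Dev1", the channel range, and the fixed iteration count are placeholders, not my actual configuration:

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Four analog-input channels (device name "Dev1" is a placeholder).
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
    # Continuous acquisition at 10 kS/s per channel; samps_per_chan
    # here only sizes the host-side buffer.
    task.timing.cfg_samp_clk_timing(rate=10_000,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=10_000)
    task.start()
    for _ in range(100):  # stands in for the while loop
        # Blocks until 1000 samples per channel are available, like
        # DAQmx Read (1D Wfm N Chan N Samp) set to 1000 samples.
        data = np.asarray(task.read(number_of_samples_per_channel=1000))
        # data has shape (4, 1000); average every 200 points per channel.
        means = data.reshape(4, -1, 200).mean(axis=2)  # -> shape (4, 5)
```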

 

But how can I manage to get these averages without delaying my code?

My idea/fear is the following: I read out 1000 samples after about 0.1 s. These are then divided into single waveforms, the time information is subtracted, some kind of for loop is used for the averaging (I am not sure how to do this exactly), the data is transferred to an XY graph and saved to a .dat file. After all that has happened (I hope I correctly understood data flow within a while loop), the code in the while loop starts again and the next 1000 samples are read out and processed.

But if the processing takes too long, the DAQmx Read executes too late, and from cycle to cycle the readout from the buffer lags further behind the data generation of the PCI-6221.

 

Is this concern reasonable? And how can I circumvent this? Does anyone know an effective way to average and save the data?

I mean, the first thing I would think of is increasing the Number of Samples per Channel, but this also increases the duration of the data processing.

 

The other question concerns the timing. If I understood correctly, the timestamp is only generated once when the task starts (with DAQmx Start Task), and the time difference between the data points is then calculated as 1 divided by the sampling rate. However, if the processing takes a significant amount of time, how do I ensure that this time error does not accumulate?
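
To illustrate the bookkeeping I have in mind (plain Python, made-up numbers): as long as no samples are lost, the time of each point should follow from its index and the hardware rate alone, so processing delays should not shift it:

```python
fs = 10_000.0  # hardware sampling rate in S/s (made-up value)
block = 200    # samples per average (made-up value)

def block_time(k, t0=0.0):
    """Mean acquisition time of the k-th 200-sample average.

    Sample n is taken at t0 + n / fs by the card's onboard clock, so the
    k-th block covers samples k*block .. k*block + block - 1, independent
    of when the PC actually got around to processing them.
    """
    return t0 + (k * block + (block - 1) / 2.0) / fs
```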

 

I am sorry for the long plain text!

 

You can find my example VI attached (only to show roughly what I was thinking; I know there are two averaging functions and the rates are not correctly set at the moment).

 

Best wishes and thank you in advance,

 

Mr KSE

 

PS: I have to add: imagine running the data acquisition on a really old and slow PC, for example a Pentium III.

PPS: I don't know why, but I can't attach my VI...

Message 1 of 17

Several questions. Are you running this using LabVIEW Real-Time? That is, do you have a Host PC that handles user interaction, displays, "analysis" (e.g. averaging), file I/O, etc., and a separate processor running LabVIEW RT (a PXI, cRIO, etc.) responsible for DAQmx, data sampling, digital I/O, etc.? Note that if you really want "Real-Time", you pretty much want an RT processor running a Real-Time OS (specifically not Windows) to ensure determinism in your code, with minimal jitter.

 

If so, then you absolutely can do this. I'm currently using a PXI (not even a PXIe) to sample up to 24 channels at 1 kHz, streaming the data to the PC, where it is written to disk, with displays (of all 24 channels) updated every 50 points (so the display updates at 20 Hz). I originally showed the averaged response (50-point averages), but (as it turned out) the users wanted to see the noise level (which I'd kindly filtered out), so I just displayed a single point out of the 50 instead of the average. To ensure the integrity of the data, one of my channels is a "clock", basically the index of the Timed Loop on the RT side, so I can verify that I don't "miss" any points (and I don't).

 

LabVIEW's ability to have multiple loops running in parallel, interacting through Queues, Notifiers, and other LabVIEW structures that impose "order" with minimal time cost, makes this possible.  It took a number of tries for the particular design I chose to solidify, partly because I failed to thoroughly document (and understand) the requirements before I started coding.  It was an interesting "learning experience".

 

My advice would be to fire up Microsoft Word and write as thorough a description as you can of what you want the software to accomplish. Include the type(s) of processors you will use to collect data, the types and acquisition rates of the data, any control requirements you have, something about how the data will be stored, what displays you want, what displays you need (that was a painful lesson -- one of the ones I wanted ultimately had to go ...), and what kind of user interaction you require. You can then start thinking about design choices.

 

Bob Schor

Message 2 of 17

Oops, I just re-read your post and see you are using a PCI-6221, which definitely runs in a PC. I'm assuming you are not configuring the PC with a Real-Time OS (which can be done, by the way). It's probably still possible to do what you want; you'll just need to be a little more careful about how your parallel loops interact.

 

BS

Message 3 of 17

Hello Bob,

 

Thank you for your reply.

To clarify: I do not need real-time in the sense that Windows would be insufficient as the OS. I only want to ensure that the timestamps have no systematic error; the absolute time is irrelevant, only the relative times need to be accurate on the order of seconds.

 

The problem is that I have to integrate, for example, currents over time, so the relative time should be fairly accurate. The program currently running in the lab seems to have a timing problem (in that program, the time for each average of data points is simply calculated by dividing the number of averaged points by the sampling rate, and these values are put into an array), because one can see that the rate at which the potential is changed is not constant, although the hardware produces a constant potential sweep rate.

If you plot the acquired data (collected over a longer period), you can see that the sweep rate (rate of change of the potential) increases over time; that is the reason I started to distrust the program (with an oscilloscope you can see that the generator itself produces constant sweep rates).

 

Best regards

Message 4 of 17

It comes down to the clock. The problem with Windows is that your program doesn't "own" the CPU and its cycles; Windows (the OS) does. However, your DAQ hardware has its own onboard clock to drive the A/D converter, and hardware to pump data directly into PC memory, so if you were, for example, to ask for a sample of 1000 points at 1 kHz, those 1000 points should be exactly 1.0000 milliseconds apart.

 

We've now reduced the problem to getting the 1000 samples to be taken at accurate and precise intervals.  One way to do this is to encapsulate the data acquisition loop in such a way as to remove anything "non-essential", then look at optimizing the run-time properties of the resultant VI.  Look into the Execution settings, particularly Priority (consider Time-Critical Priority) and Execution System (possibly Instrument I/O).  I don't have much experience trying to make a Real-Time OS out of Windows, but these are places you can tweak.  Also, use something like an RT FIFO to export the data from your Acquisition VI (don't even think of doing any processing of it within the VI) -- you've got a whole second to deal with only 1000 points (in my contrived example).

 

Bob Schor

Message 5 of 17
Having your code would help a lot. Try a different browser if you are having difficulty attaching, or zip up the VI.

Using continuous mode, you should be able to acquire with a very regular and precise time between samples. Your processing and saving should be done in a consumer loop, though you can probably take the mean without affecting the acquisition. Streaming to a TDMS file is another option. Have you run the shipping examples without any problems, just to acquire and display?
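
In text form, the pattern looks roughly like this (Python threads standing in for the two LabVIEW while loops, with an assumed 4-channel, 1000-samples-per-read chunk; in LabVIEW the Queue VIs take the place of data_q):

```python
import queue
import threading
import numpy as np

data_q = queue.Queue()        # plays the role of the queue between the loops
running = threading.Event()
running.set()

def producer(task):
    """Acquisition loop: only read from the DAQ buffer and enqueue."""
    while running.is_set():
        chunk = np.asarray(task.read(number_of_samples_per_channel=1000))
        data_q.put(chunk)     # hand off immediately; no processing here

def consumer(path):
    """Processing loop: averaging and disk writes stay off the acquisition path."""
    with open(path, "w") as f:
        while running.is_set() or not data_q.empty():
            try:
                chunk = data_q.get(timeout=0.5)
            except queue.Empty:
                continue
            # 200-point block means per channel, then one write per chunk.
            means = chunk.reshape(chunk.shape[0], -1, 200).mean(axis=2)
            np.savetxt(f, means.T)
```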
Message 6 of 17

Thank you, Bob, for confirming my concerns. It now seems clear to me that the "old" VI has the problem that the whole data processing is within the while loop.

 

I will now try to add my code. I want to say in advance that this code is not cleaned up, as I tried out some displays/things.

 

@Dennis_Knutson: I have to admit that I heard/read of queues and the producer/consumer loop for the first time today. I found an example which may be suitable for my application, but I have to understand it first:

 

https://decibel.ni.com/content/docs/DOC-9543

 

Could I simply add some kind of downsampling like in this example https://decibel.ni.com/content/docs/DOC-36352 and a graphical display to the consumer loop at the bottom? And what happens if the producer loop is much faster than the consumer? Are the samples simply stored in RAM?

 

And: Would that work for LabVIEW 8.5 too? Because one of the examples is for LabVIEW 8.6 or later...

 

PS: This time it seems to work! I am sorry that I cannot provide an image, but I don't have LabVIEW at home.

 

Message 7 of 17

The Consumer has to be (in the aggregate) faster than the Producer, or else you will lose data. However, you often don't need to "consume" all of the data. For example, suppose you are acquiring 16 channels of analog data at 1 kHz. Let's say you collect it a sample at a time (i.e. 16 points, once a millisecond) and immediately "export" it from the sampling loop via FIFO or Queue. You want to "consume" it by (a) saving the data to disk and (b) having a 16-channel display to see the data.

 

Writing to disk isn't too hard.  Suppose you accumulate 50 samples (a buffer of 16*50 = 800 points) and send this, using another Queue, to a disk-writing routine (with the file already open, so you only have to do a single Write).  You should easily be able to keep up.

 

As for the display, you can't really see much detail at 1 kHz, and are likely to lose the signal in the noise. So take those same 50 points, average them to produce 16 "samples of the samples" (actually "estimates of the samples") and display these. Now you are talking about updating your display at 20 Hz, which, again, is easily accomplished. If your data acquisition rates go up an order of magnitude, simply scale the buffer size -- the bottleneck in this scheme is the speed of writing to disk. Note that even if the disk can't do every write in 50 milliseconds (because, for example, it needs to find some free disk blocks, or Windows wants to do a virus scan, or something else), you should be fine as long as you use a big enough Queue to absorb the interruptions (consider a fixed-length Queue so you don't have the overhead of adding Queue elements -- a buffer size of 10 to 100 elements should be OK) and your output device is fast enough (please don't use floppies!).
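
In rough Python shorthand (the sizes are just the example numbers from this post, and the display function is hypothetical), the consumer side looks like:

```python
import queue
import numpy as np

# A fixed-size queue absorbs disk hiccups without growing without bound;
# if the consumer falls behind for good, put() blocks instead of silently
# eating RAM. 10 to 100 elements is plenty here.
disk_q = queue.Queue(maxsize=100)

def consume(display, f, block=50):
    """Per 50-sample block: one raw 16 x 50 write to disk, plus one
    averaged point per channel to the display (20 Hz updates at 1 kHz)."""
    while True:
        buf = disk_q.get()           # numpy array, shape (16, 50)
        buf.tofile(f)                # a single write per block (binary file)
        display(buf.mean(axis=1))    # 16 averaged values for the graph
```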

 

BS

Message 8 of 17
You also have examples that come with LabVIEW on the producer/consumer architecture.

Yes, that down-sampling seems like it might work for you. You say you want to take 200 samples and compute the mean, so simply request that number and use the Mean function on the Statistics palette. That will effectively change the dt in the waveform, so change that at the same time.
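
In text form, that step amounts to something like this (numpy standing in for the Mean function; n and dt are your example numbers):

```python
import numpy as np

def block_mean(y, n=200, dt=1e-4):
    """Average every n samples; the waveform's effective dt becomes n * dt."""
    trimmed = y[: len(y) // n * n]             # drop any ragged tail
    return trimmed.reshape(-1, n).mean(axis=1), n * dt
```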
Message 9 of 17

Thank you very much for your input!

 

I think I will have a look at the queue tutorial and then set up a producer loop for data acquisition with two consumers, for display and saving, both with the option of averaging. I hope that I can install another LabVIEW version tomorrow; otherwise I will post my result on Saturday. It would be really kind if you could have a short look at my code after I have finished. I will report as soon as possible!

 

Best wishes and thank you very much,

 

Mr KSE

Message 10 of 17