
LabVIEW


Do I have to dequeue each element separately?

I figured it out by just adding four new Dequeue Element blocks and four new TDMS writes in my consumer loop.

My issue now, though, is that the timestamp writes slower than I collect data: I get 100 times more data points than timestamps. Is this because I'm reading 100 samples at 1000 Hz? If I increase the timestamp rate so they are equal, will the time be recorded at each sample?
Message 11 of 32

You have changed several things, so it would be helpful if you posted your updated VI.

 

When you flush a queue with an array data type, the output is an array of clusters of arrays: each array that was enqueued appears in the output array as a cluster with the original array inside. The reason for this is to confuse inexperienced programmers. No. The real reason is that an array in LV is required to be rectangular. That means all rows have the same length and all columns have the same length (although rows can have a different length from columns). The cluster allows the individual arrays to have different lengths but still be returned in one array when the queue is flushed. This is documented in the help for Flush Queue, but the reasons may not be explained there.
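Since LabVIEW code can't be pasted as text, here is a rough Python analogy of that behavior: a rectangular 2D array cannot hold rows of different lengths, so each enqueued 1D array comes back in its own wrapper (the list element here plays the role of the cluster).

```python
# Python analogy (not LabVIEW): each enqueued 1D array keeps its own
# length, so a flush must return one wrapper per array rather than
# forcing everything into a single rectangular 2D array.
from queue import Queue

q = Queue()
q.put([1.0, 2.0, 3.0])   # a 3-element "array"
q.put([4.0, 5.0])        # a 2-element "array" -- different length

def flush(queue):
    """Return every enqueued element at once, like LabVIEW's Flush Queue."""
    out = []
    while not queue.empty():
        out.append(queue.get())   # each element stays wrapped, ragged is OK
    return out

flushed = flush(q)
print(flushed)   # [[1.0, 2.0, 3.0], [4.0, 5.0]]
```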

 

You could unbundle the array from each cluster inside a For Loop. The TDMS writes could be inside the loop, or you could combine the data into one 2D array. TDMS Write accepts 1D or 2D arrays but not 3D or higher-order arrays.
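A rough Python sketch of that unbundle-and-combine step (the 1-tuple stands in for the cluster wrapper; nothing here is an NI API):

```python
# Stand-in for "unbundle the array from each cluster in a For Loop":
# each 1-tuple plays the role of a LabVIEW cluster holding one 1D array.
flushed = [([1.0, 2.0, 3.0],), ([4.0, 5.0, 6.0],)]

rows = []
for cluster in flushed:      # the For Loop
    rows.append(cluster[0])  # Unbundle: pull the 1D array out

table = rows                 # equal-length rows -> a rectangular 2D array
print(table)                 # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
# A 2D array like this can go to one TDMS-style write; 3D cannot.
```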

 

Although you seem to have things working with matched numbers of enqueue and dequeue functions, I suspect that this method will cause problems in the future. For the benefit of anyone (including you) who might need to modify this code in the future, please document carefully where the data comes from and where it goes, mentioning the need for matching numbers.

 

Arrays scale nicely and automatically. If you can find a way to create one array of strings and one array of numerics and enqueue them, I think you will be better off in the long run. If you need to separate things at the TDMS write, either include an array of indexes or put markers in the array. The markers could be some value which can never occur, such as a row of all -9999.

 

Lynn

Message 13 of 32

I have attached an updated version of the VI. The problem is that the time is not being recorded to the spreadsheet accurately (also attached, but I deleted about 5000 rows of data so it would upload). It stops after a certain number of samples no matter what I try. This VI writes the time directly to the file in the consumer loop. I have also tried enqueueing it as a value in the producer loop, but that made no difference. Why would this happen? Also, would it be better to include my event marker stamp in the consumer loop as a direct TDMS write, or keep it in the producer loop like it is now?

 

Another thing that would be nice to figure out is how to make these data files smaller. One minute of data is about 22 MB, even with the defrag prior to saving. I need at least 60-90 minutes of continuous, uninterrupted data collection, so I can only imagine what the file size will be.

 

Thank you for the help!

Message 14 of 32

Another issue I am noticing is that if I stop the program after recording data, the time to restart keeps increasing, since it has to open that file, which keeps getting bigger and bigger each time, even if I am just trying to add a new sheet. Is there any way to avoid this other than starting a new file?

Message 15 of 32

Why not acquire the data as an array of waveforms and write those directly to the TDMS file? Waveforms include a timestamp generated by the DAQ device; it is the most accurate indication of the time the acquisition started. Writing the waveforms would eliminate the need to format and write a time string every time. By staying with one numeric datatype, the files would probably be smaller as well.

 

According to my calculations you do have a lot of data: 48 channels at 1000 samples per second for 90 minutes gives 48 * 1000 * 60 * 90 = 259.2E6 samples. At 8 bytes per DBL, that requires more than 2 GB to store. The timestamps and channel names add only a small amount. By writing waveforms you write only 48 timestamps and 48 dt values, not 5.4E6 time values.
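Spelling that arithmetic out (note the difference between decimal GB and binary GiB):

```python
# Back-of-envelope check of the storage estimate above.
channels = 48
rate = 1000        # samples per second per channel
minutes = 90

samples = channels * rate * 60 * minutes
print(samples)                  # 259200000 samples

bytes_dbl = samples * 8         # DBL = 8 bytes per sample
print(bytes_dbl)                # 2073600000 bytes, i.e. ~2.07 GB (decimal)
print(bytes_dbl / 2**30)        # ~1.93 GiB in binary units
```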

 

Lynn

Message 16 of 32

I had a direct TDMS write before I was advised here to use the queue system to ensure that the graphs updated properly. Before, with the direct write, the VI would get slower and slower and slower as data collection proceeded. How can I have a single while loop with a direct write and avoid this? Also, I can't stay with one data type, since I want the file to have the event description markers as the last column in the Excel sheet.

 

 

Speaking of amount of data: when I set the sample clock to a rate of 1 kHz and then set the voltage read to 100 samples per channel, how much data am I recording? How do these two settings play off each other?

Message 17 of 32

You can use the queue and the Producer/Consumer architecture with waveforms.

 

Another possibility might be to write a separate text file with the event markers and their timestamps, then combine them afterwards by aligning the timestamps.
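A minimal sketch of that post-processing step in Python. The data layout, event labels, and nearest-row rule here are all illustrative assumptions, not anything from the VI:

```python
# Align a small event-marker log with the data by timestamp (illustrative).
data_times = [0.0, 0.1, 0.2, 0.3, 0.4]          # one timestamp per data row
events = [(0.12, "valve open"), (0.31, "valve closed")]  # from marker file

# Attach each event to the last data row at or before its timestamp.
markers = [""] * len(data_times)
for t_event, label in events:
    row = max(i for i, t in enumerate(data_times) if t <= t_event)
    markers[row] = label

print(markers)   # ['', 'valve open', '', 'valve closed', '']
```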

 

The sample clock rate (1 kHz) tells the DAQ device how fast to acquire samples. It determines the time between successive samples and sets an upper limit on the bandwidth of the signal being measured. The number of samples per channel (100) tells the Read VI how much data to grab from the DAQ device. With those settings you need to call Read 10 times per second to get all the data. If you call it faster, it will just wait until 100 samples are available. If you call it less often, say once per second, eventually a buffer somewhere will overflow and you will start losing data and getting errors.
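The relationship between the two settings is just a ratio:

```python
# How the two DAQ settings interact: Read must be called
# rate / samples_per_read times per second to keep up.
rate = 1000              # sample clock: samples per second per channel
samples_per_read = 100   # samples per channel returned by each Read call

reads_per_second = rate / samples_per_read
print(reads_per_second)  # 10.0 -> call Read 10 times per second

seconds_per_read = samples_per_read / rate
print(seconds_per_read)  # 0.1 -> each Read returns 0.1 s of data
```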

 

Lynn

Message 18 of 32

Am I not queueing an array of waveforms right now? What do you mean by writing waveforms directly, then?

 

So what you are saying is that the two numbers should be the same if I want my sample rate and my read/output-to-graph rate to be in sync?

Message 19 of 32

No. Your most recently posted VI enqueues a 2D array of floating-point numbers (DBL). A waveform is a special type of cluster containing a timestamp for the start time, a numeric scalar for dt, a 1D array of floating-point values, and an arbitrary number of attributes. The first image below shows DAQmx Read.vi with the 2D array output and Enqueue with the 2D array input. The second image shows the same thing set up for an array of waveforms.
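As a text stand-in for that cluster structure (a Python sketch, not the NI waveform type), note how the per-sample times are implied by t0 and dt rather than stored:

```python
# Python stand-in for a LabVIEW waveform: start time, dt, Y data, attributes.
from dataclasses import dataclass, field

@dataclass
class Waveform:
    t0: float                  # timestamp of the first sample
    dt: float                  # seconds between samples (1 / rate)
    y: list                    # the 1D array of sample values
    attributes: dict = field(default_factory=dict)  # e.g. channel name

# One waveform per channel; sample i occurs at t0 + i*dt,
# so no per-sample timestamps are needed.
wf = Waveform(t0=0.0, dt=0.001, y=[0.1, 0.2, 0.3],
              attributes={"channel": "ai0"})
times = [wf.t0 + i * wf.dt for i in range(len(wf.y))]
print(times)   # [0.0, 0.001, 0.002]
```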

 

Enqueue 2D array.png

 

Enqueue 1D Array of Waveform.png

 

When the two numbers (sample rate and number of samples to read) are the same, you will get one reading per second. It has nothing to do with synchronizing. The graph will update whenever data is written to it; no synchronization is required. Often you may want the reads to occur at one rate, the graph updates at another rate, and the writes to file at still a different rate. Updating the graphs more than a few times per second is useless because the human eye/brain cannot perceive changes faster than that. The reads must occur often enough to prevent buffer overflows on the DAQ device.
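A toy Python sketch of that decoupling, under assumed numbers (10 reads per second, graph redrawn every 5th chunk): the consumer writes every chunk to file but only updates the display occasionally.

```python
# Decoupled rates in a consumer loop: write every data chunk, but redraw
# the graph only every 5th chunk (~2 updates/s at 10 reads/s). Illustrative
# numbers only; none of this is NI API code.
from queue import Queue

q = Queue()
for i in range(20):              # pretend: 20 Reads of 100 samples each
    q.put([float(i)] * 100)

writes = 0
graph_updates = 0
chunk_index = 0
while not q.empty():
    chunk = q.get()
    writes += 1                  # every chunk goes to the file
    if chunk_index % 5 == 0:     # but only every 5th chunk hits the graph
        graph_updates += 1
    chunk_index += 1

print(writes, graph_updates)     # 20 writes, 4 graph updates
```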

 

Lynn

Message 20 of 32