LabVIEW


Best way to write FPGA data to a TDMS file on an RT cRIO system

Solved!

Hi,

 

I am struggling to write measured data from an analog input module (NI 9215), sampled at up to 20 kHz, to a TDMS file on the RT system (cRIO-9022). I only need to save several periods of 4 arbitrary analog signals at frequencies between 5 Hz and 1 kHz, so storing up to 50k values should already be enough.

 

I use a high priority and a low priority loop. First I tried to adapt the example from "Getting Started with CompactRIO - Logging Data to Disk" (http://zone.ni.com/devzone/cda/tut/p/id/11198). But when I used this in my high priority loop (running at 1 ms), the loop ran out of time and the RT system became unresponsive. If I change the number of elements to write (the number of elements to wait for in the FIFO Read block) it gets better, but data is still lost because the loop finishes late.

 

[Attachment: NI_Forum_saveData_1.png]

 

So I was thinking of creating an RT FIFO, storing all the measured values in it inside the high priority loop, and then writing the values to the TDMS file in the low priority loop. This time I used the FPGA Read/Write Control node instead of the FPGA FIFO. It already worked better, but writing to the TDMS file took a lot of time, since each value was read and written to the file individually. Unfortunately I could not find a way to write the whole RT FIFO to the TDMS file at once. Is there a block available for this, or is it possible to build one big array first and then write the data to the TDMS file in one go? The code I tried is in the second picture, and below it I have sketched the kind of batching I mean.

 

[Attachment: NI_Forum_saveData_2.png]
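
Since I cannot paste the block diagram as text, here is a rough Python-flavoured sketch of the batching I have in mind (the names, the random data and the plain binary file are only placeholders for the FPGA read and the TDMS file, not my actual code):

# Idea: collect a block of samples in memory first, then write the whole
# block to the file in one call instead of one write per value.
import random
import struct

def read_sample():
    # placeholder for one FPGA read of the 4 analog channels
    return [random.random() for _ in range(4)]

BLOCK = 1000  # number of 4-channel samples to collect before writing

with open("log.bin", "wb") as f:   # stands in for the TDMS file
    block = []
    for _ in range(20000):         # e.g. one second of data at 20 kHz
        block.extend(read_sample())
        if len(block) >= 4 * BLOCK:
            f.write(struct.pack("%dd" % len(block), *block))  # one big write
            block = []
    if block:                      # flush whatever is left at the end
        f.write(struct.pack("%dd" % len(block), *block))

So the file would see one large write per block instead of one write per value; I just do not know which LabVIEW blocks to use to build this up.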

 

I hope someone can give me some tips on which method is better for my project, and a hint at what I did wrong or what I can optimize. I have been stuck for days on how to save my measurements on the cRIO system.

 

Thank you very much in advance. Have a nice weekend.

 

Best regards

 

Andy

 

 

 

Message 1 of 19

Hi,

 

Sorry for bringing this topic up again, but I am still struggling with saving my data. Is there really nobody who can give me some hints or tips on how to store my data in an optimal way? My loops always finish late when I try to save my data, and I do not know how to collect the data so that I can write it in blocks, which should be faster, I think.

 

I really appreciate any tip or help.

 

Thank you in advance.

 

Best regards

 

Andy

 

Message 2 of 19

AndyKr,

If I understand correctly, your high priority producer loop generates 4 DBL values (AI_CH0/AI_CH1/AI_CH2/AI_CH3) at a 1 MHz rate and each iteration (1 µs) puts those 4 DBL values into a queue; your low priority consumer loop gets data from the queue and writes it to a TDMS file.

 

My question:
1. Your data production rate is 4 * 8 * 1M = 32 MB/s (1 DBL is 8 bytes), am I right?

My suggestion:
1. Can you cache your data in your producer loop? TDMS has much better write performance for big chunks; for example, writing 32 KB once is faster than writing 32 bytes 1024 times. Currently you put 4 DBL values into the queue at a time; you could instead cache 1024 iterations of 4 DBL values and put 4096 DBL values into the queue at once (see the sketch after these two points).

 

2. Your low priority consumer loop could use a normal While Loop rather than a 1 kHz Timed Loop. It is a bit odd that your producer loop runs at 1 MHz while the consumer loop runs at only 1 kHz. The LabVIEW "Dequeue Element" node waits while the queue is empty, so a normal While Loop will not burn CPU.
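
A minimal sketch of both points, in Python only because a block diagram cannot be posted as text (the function names and the plain binary output file are made up; the file stands in for your TDMS file):

# Suggestion 1: the producer caches 1024 reads of 4 DBL and enqueues one
# 4096-element block. Suggestion 2: the consumer is a plain loop whose
# blocking get() replaces the 1 kHz Timed Loop, so it does not burn CPU
# while the queue is empty.
import queue
import random
import struct
import threading

q = queue.Queue()
CHUNK_READS = 1024                  # 1024 x 4 DBL = 4096 values per block

def read_4_channels():
    return [random.random() for _ in range(4)]   # stand-in for the FPGA read

def producer():
    cache = []
    for _ in range(20 * CHUNK_READS):            # 20 blocks for this demo
        cache.extend(read_4_channels())
        if len(cache) >= 4 * CHUNK_READS:
            q.put(cache)                         # hand over one big block
            cache = []
    q.put(None)                                  # tell the consumer to stop

def consumer():
    with open("log.bin", "wb") as f:             # stands in for the TDMS file
        while True:
            block = q.get()                      # waits while the queue is empty
            if block is None:
                break
            f.write(struct.pack("%dd" % len(block), *block))

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()

In the sketch, get() is what keeps the plain loop from burning CPU, the same way "Dequeue Element" does in LabVIEW.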
 

Message 3 of 19

If you want to write the TDMS data in blocks, use the TDMS Set Properties VI to set the NI_MinimumBufferSize property on the channels carrying the heavy data. For details, see the LabVIEW Help.
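
Just to illustrate what the property does (in LabVIEW you set it per channel with the TDMS Set Properties VI; the Python below is only a conceptual stand-in with made-up names): values are collected in memory and flushed to disk in larger blocks instead of one small write per value.

import struct

class BufferedChannel:
    # Conceptual stand-in for a TDMS channel with NI_MinimumBufferSize set.

    def __init__(self, f, min_buffer_size=4096):
        self.f = f                     # open binary file, stands in for the TDMS file
        self.min = min_buffer_size     # values to collect before touching the disk
        self.buf = []

    def write(self, values):
        self.buf.extend(values)
        if len(self.buf) >= self.min:  # flush only when the block is big enough
            self.f.write(struct.pack("%dd" % len(self.buf), *self.buf))
            self.buf = []

    def flush(self):
        if self.buf:                   # write the remainder when logging stops
            self.f.write(struct.pack("%dd" % len(self.buf), *self.buf))
            self.buf = []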

 

Hope it helps

Christian

Message 4 of 19

Hi Xiebo and Christian,

 

Thank you very much for your answers. Actually, my high priority loop is much slower: I run it with a maximum loop time of 50 µs (20 kHz) or slower, depending on the signal I want to measure. So my data production rate is at most 4 * 8 * 20k = 640 KB/s. My low priority loop runs at 2 ms to 5 ms (much slower than the high priority loop), since I only do some simple math calculations there and control the front panel from that loop.

 

I understand that it is much more efficient to write blocks of data to a file with TDMS (e.g. one 32 KB chunk instead of 1024 writes of 32 bytes). But is it the same for a queue or RT FIFO, i.e. does the block size of the data chunks also matter for the queue and the RT FIFO?

 

@Xiebo: I understand that caching the data read from the FPGA in the high priority loop will improve my code, but I do not know how to cache the data I read. I was thinking of doing it with the FPGA FIFO, but the FPGA Read/Write Control nodes seem to be faster for me and I do not know why. Can you point me to a block/VI for caching the data I read from the FPGA, or maybe even an example?

 

@Christian: This NI_MinimumBufferSize property looks like exactly what I was looking for. But my question now is whether I should put the TDMS Write VIs in my high priority loop and write directly from the FPGA FIFO to the file, as it is done in the disk logging example at http://zone.ni.com/devzone/cda/tut/p/id/11198. Or is it better to read the data from the FPGA via the Read/Write Control function, write it to an RT FIFO in the high priority loop, and then write it from the RT FIFO to the TDMS file (with the NI_MinimumBufferSize property set) in the low priority loop?

 

In summary, I am still unsure whether the FPGA FIFO or the Read/Write Control function with a queue or RT FIFO is better for me, and how I can create a cache to build chunks of data for writing.

 

Thank you very much in advance for your help.

 

Best regards

 

Andy

 

Message 5 of 19

I would try both options. The recommended approach is obviously the RT FIFO, so that you can write the data in a low priority loop. But on cRIO systems the CPU is shared by both processes, so it is an open question whether you will reach the performance you want.

 

br Christian

Message 6 of 19

My recommendation:

In your high priority producer loop, use FIFO.Read to get data and put data into a queue;

In your low priority consumer loop, get the data from the queue and write it to a TDMS file.

 

With this approach an extra caching step is not necessary, because the "Number of Elements" input of FIFO.Read already collects the data into blocks for you.
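
Sketched in Python with placeholder names (not LabVIEW code): one FIFO.Read already hands you a whole array, so the producer just forwards that array and the consumer writes it in one go.

import queue
import struct

q = queue.Queue()
NUMBER_OF_ELEMENTS = 4000            # e.g. 1000 samples x 4 channels per read

def fifo_read(n):
    # placeholder for FIFO.Read returning n values as one array
    return [0.0] * n

def producer_iteration():
    q.put(fifo_read(NUMBER_OF_ELEMENTS))   # one read = one block, no extra cache

def consumer_iteration(f):
    block = q.get()                        # one dequeue = one block to write
    f.write(struct.pack("%dd" % len(block), *block))

with open("log.bin", "wb") as f:           # stands in for the TDMS file
    producer_iteration()
    consumer_iteration(f)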

Message 7 of 19

Hi

 

Writing to a file on the RT system is slow... I have tried all the file formats.

 

I suggest you create a buffer that stores the data and, when collection is finished, write it to a file. It does mean you can only collect for a short period of time.

 

I have managed to collect at nearly 100 kS/s from the 9215 on all 4 channels, holding about 10 seconds' worth of data in a cyclic buffer. When a trigger is found, I stop collecting and store the data to a file. If you are running at 20 kS/s, you should be able to hold about 50 seconds' worth of data.
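
Roughly the sizing and the cyclic buffer idea, sketched in Python (names are placeholders, not my LabVIEW code):

# At 20 kS/s x 4 channels, 50 seconds of history is about 4 million DBL
# values (~32 MB), roughly what I hold in the cyclic buffer at 100 kS/s
# for 10 seconds.
from array import array
from collections import deque

SAMPLE_RATE = 20_000                      # samples per second per channel
CHANNELS = 4
SECONDS = 50
DEPTH = SAMPLE_RATE * CHANNELS * SECONDS  # 4,000,000 values

history = deque(maxlen=DEPTH)             # oldest samples fall out automatically

def on_new_samples(samples):
    # called with every block of interleaved values read from the FPGA
    history.extend(samples)

def on_trigger(path="capture.bin"):
    # once the trigger is seen, dump the whole buffer to disk in one go
    with open(path, "wb") as f:
        array("d", history).tofile(f)

on_new_samples([0.0, 0.0, 0.0, 0.0])      # example: one reading of the 4 channels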

 

Mark

Message 8 of 19

Hi,

 

Thanks a lot for the suggestions and help. I have now managed to set up an RT FIFO that the high priority loop writes to, and the low priority loop then writes the data to the TDMS file, also using the NI_MinimumBufferSize option. This really speeds up writing the TDMS file! At the moment I can save data continuously at a rate of 2 kHz, which is good for a start but needs to be improved for further measurements.

 

@Mark: I was thinking about this approach, since I only need to store a few periods of my signal, so a buffer for 200k values (50k per channel) should be enough for me and should also be feasible on the cRIO system. But what trigger do you use to stop collecting the data? Do you stop the data acquisition loop with the trigger and start another loop to write the data to the file? Or do you save the data from the buffer to the file in parallel with the high priority data acquisition loop?

 

Thank you all for your help!

 

Andy

Message 9 of 19

Hi Andy

 

I run a producer/consumer setup. In the first loop I extract the data from the FIFO, add it to an array, wait for it to build up to about 10000 samples, and then add it to a queue.

 

In the second loop I take the queue elements and put them into a cluster, from which I can then build another array; in this way the second loop creates an array of clustered arrays.

 

The reason I put the arrays into a cluster is that big arrays are slow to manipulate. You also have to be careful about which array functions you use, since some are slow; Delete From Array and Build Array seem the fastest. Doing this, I have created a cyclic buffer.

 

I am collecting data all the time, looking for values that go out of tolerance, and I keep collecting for a certain period afterwards. The tolerance checking I do in the FPGA code. After my collection period has finished, I save the arrays to a CSV file in a separate loop.
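
In Python-flavoured pseudocode the pattern looks roughly like this (all names are placeholders, and the CSV writing stands in for my separate saving loop):

import csv
import queue
from collections import deque

chunk_q = queue.Queue()     # first loop puts ~10000-sample chunks in here
MAX_CHUNKS = 40             # cyclic buffer depth, i.e. how much history to keep

# keeping whole chunks instead of one huge array keeps the buffer cheap to
# update: appending a chunk and dropping the oldest one are both fast
history = deque(maxlen=MAX_CHUNKS)

def buffer_loop():
    # second loop: keep only the most recent chunks
    while True:
        chunk = chunk_q.get()            # a list of (ch0, ch1, ch2, ch3) tuples
        if chunk is None:                # sentinel to stop the loop
            break
        history.append(chunk)

def save_to_csv(path="capture.csv"):
    # separate step, run after the post-trigger collection period is over
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["AI_CH0", "AI_CH1", "AI_CH2", "AI_CH3"])
        for chunk in history:
            writer.writerows(chunk)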

 

Hope that helps

 

Mark

 

 

Message 10 of 19