LabVIEW


How to handle 12 GB/minute of data while writing to a binary file

Solved!

Hi there,

I am trying to acquire data from an FPGA (PXIe-7975R with an NI-5783 at 100 MHz) at a rate of 100k samples/ms (I16). While acquiring, I get the number of samples my theoretical calculations predict, so there is no problem there. But when I try to write the data directly into a binary file (the final file size should be 12*10^9 bytes), my host code runs slowly. To overcome this I used queues, but when I acquire the data through queues and write it into a binary file I get the error "Not enough memory to complete this operation", meaning the buffer memory allocated to my code runs out. Please let me know if anybody knows how to handle this much data without exhausting memory.

Thank you

Message 1 of 24

Hi kiranteja,

 

@kiranteja93 wrote:

my host code is running slowly.

To write 12 GB/min to a file, your filesystem must sustain a 200 MB/s write speed (12e9 bytes / 60 s = 200 MB/s).

Does your hard disk allow such rates?
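One quick way to check the raw disk speed is a sequential-write benchmark. Here is a minimal sketch in Python (a LabVIEW snippet can't be pasted as plain text); the path and sizes are assumptions for illustration:

```python
import os
import time

PATH = "D:/bench.tmp"    # assumption: a file on the disk under test
CHUNK = 4 * 1024 * 1024  # 4 MiB per write call
TOTAL = 2 * 1024**3      # 2 GiB total

buf = os.urandom(CHUNK)  # incompressible data, so compression can't flatter the result
start = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:
    written = 0
    while written < TOTAL:
        written += f.write(buf)
    os.fsync(f.fileno())  # force the data out of the OS cache onto the disk
elapsed = time.perf_counter() - start
print(f"{written / elapsed / 1e6:.0f} MB/s sustained")
os.remove(PATH)
```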

 

(If you still suspect YOUR code is the problem, then you should ATTACH YOUR code!)

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 24

To add to Gerd's comment: I assume you have a loop that samples and a loop that writes to disk (through a queue)? How much data do you grab and write at once? You say 100k samples/ms; I hope that's not how you're writing to disk (100k samples/ms as individual writes)? Assuming your disk can keep up with 200 MB/s (an SSD should), it's probably more efficient to sample and write on the 10-100 ms scale.

Try grabbing 1E6 samples per 10 ms or 10E6 per 100 ms and see how that works.

With a 1 ms write loop I'm not surprised you get a backlog and build-up in the queue.
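The two-loop producer/consumer pattern looks roughly like this, sketched in Python for readability (the chunk size, queue depth, file path and the acquire_chunk placeholder are assumptions, not the actual FPGA FIFO read):

```python
import os
import queue
import threading

CHUNK_BYTES = 10_000_000 * 2   # 10M I16 samples per 100 ms chunk = 20 MB

def acquire_chunk() -> bytes:
    # Placeholder for the real FPGA FIFO read; dummy data for the sketch.
    return os.urandom(CHUNK_BYTES)

q = queue.Queue(maxsize=16)    # bounded! an unbounded queue is what eats your RAM

def producer(n_chunks: int) -> None:
    for _ in range(n_chunks):
        q.put(acquire_chunk()) # blocks when the queue is full instead of growing forever
    q.put(None)                # sentinel: acquisition finished

def consumer(path: str) -> None:
    with open(path, "wb") as f:
        while True:
            data = q.get()
            if data is None:
                break
            f.write(data)      # one 20 MB write per iteration, not 100 tiny 200 kB ones

t = threading.Thread(target=consumer, args=("acq.bin",))
t.start()
producer(10)                   # 10 chunks = 1 s of data for a quick test
t.join()
```

The bounded queue is the point: when the disk can't keep up, the producer blocks (and you see the backlog) instead of the queue growing until you run out of memory.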

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 3 of 24

Hi  

Message 4 of 24

Hi,

 

What is the software running on? As Gerd says, an SSD should eat this up (most now do 500 MB/s); however, if you are running on a PXI controller with a spinning disk, it is likely to struggle and to be too slow for 200 MB/s. In that case there may be nothing to be done in software; you would need to upgrade the drive.

 

Cheers,
James

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 5 of 24

Hi James_McN,

I am using a Conduant DM-8M (8 TB SATA SSD), which supports PXIe-based systems; an overview of the SSD is in the attached PNG. Its transfer rate is over 3 GB/s. The SSD sits in an NI PXIe-1082 8-slot chassis with an NI PXI-8840 embedded controller.

Message 6 of 24

Hi Kiranteja,

 

Cool - that looks like a great product. So how does it appear in the 8840's OS - does it show as one drive, or is it configured as a RAID? Are you able to run CrystalDiskMark or a similar benchmark tool against the drive you are writing to from LabVIEW, to confirm the setup is all happy?

Failing that, if you are able to share the code you use for writing, that would be interesting. I wouldn't expect you to need much special code to achieve this, but obviously something isn't working.

 

Cheers,

James

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 7 of 24

What is the budget? The HDD-8261 claims to do 2 GB/s, the PXIe-8267 5 GB/s (see PXI Storage Modules). With "Contact for pricing" it doesn't sound cheap, though.

 

Depending on the amount of data, a RAM disk can be convenient. Of course, if the data needs to be persistent, at some point it has to go to a physical disk...

Message 8 of 24

@kiranteja93 wrote:

Hi  


I'd wager it takes well more than 1ms for the 100k loop. 😄

When it comes to SSDs, it is good to know that they usually have 256 kB blocks internally, even though the file system uses much smaller clusters (NTFS defaults to 4 kB), so a small write makes the drive read a whole 256 kB block, modify it in memory, and write it back. If those 4 SSDs are in a RAID, the effective block size might be much larger.

Try the 1E6 samples/10 ms or even 10M/100 ms just to compare - see the sketch below. You can't crash harder than you're already doing, right? 😉
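To see the block-size effect in numbers, here is a small comparison sketch in Python (path and sizes are assumptions; run it against the RAID volume):

```python
import os
import time

def bench(chunk_size: int, total: int = 512 * 1024**2, path: str = "bench.tmp") -> float:
    """Write `total` bytes in `chunk_size` pieces and return MB/s."""
    buf = os.urandom(chunk_size)            # incompressible data
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(total // chunk_size):
            f.write(buf)
        os.fsync(f.fileno())                # make sure it actually reached the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total / elapsed / 1e6

for size in (2 * 1024, 256 * 1024, 4 * 1024**2):  # 2 kB, 256 kB and 4 MB chunks
    print(f"{size:>9} B chunks: {bench(size):6.0f} MB/s")
```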

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 9 of 24

You'll find that writing small chunks to a file (appending to the file) is significantly slower than writing one contiguous chunk in one go. This is because of the file system: the OS needs to locate free sectors, mark them as used, link them to the file, and only then write. Add to this the actual speed of writing (which doesn't really affect you, as you have cool hardware) and things can get gnarly.

 

A trick I have used in the past for known-size files, if you are using 64-bit Windows: write the file beforehand and fill it with zeroes. This way Windows caches the disk sectors in RAM (which in 64-bit Windows can be a LOT). When you then overwrite the file, Windows copies the contents to memory and only flushes to disk at a later stage (and there's no searching for sectors either; the layout is already mapped out). You can get amazing access times and throughput this way, indistinguishable from really fast hardware. The caveats are the time it takes to generate the file in the first place and having lots of RAM free. A sketch of the idea is below.
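Sketched in Python (file name and chunk sizes are assumptions; shrink TOTAL for a quick test):

```python
import os

TOTAL = 12 * 10**9         # the 12e9-byte target file from the original post
ZERO_CHUNK = 64 * 1024**2  # fill in 64 MB pieces to keep memory use flat
PATH = "acq.bin"           # assumption: a file on the target disk

# Step 1, done once before the acquisition starts: create the file full of
# zeroes so every sector is already allocated and mapped.
with open(PATH, "wb") as f:
    zeros = bytes(ZERO_CHUNK)
    for _ in range(TOTAL // ZERO_CHUNK):
        f.write(zeros)
    f.write(bytes(TOTAL % ZERO_CHUNK))

# Step 2, during acquisition: open for in-place update ("r+b"), never append,
# and overwrite the preallocated sectors chunk by chunk.
with open(PATH, "r+b") as f:
    f.seek(0)
    f.write(b"\x01" * 1024)  # stand-in for a real data chunk
```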

Message 10 of 24