LabVIEW

Fast Binary File Saving

I need help.
In my application, I save about 300 kilobytes of I16 data to a binary file every 75 ms, over a period of 20 seconds.
This gives me a file of roughly 80 MB.
I use that method because it is the fastest one I know; I cannot build the array up in a shift register, because deallocating and reallocating memory eventually becomes too slow.

My method still rattles a bit: sometimes I miss a few frames.
Is there a better method? (A RAM disk, or something else?)
Harold Hebert
National Research Council Canada
Message 1 of 6
Try creating the array as a global variable and adding your data to it.
In one of my projects I created a 10 MB global and it worked well.
Message 2 of 6
I've gone a lot faster than that. It can be done. I did 100
kSamples/sec (400 kB/sec) back in 1998 with Ultra SCSI HDs and a dual
Pentium Pro 200 MHz in LV. I did 6.4 MB/sec in C++ (100 64K frames
per second) in 2000 with a 450 MHz P3 and Ultra2 SCSI HDs. Here's what
you do:

Separate the file I/O and the data acquisition into separate while
loops and use a queue to send the acquired data to the file I/O loop.
This way the acquisition loop can run as fast as it can, the file
loop can run as fast as it can, and they won't make each other wait.
Acquired data simply piles up in the queue until it can be written
later. This is how to do asynchronous file writes in LV.
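
To make the structure concrete, here is a minimal sketch of that
two-loop, queue-in-the-middle idea in C++ (plain text can't show a
LabVIEW diagram, so treat it purely as an illustration; acquire_frame()
and the frame size/count are invented stand-ins for the real DAQ read):

#include <condition_variable>
#include <cstdint>
#include <fstream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

static std::queue<std::vector<std::int16_t>> frames;  // queue between the two loops
static std::mutex m;
static std::condition_variable cv;
static bool acquisition_done = false;

// Invented stand-in for the real DAQ read: one ~300 kB frame of I16 samples.
static std::vector<std::int16_t> acquire_frame() { return std::vector<std::int16_t>(153600); }

static void acquisition_loop() {
    for (int i = 0; i < 267; ++i) {                    // ~20 s of 75 ms frames
        std::vector<std::int16_t> f = acquire_frame();
        { std::lock_guard<std::mutex> lock(m); frames.push(std::move(f)); }
        cv.notify_one();                               // wake the file loop
    }
    { std::lock_guard<std::mutex> lock(m); acquisition_done = true; }
    cv.notify_one();
}

static void file_loop() {
    std::ofstream out("data.bin", std::ios::binary);
    std::unique_lock<std::mutex> lock(m);
    while (!acquisition_done || !frames.empty()) {
        cv.wait(lock, [] { return acquisition_done || !frames.empty(); });
        while (!frames.empty()) {
            std::vector<std::int16_t> f = std::move(frames.front());
            frames.pop();
            lock.unlock();                             // never write while holding the lock
            out.write(reinterpret_cast<const char*>(f.data()),
                      static_cast<std::streamsize>(f.size() * sizeof(std::int16_t)));
            lock.lock();
        }
    }
}

int main() {
    std::thread acq(acquisition_loop), io(file_loop);  // the two independent "while loops"
    acq.join();
    io.join();
    return 0;
}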

Write data in exactly 64K chunks (65,536 bytes). If your frames are
smaller than this, or not a multiple of it, accumulate until you have
>= 64K, write the first 64K, and put the rest back into the buffer to
be combined with the next frame. Write the remaining non-64K fragment
once, at the end. In my experience 64K is the best size for DMA
transfers; anything else is less efficient.
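
In code, that 64K bookkeeping might look like this rough C++ sketch;
the ChunkedWriter class and its add()/flush() interface are my own
invention, not anything from a library:

#include <cstddef>
#include <cstdint>
#include <fstream>
#include <vector>

// Accumulates incoming frames and writes the file only in exact 64 KB blocks;
// whatever is left over gets written once by flush() at the end of the run.
class ChunkedWriter {
public:
    explicit ChunkedWriter(const char* path) : out_(path, std::ios::binary) {}

    void add(const std::vector<std::int16_t>& frame) {
        const char* p = reinterpret_cast<const char*>(frame.data());
        buf_.insert(buf_.end(), p, p + frame.size() * sizeof(std::int16_t));
        while (buf_.size() >= kChunk) {                       // only full 64 KB blocks go to disk
            out_.write(buf_.data(), kChunk);
            buf_.erase(buf_.begin(), buf_.begin() + kChunk);  // keep the remainder for next time
        }
    }

    void flush() {                                            // the final, non-64K fragment
        if (!buf_.empty())
            out_.write(buf_.data(), static_cast<std::streamsize>(buf_.size()));
        buf_.clear();
    }

private:
    static constexpr std::size_t kChunk = 65536;              // 64 KB, as suggested above
    std::ofstream out_;
    std::vector<char> buf_;
};

Call add() for each frame that comes off the queue, and flush() once
after the last one.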

Alternatively, if you know that 80 MB is your limit, just create a
40M-element array up front, put your data into it as you acquire, and
write the whole thing to disk at the end, after the test.
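
A sketch of that preallocate-everything approach, again in C++, with
the frame size and count assumed from the 300 kB / 75 ms / 20 s figures
above:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <fstream>
#include <vector>

int main() {
    const std::size_t kFrameSamples = 153600;                   // ~300 kB of I16 per frame (assumed)
    const std::size_t kFrames       = 267;                      // ~20 s at 75 ms per frame
    std::vector<std::int16_t> all(kFrameSamples * kFrames);     // one ~80 MB allocation, up front

    for (std::size_t i = 0; i < kFrames; ++i) {
        std::vector<std::int16_t> frame(kFrameSamples, 0);      // stand-in for the DAQ read
        std::memcpy(all.data() + i * kFrameSamples,             // replace a subset in place;
                    frame.data(),                               // no reallocation during the test
                    kFrameSamples * sizeof(std::int16_t));
    }

    std::ofstream out("data.bin", std::ios::binary);            // single big write, post test
    out.write(reinterpret_cast<const char*>(all.data()),
              static_cast<std::streamsize>(all.size() * sizeof(std::int16_t)));
    return 0;
}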

Stuff your PC with as much RAM as the BIOS can handle. RAM is cheap!

Get rid of unneeded apps, services, and protocols; they all eat CPU
time. Especially antivirus and firewall software, which try to check
files as you write them.

Get a faster PC w/dual processors.

Get a RAID array with a dedicated RAID controller and Ultra160 SCSI
10K RPM LVD HDs certified for video applications (no thermal recals),
and get plenty of buffer RAM for the RAID controller. This will
dramatically improve your disk bandwidth and seek times, and reduce
the I/O overhead on the CPU.

Don't believe the HD manufacturers' performance figures. You typically
can't do better than 10% of those rates on a continuous basis. (People
who sell RAID arrays generally tell you the truth, though, as opposed
to consumer-outlet shrink-wrapped HDs.)

Rewrite it in C or C++; it will be tons faster.


Douglas De Clue
LabVIEW developer/C++ developer (currently job hunting...)
ddeclue@bellsouth.net


Harold Hebert at NRC wrote in message news:<50650000000800000072520000-1023576873000@exchange.ni.com>...
> I need help.
> In my application, I save about 300 kilobytes of I16 data to a binary
> file every 75 ms, over a period of 20 seconds.
> This gives me a file of roughly 80 MB.
> I use that method because it is the fastest one I know; I cannot
> build the array up in a shift register, because deallocating and
> reallocating memory eventually becomes too slow.
>
> My method still rattles a bit: sometimes I miss a few frames.
> Is there a better method? (A RAM disk, or something else?)
Message 3 of 6
A lot of good suggestions all in one place. Very nice.

Randy Hoskin
Applications Engineer
National Instruments
http://www.ni.com/ask
Message 5 of 6
Or create an 80 MB array at the beginning and just replace array subsets into it as you acquire. That way there are no new memory allocations.

Randy Hoskin
Applications Engineer
National Instruments
http://www.ni.com/ask
Message 6 of 6