
efficient way to read a hex file to an array of U16 data?

OK, I had another idea while doing something else. Just for kicks, here's a new record holder. It's about 23x faster than Randall's and should get you into the low-100 ms range. (My ancient 1 GHz PIII laptop handles a 4 MB string input in less than 250 ms.)

I originally thought that "decimate array" causes a buffer allocation. Apparently, it does not! 🙂
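For those reading along without LabVIEW, here is roughly the same idea sketched in Python/NumPy (the names and details are mine, not taken from the diagram): a 256-entry lookup table maps each ASCII code straight to its nibble value, and the reshape plus column slicing plays the role of Decimate Array.

    import numpy as np

    # Build the lookup table once: index = ASCII code, value = nibble.
    LUT = np.zeros(256, dtype=np.uint32)
    for i, c in enumerate(b"0123456789ABCDEF"):
        LUT[c] = i
    for i, c in enumerate(b"abcdef"):
        LUT[c] = 10 + i

    def hex_to_u16(s):
        """Convert a hex string (4 chars per value) to a U16 array.
        Assumes len(s) is a multiple of 4."""
        nibbles = LUT[np.frombuffer(s, dtype=np.uint8)]
        # Each column of n is every 4th nibble, i.e. a decimated array.
        n = nibbles.reshape(-1, 4)
        return ((n[:, 0] << 12) | (n[:, 1] << 8) |
                (n[:, 2] << 4) | n[:, 3]).astype(np.uint16)

    print(hex_to_u16(b"0001ABCDffff"))  # -> [    1 43981 65535]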

Does anyone have an idea to speed it up a bit?

(Sorry about these multiple posts. I'm sure CC will suspect "other" motives. ;))
Message 11 of 16
And here's a picture of the code for those with older LabVIEW versions.
Message 12 of 16
Thanks for the help! I thought it would be very complicated to make a lookup table for the ASCII codes, but it turned out to be moderately straightforward the way you did it... How do you accurately measure the runtime of your different iterations to know how performance compares?
Message 13 of 16
For benchmarking, I use the standard three-frame flat sequence (see the attached image). Of course, you should randomize the input a little or feed it your actual data. If the string is a few MB, the ms resolution is good enough.

(To test much faster VIs, you need to put a FOR loop in the inner sequence frame, then divide the elapsed ms by the loop count.)
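In text form, the pattern is simply this (Python used for illustration; in LabVIEW the flat sequence enforces the same ordering that the explicit statements do here):

    import time

    def benchmark(func, data, iterations=10):
        t0 = time.perf_counter()                # frame 1: read start tick count
        for _ in range(iterations):             # frame 2: code under test
            func(data)
        t1 = time.perf_counter()                # frame 3: read stop tick count
        return (t1 - t0) / iterations * 1000.0  # average ms per call

    # e.g. print(benchmark(hex_to_u16, b"ABCD" * 1_000_000))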
Message 14 of 16
OK, it is optimized for string conversion, but it still requires that the whole file be read into memory, and the hex file needs twice the memory of the resulting U16 array itself. I'd like to see the file read and the conversion optimized together... It might be more efficient to read and convert small chunks of the file at a time instead of loading the whole file. Timing file I/O is trickier because of memory caching; small chunks might help prevent the OS from caching the whole file in memory and then swapping memory out for the large array...
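Something along these lines, reusing the hex_to_u16 converter sketched above (the chunk size here is just a guess to be tuned, not a recommendation):

    import numpy as np

    def read_hex_file_chunked(path, chunk_chars=1 << 20):
        """Read and convert chunk_chars hex characters at a time.
        chunk_chars must be a multiple of 4 so no U16 straddles a chunk."""
        parts = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_chars)
                if not chunk:
                    break
                parts.append(hex_to_u16(chunk))
        return np.concatenate(parts) if parts else np.zeros(0, dtype=np.uint16)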


LabVIEW, C'est LabVIEW

Message 15 of 16
Jean-Pierre,

I would be very interested in some hard data to determine at which point it becomes worth reading the file in chunks instead of all at once. Somehow I suspect that 4 MB is peanuts on a modern computer and should be read in one swoop, but I haven't done any serious benchmarking.
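A throwaway harness along these lines could produce that data (the file name is hypothetical, and because of OS caching each case should be run several times, comparing the warm-cache numbers):

    import time

    for chunk_chars in (None, 1 << 16, 1 << 20, 1 << 24):  # None = whole file
        t0 = time.perf_counter()
        if chunk_chars is None:
            with open("data.hex", "rb") as f:
                values = hex_to_u16(f.read())
        else:
            values = read_hex_file_chunked("data.hex", chunk_chars)
        print(chunk_chars, round((time.perf_counter() - t0) * 1000, 1), "ms")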
Message 16 of 16