LabVIEW
Stripping Zeros using all cores of the processor

Solution
Accepted by topic author bmann2000

Instead of padding with zeroes, I would use a proper header for the binary file, describing all the line sizes. That way the file will be smaller, and reading out sections becomes very easy...
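As a rough text-language sketch of that header idea (Python standing in for a LabVIEW VI; the exact layout, field widths, and names below are my own assumptions, not the poster's format): the header stores the row count and every row length, followed by the unpadded DBL data.

```python
import struct

# Assumed layout: [row count (U32)] [row lengths (U32 each)] [row data (DBL)]
def write_ragged(path, rows):
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(rows)))        # number of rows
        for r in rows:
            f.write(struct.pack("<I", len(r)))       # each row's length
        for r in rows:
            f.write(struct.pack(f"<{len(r)}d", *r))  # row data, no padding

def read_ragged(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack("<I", f.read(4))
        sizes = struct.unpack(f"<{n}I", f.read(4 * n))
        return [list(struct.unpack(f"<{m}d", f.read(8 * m))) for m in sizes]
```

Reading a single section then only requires summing the preceding row lengths from the header to find its offset.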

Message 11 of 13

I guess it depends on whether the file is always read entirely from the beginning, or whether you need random access to any particular row in the file. If you pad with zeroes (in this case you can use my algorithm to just "left align" each data row), you have random access to any array element, because, given the array dimensions and the number of bytes per value, you can calculate the file position for it.
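That offset arithmetic can be sketched as follows (Python modelling the file layout; `row_len` and the 8-byte DBL element size are assumptions about the padded format):

```python
import struct

def read_element(f, row, col, row_len, bytes_per_value=8):
    """Random access into a zero-padded file of fixed-width rows:
    the element's byte offset follows directly from the dimensions."""
    f.seek((row * row_len + col) * bytes_per_value)
    (value,) = struct.unpack("<d", f.read(bytes_per_value))
    return value
```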

 

I am not sure how your device is supposed to "ignore zeroes at the end": in order to tell whether there is more valid data after a zero, it would need to inspect all remaining elements. You could instead prepend each row with an I32 indicating the row size; there are even built-in ways to do that.
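A sketch of that length-prefixed layout (in LabVIEW, Write to Binary File with "prepend array or string size" enabled produces something like this natively, with big-endian byte order by default; the Python below only mimics such a format and is not the exact on-disk representation):

```python
import struct

def write_row(f, row):
    f.write(struct.pack(">i", len(row)))         # I32 row size, big-endian
    f.write(struct.pack(f">{len(row)}d", *row))  # row data as DBLs

def read_row(f):
    raw = f.read(4)
    if not raw:
        return None                              # end of file
    (n,) = struct.unpack(">i", raw)
    return list(struct.unpack(f">{n}d", f.read(8 * n)))
```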

 

All your values seem to be in the range of -1 to 0, so one option would be to define certain sentinel values outside that range (e.g. +1 could mean "end of valid row data"). You only have a handful of significant digits, so the data could probably be stored and processed losslessly in an array of e.g. U16 integers while remembering the scaling factors. Wasting 64 bits per value (DBL) is certainly overkill.
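For illustration, here is one way the U16 scaling plus sentinel could look (the scale factor and the 0xFFFF sentinel are my own choices for the sketch, not anything from the original data):

```python
SENTINEL = 0xFFFF   # reserved value: marks end of valid row data
SCALE = 0xFFFE      # maps [-1.0, 0.0] onto the U16 range 0..65534

def encode(x):
    """Quantize a DBL in [-1, 0] to U16; worst-case error is about 7.6e-6,
    well below a handful of significant digits."""
    return round((x + 1.0) * SCALE)

def decode(u):
    return u / SCALE - 1.0
```

A stored row then becomes `[encode(x) for x in row] + [SENTINEL]`, a quarter of the size of the same row in DBL.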

 

Message 12 of 13

Altenbach is correct that not using a 2D array means a lot of extra coding. In the end I did use varying line lengths, mainly because it gives us debugging advantages for new products. All my speed problems were text-file related; I had also accidentally unbundled inside a For Loop, which kills speed. For the binary version of the same implementation I used a separate header file holding the file position of each row.
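A minimal sketch of that separate-header approach (Python, with an index file of I64 byte offsets; the layout and names are assumptions for illustration, not the actual implementation):

```python
import struct

def write_indexed(data_path, index_path, rows):
    # Data rows go into one file; the companion index file records the
    # byte offset where each row starts, enabling random access later.
    with open(data_path, "wb") as d, open(index_path, "wb") as ix:
        for row in rows:
            ix.write(struct.pack("<q", d.tell()))
            d.write(struct.pack(f"<{len(row)}d", *row))

def fetch_row(data_path, index_path, i):
    with open(index_path, "rb") as ix:
        ix.seek(8 * i)
        (start,) = struct.unpack("<q", ix.read(8))
        nxt = ix.read(8)                          # next row's offset, if any
        end = struct.unpack("<q", nxt)[0] if nxt else None
    with open(data_path, "rb") as d:
        d.seek(start)
        raw = d.read(end - start) if end is not None else d.read()
        return list(struct.unpack(f"<{len(raw) // 8}d", raw))
```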

Message 13 of 13