03-23-2017 04:45 AM
Instead of padding with zeroes, I would use a proper header for the binary file, describing all the line sizes. That way the file will be smaller, and reading out sections becomes very easy...
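Since LabVIEW is graphical, here is a Python sketch of the idea (a hypothetical layout, not an actual file spec): a header holding the row count and each row's length, followed by the packed DBL data. Any single row can then be located from the header alone.

```python
# Hypothetical ragged-row binary layout:
#   header: u32 row count, then one u32 length per row
#   body:   each row's values as little-endian f64 (DBL)
import io
import struct

def write_ragged(f, rows):
    """Write rows of varying length with a size-describing header."""
    f.write(struct.pack("<I", len(rows)))
    for r in rows:
        f.write(struct.pack("<I", len(r)))
    for r in rows:
        f.write(struct.pack(f"<{len(r)}d", *r))

def read_row(f, index):
    """Random access to one row, using only the header for positioning."""
    f.seek(0)
    (count,) = struct.unpack("<I", f.read(4))
    lengths = struct.unpack(f"<{count}I", f.read(4 * count))
    body_start = 4 + 4 * count
    f.seek(body_start + 8 * sum(lengths[:index]))
    n = lengths[index]
    return list(struct.unpack(f"<{n}d", f.read(8 * n)))

# Round trip in memory:
buf = io.BytesIO()
write_ragged(buf, [[-0.5, -0.25], [-0.1], [-0.9, -0.8, -0.7]])
middle = read_row(buf, 1)  # -> [-0.1]
```

No zero padding is stored, so the file only grows with the actual data.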
03-23-2017 09:08 AM - edited 03-23-2017 09:11 AM
I guess it depends if the file is always read entirely and from the beginning, or if you need random access to any particular row in the file. If you pad with zeroes (in this case you can use my algorithm to just "left align" each data row), you have random access to any array element, because, given the array dimensions and the number of bytes per value, you can calculate the file position of any element.
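The offset arithmetic for the zero-padded case is simple enough to sketch in a few lines of Python (illustrative only; the forum's context is LabVIEW):

```python
# For a zero-padded 2D array stored flat in a file, every row occupies
# max_cols values, so any element's byte position is directly computable.
import io
import struct

BYTES_PER_VALUE = 8  # DBL

def element_offset(row, col, max_cols):
    """Byte offset of element (row, col) in a padded row-major file."""
    return (row * max_cols + col) * BYTES_PER_VALUE

def read_element(f, row, col, max_cols):
    """Seek straight to one element and read it; no scanning needed."""
    f.seek(element_offset(row, col, max_cols))
    return struct.unpack("<d", f.read(BYTES_PER_VALUE))[0]

# A 2x3 padded array written flat:
buf = io.BytesIO(struct.pack("<6d", -0.1, -0.2, -0.3, -0.4, -0.5, 0.0))
value = read_element(buf, 1, 1, 3)  # -> -0.5
```

This is the trade-off: padded files waste space but give O(1) access; ragged files are compact but need a header or a scan.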
I am not sure how your device is supposed to "ignore zeroes at the end". In order to tell if there is more valid data left after a zero, it needs to inspect all remaining elements. You could prepend each row with an I32 indicating the row size; there are even built-in ways to do that.
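In LabVIEW the "Write to Binary File" function can prepend the array size for you (the "prepend array or string size?" input); the equivalent layout looks roughly like this Python sketch:

```python
# Length-prefixed rows: each row is preceded by an I32 element count,
# mimicking LabVIEW's "prepend array or string size?" behavior.
import io
import struct

def write_prefixed(f, rows):
    for r in rows:
        f.write(struct.pack("<i", len(r)))        # I32 row size
        f.write(struct.pack(f"<{len(r)}d", *r))   # row data as DBL

def read_all(f):
    """Sequentially read every length-prefixed row back."""
    f.seek(0)
    rows = []
    while True:
        hdr = f.read(4)
        if len(hdr) < 4:
            break
        (n,) = struct.unpack("<i", hdr)
        rows.append(list(struct.unpack(f"<{n}d", f.read(8 * n))))
    return rows

buf = io.BytesIO()
write_prefixed(buf, [[-0.5], [-0.25, -0.75]])
recovered = read_all(buf)
```

Note this gives a compact file but only sequential access, unless you also keep an index of row positions.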
All your values seem to be in the range of -1 to 0. One option would be to define certain sentinel values outside that range (e.g. +1 could mean end of valid row data). You only have a handful of significant digits, so the data could probably be stored and processed losslessly in an array of e.g. U16 integers while remembering the scaling factors. Wasting 64 bits per value (DBL) is certainly overkill.
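A rough Python sketch of both ideas together (the scale factor and sentinel code here are arbitrary choices for illustration, not anything from the original post):

```python
# Quantize values in [-1, 0] into U16 codes with a fixed scale, and
# reserve one code outside the data range as an end-of-row sentinel.
SCALE = 65534        # codes 0..65534 map linearly onto [-1, 0]
SENTINEL = 65535     # reserved code: "end of valid row data"

def encode(x):
    """Map x in [-1, 0] to a U16 code (quantization error <= 1/SCALE)."""
    return round((x + 1.0) * SCALE)

def decode(code):
    """Map a U16 code back to the [-1, 0] range."""
    return code / SCALE - 1.0

row = [-0.75, -0.5, -0.001]
codes = [encode(x) for x in row] + [SENTINEL]        # 4 U16 = 8 bytes
decoded = [decode(c) for c in codes if c != SENTINEL]
```

With a handful of significant digits, the quantization error (about 1.5e-5 here) is well below the data's resolution, and the file shrinks to a quarter of the DBL size.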
04-11-2017 04:51 AM
Altenbach is correct in mentioning that not using a 2D array means a lot of extra coding. In the end I did use varying line lengths, mainly because it gives us debugging advantages for new products. All my speed problems were text-file related; I had also accidentally placed an unbundle inside a for loop, which kills speed. For the binary version of the same implementation, I used a separate header file holding the file positions.
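The separate-header-file approach described above can be sketched like this in Python (hypothetical layout; the function and file names are illustrative, not the poster's actual code):

```python
# Sketch: a separate index file stores the byte offset of each row in
# the data file, restoring random access to length-prefixed rows.
import io
import struct

def write_with_index(data_f, index_f, rows):
    for r in rows:
        index_f.write(struct.pack("<Q", data_f.tell()))  # u64 byte offset
        data_f.write(struct.pack("<I", len(r)))          # row length
        data_f.write(struct.pack(f"<{len(r)}d", *r))     # row data (DBL)

def read_indexed_row(data_f, index_f, i):
    """Look up row i's position in the index, then read only that row."""
    index_f.seek(8 * i)
    (pos,) = struct.unpack("<Q", index_f.read(8))
    data_f.seek(pos)
    (n,) = struct.unpack("<I", data_f.read(4))
    return list(struct.unpack(f"<{n}d", data_f.read(8 * n)))

data, idx = io.BytesIO(), io.BytesIO()
write_with_index(data, idx, [[-0.1, -0.2], [-0.3]])
second = read_indexed_row(data, idx, 1)  # -> [-0.3]
```

This keeps the compact ragged data file while the small index file makes any row reachable with two seeks.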