With the way you read and decode the file, the last bytes of it will be read and decoded into empty array elements.
I would redesign the entire code. Instead of reading one byte at a time and doing inserts, thus resizing the array continuously (a no-no... it creates copies of the data and is therefore memory costly and slow; the bigger the array, the worse the effect, and it can get quite dramatic), just read the entire file using e.g. Read Characters and then use Spreadsheet String To Array to get the array, as in the sketch below...
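Since LabVIEW diagrams can't be pasted as text here, this is a rough Python sketch of the same "one read, one parse" idea; the file path and tab delimiter are just placeholder assumptions:

```python
# Sketch of the whole-file approach: one read operation, then one parse.
# In LabVIEW this corresponds to Read Characters (or Read from Text File)
# followed by Spreadsheet String To Array.

def read_all_then_parse(path, delimiter="\t"):
    """Load the whole file in a single read, then split it into a 2D list."""
    with open(path, "r") as f:
        text = f.read()                      # one read operation, no loop
    # Equivalent of Spreadsheet String To Array: one row per line,
    # one element per delimited field; empty trailing lines are skipped.
    return [line.split(delimiter) for line in text.splitlines() if line]
```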
Alternatively: read the size of the file, calculate how large an array you will need, initialize a shift register with an array of that size, read line by line, and then use the Replace function to put the data into the array (using Replace is the thing to do, never Insert... Insert is only OK if the array is very small and the operation is not repeated often). A sketch of this pattern follows.
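Here is the preallocate-and-replace pattern in Python terms. In LabVIEW you would use Initialize Array, a shift register, and Replace Array Subset; here a preallocated list stands in for that array, and the line-count bound is an assumption you would tighten for your own data:

```python
# Sketch of the preallocation pattern: size the array once up front,
# then fill it in place (Replace), never grow it (Insert).

import os

def read_into_preallocated(path):
    """Preallocate storage from the file size, then fill it in place."""
    # Safe upper bound: every line consumes at least one byte. In practice
    # you would divide by an expected average line length instead.
    max_lines = os.path.getsize(path) + 1
    rows = [None] * max_lines                # Initialize Array equivalent
    count = 0
    with open(path, "r") as f:
        for line in f:                       # read line by line
            rows[count] = line.rstrip("\n")  # Replace, not Insert: no copy
            count += 1
    return rows[:count]                      # trim the unused tail once
```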
With the first solution you load all the data into memory in one read operation and then use additional memory for the array; it is thus a bit more memory expensive than the latter, where the whole file is not loaded into memory as a string first but is put into the array directly... on the other hand, the latter requires more read operations. Often the best approach is to mix the two: read a block of the file into memory, process it, then read another block, etc. That way you optimize the performance by keeping both memory usage and the number of read operations down to an optimal level... but normally that kind of optimization is only required if the files are very large and/or numerous.
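For completeness, a small Python sketch of that mixed, block-wise approach: read a fixed-size chunk, process the complete lines in it, and carry any partial trailing line over to the next chunk. The 64 KiB block size and the file name in the usage note are arbitrary assumptions:

```python
# Sketch of block-wise reading: bounded memory use, few read operations.

def process_in_blocks(path, handle_line, block_size=64 * 1024):
    """Stream the file block by block, calling handle_line per full line."""
    leftover = ""
    with open(path, "r") as f:
        while True:
            block = f.read(block_size)       # one read per block
            if not block:
                break
            lines = (leftover + block).split("\n")
            leftover = lines.pop()           # last piece may be incomplete
            for line in lines:
                handle_line(line)
    if leftover:                             # flush the final partial line
        handle_line(leftover)

# Usage sketch: collect fields from a hypothetical tab-separated file.
# rows = []
# process_in_blocks("big_data.txt", lambda line: rows.append(line.split("\t")))
```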