I am running LabVIEW RT, which seems to use the default "Big Endian" byte order when storing (measurement) values to binary files. I read somewhere that the big-endian default is part of LabVIEW's MacOS/PPC heritage, although I am running LabVIEW RT on an Intel-based PC/PXI.
My question: since the LabVIEW RT system is x86-64 based, shouldn't there be a performance loss with a default of "Big Endian", given that big-endian is the opposite of the CPU's native little-endian format? If "Little Endian" is the better (or an equally good) option, I could eliminate the endianness conversion between the RT target and my x86 host.
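To make concrete what conversion I mean (LabVIEW G is graphical, so this is only a rough C sketch of the underlying work; the file name and helper names are made up for illustration): writing big-endian on an x86 host needs a byte swap per value, while writing the native little-endian order is a plain memory copy.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Write a 32-bit float in big-endian order: on an x86 (little-endian)
 * host this requires reordering the bytes before every write. */
static void write_f32_be(FILE *f, float v)
{
    uint32_t bits;
    memcpy(&bits, &v, sizeof bits);   /* reinterpret float as raw bits */
    uint8_t buf[4] = {
        (uint8_t)(bits >> 24),        /* most significant byte first */
        (uint8_t)(bits >> 16),
        (uint8_t)(bits >> 8),
        (uint8_t)(bits)
    };
    fwrite(buf, 1, sizeof buf, f);
}

/* Write in the CPU's native little-endian order: no swap,
 * just a direct copy of the value's memory to disk. */
static void write_f32_le_native(FILE *f, float v)
{
    fwrite(&v, sizeof v, 1, f);
}

int main(void)
{
    FILE *f = fopen("demo.bin", "wb"); /* hypothetical output file */
    if (!f) return 1;
    write_f32_be(f, 3.14f);            /* big-endian: swapped on x86 */
    write_f32_le_native(f, 3.14f);     /* little-endian: straight copy */
    fclose(f);
    return 0;
}
```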