Just curious: what about portability to non-Windows OSes, such as Linux? How would variants behave there?
Another point - relying on undocumented features could give you some headaches later on. For example, in a recent thread there was someone complaining that the DB toolkit didn't work.
Why? Because when adding variants to the DB, it relied on the variant's internal structure to determine the size of the data. When that internal structure changed, the toolkit broke, and NI didn't catch it because they didn't test for that case.
JLV, one good reason for not using variants is that they are not strictly typed, so you can only catch errors at run time. That same property can also speak for them (for example, when you want to create this kind of architecture), but there are other ways to achieve that as well.
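To make that concrete, here's a Python analogy (not LabVIEW code; variant_to_data is a made-up stand-in for LabVIEW's Variant To Data primitive): with a variant, nothing warns you at edit time, and the type mismatch only surfaces when the cast actually executes.

```python
# Python analogy: a variant behaves like a value of type Any -- no
# compile-time checking, so a bad "Variant To Data" cast only fails
# when the code actually runs.
from typing import Any

def variant_to_data(variant: Any, expected_type: type):
    """Stand-in for LabVIEW's Variant To Data: succeeds or errors at run time."""
    if not isinstance(variant, expected_type):
        # In LabVIEW this would surface as a run-time error
        # (something like error 91, incompatible data type)
        raise TypeError(f"expected {expected_type.__name__}, "
                        f"got {type(variant).__name__}")
    return variant

message: Any = "temperature reading"       # producer stored a string...
try:
    value = variant_to_data(message, float)  # ...consumer expects a float
except TypeError as e:
    print("run-time failure:", e)            # nothing caught this earlier
```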
" As far as I know, the endianess and internal representation of variants has nothing to do with ... or the processor, ... "
It's NUXI story time.
Unix was originally developed on a DEC PDP-11/70 and was later ported to (I believe) an IBM machine.
When it booted for the first time, it "flew the banner" as "NUXI" instead of "UNIX",
because IBM had one endianness and DEC used the other.
So if the flattening takes place on a PC and the unflattening on a Sun workstation, Mac, or other machine, I would stop and ask myself about endianness about two seconds after it didn't work.
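For anyone who wants to see that failure mode in a couple of lines, here's a sketch using Python's struct module as a stand-in for Flatten/Unflatten To String; the point is just what a byte-order mismatch looks like when the same bytes are read back with the wrong assumption.

```python
# Minimal sketch of the NUXI/endianness problem.
import struct

value = 0x12345678

big    = struct.pack(">I", value)   # big-endian bytes:    12 34 56 78
little = struct.pack("<I", value)   # little-endian bytes: 78 56 34 12

# Unflattening little-endian bytes as if they were big-endian
# silently produces garbage -- the "two seconds later" surprise:
wrong = struct.unpack(">I", little)[0]
print(hex(wrong))   # 0x78563412, not 0x12345678
```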
Isn't that complicating something that's supposed to make life easier (to maintain/scale)?
I have had to write converters for data log files that were saved versions of a type-def'd cluster. When the type def changed, the old files would have become unreadable. So before I changed the type def, I would save off a copy of the original version so I could use it in the converter. The converter itself was also a pain to write when the type def had a lot of fields.
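One way to soften that pain is to write a version number into each record so a single reader can handle both layouts and default any new fields. A sketch of the idea, with hypothetical field names and Python's struct standing in for LabVIEW's flatten functions:

```python
# Versioned log record, assuming hypothetical cluster fields:
# v1 = (timestamp, reading); v2 adds a channel number.
import io
import struct

V1 = struct.Struct(">d d")     # timestamp, reading (big-endian doubles)
V2 = struct.Struct(">d d H")   # timestamp, reading, channel

def write_record(f, timestamp, reading, channel=0, version=2):
    f.write(struct.pack(">H", version))          # version header comes first
    if version == 1:
        f.write(V1.pack(timestamp, reading))
    else:
        f.write(V2.pack(timestamp, reading, channel))

def read_record(f):
    (version,) = struct.unpack(">H", f.read(2))
    if version == 1:
        timestamp, reading = V1.unpack(f.read(V1.size))
        return timestamp, reading, 0             # default the new field
    timestamp, reading, channel = V2.unpack(f.read(V2.size))
    return timestamp, reading, channel

buf = io.BytesIO()
write_record(buf, 0.0, 3.14, version=1)   # an "old" record
buf.seek(0)
print(read_record(buf))                   # (0.0, 3.14, 0) -- new field defaulted
```

The design choice is that the converter logic lives in one reader instead of a separate one-shot tool per type-def change, so old files stay readable without saving off copies of every cluster version.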