Hello --
You are absolutely correct that the binary output of AI Read is only 16 bit. The reason is that the interface for this function was defined before NI had any 24-bit acquisition products.
However, the scaled data output really is 24-bit data. It is returned as voltages rather than binary codes, so it still allows you to utilize the full 24-bit dynamic range of your device.
If you would like to "undo" the binary-to-voltage scaling (essentially giving you the original binary codes), you can divide every point in the scaled data by the 4472's code width. The code width is the voltage corresponding to one least significant bit on the ADC, and it has a value of 0.00000238418579 volts (20 volts / 2^23). In order to reduce processor usage while you are saving the data to disk, I would recommend performing this calculation when you read the data back rather than on-line as you save it.
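Here's a rough sketch of that post-processing step in Python (the helper name and the list-based data are just for illustration; the code width is the 20 V / 2^23 value from above):

```python
# Volts per least significant bit on the PXI-4472, per the post above.
CODE_WIDTH = 20.0 / 2**23  # = 0.00000238418579 V

def voltages_to_codes(scaled_data):
    """Recover the original binary ADC codes from scaled voltage readings
    by dividing each point by the code width and rounding to an integer."""
    return [round(v / CODE_WIDTH) for v in scaled_data]

# Example: a few voltage readings read back from your log file.
volts = [0.0, CODE_WIDTH, -5.0]
codes = voltages_to_codes(volts)  # -> [0, 1, -2097152]
```

Doing this at read-back time, as suggested above, keeps the logging loop as lean as possible.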
In general, you should not experience bandwidth issues streaming 8 channels to disk at 100 kS/sec. This corresponds to a bandwidth of 3.2 MB/sec. The IDE drive on a modern desktop can handle nearly 10 times this rate, while the laptop drive on a PXI controller can go 2-3 times this speed. If you find that your system cannot log fast enough to keep up with the rising backlog, I would recommend changing your data block size to a power of two such as 2048, 4096, or 8192 scans.
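For reference, the 3.2 MB/sec figure falls out of a quick back-of-the-envelope calculation (this assumes 4 bytes per sample, e.g. single-precision floats; doubles would double the rate):

```python
# Streaming-to-disk bandwidth estimate for the scenario above.
channels = 8
rate = 100_000          # samples per second per channel
bytes_per_sample = 4    # assumption: 4-byte (single-precision) samples

mb_per_sec = channels * rate * bytes_per_sample / 1e6  # -> 3.2 MB/sec
```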
Hope this helps!
Bryan