Beagles,
serial communication on a desktop OS involves a few 'tricks' to be reliable. The serial data on such a machine is received by a chip called a UART, which has a hardware buffer of somewhere between 16 and 256 bytes, sometimes more. At 230 kBaud this buffer fills within roughly 0.6 to 11 ms. When the UART buffer reaches a given fill level, the UART raises an interrupt, and in the ISR the OS copies the data from the UART buffer into a system buffer. With a fill level of, say, 80%, such an interrupt fires every 0.5 to 9 ms.
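Just to put numbers on that, here is a little back-of-the-envelope calculation (Python only as an illustration, assuming plain 8N1 framing, i.e. 10 bits on the wire per byte):

    # Rough fill times of a UART FIFO at 230.4 kBaud, assuming 8N1 framing
    # (1 start + 8 data + 1 stop bit = 10 bits per byte on the wire).
    baud = 230400
    bytes_per_second = baud / 10.0            # about 23040 bytes/s

    for fifo_size in (16, 256):
        fill_ms = fifo_size / bytes_per_second * 1000.0
        print(f"{fifo_size:3d}-byte FIFO fills in about {fill_ms:.1f} ms")
    # prints roughly 0.7 ms for 16 bytes and 11.1 ms for 256 bytes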
Normal user applications, LabVIEW being one of them, cannot meet such short latencies reliably, so they only ever see the serial data once it sits in the system buffer (which is where 'bytes_at_serial_port' looks).
I had a system working with 13 serial ports, each of them running at 230 kBaud. I do not remember what the default size of the system buffer was, but I do remember that I could set it to 32766 bytes (2^15 - 2). That gave room for more than 1.5 seconds before LabVIEW had to pull data out (read_serial_port) to keep the system buffer from overrunning. That is plenty of headroom and can be handled easily with a proper program design in LabVIEW.
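The same kind of arithmetic for the enlarged system buffer, again in Python purely as an illustration (in LabVIEW you set this through the serial/driver configuration, not like this; the port name and the Windows-only set_buffer_size call below are just examples):

    # How much headroom a 32766-byte system buffer gives at 230.4 kBaud.
    buffer_size = 32766
    bytes_per_second = 230400 / 10.0          # 8N1 framing assumed
    print(f"about {buffer_size / bytes_per_second:.2f} s of headroom per port")
    # roughly 1.4 s with 8N1; with a parity or second stop bit it comes
    # closer to the 1.5 s mentioned above

    # Hypothetical pyserial equivalent of enlarging the driver's receive
    # buffer (set_buffer_size exists only on the Windows backend):
    import serial
    ser = serial.Serial("COM3", 230400, timeout=0)   # example port name
    ser.set_buffer_size(rx_size=32766)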
A proper program design under these circumstances means dividing your app into a receiving part, a storing part and maybe a processing part. The receiver should do nothing but receive and hand the data to the other parts via a queue, a socket or the like. The storage part should simply stream the data to disk, if necessary; whether you need it depends on the importance and amount of your data and the stability of your system. The processing part (which includes even a raw viewer, if you need one) should run at lower priority. If your machine is quick enough to handle everything, that gets done as well; otherwise the receiver keeps the higher priority and you are unlikely to lose data. A rough text-based sketch of this split follows below.
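Here is that structure as a minimal sketch, with Python and pyserial standing in for the LabVIEW producer/consumer loops (port name, baud rate and file name are placeholders, not anything from the actual system):

    # Minimal producer/consumer sketch of the receiver/storage split
    # described above. Port, baud and file name are placeholders.
    import queue
    import threading
    import serial

    data_q = queue.Queue()

    def receiver(port="COM3", baud=230400):
        # Producer: do nothing but pull bytes out of the system buffer
        # and hand them to the queue as fast as possible.
        with serial.Serial(port, baud, timeout=0.05) as ser:
            while True:
                chunk = ser.read(ser.in_waiting or 1)
                if chunk:
                    data_q.put(chunk)

    def storage(path="capture.bin"):
        # Consumer: stream everything to disk; processing or a raw viewer
        # would be a second, lower-priority consumer fed the same way.
        with open(path, "ab") as f:
            while True:
                f.write(data_q.get())

    threading.Thread(target=receiver, daemon=True).start()
    storage()   # blocks forever in this sketch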
HTH and
Greetings from Germany!
--
Uwe