Hope you can help,
I'm trying to help a friend of mine get LabVIEW RT reading 46 strain-gauge channels at 300 S/s on a PXI-1011 that has six SCXI-1520 strain-gauge modules in it.
These modules are read by a PXI-6052E card. The PXI unit is bolted to the top of a helicopter blade spinning at 110 RPM. The RT software on the PXI-1011 doesn't seem to handle concurrent loops very well: one loop does a continuous scan across all 46 channels at 300 samples per second, and the other loop sends the scan data using the TCP Write VI. (Note: we couldn't get the new LVRT FIFO VIs to handle such a large chunk of data.)
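To make sure I'm describing the architecture clearly, here's the two-loop (producer/consumer) pattern we're trying to get working, sketched in Python since LabVIEW diagrams don't paste into a post. The queue stands in for the RT FIFO VIs; all names and sizes here are illustrative, not our actual VIs:

```python
import queue
import threading

SCANS_PER_READ = 20   # scans pulled from the AI buffer per iteration
NUM_CHANNELS = 46

def scan_loop(fifo, num_blocks):
    """Producer: stands in for the continuous AI read loop."""
    for _ in range(num_blocks):
        # Placeholder for a DAQ read of SCANS_PER_READ x NUM_CHANNELS.
        block = [[0.0] * NUM_CHANNELS for _ in range(SCANS_PER_READ)]
        fifo.put(block)      # hand off; this loop never touches TCP
    fifo.put(None)           # sentinel: acquisition finished

def tcp_loop(fifo, sent):
    """Consumer: stands in for the TCP Write loop."""
    while True:
        block = fifo.get()
        if block is None:
            break
        sent.append(block)   # here the real code would call TCP Write

fifo = queue.Queue(maxsize=64)   # bounded, like a fixed-size RT FIFO
sent = []
t1 = threading.Thread(target=scan_loop, args=(fifo, 100))
t2 = threading.Thread(target=tcp_loop, args=(fifo, sent))
t1.start(); t2.start(); t1.join(); t2.join()
```

The idea is that the scan loop only ever blocks on the bounded queue, never on the network, so a slow TCP send shouldn't back up into the AI buffer.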
This is only 13,800 samples per second, which is not much for a network connection, but the continuous scan VI reading the PXI-6052E seems to be choking the PCI bus or chipset enough to keep the TCP code from doing its thing. I don't know if it is a memory-buffering issue or something else. Does RT use DMA to read the continuous incoming data?
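For reference, here's the back-of-envelope throughput math, assuming 16-bit (2-byte) binary samples from the 6052E; if the data goes out as 4-byte SGL floats, double it:

```python
channels = 46
rate = 300             # S/s per channel
bytes_per_sample = 2   # assumption: raw I16 samples

samples_per_s = channels * rate               # total sample rate
bytes_per_s = samples_per_s * bytes_per_sample
print(samples_per_s, bytes_per_s)             # 13800 27600 -> ~27 kB/s
```

Either way it's a tiny fraction of what even 10 Mbit Ethernet can carry, which is why the bottleneck looks like bus or scheduler contention rather than raw bandwidth.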
(When more than 2 kB is transmitted at once using TCP Write, RT starts choking and then gets an AI buffer overrun error. As it is, I am transmitting each 1000-byte chunk of data twice to make sure that nothing is dropped, but alas, about once every 15 seconds RT on the PXI does some kind of black-magic thing and I lose 10 chunks of data, each chunk being about 2000 samples.)
We even tried disconnecting from the RT GUI (this is not a stand-alone application); it made almost no difference.
It took about eight hours of playing with the pauses and the number of scans to read at a time to get it where it is now (reading 20 scans at a time, then pausing the scan loop 65 ms, and putting a 22 ms pause in the TCP Write loop). Right now my continuous scan is software-triggered; should we play with the other trigger types?
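One thing I noticed doing the math on those tuning numbers (quick sketch, nothing LabVIEW-specific): at 300 S/s, 20 scans represent about 66.7 ms of data, so the 65 ms pause leaves the read loop less than 2 ms of slack per iteration to drain the buffer as fast as it fills:

```python
rate = 300.0          # S/s per channel
scans_per_read = 20
pause_ms = 65.0

# Time span of the data consumed by one read.
data_ms = scans_per_read / rate * 1000
# Headroom left for the read itself before the buffer gains ground.
slack_ms = data_ms - pause_ms
print(round(data_ms, 1), round(slack_ms, 1))   # 66.7 1.7
```

That would explain why the settings feel so fragile: any jitter over ~1.7 ms per iteration and the AI buffer starts to grow.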
Thanks for your help,
Brad Whaley
LabVIEW Certified Engineer