Real-Time Measurement and Control

RT choking on PXI-1011 with strain gauges

Hope you can help,

I'm trying to help a friend of mine get LabVIEW RT to read 46 strain gauge channels at 300 S/s on a PXI-1011 that has 6 SCXI-1520 strain gauge modules in it.

These modules are being read by a PXI-6052E card. The PXI unit is bolted to the top of a helicopter blade spinning at 110 RPM. The RT software on the PXI-1011 doesn't seem to handle the two concurrent loops very well: one doing a continuous scan across 46 channels at 300 samples per second, and the other sending the scan data out with the TCP Write VI. (Note: we couldn't get the new LVRT FIFO VIs to handle such a large chunk of data.)

This is only 13,800 samples per second, which is not much for a network connection, but the continuous scan VI reading the PXI-6052E seems to be loading the PCI bus or chipset enough to keep the TCP code from doing its thing. I don't know if it is a memory buffering issue or something else. Does RT use DMA to read the continuous data coming in?
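Since LabVIEW diagrams can't be pasted here, a rough sketch of the intended two-loop structure may help (Python, with `queue.Queue` standing in for an RT FIFO, and `acquire`/`transmit` as made-up stand-ins for the scan loop and the TCP loop). The point is that the acquisition loop only reads and enqueues, and never waits on the network:

```python
import queue
import threading

CHANNELS = 46
SCANS_PER_READ = 20          # same "scans to read at a time" tuned in the post

def acquire(out_q, total_reads):
    """Producer: pull fixed-size blocks from the (simulated) DAQ buffer."""
    for i in range(total_reads):
        block = [[float(i)] * CHANNELS for _ in range(SCANS_PER_READ)]
        out_q.put(block)         # hand off immediately; never block on TCP
    out_q.put(None)              # sentinel: acquisition finished

def transmit(in_q, sent):
    """Consumer: drain the queue and 'send' each block exactly once."""
    while True:
        block = in_q.get()
        if block is None:
            break
        sent.append(block)       # stand-in for TCP Write

q = queue.Queue(maxsize=64)      # bounded queue ~ FIFO between the two loops
sent = []
t1 = threading.Thread(target=acquire, args=(q, 10))
t2 = threading.Thread(target=transmit, args=(q, sent))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(sent))                 # 10 blocks arrive, none dropped or duplicated
```

The bounded queue also makes backpressure visible: if the transmit side stalls long enough to fill it, the producer blocks there rather than silently overrunning the DAQ buffer.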

(When more than 2 kB is transmitted with a single TCP Write, RT starts choking and then reports an AI buffer overrun error. As it is, I am transmitting each 1000-byte chunk of data twice to make sure nothing is dropped, but even so, once every 15 seconds RT on the PXI does some kind of black-magic thing and I lose 10 chunks of data, each chunk being about 2000 samples.)
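As an aside on the double-transmission workaround: an alternative worth considering (a sketch only, not LabVIEW code; the header layout here is invented for illustration) is to prefix each chunk with a sequence number, so the host can detect exactly which chunks went missing instead of paying double the bandwidth on every chunk:

```python
import struct

SEQ_HDR = struct.Struct(">II")   # (sequence number, payload length), big-endian

def frame(seq, payload):
    """Prefix a chunk with a sequence number so the host can detect gaps."""
    return SEQ_HDR.pack(seq, len(payload)) + payload

def parse(data):
    """Recover the sequence number and payload from one framed chunk."""
    seq, length = SEQ_HDR.unpack_from(data)
    return seq, data[SEQ_HDR.size:SEQ_HDR.size + length]

chunk = b"\x00" * 1000           # one 1000-byte data chunk, as in the post
framed = frame(7, chunk)
seq, payload = parse(framed)
print(seq, len(payload))         # 7 1000
```

On the host side, any jump in the sequence numbers tells you how many chunks were lost and where, which is also useful diagnostic data for the every-15-seconds dropout.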

We even tried disconnecting from the RT GUI (this is not a stand-alone application); it made almost no difference.

It took about 8 hours of playing with the pauses and the number of scans to read at a time to get it where it is now (reading 20 scans at a time, then pausing the scan loop 65 ms, with a 22 ms pause in the TCP Write loop). Right now my continuous scan is software triggered; should we play with the other trigger types?

Thanks for your help,
Brad Whaley
LabVIEW Certified Engineer
Message 1 of 5
If you're doing buffered acquisition, you do not want to run your VI at time-critical priority. For VIs that do DAQ at time-critical priority, refer to the example program that ships with LabVIEW called Real-Time PID Control. That VI creates a zero-length buffer and then uses AI Single Scan to read one point per channel at a time with hardware timing. The zero-length buffer causes the time-critical VI to sleep while waiting for new data, which gives normal-priority VIs time to run in the background. Those normal-priority VIs would handle communication back to the host computer.
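The shape of that single-point pattern can be sketched outside LabVIEW roughly like this (Python; `ai_single_scan` and `pid_update` are hypothetical stand-ins for AI Single Scan and the control calculation, and the timing is simulated). The key point is that the read call itself sleeps until the next scan is due, so the loop never busy-waits:

```python
import time

SCAN_PERIOD = 0.001   # 1 kHz control loop, for illustration only

def ai_single_scan(t0, n):
    """Stand-in for AI Single Scan with a zero-length buffer: sleep until
    scan n is due, then return the latest input value."""
    time.sleep(max(0.0, t0 + n * SCAN_PERIOD - time.monotonic()))
    return 0.5        # one point per channel would be a list in real code

def pid_update(x, setpoint=1.0, kp=2.0):
    """Proportional-only stand-in for the control calculation."""
    return kp * (setpoint - x)

t0 = time.monotonic()
outputs = [pid_update(ai_single_scan(t0, n)) for n in range(5)]
print(outputs)        # [1.0, 1.0, 1.0, 1.0, 1.0]
```

While the loop sleeps inside the read, lower-priority work (in Brad's case, the TCP communication) gets CPU time.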

If buffered DAQ is all you need, there is no need to run the VI as time-critical; a buffered acquisition is already deterministic.

Here are a few links you may find helpful for learning more about RT programming:
LabVIEW Real-Time Architecture and Good Programming Practices
Using the LabVIEW Real-Time Communication Wizard

Regards,

Kristi H
Applications Engineer
National Instruments
Message 2 of 5
Kristi,

Thanks for your comments. I don't see how setting the buffer to zero is going to capture 300 S/s of continuous waveform data across 46 strain gauge signals without dropping a single reading. It's critical that we have continuous data. Know what I mean?

Sincerely,

Brad
Brad Whaley
LabVIEW Certified Engineer
Message 3 of 5
Hi Brad,

In your case, it sounds like you simply need to do a deterministic, buffered acquisition. This type of DAQ application can even be accomplished in Windows; however, you get more system reliability when you run the VI in LabVIEW Real-Time. Because a buffered acquisition is deterministic already, there is no need to run it as time-critical; you only need to create a buffer large enough to hold the continuous data.
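To make "large enough" concrete for Brad's rates, the sizing arithmetic is straightforward (the 5-second headroom figure is just an assumption for illustration, chosen to cover how long the consumer loop might stall):

```python
CHANNELS = 46
RATE = 300                 # samples per second per channel
HEADROOM_SECONDS = 5       # how long the reader may stall without an overrun

samples_per_second = CHANNELS * RATE                     # total throughput
buffer_scans = RATE * HEADROOM_SECONDS                   # buffer size in scans
buffer_samples = samples_per_second * HEADROOM_SECONDS   # buffer size in samples
print(samples_per_second, buffer_scans, buffer_samples)  # 13800 1500 69000
```

At 2 bytes per sample that is well under 150 kB, so memory is not the constraint; the point of the headroom is to ride out pauses in the loop that reads the buffer.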

Many LabVIEW Real-Time applications involve real-time control. In those cases, a zero-length buffer is defined and AI Single Scan is used to read one point from each channel; the RT program can then deterministically calculate the next output value(s) from the latest input. Because we have not configured a buffered acquisition (which would clock in the input based on the sample clock), and because correct operation of the system relies on deterministically calculating the next output from the last input, we want to be sure the VI will always deliver the performance needed to keep the control system stable. That means running the VI at time-critical priority. AI Single Scan together with the zero-length buffer gives optimal performance on an RT system because it allows the time-critical thread to sleep. As I mentioned before, the time-critical thread must sleep at some point to allow normal-priority threads to run.

As another point of clarification, "real-time" only implies "on time"; it does not mean "really fast". Real-time response is the ability to reliably, without fail, respond to an event or perform an operation within a guaranteed time period.

Hope this helps.

Regards,

Kristi H
Applications Engineer
National Instruments
Message 4 of 5
I see your question and the current responses. You should also look at the TCP traffic: if you used the NI-generated code, it may have a deadlock condition. Try collecting the data to the local disk of the RT system first to verify your acquisition. If you can perform gap-free acquisition at the rates you describe (which should not be a problem), then look at the communication side. We recently completed an RT application in which we had to rework the communication section because of issues similar to the ones you describe. If you are still having trouble, I can try to code a simple example of the methods I am describing.
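A sketch of that verify-locally-first idea (Python; the file name and record layout are invented for the example): write each acquisition block to local disk with a running counter, then check offline that the counters are consecutive, which proves the acquisition itself was gap-free before you blame TCP:

```python
import os
import struct
import tempfile

# Each record: a 32-bit block counter plus 20 samples (layout is made up here).
REC = struct.Struct("<I20d")

path = os.path.join(tempfile.gettempdir(), "rt_verify.bin")
with open(path, "wb") as f:
    for n in range(100):                       # 100 acquisition blocks
        f.write(REC.pack(n, *([0.0] * 20)))    # counter + dummy sample data

# Offline check: counters must be consecutive, i.e. nothing was dropped.
with open(path, "rb") as f:
    counters = [REC.unpack(f.read(REC.size))[0] for _ in range(100)]
gap_free = counters == list(range(100))
print(gap_free)    # True
os.remove(path)
```

If the disk file is gap-free but the host's copy is not, the loss is in the communication path, which is where Stu suggests looking.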
Stu
Message 5 of 5