Hello, I have a 1 MΩ resistor connected directly across a channel of a PXIe-4322 AO card, and I am measuring the voltage drop across the resistor using a PXIe-4303 AI card. I noticed that the voltage signal I ask the AO card to generate is observed by the AI card only after about 70 ms. This is far too long for the purpose I am trying to use it for; I would like it to be a few milliseconds if possible.
I am also trying to determine whether this delay is caused by the AO card or the AI card.
The snippet you've provided is insufficient to understand how you're synchronizing the two instruments.
In short, you need to configure both instruments to use some kind of hardware-level synchronization in order to compare these delays.
Thanks for the prompt response. I tried using the onboard clock to synchronize, and also on-demand output, but there was no difference in the results. I don't understand the reason for 70 ms of latency.
Please see the attached.
FYI - you've implemented no synchronization at all; you're instructing each of the cards to use its "own" onboard clock to time the generation/capture of samples.
Please read the following articles -
You'll first need to handle task sync better as Santhosh has advised, but then there are also a couple other considerations.
1. What's your AI task's sample rate? The spec sheet for the 4303 shows that the anti-aliasing filter alone can account for an appreciable number of milliseconds of delay. For example (referencing the page 7 table), a 1 kHz sample rate results in nearly 60 ms of delay. This aspect of the delay is unavoidable when using the 4303 due to its delta-sigma style A/D converter and anti-aliasing filter.
2. After starting the tasks, you're waiting for 10 samples worth of AI data before entering your main loop. This has an impact on the delay you observe too.
3. Compensating for delay can either be fairly simple or fairly tricky, depending on your app's requirements. If you just need the data you store to file to "look" as aligned in time as what happened in the real world, that's fairly straightforward. You'd just shift the AI data according to your knowledge of the filter delay. (Assuming you've already gotten the tasks into hardware sync.)
4. If you need to *observe* an AI signal and then react with an AO signal with minimal latency, that gets trickier to manage. In that scenario, the anti-aliasing delay is unavoidable, there's also a contribution from AI buffer-related delay and there could be another buffer delay from AO unless you choose on-demand mode (which is why you *should* choose on-demand mode in such a scenario).
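To put rough numbers on points 1-3 above, here's a minimal sketch of the latency arithmetic. The filter-delay figure is an assumption for illustration (pull the real value from the 4303 spec table for your data rate); ~60 samples is roughly consistent with the ~60 ms seen at a 1 kHz rate:

```python
# Sketch of the latency arithmetic. FILTER_DELAY_SAMPLES is an assumed,
# illustrative number -- read the actual figure from the PXIe-4303 spec
# sheet (it varies with data rate).
FILTER_DELAY_SAMPLES = 60   # assumed; check the 4303 spec table
PREFILL_SAMPLES = 10        # samples waited for before entering the main loop

def observed_latency(sample_rate_hz, filter_delay_samples=FILTER_DELAY_SAMPLES,
                     prefill_samples=PREFILL_SAMPLES):
    """Rough startup latency: anti-aliasing filter delay plus buffer pre-fill."""
    return (filter_delay_samples + prefill_samples) / sample_rate_hz

def align_ai_data(ai_samples, filter_delay_samples=FILTER_DELAY_SAMPLES):
    """Shift AI data earlier in time by the known filter delay (point 3).
    Assumes the tasks are already hardware-synchronized."""
    return ai_samples[filter_delay_samples:]

# At 1 kS/s the filter alone dominates:
print(observed_latency(1000.0))    # -> 0.07 with these assumed numbers
# At a much higher sample rate, the same sample counts cost far less:
print(observed_latency(51200.0))
```

The point of the sketch is that both contributions scale as (samples / sample rate), which is why raising the sample rate is the first lever to pull.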
Thanks for the articles. I have no experience with synchronizing two tasks, so these were very helpful to get started with. I have been trying to use PXIe_Clk100 as a source clock to synchronize both tasks. However, I cannot find PXIe_Clk100. I do see /PXI1Slot2/PXIe_Clk100 and similar for the other modules, as seen in the attached picture. Do you think these are the same as the backplane synchronization clock? And if yes, why are there separate clocks for each of the installed modules?
You can type "PXIe_CLK100" into the Reference clock input, as shown in the snippet below.
PXIe_CLK100 is supplied by the chassis and is not present on each of the cards but routed to each card via the backplane.
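If you ever script the tasks instead of wiring them in LabVIEW (e.g. with the nidaqmx Python package), the same idea looks roughly like this. This is a hardware-configuration sketch only: the channel names, slot numbers, and rates are placeholders, not your actual setup:

```python
import nidaqmx

# Sketch only -- channel names, slots, and rates are placeholders.
# Both tasks phase-lock their timebases to the chassis's 100 MHz backplane
# clock by naming it as the reference clock; it does not appear as a
# physical terminal on either card.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("PXI1Slot3/ai0")
ai.timing.cfg_samp_clk_timing(rate=51200.0, samps_per_chan=1000)
ai.timing.ref_clk_src = "PXIe_Clk100"
ai.timing.ref_clk_rate = 100e6

ao = nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("PXI1Slot2/ao0")
ao.timing.cfg_samp_clk_timing(rate=1000.0, samps_per_chan=1000)
ao.timing.ref_clk_src = "PXIe_Clk100"
ao.timing.ref_clk_rate = 100e6

# Start AI first so it is armed before generation begins; for tighter
# alignment you would also share a start trigger between the two tasks.
```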
Sorry about the late response, and thank you for your detailed response.
The sampling rate was indeed 1 kS/s, so this delay now makes sense. The attached chart suggests the delay can be lowered significantly with a higher sampling rate. This should solve my problem to some degree.
You got it right. I am developing a very high-field superconducting magnet. I have to monitor several voltage signals, and when the magnet quenches, I want to send an AO signal to trigger a quench protection system. All of this needs to happen within a few tens of milliseconds. I will be sampling data at the highest possible rate, which should help minimize the delay. I had originally thought of using on-demand mode for AO, and now that's what I am going to use. My only concern is that even with on-demand AO, the reaction will still be affected by AI buffer-related delay, but I don't think I can do anything about that anyway.
I am also contemplating buying a cRIO or another PXIe system running a real-time OS for this task. I am not sure how much of a difference that would make, though.
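For what it's worth, the decision logic on the detection side can be dead simple; the hard part is the I/O latency around it. A hedged sketch of the kind of per-chunk check involved (the threshold and persistence count here are made up for illustration; the real values would come from your magnet's quench signature):

```python
# Minimal quench-detection sketch: declare a quench when the tap voltage
# stays above a threshold for N consecutive samples, which rejects
# single-sample noise spikes. Both constants are illustrative only.
QUENCH_THRESHOLD_V = 0.1
PERSISTENCE_SAMPLES = 5

def quench_detected(samples, threshold=QUENCH_THRESHOLD_V,
                    persistence=PERSISTENCE_SAMPLES):
    """Return True if `samples` contains `persistence` consecutive values
    above `threshold`."""
    run = 0
    for v in samples:
        run = run + 1 if v > threshold else 0
        if run >= persistence:
            return True
    return False

# In the real loop you'd read a chunk from the AI task each iteration and,
# on detection, immediately write the protection level to the on-demand
# AO task.
```

Note the check adds its own latency of up to `persistence` samples, which has to fit inside the tens-of-milliseconds budget along with the filter and buffer delays.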
Oh okay. I did not know that I could just type it in. I thought it should be selectable.
Those sync articles are very detailed and good, but let me also give you a little voice of experience.
Much of the detailed info in those articles is hyper-focused on eliminating very very very tiny timing errors, usually measured in nanoseconds. There is a *VAST* class of everyday data acq needs that simply doesn't legitimately need to concern itself with that level of detail. Many systems have behaviors of interest that are limited to relatively low frequencies such as a few kHz or less. In such cases, there'll be no practical, discernible effect from these minor timing errors measured in nanoseconds. The system simply doesn't have the bandwidth to *do* anything within that time interval.
So start with some of the simpler techniques for sync, and move on to the more complex methods only if necessary.
[Edit: started this reply after seeing your reply to Santhosh but before seeing your reply to me.]
So what's your real app all about? Do you mainly just need to have your data in sync in your output file? Or do you need to make real-time decisions about your data in order to generate immediate changes in output signals?
In broad terms, what are the characteristics of the input signals that would indicate it's time to trigger the quench protection system? What happens if you don't get the proper AO signal out to the quench protection system in time? What if it's a little late? How about a lot? What if it never happens due to a program failure (bug, error condition, crash, etc.)? I ask because some kinds of protection systems really need a hardware solution rather than software. cRIO can potentially qualify as hardware, once thoroughly debugged and proven out.