I'm just gonna piggyback a bit on the good ideas bstreiff has already been offering up.
1. Let's start with speeding up the reaction speed of your original software-timed approach. In addition to starting the AO task before the loop (to speed up the act of writing the output), you could also *possibly* gain some speed by changing how you handle the AI task config and read.
- Do you have a high clocked sample rate for AI? If so, your software loop might not be keeping up. Each time you read a single sample, it might represent a sample from farther and farther in the past. Even if the reaction speed to write the AO is fast, you'd be reacting to stale data, so the overall apparent reaction time would look slower. Instead, you could wire in the magic # -1 to read "all available samples" every iteration. You need to handle the possibility of getting 0 samples back (an empty array), but otherwise you just check whether any of the samples exceed the threshold.
- Do you have a slow clocked sample rate for AI? If so, the interval between clocked samples is adding to the latency between "AI threshold exceeded" and "AO value generated".
- Do you need your AI samples to be clocked at all? You might be able to speed up the loop by running the AI task in "on-demand" mode (accomplished by omitting any call to DAQmx Timing).
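To illustrate the read-all-available idea from the bullets above, here's a minimal text-language sketch in Python. The `read_available()` and `write_ao()` names in the comments are hypothetical stand-ins for a DAQmx Read wired with -1 and a DAQmx Write; only the threshold check is real, runnable code:

```python
def any_exceeds(samples, threshold):
    """True if any sample's magnitude exceeds the threshold.

    An empty batch (no new samples available yet) returns False,
    so the loop simply spins until fresh data arrives.
    """
    return any(abs(s) > threshold for s in samples)

# Hypothetical loop structure -- read_available() stands in for a
# DAQmx Read asked for -1 ("all available") samples, and write_ao()
# for a DAQmx Write on an AO task that was started before the loop:
#
#   while running:
#       samples = read_available()        # may come back empty
#       if any_exceeds(samples, threshold):
#           write_ao(reaction_value)
```

The point is that checking the whole batch every iteration keeps you reacting to the newest data instead of draining a backlog one stale sample at a time.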
2. It sounds like his analog pause triggering suggestion might also work well (if you get a module that supports it). Note that the hardware trigger will not act on the absolute value of voltage the way your software does. You'll need it to react to the signal crossing a specific signed voltage level in a specific direction.
If that approach isn't workable, then his recommendation about using the Analog Comparison Event sounds promising. The AO task could be configured to use that event as its sample clock, and you'd get hardware-level timing in the AO reaction to the AI threshold.
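One way to see the software-vs-hardware difference from point 2: the software check compares |v| against a threshold, while a hardware analog trigger fires only when the signal crosses one signed level in one direction. A small Python sketch of the two behaviors (illustrative only; the function names are made up):

```python
def software_would_fire(sample, threshold):
    """Software-style check: magnitude exceeds the threshold,
    regardless of sign."""
    return abs(sample) > threshold

def rising_crossings(samples, level):
    """Hardware-style rising-edge trigger: indices where the signal
    crosses `level` going upward. A -1.2 V excursion never fires a
    rising trigger set at +1.0 V, even though |v| > 1.0."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < level <= samples[i]]
```

So if your logic really needs to catch excursions of either polarity, a single-level hardware trigger won't reproduce it directly; you'd need to commit to one signed level and slope (or look into window triggering, if the hardware supports it).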
-Kevin P
ALERT! LabVIEW's subscription-only policy coming to an end (finally!). Permanent license pricing remains WIP. Tread carefully.