I have a C# program that simultaneously reads some analog and digital inputs. I configure the DI task to use the AI sample clock, as suggested in a response to an earlier question.
The code to read the data looks like:
    digitalResult = digitalReader.BeginReadWaveform(numberOfMeasurements, null, readDIsTask);
    analogData = analogReader.ReadMultiSample(numberOfMeasurements);
    readDIsTask.WaitUntilDone();
    digitalData = digitalReader.EndReadWaveform(digitalResult);
Now I have a more subtle bug. The program works fine, continuously reading the inputs and processing the data, sometimes for several days without a problem. But eventually it always hangs. If I pause the debugger in Visual Studio it is always stuck on the line:

    readDIsTask.WaitUntilDone();
I know the triggers that begin the read are always arriving. I could provide a timeout, but there doesn't seem to be any good reason why it shouldn't finish. Are there any reasons why the digital task might not finish sometimes?
No good reason comes to mind. Things to consider exploring:
1. Be sure to catch any error status for the tasks leading up to the WaitUntilDone call. For example, if the AI task develops an error, its sample clock could stop prematurely (or never start). The DI task itself wouldn't have an error (an error would cause the WaitUntilDone to return immediately). Thus, the DI task would get stuck waiting for AI sample clock pulses that will never arrive.
2. In case of some quirk/bug/whatever in the driver, maybe try waiting on the AI task instead of DI? Not only to help with the possible reason #1 above, but also just generally because it's the task that runs in the most normal and conventional way, deriving its own timing from internal timebases. The main working theory is simply that the less conventional task configurations end up getting less usage and exposure, so intermittent corner-case bugs are more likely to lurk in such places.
3. Instead of the WaitUntilDone, you could use an indirect method. Dunno C# syntax, but you could make a loop that queries the # samples read and includes some kind of short sleep() function so as not to burn CPU unnecessarily. With your own loop, you can have multiple possible terminating conditions (both tasks reach target # samples, either task produces an error, too much time has passed with no new samples, etc.).
4. I don't know the API at all, but it strikes me as odd that the syntax of the first two lines would look so different. I'd expect to make very similar-looking calls to handle this part of the AI and DI tasks. Perhaps this is worth more exploration?
(The syntax makes it kinda look like the DI task is expecting a continuous stream of samples, hence you just "begin" the read process. The AI task syntax looks more like I'd expect for a finite sampling task.)
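A rough C# sketch of the polling idea in point 3 (I'm assuming the NI-DAQmx .NET API here; the property names are from memory and worth verifying against the docs, and the task/timeout values are placeholders):

```csharp
// Sketch of point 3: poll for completion instead of blocking in WaitUntilDone.
using System;
using System.Threading;
using NationalInstruments.DAQmx;

bool WaitForSamples(Task aiTask, Task diTask, long target, TimeSpan timeout)
{
    DateTime start = DateTime.UtcNow;
    while (DateTime.UtcNow - start < timeout)
    {
        // Querying the stream should surface an error if either task has faulted.
        long aiDone = aiTask.Stream.TotalSamplesAcquiredPerChannel;
        long diDone = diTask.Stream.TotalSamplesAcquiredPerChannel;
        if (aiDone >= target && diDone >= target)
            return true;            // both tasks reached the target sample count
        Thread.Sleep(10);           // short sleep so we don't burn CPU unnecessarily
    }
    return false;                   // timed out: log state and recover instead of hanging
}
```

The point is that your own loop can bail out on any of several conditions (target reached, task error, no progress for too long) rather than trusting a single blocking call.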
Thanks very much for the reply.
1) I have checked the AI task and it has the correct number of samples and has returned them when the program is hanging indefinitely on WaitUntilDone.
2) I tried waiting on the AI task originally for the reasons you state, but found that the DI task was almost never finished when I came to try to access the data, which is why I switched.
3) Yeah, this is the only thing I have come up with at the moment. Putting a short timeout on the WaitUntilDone throws an exception when it hangs; I just ignore that read and try to read again, which seems to work OK but feels a bit silly.
4) They are both finite samples tasks, but I want them to read off the samples at the same time (rather than one after the other). To do this I configure the DI task to use analog sample clock. This means I need an asynchronous call for the DI task (hence it looks a bit different) as it won't actually start reading until I run the AI task (which can be synchronous because it doesn't wait for anything).
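For reference, my configuration looks roughly like this (simplified sketch; device and channel names are placeholders, and the exact overloads are worth checking against the NI-DAQmx .NET reference):

```csharp
// Sketch: DI task driven by the AI task's sample clock (placeholder names).
Task aiTask = new Task();
aiTask.AIChannels.CreateVoltageChannel("Dev1/ai0", "",
    AITerminalConfiguration.Differential, -10.0, 10.0, AIVoltageUnits.Volts);
// AI derives its own timing from the internal timebase.
aiTask.Timing.ConfigureSampleClock("", 1000.0,
    SampleClockActiveEdge.Rising, SampleQuantityMode.FiniteSamples, numberOfMeasurements);

Task diTask = new Task();
diTask.DIChannels.CreateChannel("Dev1/port0", "",
    ChannelLineGrouping.OneChannelForAllLines);
// DI samples on the AI task's sample clock instead of an internal timebase,
// so it produces nothing until the AI task is running.
diTask.Timing.ConfigureSampleClock("/Dev1/ai/SampleClock", 1000.0,
    SampleClockActiveEdge.Rising, SampleQuantityMode.FiniteSamples, numberOfMeasurements);
```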
When configuring DI to use the AI sample clock, it's going to be *crucial* that the DI task is started before the AI task. I'm wondering if your async call for the DI reader function might set up a race condition.
5. Do you explicitly start your tasks somewhere? You should, and make sure you start DI *before* AI. If you don't, then it must be the case that there's some kind of auto-start behavior built into those Read functions. (I know that the LabVIEW API to a DAQmx Read *does* support such an auto-start functionality.)
A combination of auto-start *and* the async nature of the first DI Reader call might indeed set up the race condition that concerns me.
6. Why not do a synchronous call to read all DI data right after the similar synchronous call to read all AI data? The timing sync and correlation is not affected at all -- sharing the sample clock and starting DI first guarantees that the samples are sync'ed in time. It's irrelevant when you happen to retrieve the data from the task buffer with your Read call.
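Putting points 5 and 6 together, the shape I'd expect (hedged sketch, assuming tasks named aiTask and diTask as above) is:

```csharp
// Point 5: start DI first so it is armed and waiting for clock pulses,
// then start AI, which actually produces the shared sample clock.
// Relying on a Read call's auto-start risks reversing this order.
diTask.Start();
aiTask.Start();

// Point 6: two ordinary synchronous reads. Each blocks until its task's
// finite sample count has accumulated in the buffer; the time correlation
// between AI and DI samples comes from the shared clock, not the read order.
double[,] analogData = analogReader.ReadMultiSample(numberOfMeasurements);
DigitalWaveform[] digitalData = digitalReader.ReadWaveform(numberOfMeasurements);
```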
Now getting back to the points originally numbered 1-4:
1 & 2. If DI was almost never finished at the same time as AI, that's more reason to be suspicious that the AI task usually started first. You definitely need to control the sequence of the task starts.
4. There's *no need* to read the samples at the same time. Sync is based on the hardware sample clock. The DAQmx driver moves sync'ed samples from the boards to the respective task buffers. Whenever you happen to read them from the buffer, they are still sync'ed.
Thanks very much for your input.
5 & 6) I don't explicitly start tasks anywhere. I think you could be right that this race condition is the issue - thank you! However, I think I'm confused about how the functionality works: if I explicitly start the tasks and use synchronous calls for AI and DI data, then the data is read in OK the first time round, but the second time round the samples are never read (and the task is stopped after the read call). I tried setting the Retriggerable property to true on both tasks, but then it never seems to be done and I can never read. Do I need to explicitly stop the task to read the data? And then clear it and start again for the next iteration?
The number of samples is the same for all channels (DIs and AIs) and the same for every iteration.
Thanks again, Luke
Working right the first time around is progress, now let's address the second time around and beyond.
I reviewed the thread you linked in msg #1. So it's clear you're looking to perform triggered finite acquisition. In this thread it's also clear you're repeating this acquisition over and over. But I'm not sure if re-triggered acquisition is appropriate or not. Let's first go through the simpler case of one-time triggering with a software loop to repeat it.
1. Fully configure both tasks without starting them. Because you're sharing a sample clock, only AI needs to be configured for triggering.
2. Explicitly start the tasks -- DI first, then AI.
3. You don't really need the "Wait Until Done". You can just do a synchronous Read call where you request all the finite samples the task was configured to acquire. DAQmx will wait to accumulate all the samples you requested, which in this case is equivalent to waiting until acquisition is done.
4. After reading the data, stop the tasks but don't discard them.
5. In your software, loop back to step #2 when ready to start another round of single-triggered acquisition.
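A sketch of those steps in C# (NI-DAQmx .NET API; names, channel strings, and the Process call are illustrative, not your actual code):

```csharp
// Steps 2-5 as a software loop. Tasks are fully configured once (step 1),
// then restarted each iteration rather than recreated.
var analogReader = new AnalogMultiChannelReader(aiTask.Stream);
var digitalReader = new DigitalMultiChannelReader(diTask.Stream);

while (keepRunning)
{
    diTask.Start();              // step 2: DI first...
    aiTask.Start();              // ...then AI, which drives the shared clock

    // Step 3: synchronous reads stand in for WaitUntilDone; each blocks
    // until all the finite samples for that task have been acquired.
    double[,] analogData = analogReader.ReadMultiSample(numberOfMeasurements);
    DigitalWaveform[] digitalData = digitalReader.ReadWaveform(numberOfMeasurements);

    aiTask.Stop();               // step 4: stop the tasks but keep them
    diTask.Stop();               //         configured for the next round

    Process(analogData, digitalData);  // hypothetical processing step (step 5)
}
```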
If you were to do retriggering, the task would ignore triggers that occur in the midst of an acquisition, but would want to respond to the first one that occurs after completing an acquisition. I'm not sure if there'd be a problem with the data buffer if you haven't yet had a chance to read the data out of it before the next trigger signal arrived.
I did a quick test on a PCIe-6341 where it *seemed* as though DAQmx actually accommodates this, but I'm not sure where the limits are. I made a very small finite retriggered AI task that ran for only tens of msec. I then manually triggered it as many as 10 times. I then performed 10 Reads of the entire task buffer and got 10 distinct full sets of data.
This behavior surprised me, I was not aware of it. There's gotta be a limit somewhere -- DAQmx can't know how many buffers to pre-allocate and it won't always have time to allocate on the fly. But at least there appears to be *some* ability to keep reacting to new triggers even if the previous data buffer hasn't been read yet. I'd treat this feature with caution though until/unless more is understood about its limitations.
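If you do experiment with retriggering, it should come down to a trigger configuration plus one property on the AI task only, since DI just follows the AI sample clock (method name and terminal are from memory, so verify against the .NET reference):

```csharp
// Sketch: configure a retriggerable digital-edge start trigger on AI.
// "/Dev1/PFI0" is a placeholder trigger terminal.
aiTask.Triggers.StartTrigger.ConfigureDigitalEdgeTrigger(
    "/Dev1/PFI0", DigitalEdgeStartTriggerEdge.Rising);
aiTask.Triggers.StartTrigger.Retriggerable = true;
```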
If you run into more trouble, please post the code where you configure the tasks. I'll try to decipher as much as I can.