LabVIEW


DAQmx task synchronization in parallel loops

I'm looking for suggestions on how to handle the following:

 

I have a process that does DAQmx reads and a parallel process that writes output waveforms using DAQmx. These are separate block diagrams (i.e. not two while loops on the same block diagram). I need my read and write tasks to be synchronized. The tricky part is that, for the sake of synchronization, my write process needs to know about my read tasks so that it can set up its sample clock, start trigger, etc. The only way I can think to do this is some sort of synchronous inter-process handshaking via inter-process messaging: passing the necessary task references around, setting up the tasks, then sending messages to trigger a task to start. Is this the only/best way to handle this? Has anyone done something like this before and have a relatively proven way of handling it?
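
To make that concrete, here is roughly the handshake I have in mind, sketched in Python with the nidaqmx package since a LabVIEW diagram won't paste into a post. The device name "Dev1", the rate, and the message contents are just placeholders.

```python
# Rough sketch only: two queues stand in for the inter-process messages, and
# Python/nidaqmx stands in for the LabVIEW code. "Dev1", the rate, and the
# message fields are placeholders.
import queue
import threading

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE, N = 10_000, 1_000
to_writer = queue.Queue()   # read process -> write process
to_reader = queue.Queue()   # write process -> read process

def read_process():
    with nidaqmx.Task() as ai:
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
        # Message 1: tell the writer which internal terminals to slave to.
        to_writer.put({"rate": RATE,
                       "clock": "/Dev1/ai/SampleClock",
                       "trigger": "/Dev1/ai/StartTrigger"})
        to_reader.get()              # Message 2: wait until the writer is armed
        ai.start()                   # starting AI releases both tasks
        for _ in range(10):          # placeholder acquisition loop
            data = ai.read(number_of_samples_per_channel=N)
            # ... hand data off for processing ...
        to_writer.put("stop")        # Message 3: let the writer clean up

def write_process(waveform):
    with nidaqmx.Task() as ao:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        msg = to_writer.get()        # wait for the reader's timing info
        ao.timing.cfg_samp_clk_timing(msg["rate"], source=msg["clock"],
                                      sample_mode=AcquisitionType.CONTINUOUS)
        ao.triggers.start_trigger.cfg_dig_edge_start_trig(msg["trigger"])
        ao.write(waveform, auto_start=False)
        ao.start()                   # armed; waits on the AI start trigger
        to_reader.put("armed")       # handshake back: safe to start AI now
        to_writer.get()              # hold the task open until "stop" arrives

# In LabVIEW these are two separate top-level VIs; here, two threads.
writer_thread = threading.Thread(target=write_process, args=([0.0] * N,))
writer_thread.start()
read_process()
writer_thread.join()
```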

 

Thanks.

Message 1 of 6

Another thought: configure, set up, and start the tasks in a single thread, then send those tasks off to the parallel read/write loops.
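
Something like this, roughly. I'm sketching it in Python with the nidaqmx API since I can't attach a diagram here; the device name, rate, and buffer size are placeholders.

```python
# Sketch of "configure and start in one place, keep the loops dumb".
# Python/nidaqmx stands in for the LabVIEW code; "Dev1", rates, and sizes
# are placeholders.
import threading
import time

import nidaqmx
from nidaqmx.constants import AcquisitionType

def configure_tasks(rate=10_000, n=1_000):
    ai = nidaqmx.Task("reader")
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)

    ao = nidaqmx.Task("writer")
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(rate, source="/Dev1/ai/SampleClock",
                                  sample_mode=AcquisitionType.CONTINUOUS)
    ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")
    ao.write([0.0] * n, auto_start=False)

    ao.start()   # armed, waiting for the AI start trigger
    ai.start()   # fires the trigger; both run off ai/SampleClock
    return ai, ao

def read_loop(ai, n=1_000):
    for _ in range(10):                # placeholder: just pull data
        data = ai.read(number_of_samples_per_channel=n)

def write_loop(ao):
    for _ in range(10):                # placeholder: compute/queue new output here
        time.sleep(0.1)

ai_task, ao_task = configure_tasks()
threading.Thread(target=write_loop, args=(ao_task,), daemon=True).start()
read_loop(ai_task)
```

The read/write loops never touch timing or triggering; they just use whatever tasks they are handed.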

Message 2 of 6

Have you tried using the internal signals to synchronize? On M Series devices I'm fairly certain you can make the AO use the AI sample clock and also make the AI start trigger use the AO start signal. That would let you use a simple rendezvous to force the AO task to wait until the AI is ready; then one trigger fires them both and they share the sample clock. If you can't use the same sample clock, I would use a notifier and have the AO wait on a notification from the AI, with the notifier data being the sample clock information; turn off "ignore previous" to allow for the case where the AI is ready before the AO. I can send code for using the internal signals if you need it.
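
For reference, the routing boils down to two settings plus the start order. Here is a rough sketch using the Python nidaqmx API in place of the DAQmx Timing and Trigger VIs, wired in the AO-slaved-to-AI direction (the reverse direction works the same way); "Dev1" is a placeholder device name.

```python
# Rough sketch of the internal-signal routing, using the Python nidaqmx API in
# place of the DAQmx Timing and Trigger VIs. "Dev1" is a placeholder M Series
# device; here AO is slaved to the AI task (the reverse wiring works the same).
import nidaqmx
from nidaqmx.constants import AcquisitionType

def slave_ao_to_ai(ai: nidaqmx.Task, ao: nidaqmx.Task, rate: float, waveform) -> None:
    ai.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)
    # AO runs off the AI task's exported sample clock and start trigger.
    ao.timing.cfg_samp_clk_timing(rate, source="/Dev1/ai/SampleClock",
                                  sample_mode=AcquisitionType.CONTINUOUS)
    ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")
    ao.write(waveform, auto_start=False)
    # Start order matters: arm the slaved AO task first, then start AI, which
    # generates both the trigger and the clock that release AO.
    ao.start()
    ai.start()
```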

Charles Chickering
Architecture is art with rules.

...and the rules are more like guidelines
Message 3 of 6

@Charles_CLA wrote:

Have you tried using the internal signals to synchronize? On M Series devices I'm fairly certain you can make the AO use the AI sample clock and also make the AI start trigger use the AO start signal. That would let you use a simple rendezvous to force the AO task to wait until the AI is ready; then one trigger fires them both and they share the sample clock. If you can't use the same sample clock, I would use a notifier and have the AO wait on a notification from the AI, with the notifier data being the sample clock information; turn off "ignore previous" to allow for the case where the AI is ready before the AO. I can send code for using the internal signals if you need it.


Yes, this is basically what I'm doing (internal triggers/sample clocks). The question had more to do with starting the tasks; they need to happen in a specific order. Sounds like you've handled that with a rendezvous.

Message 4 of 6

I'm sure you have a reason for running these tasks in separate threads, but given how tightly coupled they are timing-wise, I wonder if it would make sense to put them together in a single thread?


@for(imstuck) wrote:
Another thought: configure, set up, and start the tasks in a single thread, then send those tasks off to the parallel read/write loops.

 

As somebody who has recently inherited code with over a dozen asynchronous event handlers all talking among themselves, I vote for this approach if you do have to execute the tasks in separate threads (stop the tasks in the main thread as well, after using Wait on Asynchronous Call to ensure your DAQ threads have completed). From what I've seen, having too much coupling between threads can easily get out of hand in a larger application if you're not careful.
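
In terms of the Python-flavored pseudocode from the quoted suggestion (configure_tasks, read_loop, and write_loop are the hypothetical routines sketched there), the lifecycle I'm describing looks like this, with join() standing in for Wait on Asynchronous Call:

```python
# Lifecycle sketch: the main VI/thread owns the tasks, the loops only use
# them, and teardown happens after both loops have finished. join() stands in
# for Wait on Asynchronous Call; configure_tasks, read_loop, and write_loop
# are the hypothetical routines from the sketch in the quoted post.
import threading

ai_task, ao_task = configure_tasks()

reader = threading.Thread(target=read_loop, args=(ai_task,))
writer = threading.Thread(target=write_loop, args=(ao_task,))
reader.start()
writer.start()

reader.join()   # ~ Wait on Asynchronous Call (reader VI)
writer.join()   # ~ Wait on Asynchronous Call (writer VI)

# Only now, back in the calling thread, stop and clear the tasks in one place.
for task in (ai_task, ao_task):
    task.stop()
    task.close()
```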

 

 

Best Regards,

John Passiak
Message 5 of 6

John, you may be right; we are still tossing around ideas. There are two ways of thinking here. The first: if you separate task configuration from running the tasks, you end up with something fairly reusable and autonomous. In general, we could pull these read/write processes into other applications but do the configuration in an application-specific process. Once the tasks are configured, just send them off to a reader/writer and keep those "dumb". It also makes things more maintainable if we need to update how tasks are created/configured, because we can do it all in one place.

 

Then there is the other way of thinking, which is that the reader and writer should themselves define how they are to be configured, to keep things encapsulated: the writer provides methods for configuring itself, and the reader does the same.
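
In rough pseudocode (again Python/nidaqmx standing in for the LabVIEW code, with invented class and method names), that second approach looks something like this, where the application only sequences the calls and each side keeps its DAQmx details to itself:

```python
# Sketch of the "self-configuring reader/writer" way of thinking.
# Python/nidaqmx as pseudocode; class and method names are invented.
import nidaqmx
from nidaqmx.constants import AcquisitionType

class Reader:
    def configure(self, rate):
        self.task = nidaqmx.Task()
        self.task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        self.task.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)
        # Expose only the terminals the writer needs, not the task itself.
        return "/Dev1/ai/SampleClock", "/Dev1/ai/StartTrigger"

    def start(self):
        self.task.start()

class Writer:
    def configure(self, rate, clock_terminal, trigger_terminal, waveform):
        self.task = nidaqmx.Task()
        self.task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        self.task.timing.cfg_samp_clk_timing(rate, source=clock_terminal,
                                             sample_mode=AcquisitionType.CONTINUOUS)
        self.task.triggers.start_trigger.cfg_dig_edge_start_trig(trigger_terminal)
        self.task.write(waveform, auto_start=False)

    def arm(self):
        self.task.start()      # waits on the exported AI start trigger

# The application only sequences the calls; the DAQmx details stay inside.
reader, writer = Reader(), Writer()
clock, trigger = reader.configure(rate=10_000)
writer.configure(10_000, clock, trigger, [0.0] * 1_000)
writer.arm()
reader.start()
```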

 

So, this is basically where we stand. I doubt either approach is inherently wrong, which is why I posted: to learn from what others who came before us have learned.

Message 6 of 6