Measurement Studio for .NET Languages

ChannelsToRead and Physical/Virtual Channel name

Was there ever any resolution to this issue? I tried this in 8.0.1 and the issue still exists.
0 Kudos
Message 11 of 15
(1,704 Views)
Hi DRT-
 
We are looking into the root cause of the slowdown, but I was able to see good results by ensuring that the Application.DoEvents() call was made after the myTask.Dispose() call.  I am using NI-DAQmx 8.1; please give this a try and let me know if you're not able to see better results.
 
Thanks-
Tom W
National Instruments
0 Kudos
Message 12 of 15
(1,667 Views)
This behavior is caused by the way asynchronous I/O was designed in the .NET DAQmx API. First, a little background.

Asynchronous I/O is great in cases where the read operation might take a while (reading lots of data at low sampling rates): your UI remains responsive while the data acquisition is running.

When you make a call to BeginRead, the .NET API queues a thread in the threadpool for the read to occur. Once the read completes, the threadpool thread calls the read callback function (let's call this function foo) that you registered. In foo, we call EndRead to retrieve the acquired data, and then call BeginRead to kick off another thread to do the next read. In this way, for continuous operations, we are constantly scheduling threads to keep the continuous DAQ operation running. If we didn't call BeginRead in foo, the threadpool thread would end after firing once and we would not keep getting read callbacks.

Whether foo is called in the threadpool thread or the UI thread depends on the synchronizing object. In all the C# shipping examples, you will see the following line of code:

reader.Synchronizing = this; //"this" refers to the form class

This call configures the reader to ensure that foo is fired in the same thread as the form. This becomes important in cases where foo might be manipulating the controls on the form (Win32 programming disallows accessing UI controls from within threads that did not create the UI controls in the first place). The way this works is that the threadpool thread posts a message to the form and waits for the message to be handled by the form. When the form's message pump sees this message, it calls foo in its thread and everyone is happy.
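Putting the pieces above together, a minimal sketch of the continuous async read loop might look like the following. This assumes the NI-DAQmx .NET API; the channel string, rates, and identifiers like myTask and foo are illustrative, not taken from a shipping example.

```csharp
// Illustrative sketch of a continuous asynchronous acquisition on a form.
private Task myTask;
private AnalogMultiChannelReader reader;

private void startButton_Click(object sender, EventArgs e)
{
    myTask = new Task();
    myTask.AIChannels.CreateVoltageChannel("Dev1/ai0", "",
        AITerminalConfiguration.Differential, -10.0, 10.0, AIVoltageUnits.Volts);
    myTask.Timing.ConfigureSampleClock("", 1000.0,
        SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 1000);

    reader = new AnalogMultiChannelReader(myTask.Stream);
    reader.Synchronizing = this; // fire callbacks in the form's (UI) thread

    // Queue the first read on a threadpool thread.
    reader.BeginReadMultiSample(100, foo, null);
}

// The read callback ("foo" in the discussion above).
private void foo(IAsyncResult ar)
{
    double[,] data = reader.EndReadMultiSample(ar); // retrieve acquired data
    // ...update UI with data...
    reader.BeginReadMultiSample(100, foo, null);    // reschedule the next read
}
```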

It is possible that you might have several messages posted to the message queue to call foo, especially if the acquisition rate is high.

So this is what happens when you start a task using Async I/O.

What happens when you call Dispose() on the task? I'm glad you asked. See next post (I'm hitting the 5K character limit)

Bilal Durrani
NI
0 Kudos
Message 13 of 15
(1,628 Views)
Dispose frees any DAQ resources that might be used by the task. Calling Dispose on the task without calling Stop first is basically like doing an abort on the task. This does a cleanup and leaves the task a hollow shell of its former self, capable of nothing but throwing "I have already been disposed!" exceptions.

But what about all those messages that might still be queued up? Will they still fire even when I have disposed the task?

The answer is yes. The .NET API does not clean up the message queue of "stale" messages that might come in after we have already disposed of the task. The way we handle this in the examples is by using the taskRunning boolean (see the example code). When we dispose the task, we also set taskRunning to false, so when the foo callback fires again, foo won't do anything. This ensures you won't use a reader object associated with a task that has already been disposed.
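The guard described above can be sketched like this (identifiers are illustrative; myTask, reader, and foo follow the sketch of the shipping-example pattern rather than any specific example file):

```csharp
// Illustrative sketch of the taskRunning guard against stale callbacks.
private bool taskRunning;

private void stopButton_Click(object sender, EventArgs e)
{
    taskRunning = false; // clear the flag before tearing down the task
    myTask.Dispose();    // aborts the task and frees DAQ resources
}

private void foo(IAsyncResult ar)
{
    if (!taskRunning)
        return; // stale message for a disposed task: do nothing

    double[,] data = reader.EndReadMultiSample(ar);
    // ...use data...
    reader.BeginReadMultiSample(100, foo, null);
}
```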

Now you have the background on how the async I/O works and why the .NET DAQmx examples are set up the way they are.

The case that you are seeing is related to the threadpool worker thread and the way we were handling its exit condition using the taskRunning boolean. So let's walk through what exactly your application was doing.

You create a task, set the synchronizing object to the form (so the read callbacks will fire in the same thread as the form, i.e. the UI thread) and start the task. The reader schedules a read in a threadpool thread. This thread does a read and then sends a message to the form to fire the read callback in the UI thread. taskRunning is set to true. Callbacks are firing and all is well.

So now we have 2 threads. A UI thread and a thread pool thread that is being scheduled continuously.

Then you press the stop button.

We dispose the task and free any DAQ resources. taskRunning is set to false. A new task is then created. A new reader is created and associated with the new task. taskRunning is set to true. BeginRead schedules a thread from the threadpool to do a read. While all of this is happening in the UI thread, there is a threadpool thread that has probably completed a read and has sent a message to the form to fire the read callback. This callback will occur once we exit the stop button handler and the form's message pump handles the message.

Then we exit the stop button handler and the read callback fires. The read callback is unaware of all the changes that have just occurred, because by the time the callback occurs, the task and reader objects are new (and valid). taskRunning is still set to true, so it happily goes on calling EndRead and BeginRead, and hence keeps rescheduling itself.

So what has happened now is that there are two worker threads running. One thread belongs to the second task that we created. The other belongs to the first task, and it doesn't know to quit. You start seeing the slowdown because you have two worker threads doing reads when you should only have one.

A strategically placed call to Application.DoEvents seems to fix this because it gives the read callback a chance to fire (we just processed the messages queued on the form), see that the taskRunning boolean is false, and hence not call BeginRead to reschedule itself. It dies silently, never to be scheduled again.

We typically do not recommend using DoEvents as a workaround, since it does not scale well and can mask the underlying problem. It might work now, but it could start failing once you start doing more complex things.

What we need is a way for the first callback to know which task it is associated with. One way to approach this is to replace the taskRunning boolean check with a comparison of task objects. I have attached an example that demonstrates this fix.


You can diff this file with the example you submitted. I changed the taskRunning bool to a task reference and use that instead to make decisions in the read callback.
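The object-comparison fix can be sketched as follows. This is a hypothetical reconstruction of the idea, not the attached file: the state argument of BeginReadMultiSample carries the task the read was started against, and the callback compares it to the task that is currently running.

```csharp
// Illustrative sketch: identify stale callbacks by task identity, not a bool.
private Task runningTask;

private void startButton_Click(object sender, EventArgs e)
{
    myTask = new Task();
    // ...configure channels and timing as before...
    runningTask = myTask;
    reader = new AnalogMultiChannelReader(myTask.Stream);
    reader.Synchronizing = this;
    // Pass the task as the async state so the callback knows its owner.
    reader.BeginReadMultiSample(100, foo, myTask);
}

private void stopButton_Click(object sender, EventArgs e)
{
    runningTask = null;
    myTask.Dispose();
}

private void foo(IAsyncResult ar)
{
    // ar.AsyncState is the task this read was started against.
    if (runningTask == null || !runningTask.Equals((Task)ar.AsyncState))
        return; // callback belongs to an old, disposed task: exit quietly

    double[,] data = reader.EndReadMultiSample(ar);
    // ...use data...
    reader.BeginReadMultiSample(100, foo, myTask);
}
```

With this check, a stale callback from the first task's worker thread fails the identity comparison and never reschedules itself, so only the new task's read loop survives.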

I know this is a lot of info. Please let me know if something didn't make sense.

Message Edited by bilalD on 07-18-2006 06:48 PM

Bilal Durrani
NI
0 Kudos
Message 14 of 15
(1,627 Views)
Bilal,
Thanks for the very lucid explanation of what is going on. I now understand how the async calls work, and will incorporate your changes into my code. Many thanks again.
 
0 Kudos
Message 15 of 15
(1,618 Views)