11-10-2008 07:20 PM
I'm porting a fairly elaborate custom data acquisition application from Traditional DAQ to DAQmx and running into a few issues. My latest question has to do with the performance of AnalogInput task(s) in DAQmx compared to Traditional DAQ, and its impact on the GUI.
In my Traditional DAQ application we configured the hardware and DAQ parameters and created a thread-safe queue (TSQ) to hold data. The DAQHalfBufferReady polling function was executed from an asynchronous timer, dropping data into the TSQ every half buffer. The application would poll the queue looking for data and send copies to both an asynchronous disk-writing thread and a GUI update routine running in the main GUI thread.
The DAQmx version of this application handles the DAQ buffering internally. I've registered an EveryNSamples callback routine which copies data to the TSQ as mentioned above. Everything else is basically the same. However, when DAQ is "turned on" the GUI becomes unresponsive, suggesting the DAQ operations are not running asynchronously.
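For reference, the relevant part of my DAQmx code currently looks roughly like this (a simplified, untested sketch; the names and sizes are placeholders for my real ones):

#include <NIDAQmx.h>
#include <utility.h>                       /* CmtNewTSQ, CmtWriteTSQData, ... */

#define NUM_CHANNELS        8
#define SAMPS_PER_CALLBACK  50

static int gDataQueue = 0;                 /* TSQ handle created at startup with CmtNewTSQ */

/* Fires in a DAQmx thread; copies the new block into the thread-safe queue. */
int32 CVICALLBACK EveryNCallback (TaskHandle task, int32 eventType,
                                  uInt32 nSamples, void *callbackData)
{
    double buf[NUM_CHANNELS * SAMPS_PER_CALLBACK];
    int32  read = 0;

    DAQmxReadAnalogF64 (task, (int32)nSamples, 10.0, DAQmx_Val_GroupByScanNumber,
                        buf, NUM_CHANNELS * SAMPS_PER_CALLBACK, &read, NULL);
    if (read > 0)
        CmtWriteTSQData (gDataQueue, buf, NUM_CHANNELS * read,
                         TSQ_INFINITE_TIMEOUT, NULL);
    return 0;
}

The GUI side then pulls data off this queue and updates the display, same as before.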
Can anybody confirm or deny this and/or suggest a method to force the EveryNSamples callback to run asynchronously?
Thanks,
Trevor
11-11-2008 01:34 AM
Hi Trevor,
Maybe the EveryNSamples callback is running in the main thread and that is making the GUI unresponsive.
You can create a thread that mostly sleeps and checks for user events in between.
Then you can just call PostDeferredCallToThread from the EveryNSamples callback so that the callback returns immediately.
This way you can make sure the TSQ operation is done in another thread.
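Something along these lines, as a rough, untested sketch (the worker thread has to process events for the deferred calls to be delivered to it; all names here are made up):

#include <stdlib.h>
#include <utility.h>

static volatile int  gQuitWorker     = 0;
static unsigned int  gWorkerThreadId = 0;
static int           gWorkerFnId     = 0;

/* Worker thread: mostly sleeps, but processes events so that deferred
   calls posted to it get delivered. */
static int CVICALLBACK WorkerThread (void *unused)
{
    gWorkerThreadId = CmtGetCurrentThreadID ();
    while (!gQuitWorker) {
        ProcessSystemEvents ();   /* delivers deferred calls posted to this thread */
        Delay (0.01);             /* sleep most of the time */
    }
    return 0;
}

/* Runs in the worker thread: do the TSQ write (or other slow work) here. */
static void CVICALLBACK HandleNewData (void *callbackData)
{
    double *block = callbackData;
    /* ... CmtWriteTSQData (...) with block, then ... */
    free (block);
}

/* Call once at startup to launch the worker. */
static void StartWorker (void)
{
    CmtScheduleThreadPoolFunction (DEFAULT_THREAD_POOL_HANDLE, WorkerThread,
                                   NULL, &gWorkerFnId);
}

Then, in the EveryNSamples callback, copy the block into a malloc'ed buffer and call PostDeferredCallToThread (HandleNewData, blockCopy, gWorkerThreadId); so the DAQ callback returns right away.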
Hope this helps.
11-11-2008 09:56 AM
Hi Trevor,
When you call the function DAQmxRegisterEveryNSamplesEvent, there is an Options parameter which lets you configure whether you want the event to be fired in the main thread, or in a dedicated DAQmx thread. If you pass 0 to this parameter, it should already be sending the event asynchronously (in a dedicated thread).
You should be able to verify the thread in which you're receiving the event by calling CmtGetCurrentThreadID in that event's callback.
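For example (just a sketch; your task handle, callback name, and sample count will be whatever you're already using):

#include <stdio.h>
#include <NIDAQmx.h>
#include <utility.h>

int32 CVICALLBACK EveryNCallback (TaskHandle task, int32 eventType,
                                  uInt32 nSamples, void *callbackData)
{
    /* Compare this ID against the one printed from your main thread. */
    printf ("EveryNSamples callback running in thread %u\n",
            CmtGetCurrentThreadID ());
    /* ... read and queue the data as usual ... */
    return 0;
}

int SetupEveryNEvent (TaskHandle aiTask, unsigned int samplesPerCallback)
{
    /* 0 for the options parameter => DAQmx fires the event in one of its own
       threads; DAQmx_Val_SynchronousEventCallbacks would instead fire it in
       the thread that registered the event. */
    return DAQmxRegisterEveryNSamplesEvent (aiTask, DAQmx_Val_Acquired_Into_Buffer,
                                            samplesPerCallback, 0,
                                            EveryNCallback, NULL);
}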
Luis
11-13-2008 04:22 AM
Luis,
Thanks for your reply. It turns out that the problem may be at a lower level than simply running the EveryNSamples callback in the correct thread. I am registering the callback as you have suggested and it is running in the DAQmx thread. My EveryNSamples callback drops the data into a thread-safe queue. My UI sets up its own callback that watches for "DataInQueue" events from the TSQ, pulls the data off the queue and sends it to a stripchart control.
On my development system, using simulated devices, this works great. However, when I compile the project as an application and test it on the deployed system with real hardware (configured identically to the simulated hardware), things aren't so rosy. Depending on the combination of sampling rate and samples per scan (for lack of better terms; they are configured in the MAX task), I get a CPU load between 30 and 50% (larger, less frequent transfers use less CPU) with AI running and NOTHING else being done by the user on the system.
The project also loads Digital Output and Analog Output tasks which are simple single point updates activated on an EVENT_COMMIT from their associated controls. Operating these controls causes no noticeable change in the behavior of the stripchart displaying the Analog Input data.
What is odd is the following...
- When the stripchart (running with VAL_SWEEP) wraps from the right to the left side of the display there is a noticeable pause in the updating of the display. The data is queued correctly and it squirts out as expected (similar to the effect of holding the mouse down on the window title bar: the UI is stalled while DAQ continues to queue data). This is not a show stopper but suggests something weird is going on.
- When the stripchart (running with VAL_CONTINUOUS) starts to scroll, the CPU jumps from 30-50% to 100% and data is lost. This is only noticeable on the development system with simulated hardware because the driver is outputting SINE wave data; the "real" hardware is currently only outputting DC values. Simple solution... only use SWEEP mode, even though CONTINUOUS would be more useful for the users.
- If a minimized window (from another application, say Windows Explorer) is un-minimized and it overlays the GUI window, the CPU load goes to 100% and the CVI application becomes completely unresponsive. In fact the window will lose ALL controls and be plain grey. IF the application comes back (rare) it seems unable to acquire/display Analog Input data correctly, and Digital Output and Analog Output no longer function. It makes me think I need to add a ProcessDrawEvents() (or the CVI equivalent) call somewhere, but I'm not sure where it should be added (i.e. main UI thread, display update callback).
On my development system none of this behavior is seen. Clearly the two systems are significantly different.
I think the problem is caused by the stripchart control and its memory requirements. I am configuring the control to dynamically adjust its ATTR_POINTS_ON_SCREEN to the maximum allowable 5-second multiple, so I am almost always plotting close to 10000 points on screen.
In all cases, the operation of my application cannot approach that of MAX using the identical tasks (in terms of CPU utilization). It is clear that MAX only plots a single "scan" when nSamples < sampleRate, and when nSamples > sampleRate it seems like the data is decimated in some way BEFORE being plotted.
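If decimation really is what MAX is doing, I suppose I could try something similar myself before handing data to the chart. Just to illustrate the idea (this is NOT what my application currently does; the panel/control IDs and sizes are placeholders):

#include <stdlib.h>
#include <userint.h>

/* Plot only every `step`-th scan of a scan-major buffer (scan-major =
   sample 0 of all channels, then sample 1 of all channels, ...). */
static void PlotDecimated (int panel, int chartCtrl, const double *scans,
                           int numScans, int numChannels, int step)
{
    double *out = malloc (((numScans + step - 1) / step) * numChannels * sizeof (double));
    int     i, ch, n = 0;

    if (out == NULL)
        return;
    for (i = 0; i < numScans; i += step)
        for (ch = 0; ch < numChannels; ch++)
            out[n++] = scans[i * numChannels + ch];

    /* PlotStripChart expects points interleaved per trace when the chart
       has multiple traces. */
    PlotStripChart (panel, chartCtrl, out, n, 0, 0, VAL_DOUBLE);
    free (out);
}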
One issue I have is that I MUST plot data in near real time with perhaps 2-4 updates per second. The application requires the user to watch the data and respond to the stripchart to trigger software events. Acquiring for a long period of time (i.e. nSamples > sampleRate) is not an option.
Can you or anybody else shed some light on how MAX handles its task-based acquisitions (queues, threads, stripcharts, etc.) so that I can get my application to perform in some way approaching MAX?
I have a test application that I can wrap and send you to look at. It has the important parts from my "real" application and is what I'm using to test these performance issues right now.
Thanks again,
Trevor
11-13-2008 10:53 AM
Hi Trevor,
After reading your post, I agree with you that the likely culprit is the stripchart. It's possible that you're pushing the envelope on how fast you can plot to the chart, although 2-4 updates a second doesn't sound like very much at all. So this is definitely surprising.
One thing I would recommend is to try a test application that doesn't do any data acquisition at all. Instead, you could use an asynchronous timer to simulate the EveryNSamples event (like the DAQ event, the async timer event is also fired in a separate thread). You could configure the timer to fire at intervals that match your current scan rate. Then, if you can still reproduce those problems (the lag in sweep mode, the missing data in continuous mode, and the frozen application when the window is covered up) we'll be in a much better position to identify the real problem. At that point, you can send me the application, if you like, and I'll be happy to help you debug it. If, on the other hand, you can't reproduce the problem without the DAQ driver being involved, then we'll need to look for alternative explanations.
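For the fake EveryNSamples event, something along these lines would do (a rough sketch; the interval, channel count, and block size are just examples, and the queue write is whatever your real callback does):

#include <math.h>
#include <utility.h>
#include <asynctmr.h>

#define NUM_CHANNELS  8
#define SCAN_BLOCK    50

/* Fired in a separate thread by the async timer library, much like the
   DAQmx EveryNSamples event fires in a DAQmx thread. */
int CVICALLBACK FakeDaqTick (int reserved, int timerId, int event,
                             void *callbackData, int eventData1, int eventData2)
{
    if (event == EVENT_TIMER_TICK) {
        double buf[NUM_CHANNELS * SCAN_BLOCK];
        int    i;

        for (i = 0; i < NUM_CHANNELS * SCAN_BLOCK; i++)
            buf[i] = sin (0.05 * i);            /* fake data */
        /* ... write buf into the same thread-safe queue your real
           EveryNSamples callback uses ... */
    }
    return 0;
}

/* 50 samples per channel at 200 Hz => one block every 0.25 s: */
/* timerId = NewAsyncTimer (0.25, -1, 1, FakeDaqTick, NULL); */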
Luis
11-13-2008 12:51 PM
Luis,
I think I see where you're going with your suggestion, but before implementing it I would counter with a question: how does doing what you suggest differ, in substance, from using a simulated device? Granted, without knowing for certain, it seems possible that the simulated data is being created by the DAQmx driver, and if that is the case then we haven't eliminated the driver itself from being the problem. However, I would imagine (incorrectly?) that what NI did was NOT to modify the driver itself but just to create a "Virtual Instrument" which encapsulates the basic functionality of the real hardware. To me this makes more sense. It would be nice to know... (NI?)
Before I code up more "fake" functionality what I'm going to try is creating a simulated device on my target machine and reconfiguring the tasks to use that simulated hardware. If the application runs as it does on my development workstation I will suspect the hardware driver itself. If it still demonstrates the issue then I'm thinking it may be a resource availability problem between my development system (Intel Core 2 Duo (E6850) 3.0 GHz, 2GB RAM, nVidia Quadro FX570 256MB) and the target machine (Pentium 4 1.7 GHz, 1GB RAM, ATI RAGE ULTRA 128 16MB).
Of particular note, however, is this: on the target machine MAX runs this task (8 channel AI, 1kHz sample rate, 10k samples) and displays data in the graph control with a CPU utilization only approaching 80%. The task running in my application (8 channel AI, 200Hz sample rate, 50 samples) with a stripchart control updating at 4Hz runs at 50% CPU. If I reduce the frequency of data transfers (200Hz rate, 200 samples) then CPU utilization falls by 40%, to 30%. You have to hit the MAX running task pretty hard to even notice the CPU going over 3-5%. That is significantly different performance. Does CVI create an offscreen bitmap of the stripchart contents and then scale that to fit within the restricted UI window (I don't have a 9001 pixel wide stripchart!)? It's hard to know where to put my effort when working with a black box system (which is OFTEN a great advantage!), namely the CVI user interface library.
Thanks again,
Trevor
11-13-2008 02:06 PM
Trevor,
The simulated HW devices are implemented in the DAQmx driver. I don't know the specifics of how they are implemented, but it would be better if we could eliminate DAQ altogether as a possible factor in the behavior you're seeing. Because from a debugging standpoint, I'll be able to debug the internals of the CVI UI library as your application is running, but I won't be of much help with DAQ. And the interaction with DAQ will make it hard to debug the UI library, because of timing effects.
The parallel with how MAX updates the graph in its testing window isn't very useful, since MAX is not using the CVI UI library. So any odd effect that might be caused by the CVI stripchart control will not show up when you run the task in MAX.
If you want to keep things simple, you could dispense with the separate thread and just create a simple for-loop that plots as much fake data to the stripchart as it can, and then compare what happens on your development computer as opposed to your target computer. That should be fairly simple and it might tell us whether the difference in computer resources is playing a role in the differences you're seeing. Just be sure to insert a call to ProcessSystemEvents in the body of the loop so that it doesn't lock out the UI completely.
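For example (just a sketch; the panel/control IDs, block size, and iteration count are placeholders):

#include <stdio.h>
#include <math.h>
#include <utility.h>
#include <userint.h>

/* Tight plotting loop with no DAQ involved: how fast can the chart go
   on each machine? */
void StripChartStressTest (int panel, int chartCtrl)
{
    double block[8 * 50];      /* 8 "channels" x 50 "scans", interleaved per trace */
    double start = Timer ();
    int    i, iter;

    for (iter = 0; iter < 2000; iter++) {
        for (i = 0; i < 8 * 50; i++)
            block[i] = sin (0.01 * (iter * 50 + i / 8));
        PlotStripChart (panel, chartCtrl, block, 8 * 50, 0, 0, VAL_DOUBLE);
        ProcessSystemEvents ();   /* keep the UI from locking up completely */
    }
    printf ("Plotted %d blocks in %.1f s\n", iter, Timer () - start);
}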
Luis
11-13-2008 05:04 PM
Understood.
I'll be back...
Thanks
11-17-2008 05:24 PM
Luis,
I attempted to substitute an asynchronous timer into my project but now cannot compile the project. If I exclude the single source file which contains my asynchronous timer setup code and callback routine, the project compiles/runs fine in debug and release mode. If I include/enable the asynchronous timer source file (myAsyncTimer.c) in the build, CVI returns an "Out of memory" error when trying to build the project (CVI's memory use goes from ~44MB before the build, to ~57MB after a successful build without the file, to ~1733MB after a failed build with the file). Attempting multiple builds (after exiting/restarting CVI) I found that I can build the release exe successfully (~48MB after build). If I rerun CVI and manually compile each source file, they all seem to compile fine. However, when I try to link the object code into an executable (Ctrl-M), CVI produces the out-of-memory error consistently.
How can I send you the zipped project so you can help debug these issues? I'd rather not post the whole thing to this board.
Thanks,
Trevor