

Windows 2000 and DAQ

Hi

We are running an application (C/C++, NI-DAQ 6.9.1, Windows 2000, PCI-6025E card) and have found its performance lacking. The current DAQ algorithm is roughly:

DAQ_Config          // external trigger
SCAN_Setup
Select_Signal PFI_0, ND_LOW_TO_HIGH

while not done
    SCAN_Start      // 2 scans over 2 channels
    arm hardware
    while true
        DAQ_Check
        if daqStopped == true
            break
    end while
    process
    DIO
    process
    AI_Clear
end while

Back-of-the-envelope calculations show this sequence should complete within 3.33 ms (the minimum time between external triggers).

Upon profiling our application, we found that SCAN_Start took about 20% of the time, at nearly 4 ms per iteration; the time taken by the processing functions and DIO is negligible.

Possible solutions to the performance issue are scan clock gating, or using counters to simulate retriggering (as shown in SimulateRetriggerScanScan_Eseries.c) along with circular buffers and DAQ events. I am leaning towards the counter option, as I think it gives me more control than scan clock gating.

However, before testing this approach: is the application doomed given the time constraints and Windows 2000? (An answer to a question about Windows NT performance and DAQ mentioned that part of the DAQ work is done in kernel mode, which is expensive compared to user mode. Does this hold for Windows 2000 too?) Are there alternative methods? And finally, should I be looking at LabVIEW RT?

If double buffering is used along with DAQ events, say an event set to fire whenever n readings are ready, must I use the DAQ_DB_Transfer functions to extract data in the callback function, or can I read directly from the buffer specified in SCAN_Start?
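For concreteness, the pattern I am considering looks something like this. This is only a sketch: the function names are from the NI-DAQ manual, but I have not verified the exact signatures, the event-type code, or the constants against nidaq.h, and the channel string and buffer sizes are placeholders.

```c
/* Sketch only: Traditional NI-DAQ double-buffered input with a function
 * callback.  Verify all signatures and constants against nidaq.h and
 * the "Double-Buffered Asynchronous Scanning" example before use. */

#include <windows.h>
#include "nidaq.h"

#define DEVICE      1
#define HALF_SCANS  100                  /* n readings per event (placeholder) */

static i16 halfBuffer[HALF_SCANS * 2];   /* 2 channels */

/* Invoked by NI-DAQ each time HALF_SCANS scans are ready. */
void CALLBACK OnScansReady(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    u32 pointsTransferred = 0;
    i16 status = 0;

    /* Copy the ready half out of the circular buffer; my understanding is
     * that reading the SCAN_Start buffer directly would race with the
     * acquisition still writing into it. */
    DAQ_DB_Transfer(DEVICE, halfBuffer, &pointsTransferred, &status);

    /* ... process halfBuffer here ... */
}

int setup(void)
{
    /* Enable double buffering, then request a callback every HALF_SCANS
     * scans (event type and channel string are my guesses). */
    DAQ_DB_Config(DEVICE, 1);
    return Config_DAQ_Event_Message(DEVICE, 1 /* add message */, "AI0:1",
                                    1 /* every N scans */, HALF_SCANS, 0,
                                    0, 0, 0, 0, 0,
                                    (u32)OnScansReady);
}
```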

Peter
Peter,

Based on your discussion of possible solutions, it seems that you wish to acquire data continuously each time you receive a trigger signal. In that case, yes, you do want to use scan clock gating or counters to create your own scan clock signal. At typical acquisition rates, there is usually not enough time to call the functions that set up the triggers, clear them, and then reconfigure them for the next round.

I recommend starting with the examples in the NI-DAQ >> Examples >> AI directory, such as SCANsingleBufAsync. The SCANsingleBufAsyncExtScan_ESeries example shows how to apply an external clock signal. You would use one of the counters running the STCgenerateRepeatedTriggeredPulse example and wire its output to the pin expecting the external scan clock signal. Then, apply your trigger signal to the gate pin of the counter running the example. For a double-buffered operation, search the http://www.ni.com/support pages for "double buffered scan", and you will find the "Double-Buffered Asynchronous Scanning in Microsoft Visual C++ with NI-DAQ" example. Even if you are not using Visual C++, you can still see the function calls in the C file of that example.

Also, the NI-DAQ User Manual for PC Compatibles is a very helpful reference when dealing with NI-DAQ function calls, because it describes how the functions fit together. The NI-DAQ Help file describes the details of each function, as well as the parameter details.

Regards,
Geneva L.
Applications Engineer
National Instruments
http://www.ni.com/support
Hi Geneva,

Thanks for your reply.

I would like to elaborate on the original method used for DAQ:

When a trigger is received, one scan is required: we do not wish to acquire data continuously. Once the scan completes, processing is performed (including DIO), and finally a signal is sent to the equipment to reset it. (It is possible that a sample is rejected under some circumstances.) This process is repeated until the total number of samples received satisfies some constraints.

As noted, setting up SCAN_Start every iteration is expensive, and we felt another DAQ technique was required.

Using counters to simulate retriggering, and NI-DAQ events (Config_DAQ_Event_Message, configured to fire every n scans), we found that when the events were handled via Windows messages, performance was much poorer than with our original method. When we switched to the function-callback option of Config_DAQ_Event_Message, performance was much better!

Thanks for the RTM reference 🙂