10-04-2007 04:42 PM - edited 10-04-2007 04:42 PM
Hello,
I am using a PCI-6220 Multifunction DAQ as part of a control loop. I am writing C software using NI-DAQmx 8.6 under Windows XP. Essentially, I need to sample 5 samples per channel on 8 channels (40 samples total) every millisecond (1 kHz), then average them to reduce noise. If a timestep is missed, I do not care about the missed samples and want to acquire the next 40 samples instead. So in a sense it is a soft real-time task: I must have the most recent data, sampled at exact intervals, but as long as my software gets that data in a timely fashion, the user-space callback does not need to be precisely timed.
So far I have configured this using two tasks. One is the analog sampling task which is set to hardware-timed single point sampling mode. If I understand correctly, this means that data is sampled by NI-DAQmx when I ask for it instead of being buffered. (Actually it would be preferable to have NI-DAQmx sample this data for me, and signal my process when it is done. Is that possible?)
It is configured with a pause trigger (it took me a while to figure out that this is your terminology for "gate") controlled by ctr0. ctr0 is driven by the other task: it goes high for a 40-sample window (at around 30 kS/s per channel x 8 channels = 240 kS/s aggregate, roughly 0.000167 s), and then goes low for the rest of the millisecond.
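For concreteness, my configuration looks roughly like the sketch below (not a complete program -- "Dev1", the +/-10 V range, and the exact timing numbers are stand-ins for my actual values, and error checking is omitted):

```c
#include <NIDAQmx.h>

/* Sketch of the two-task setup: a hardware-timed single point AI task
   gated by a pause trigger, plus a continuous 1 kHz counter pulse
   train on ctr0 acting as the gate. */
static void configure_tasks(TaskHandle *ai, TaskHandle *ctr)
{
    /* AI task: hardware-timed single point, gated by ctr0's output. */
    DAQmxCreateTask("", ai);
    DAQmxCreateAIVoltageChan(*ai, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(*ai, "", 30000.0, DAQmx_Val_Rising,
                          DAQmx_Val_HWTimedSinglePoint, 1);
    DAQmxSetPauseTrigType(*ai, DAQmx_Val_DigLvl);
    DAQmxSetDigLvlPauseTrigSrc(*ai, "/Dev1/Ctr0InternalOutput");
    DAQmxSetDigLvlPauseTrigWhen(*ai, DAQmx_Val_Low);  /* pause while low,
                                                         so acquire while
                                                         the gate is high */

    /* Counter task: continuous pulse train, high during the sampling
       window, low for the rest of each millisecond. */
    DAQmxCreateTask("", ctr);
    DAQmxCreateCOPulseChanTime(*ctr, "Dev1/ctr0", "", DAQm​x_Val_Seconds,
                               DAQmx_Val_Low, 0.0,
                               0.000833, 0.000167);   /* low, high times */
    DAQmxCfgImplicitTiming(*ctr, DAQmx_Val_ContSamps, 0);
}
```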
This actually seems to work alright. The problem is now in synchronizing my Windows process to the sampling interval. Since this is a control loop, there are several things to do within the millisecond, including writing voltages to an analog output board.
My first instinct was to use the RegisterSignalEvent() function. I provided a callback for when ctr0 changes state, which pulsed a Windows event handle (PulseEvent) to wake my waiting thread. Unfortunately this seems quite unstable -- when I integrated this approach into my application I was greeted by a nice BSOD.
The error, if it's of any help, was IRQL_NOT_LESS_OR_EQUAL: 0x0000000A (0x0000F7AB, 0x00000002, 0x00000001, 0x80703A8E)
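The callback approach I tried looks roughly like this sketch (device names are stand-ins; error checking omitted). One aside: Microsoft's own documentation warns that PulseEvent is unreliable because a waiter can miss the pulse, so the sketch uses an auto-reset event with SetEvent instead -- though that is unrelated to the BSOD itself:

```c
#include <windows.h>
#include <NIDAQmx.h>

static HANDLE gSampleReady;   /* auto-reset Windows event */

/* Driver calls this when the registered signal (here, the counter
   output event) fires. */
static int32 CVICALLBACK OnCounterEdge(TaskHandle task, int32 signalID,
                                       void *callbackData)
{
    SetEvent(gSampleReady);   /* wake the control-loop thread */
    return 0;
}

static void setup_signal_event(TaskHandle ctr)
{
    gSampleReady = CreateEvent(NULL, FALSE, FALSE, NULL);  /* auto-reset */
    DAQmxRegisterSignalEvent(ctr, DAQmx_Val_CounterOutputEvent, 0,
                             OnCounterEdge, NULL);
}

/* In the control-loop thread:
       WaitForSingleObject(gSampleReady, INFINITE);
       ... read the 40 samples and process ...                        */
```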
I then tried not using a callback or any kind of timer at all. This worked well, with very nice timing due to ReadAnalogF64() waiting for the next round of 40 samples, though the CPU usage was quite high. I assume this is because ReadAnalogF64() uses busy-waiting, which would make sense. But it means it's not really optimal; I'd rather have a way to make my process sleep until we're ready for more samples. In any case, I would have been happy with this, but unfortunately after some time this method also succumbed to the dreaded blue screen. (In other words, it seems to blue screen more easily when RegisterSignalEvent() is in use, but a BSOD still occurs without it.)
I then tried using Windows multimedia timers (i.e., timeSetEvent) with EVENT_PULSE. This worked okay (it seems stable, with no BSOD), but the timing is a little too slow, averaging between 2 and 3 ms. I know that timeSetEvent can achieve 1 ms timing, but I think that, combined with the implied busy-wait of ReadAnalogF64(), the lack of synchronization adds up to 2 ms. The still-high CPU usage supports this explanation, again indicating busy-waiting.
So there seems to be some very serious error in the single-point sampling code that is causing a blue screen, and somehow it is much worse when I've registered a callback function. My questions are:
Thanks,
Steve
Message Edited by Steve.S on 10-04-2007 04:42 PM
10-05-2007 01:35 PM
I can only address a very small part of your post. I use DAQmx via LabVIEW only and can't comment usefully on any direct dll calls.
Considering you're working with 5 samples/channel at a time, there are more CPU-efficient ways to retrieve your data than hardware-timed single point mode. Namely, configure your task to buffer samples into system RAM. In LabVIEW, this is done by making a call to DAQmx Timing to define sample rate and requested buffer size -- the dll function probably has a similar name.
Then to sync your windows code with the hardware timing, you can just request multiple samples from the data acq buffer in a call that's equivalent to LabVIEW's "DAQmx Read". The read call will return as soon as that # of samples are available, thus syncing you to the hardware timing. There are additional config properties that let you specify whether to poll or sleep while waiting.
There are further DAQmx settings that let you choose which 5 samples you want. The default is to give you the 5 earliest-but-not-previously-retrieved samples. You could instead request the 5 most recent, the next 5, or perhaps the 4 most recent plus 1 brand new one. In short, quite a few options are out there.
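In NI-DAQmx C terms, the buffered approach suggested above might look roughly like this sketch (the "Dev1" name and the rates are assumptions -- 5 kS/s per channel gives 5 samples per channel per millisecond -- and error checking is omitted):

```c
#include <NIDAQmx.h>

static void buffered_read_loop(void)
{
    TaskHandle ai = 0;
    float64 data[40];
    int32 read = 0;

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Buffered, continuous acquisition: 5 kS/s per channel means
       5 samples/channel arrive every millisecond. */
    DAQmxCfgSampClkTiming(ai, "", 5000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    /* Sleep rather than poll while waiting for samples, to cut CPU use. */
    DAQmxSetReadWaitMode(ai, DAQmx_Val_Sleep);
    DAQmxStartTask(ai);

    for (;;) {
        /* Blocks until 5 samples/channel are available -- this is what
           syncs the Windows loop to the hardware clock. */
        DAQmxReadAnalogF64(ai, 5, 10.0, DAQmx_Val_GroupByChannel,
                           data, 40, &read, NULL);
        /* ... average, compute control output ... */
    }
}
```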
Note that you probably still have to use a single-point update for your output though, due to the latency in buffered output tasks.
-Kevin P.
10-08-2007 03:09 PM
Hi Steve,
Welcome to the discussion forums!
The execution timing of your code is dependent on the Windows operating system and cannot be guaranteed. If you use delays, the smallest resolution that you can program is 1 ms. If you don’t use delays, you experience what you’ve seen: the processor runs the thread as fast as it can and uses up your resources. Windows aside, I’ll guide you to some things that will help you gain greater control of your data.
To get the most recent samples, you can follow the below paths in the NI-DAQmx C Reference Help file to find the appropriate properties:
NI-DAQmx C Properties -> List of Read Properties -> RelativeTo (for “DAQmx_Val_MostRecentSamp”)
NI-DAQmx C Properties -> List of Read Properties -> Offset
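As a sketch, setting those read properties from C might look like the following (error checking omitted; the overwrite setting is an additional assumption that fits the "skip stale data" requirement):

```c
#include <NIDAQmx.h>

/* Configure the read pointer so each read returns the most recent
   5 samples per channel, per the RelativeTo and Offset properties. */
static void configure_most_recent(TaskHandle ai)
{
    /* Position reads relative to the newest sample in the buffer... */
    DAQmxSetReadRelativeTo(ai, DAQmx_Val_MostRecentSamp);
    /* ...and back up 5 samples, so a 5-sample read returns the 5 newest
       values already acquired. */
    DAQmxSetReadOffset(ai, -5);
    /* Let the driver overwrite unread samples, since old data is
       deliberately skipped in this application. */
    DAQmxSetReadOverWrite(ai, DAQmx_Val_OverwriteUnreadSamps);
}
```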
To know when a number of samples has been received from the hardware (register events), see:
NI-DAQmx C Functions -> Task Configuration/Control -> Events -> DAQmxRegisterEveryNSamplesEvent
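A minimal sketch of that event registration (error checking omitted; the driver invokes the callback each time 5 new samples per channel have been acquired, so the process can sleep in between):

```c
#include <NIDAQmx.h>

/* Called by the driver whenever nSamples new samples per channel
   have been acquired into the buffer. */
static int32 CVICALLBACK EveryN(TaskHandle task, int32 eventType,
                                uInt32 nSamples, void *callbackData)
{
    float64 data[40];
    int32 read = 0;
    /* 5 samples per channel are now available; read and process. */
    DAQmxReadAnalogF64(task, 5, 0.0, DAQmx_Val_GroupByChannel,
                       data, 40, &read, NULL);
    /* ... average, update control output ... */
    return 0;
}

/* Registration, before DAQmxStartTask:
       DAQmxRegisterEveryNSamplesEvent(ai, DAQmx_Val_Acquired_Into_Buffer,
                                       5, 0, EveryN, NULL);             */
```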
Again, these are things that can help out the execution of your code, but if you need deterministic timing and no interruptions from an operating system, you may want to think about investing in a real-time system. You can visit www.ni.com/contact if you want to contact someone to discuss those options. Let me know if I can be of more assistance.
10-11-2007 03:56 PM