I am trying to modify my LV 6 data acquisition code to work in LV 7 and DAQmx Base on OS X.
The ‘behind the scenes’ approach of DAQmx Base is not comforting, and I am hoping someone can shed some light on what is actually occurring. I am trying to sample multi-channel data as fast as possible. I have an NI 6031E (64 single-ended channels, 100 kS/s). I will probably be using around 10–20 channels, but regardless of the channel count I always want the card delivering data at 100 kS/s.
I am measuring something with pretty high RMS noise but with true signal changes at 100 Hz. The signal is already filtered; it is just noisy because of the underlying sensor. Anyhow, I was acquiring at 100 kS/s and then averaging (downsampling) to around 200 Hz, using the extra data points to reduce my RMS noise. This worked really well and kept my disk usage down.

In the past I would run my DAQ at 100 kS/s and use ‘AI Continuous Acquire.vi’ in a while loop. That VI just grabs data at the acquisition rate, stores it in a buffer, and when you call it you tell it how many samples to grab. I would have it grab either ~25 samples OR the number of samples remaining in the buffer after the last call. This gives me a slight temporal error, but I don’t care because it is an X-vs-Y type measurement: I am driving a sensor and measuring both the drive signal and the output signal pseudo-simultaneously. In the multi-channel acquisition, the first channel is the drive signal (which is just a continuously running, FIFO-buffered output from another board) and the remaining channels (say 2–10) are all output signals.

This technique allowed me to collect the data very quickly (100 kS/s), immediately downsample within my while loop, and retain only the averaged values (of the drive signal and output signals) to store to disk (sketched below). Now… how do I do this with DAQmx Base (yes, Base!).
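To make the averaging step concrete, here is roughly what each while-loop iteration was doing in the LV 6 version, sketched in Python/numpy since I can’t paste a block diagram. The channel count and chunk size are just example numbers, not what I actually wire in:

```python
import numpy as np

ACQ_RATE = 100_000                  # hardware sample rate per channel (S/s)
OUT_RATE = 200                      # rate I actually keep on disk (S/s)
BLOCK = ACQ_RATE // OUT_RATE        # 500 raw samples averaged into 1 stored point

def downsample(raw):
    """Average a (channels x samples) chunk down to one stored point per
    channel per BLOCK raw samples.  Averaging 500 samples of uncorrelated
    noise reduces the RMS noise by roughly sqrt(500) ~ 22x.
    """
    channels, n = raw.shape
    n_blocks = n // BLOCK                        # drop any ragged tail
    trimmed = raw[:, :n_blocks * BLOCK]
    return trimmed.reshape(channels, n_blocks, BLOCK).mean(axis=2)

# Stand-in for one read from the acquisition buffer: 10 channels x 2500 samples
raw_chunk = np.random.randn(10, 2500)
print(downsample(raw_chunk).shape)               # (10, 5): 5 averaged points/channel
```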
I can get Base to give me the data at 100 kS/s, but grabbing that data one point per while-loop iteration is not going to work because the while loop is just too slow. The only option is to have the Read VI return several points during each call. Now, how many points do I ask for? In LV 6, the equivalent ‘read’ VI would also return the number of samples remaining in the buffer, so if the buffer started getting full I could dynamically ask for more so it wouldn’t overflow. In Base, I have no way of knowing how full the buffer is getting. The only thing I can do currently is make sure I ask for enough data on each read that the buffer doesn’t overflow. Alternatively, can I go into the subVIs of the ‘read’ VI and wire up a new output terminal that passes up the number of samples remaining in the buffer? I am worried about a buffer overrun; this system runs for days and it would be terrible if it failed because of this problem. There is a ‘set buffer size’ VI in DAQmx Base, but I cannot find a ‘query buffer size’ VI.
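For reference, this is the back-of-the-envelope sizing I am working from. The loop period and the safety factor are guesses on my part, which is exactly the problem when I can’t see the backlog:

```python
ACQ_RATE = 100_000                  # samples/s per channel
LOOP_PERIOD_S = 0.1                 # my guess at the worst-case loop iteration time
SAFETY_FACTOR = 10                  # margin for disk writes, OS hiccups, etc.

# I have to ask the Read VI for at least one loop period's worth of data,
# otherwise the backlog grows without bound...
SAMPLES_PER_READ = int(ACQ_RATE * LOOP_PERIOD_S)     # 10,000 samples/channel

# ...and the buffer has to absorb several loop periods of jitter, since I
# cannot query how full it is and react on the fly.
BUFFER_SIZE = SAMPLES_PER_READ * SAFETY_FACTOR       # 100,000 samples/channel

print(SAMPLES_PER_READ, BUFFER_SIZE)
```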
It is also possible that my understanding of how the Read VI works is not correct. I configured the task to be ‘continuous’ (or whatever the new term is). Does it start filling its buffer as soon as I run Start Task, or only when I run the Read VI? Does the Read VI grab the OLDEST samples in the buffer (FIFO rules)? Is this a ring buffer? Will it overflow? Etc., etc. I just have NO IDEA how this is working behind the scenes. I’ve read the help documents and they are worthless; they are little more than what you get by mousing over the terminals. Is there any good reference for DAQmx Base?
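To be explicit about it, this is the mental model I am hoping someone can confirm or correct: a FIFO buffer that the driver fills continuously once the task starts, with each read pulling the oldest unread samples off the front. A toy version of that model, with no DAQmx Base calls at all and the ‘hardware’ faked with random numbers:

```python
from collections import deque
import numpy as np

BUFFER_CAPACITY = 100_000           # configured buffer depth, samples/channel

# My assumed model: a FIFO that the driver keeps topping up in the background.
buffer = deque()

def hardware_fills_buffer(n):
    """Stand-in for the driver after Start Task: new samples keep arriving
    whether or not I ever call Read.  What DAQmx Base actually does when the
    backlog exceeds BUFFER_CAPACITY (overwrite? error on the next read?) is
    exactly what I am asking."""
    buffer.extend(np.random.randn(n))

def read(n):
    """My guess at the Read VI: return the n OLDEST samples, FIFO order."""
    return np.array([buffer.popleft() for _ in range(min(n, len(buffer)))])

hardware_fills_buffer(5000)         # acquisition running in the background
print(read(2500).size)              # 2500 -- drained from the front of the queue
print(len(buffer))                  # 2500 samples still waiting for the next read
```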
Should I change my acquisition to ‘single-shot’ and just ask for ~25 samples in each while-loop iteration? That seems much slower than continuous acquisition, assuming I can keep the continuous version sane.
Thanks-
Brad