Multifunction DAQ

Synchronized analog input and output with a long (20,000 sample) output waveform using NI-DAQmx Base

Using NI-DAQmx under Windows it's fairly easy to set up a long analog output waveform which has more samples than the physical FIFO. This is because the driver takes care of buffering for you.

It's more complicated, but not impossible, using NI-DAQmx Base (version 2.1 for Mac OS X in my case). The attached source-code file shows how. The M-Series digitizer that I am using has a FIFO of ~2,500 samples. The test program delivers a 20,000 sample sinusoidal waveform to channel AO-0 by first writing 1,000 samples, then starting the output task, then writing additional chunks of 1,000 samples as the output FIFO empties. It works fine, and I see a nice sinusoid on an oscilloscope monitoring the output.

A problem appears when I try to synchronously read samples from an input channel while writing to an output channel. For example, in the attached program un-comment the line "//#define analogInputActive". With this change, the program now sets up an input task, writes 1,000 output samples, then triggers both input and output tasks synchronously. The program enters a loop that writes a chunk of 1,000 samples, then reads in 1,000 samples, and so on.

Note that it calls exactly the same sequence of output functions as before.

The program executes without reporting an error, but there is now a serious problem with the output waveform. On the oscilloscope, the waveform is no longer a clean sinusoid. Instead, it rises from zero volts three times in a sawtooth shape (2,000 samples per tooth), before finally becoming a clean sinusoid. It terminates 6,000 samples too soon (never returns to zero volts).

The bad output waveform I see on the oscilloscope is accurately reflected in the analog data that is read in by the program. At least that aspect is working correctly.

I've tried all sorts of variations, but can't find a way to get this program working correctly. I can't see that I'm doing anything wrong here, and I'm worried that it is actually a bug in NI-DAQmx Base. Any help would be appreciated.

John.
Dr John Clements
Lead Programmer
AxoGraph Scientific
0 Kudos
Message 1 of 16
(4,062 Views)
Hi John,

Thanks for posting to the forum.  I believe you left out your code in the last post. 

How exactly are you triggering both tasks (i.e. are you routing signals or are you using an external trigger)?  There are several features of DAQmx that are not available in DAQmx Base.  DAQmx Base for ANSI C does not support the routing of signals.  You can, however, make the trigger for both analog input and analog output tasks come from the same source.  If routing signals doesn't pertain to your application, then please post the relevant parts of your code so we can assist you further. 


Best Regards

Hani R.
Applications Engineer
National Instruments
Message 2 of 16
(4,023 Views)
Hi,

Thanks for getting back to me.

> I believe you left out your code in the last post.

Seems that when you click 'Preview Post' the attachment is lost. Hopefully, it will be attached to this message.

> How exactly are you triggering both tasks (i.e. are you routing signals or are you using an external trigger)?

I don't think this is a triggering problem. I am triggering the synchronous output from the internal signal emitted by the input task ("/Dev1/ai/StartTrigger"). This works nicely with NI-DAQmx Base.

The problem is with the actual output waveform that is delivered. It is buffered correctly by my code when the input task is not created/activated. But when I start input and output together, the start of the output waveform is corrupt. It looks like a bug in NI-DAQmx Base to me. But maybe there's a workaround?

Thanks,

John.
Dr John Clements
Lead Programmer
AxoGraph Scientific
Message 3 of 16
(4,019 Views)
Hi John,

I looked at your code and compared it to a working example in LabVIEW.  The program flow is very similar to what you are trying to do.  The significant difference that I noticed is that when you use both the analog input and output tasks, you are calling the DAQmx Base analog write before setting up the trigger and then writing again in the while loop.  My first suggestion would be to remove the write before the trigger.  When you disable the analog input, it also disables the trigger. This may explain why everything appears to work when you run the analog output task alone.  Hopefully, this solves the issue. 


Best Regards

Hani R.
Applications Engineer
National Instruments
Message 4 of 16
(3,982 Views)
Unfortunately, your suggestion didn't help. Here's what I tried, starting with the original NI-DAQmx Base function call sequence....

.....................................................................................................................................
// write output signal
DAQmxErrorCheck ( DAQmxBaseWriteAnalogF64( outputTaskHandle, samplesToWrite, 0, timeout, DAQmx_Val_GroupByScanNumber, outputData, &samplesWritten, NULL ) );
totalSamplesWritten += samplesWritten;

// trigger output when input starts
DAQmxErrorCheck ( DAQmxBaseCfgDigEdgeStartTrig( outputTaskHandle, outputTrigger, DAQmx_Val_Rising ) );

// initiate acquisition - must start output task first, as it will be triggered by the input task
DAQmxErrorCheck ( DAQmxBaseStartTask( outputTaskHandle ) );
DAQmxErrorCheck ( DAQmxBaseStartTask( inputTaskHandle ) );
.....................................................................................................................................

I first moved the 'Write' call to after the Trigger setup...

.....................................................................................................................................
// trigger output when input starts
DAQmxErrorCheck ( DAQmxBaseCfgDigEdgeStartTrig( outputTaskHandle, outputTrigger, DAQmx_Val_Rising ) );

// write output signal
DAQmxErrorCheck ( DAQmxBaseWriteAnalogF64( outputTaskHandle, samplesToWrite, 0, timeout, DAQmx_Val_GroupByScanNumber, outputData, &samplesWritten, NULL ) );
totalSamplesWritten += samplesWritten;

// initiate acquisition - must start output task first, as it will be triggered by the input task
DAQmxErrorCheck ( DAQmxBaseStartTask( outputTaskHandle ) );
DAQmxErrorCheck ( DAQmxBaseStartTask( inputTaskHandle ) );
.....................................................................................................................................

I next moved the 'Write' call to after the output Start...

.....................................................................................................................................
// trigger output when input starts
DAQmxErrorCheck ( DAQmxBaseCfgDigEdgeStartTrig( outputTaskHandle, outputTrigger, DAQmx_Val_Rising ) );

// initiate acquisition - must start output task first, as it will be triggered by the input task
DAQmxErrorCheck ( DAQmxBaseStartTask( outputTaskHandle ) );

// write output signal
DAQmxErrorCheck ( DAQmxBaseWriteAnalogF64( outputTaskHandle, samplesToWrite, 0, timeout, DAQmx_Val_GroupByScanNumber, outputData, &samplesWritten, NULL ) );
totalSamplesWritten += samplesWritten;

DAQmxErrorCheck ( DAQmxBaseStartTask( inputTaskHandle ) );
.....................................................................................................................................

I even tried moving it to after the input Start call, but this gave an error as expected.

In every case above, I observed exactly the same corrupt, incomplete output waveform.

To make it easier for you, or another engineer, to reproduce this problem, I will attach the complete Xcode project (Mac OS X), not just the source code.

.........

In my own trial-and-error testing, I've found that almost filling the FIFO in the first Write call improves the output waveform under most, but not all, conditions (see #define fillFIFO in the attached code). Even if I could get this approach working reliably in all circumstances, it seems like a fragile kludge. I had to figure out the size of the PCIe-6251 FIFO by trial and error, and have hard-coded it. If an end-user had a different digitizer, my code would most likely break.
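The pre-fill sizing logic described above can be sketched as a tiny helper. The function name and the fallback behaviour are illustrative, not taken from the attached project; the point is that the hard-coded FIFO depth is the fragile part, since it varies between devices.

```c
#include <assert.h>

/* Hypothetical helper for the pre-fill kludge: size the first write to
   fill the device FIFO, falling back to the normal chunk size when the
   FIFO depth is unknown, and never writing more samples than exist.
   The FIFO depth itself must currently be found by trial and error,
   which is exactly why this approach breaks on other digitizers. */
static int firstWriteSize(int fifoDepth, int totalSamples, int chunkSize)
{
    int n = (fifoDepth > 0) ? fifoDepth : chunkSize;
    return (n < totalSamples) ? n : totalSamples;
}
```

For the ~2,500-sample FIFO and 20,000-sample waveform in this thread, the first write would be 2,500 samples, with the remaining 17,500 streamed in ordinary chunks.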

As you stated in your reply, the original sequence of NI-DAQmx Base function calls is very reasonable, and similar to that in the working LabVIEW program. The fact that it definitely doesn't work under Mac OS X / NI-DAQmx Base strongly suggests there is a bug in NI-DAQmx Base.

Thanks for your help with this,

John.
Dr John Clements
Lead Programmer
AxoGraph Scientific
Message 5 of 16
(3,971 Views)
Hi John,

Thanks for posting your project.  I will try your code on a Mac with an M-Series card and DAQmx Base to see if I can reproduce the issue. 


Best Regards

Hani R.
Applications Engineer
National Instruments
Message 6 of 16
(3,939 Views)
Hi Hani,

Thanks very much for taking the time to explore the problem. I've now encountered another equally serious problem and have started a new thread on that issue. Since you are setting up to run one testbed application, I wonder if you could also try my new testbed app. It demonstrates that the synchronous output signal is corrupted by glitches, and progressively desynchronizes with the input signal at high sample rates. The new thread and attached Xcode project is at...

http://forums.ni.com/ni/board/message?board.id=250&message.id=35218#M35218

Thanks again,

John.
Dr John Clements
Lead Programmer
AxoGraph Scientific
Message 7 of 16
(3,927 Views)
Hi John,

I was unable to reproduce the issue with both your code and with a VI I created in LabVIEW.  I ran your code on Mac OS X with DAQmx Base and a PCI 6251.  When running your code, I noticed the sample rate for the analog input was the same as the output rate of the analog output.  According to the Nyquist criterion, to retrieve the frequency information of the output signal, the analog input sampling rate should be at least twice that of the highest frequency component of the analog output signal.  In order to retrieve a reasonable representation of the output waveform, I would recommend sampling at least ten times the highest frequency of the output.  Therefore, if your analog output is running at 50kHz, you should sample the signal at 500kHz. 
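The rate arithmetic in that recommendation is simple enough to state as a trivial helper (the function name and the oversampling factors are illustrative, not part of any NI API):

```c
#include <assert.h>

/* Required analog-input sampling rate, given the rate of the signal
   being generated and an oversampling factor: 2x is the bare Nyquist
   minimum, while the reply above recommends 10x for a clean picture
   of the generated waveform. Rates are in Hz. */
static double requiredInputRate(double outputRate, double oversample)
{
    return outputRate * oversample;
}
```

So a 50 kHz output sampled with the recommended 10x factor calls for a 500 kHz input rate, versus 100 kHz at the bare Nyquist minimum.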

After changing the sampling rates, I was able to obtain the analog output waveform.  This does not explain why the output waveform was incorrect on your oscilloscope.  At this point, I don't believe it is an error with DAQmx Base.

If you are still getting an incorrect waveform after increasing the analog input sampling rate, could you post an image of the graph you obtain?  Also, what M-Series card are you using?  If there is anything I can change in the test code/setup to reproduce the issue, please let me know. 


Best Regards

Hani R.
Applications Engineer
National Instruments
Message 8 of 16
(3,888 Views)
Hi Hani,

Thanks so much for taking the time to set up your hardware and run my testbed application. I'm involved with tech support for my commercial application, and understand the time and effort involved. I'm sorry that you couldn't reproduce the problem. It must seem that this has been a waste of time for you - but your lost time is nothing like the time that I have already wasted on this frustrating problem.

Unfortunately, I cannot yet accept your claim that this is not a problem with DAQmx Base. This is why...

In the last few days, I have switched from Mac OS X to Windows XP development, and have had a much more productive and happy time. The switch simply involves a reboot of my Intel Mac Pro desktop machine, which dual boots Windows and OS X. This means I am using EXACTLY the same hardware setup as before. All testbed code runs smoothly under Windows XP. It seems that there is nothing wrong with my PCIe 6251 card, and NI-DAQmx works as advertised. It is bug-free (as you would hope), and the real beauty of DAQmx is that it handles the buffering of long output waveforms without the programmer (me) having to raise a sweat.

However, this experience has confirmed my belief that there is something seriously wrong with DAQmx Base under OS X - at least on an Intel-based Mac Pro. In your reply I noted that you used a "PCI 6251", suggesting that you ran the tests on an older PowerPC Mac (with PCI bus). Is that correct? If so, I suspect the problems I've encountered may lie with the Intel port of NI-DAQmx Base. If you Get Info on the file 'nidaqmxbase' in the 'nidaqmxbase.framework' directory, you see that it is a 'universal' library. That means it contains both PowerPC and Intel code. It is possible that the serious (show stopper) problems I am experiencing are in the Intel library. Unfortunately, I can't test that idea here, because my old PowerMac only has a PCI bus, whereas my M-Series card is only compatible with the PCIe bus. I'm reluctant to buy another NI card just to prove my point about DAQmx Base being buggy.

For the record I am running OS X Tiger v10.4.11 on a 2.66 GHz dual-core Intel Xeon with 2 GB RAM, using a PCIe-6251 NI card with BNC-2111 wiring block. Using NI-DAQmx Base 2.1 for Mac OS X (the framework library nidaqmxbase is dated 1 Sept 2006).

By way of encouragement, last year I had a sales booth for my software at the Society for Neurosciences conference in Atlanta, Georgia. It was attended by 35,000 neuroscientists, and people trying to sell them stuff. It was a very busy conference for me, but I did find time to visit the National Instruments booth - an impressive affair! It must have cost NI more than $20,000 to set it up for the week. So clearly your company is very interested in this specialist scientific market. My specialist data acquisition and analysis application, AxoGraph X, will help you drive sales into this market. Unfortunately, it appears that you do not currently have a product that is compatible with a modern Intel-based Mac running OS X. There are many thousands of dollars at stake here for both our companies. I hope that you can continue to work with me on this problem.

Your suggestion about the Nyquist criterion was intriguing (I am aware of sampling theory), but irrelevant here. What my customers require is tight timing synchronization between input and output channels, which must be updated at the same rates. They record electrical signals from brain or nerve tissue, and filter these signals at the Nyquist frequency (half the sample freq) before connecting them to the analog input channels. They deliver output pulses (analog and digital) to stimulate the nerve tissue, to switch on valves that apply drugs, etc. The input and output waveforms MUST be tightly synchronous, and MUST run at the same sampling rates. Any half-way decent digitizer should be able to do this. Of course the M-Series can do it easily, but in my hands, not on an Intel Mac Pro with NI-DAQmx Base.

I have modified the test-ni testbed Xcode project so that it outputs the acquired waveform to a tab-text file. I have attached the modified project and two PNG graphs. The file 'AcquiredData Graph.png' is a graph of the tab-text output file 'AcquiredData.txt' (not included). It shows the waveform that was generated and sent to DAQmx Base 'write' functions, together with the waveform that was acquired by the testbed. The file 'Chart Record of Output Signal.png' shows the actual waveform generated when the project is run (recorded on a separate system). Note that test-ni was run twice. The first time, the line '#define preFillFIFO' was not commented out, and the output waveform was OK. The second time, the line was commented out, and the output waveform was corrupt. (Note that when you build and run test-ni, 'AcquiredData.txt' appears in the Debug folder).

Pre-filling the buffer is a fragile, ugly kludge that breaks down at higher sample rates. I need a sensible, reliable method for buffering long output waveforms with NI-DAQmx Base.

Thanks again for your help,

John.

PS This is probably irrelevant, but when my testbed application launches, the following error message is reported. I've been told in an earlier discussion that it can be safely ignored. That may be true, but it does suggest a lack of care and attention to detail in finalizing DAQmx Base v2.1 for Intel Macs...

com.ni.LabVIEW.dll.nidaqmxbaselv
CFBundle 0x313480 (framework, loaded)
{type = 15, string = file://localhost/Library/Frameworks/nidaqmxbaselv.framework/, base = (null)}
Amethyst:Library:Frameworks:nidaqmxbaselv.framework
2007-12-04 12:23:48.015 test-ni[605] CFLog (21): Error loading /Library/Frameworks/LabVIEW 8.2 Runtime.framework/resource/nitaglv.framework/nitaglv: error code 4, error number 0 (no suitable image found. Did find:
/Library/Frameworks/LabVIEW 8.2 Runtime.framework/resource/nitaglv.framework/nitaglv: mach-o, but wrong architecture)
CFBundle 0x17821a90
(framework, not loaded)
Dr John Clements
Lead Programmer
AxoGraph Scientific
Message 9 of 16
(3,874 Views)

Hi John,

It is definitely possible that a driver issue is unique to a particular machine.  I want to thank you for providing such detailed information about your setup and the errors you have found.  I want to assure you that your feedback is extremely important to National Instruments.  That being said, I will have your code tested on an Intel Mac and I will update you with the results. 



Best Regards

Hani R.
Applications Engineer
National Instruments
Message 10 of 16
(3,858 Views)