
Driver Development Kit (DDK)


Continuous input and output with PCI-6229

Solved!

Hi All,
I'm currently evaluating a PCI-6229 card on RTX (and maybe InTime later on). The main goal of the evaluation is to prove that we can get hard real-time, deterministic behaviour from this system, so that it can replace our current DSP-based solution.
To do so, I'm setting up a rather simple program that should
- continuously acquire 1 channel @ 20 kHz
- have a processing loop of 5 ms, i.e. 100 samples; this loop should be entered every 5 ms as exactly as possible
- do some simple processing in the loop (thresholding the signal; see the sketch right after this list)
- bring out the result on an analog output (low signal if the sample is under the threshold, high signal for all samples above the threshold)
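
For reference, this is roughly what the per-frame processing would look like; the names, threshold and DAC codes below are just placeholders, and the real samples come straight from the AI FIFO and go back to the AO FIFO:

const u32 kFrameSize = 100;     // 100 samples = 5 ms @ 20 kHz
const u32 kThreshold = 0x0800;  // placeholder threshold in raw ADC codes
const u32 kOutLow    = 0x0000;  // placeholder DAC code for "below threshold"
const u32 kOutHigh   = 0x7fff;  // placeholder DAC code for "above threshold"

// map every input sample of one frame to a low/high output sample
void processFrame( const u32 in[], u32 out[] )
{
    for( u32 i = 0 ; i < kFrameSize ; ++i )
        out[i] = ( in[i] > kThreshold ) ? kOutHigh : kOutLow;
}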

I could get the basics working pretty quickly: continuous input, continuous output, continuous input with DMA, input using interrupts.
Trying to combine everything isn't really working out, however. Normally I'd keep experimenting until I knew the device inside out, but I'm on a rather strict timeframe, so hopefully someone here can provide some insight.

Questions:

1. For the input, I can get an SC_TC interrupt each time one frame is scanned. I measured this on a scope by toggling a digital output on the card, and there's no noticeable jitter on the square wave, which is a good sign.
However, I'd like to combine this with DMA, but the DMA lags the interrupt slightly, so I end up having to poll the DMA in the ISR anyway, which defeats the purpose of using the interrupt in the first place.
Is there a way to set up continuous DMA servicing and get an interrupt from the DMA system itself after 100 samples are transferred?

2. For the output, I cannot get continuous mode working with DMA, only by writing to the FIFO manually. I can preload a couple of frames via DMA, but after calling aoStart(), tDMAChannel::write() works once and all calls afterwards return kBufferUnderflow. Any ideas?
I probably have to check when exactly it is safe to write to the DMA buffer, but I have no idea which of the many status functions to use. I tried AO_Status_1.readAO_FIFO_Half_Full_St(), but that's not it, and writing on each UC_TC interrupt didn't work either.
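
For what it's worth, the non-DMA fallback would be to keep topping up the AO FIFO from the ISR, roughly like the fragment below. This is only a sketch: nextFrame is a hypothetical buffer holding the next 100 precomputed samples, and I'm assuming readAO_FIFO_Half_Full_St() reads 1 while the FIFO is still at least half full (if the polarity is the other way around, the test has to be inverted).

// refill the AO FIFO with one frame once it has drained below half full
if( !board->AO_Status_1.readAO_FIFO_Half_Full_St() )
{
    for( u32 i = 0 ; i < numSamples ; ++i )
        board->AO_FIFO_Data.writeRegister( nextFrame[i] );
}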

3. DMA is, to my understanding, a more efficient way of getting samples into the host and doesn't require calling AI_FIFO_Data.readRegister() in a loop (and vice versa for the output side). But are there really benefits to using DMA here?

4. I made a basic program to bring everything together in the simplest way possible: preload 100 samples into the output FIFO, start analog input, start analog output, and have the ISR copy all values directly from the input FIFO to the output FIFO.
Putting both analog signals on a scope, I expected to see the input and, about 5 ms later (see question 5), the same signal on the output, with no jitter.
However, what I see is that the output just drifts around; in other words, there is no fixed delay between output and input! How is this possible? I use the same divisor for input and output. Is there any sample code available that achieves what I want?

5. Is there a way to start both input and output at exactly the same time, e.g. on the same edge of a certain clock pulse? How are input and output synchronized? Can I be sure they never go out of sync?

Message 1 of 3

Update: looking at the DAQmx control samples, it seems some of them set the start trigger for the AO to the AI start trigger. I mimicked this using kAO_START1_SelectAI_START_1 with the aoTrigger method, and the sync seems better now: the output starts exactly 10 ms after the input (or 5 ms if I preload only one frame) and stays in sync for about 25 ms. Then it goes wrong.
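
Roughly what that looks like at register level (only kAO_START1_SelectAI_START_1 is taken verbatim from the headers; the AO_Trigger_Select register and method names are my best guess from the DDK sources and may differ between versions):

// route the AO START1 trigger to the AI START_1 signal so the analog output
// starts on the same edge that starts the analog input
// (register/method names approximate -- verify against your DDK version)
board->AO_Trigger_Select.setAO_START1_Select( kAO_START1_SelectAI_START_1 );
board->AO_Trigger_Select.flush();
// ...then call aoStart() and start the analog input as before: the output
// stays armed until AI START_1 fires, so both sides share the same start edge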

 

At the moment I'm using no DMA, just one simple interrupt routine that basically does the following:

 

// acknowledge the AI scan-count terminal-count (SC_TC) interrupt
board->Interrupt_A_Ack.writeAI_SC_TC_Interrupt_Ack( 1 );
board->Interrupt_A_Ack.flush();

// toggle a static DO line so the interrupt timing can be checked on a scope
toggle ^= 1;
board->Static_Digital_Output.writeRegister( toggle ? 0x00 : 0xff );

// copy one frame (numSamples = 100) straight from the AI FIFO to the AO FIFO
for( u32 i = 0 ; i < numSamples ; ++i )
{
  const u32 val = board->AI_FIFO_Data.readRegister();
  board->AO_FIFO_Data.writeRegister( val );
}

 

In the screenshot in the attachment you can see where this goes wrong: cursor A shows the position in the input signal (AI 0, yellow) that should appear 10 ms later at the output (blue line = AO). Cursor B marks this position 10 ms later, and you can see there is some garbage right after the cursor. This is the data that was written to the AO FIFO in the fifth interrupt (interrupt position shown in purple).

 

Trying to figure out where that comes from, I put board->AI_Status_1.readAI_FIFO_Empty_St() in the interrupt loop. And sure enough, when trying to read the 500th sample, it reports that the FIFO is empty. Consequently the data written to the AO FIFO no longer makes sense. But I do not understand how this can ever happen: the SC_TC interrupt fires whenever 100 samples have been read from the input, so after this interrupt occurs the FIFO should always contain at least 100 samples, no?
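
To illustrate, the copy loop with that check in it looks roughly like this (my reconstruction; bailing out on an empty FIFO keeps garbage out of the AO FIFO, but of course doesn't explain where the missing samples went):

for( u32 i = 0 ; i < numSamples ; ++i )
{
    // SC_TC just fired, so 100 samples should be waiting in the AI FIFO,
    // yet around the 500th sample overall this reports the FIFO as empty
    if( board->AI_Status_1.readAI_FIFO_Empty_St() )
        break;   // stop copying instead of writing garbage to the AO FIFO

    const u32 val = board->AI_FIFO_Data.readRegister();
    board->AO_FIFO_Data.writeRegister( val );
}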

Message 2 of 3
Solution
Accepted by topic author Stijn

The thread can be closed: I'm now using the new DDK with X Series cards and have no problems anymore.

Message 3 of 3