
RT Analog Input with Ext Sample Clock

Well, I'm not sure our 6052E boards have an FPGA, but I get your point.  The DAQ board is pushing data into a FIFO, but the application on the RT controller only gets data in chunks of arbitrary size.  If I understand you correctly, under DAQmx we can never know whether a 1-sample Read.vi call in a time-critical loop will actually get the data when we expect, due to the bottleneck in the DMA transfer, even at a very slow rate of 2 Hz.  In the first benchmark program, the loop counter appears to increment by 2 because the first iteration hangs on the Read call until enough data has arrived, and the following iteration executes so fast you can't see the counter update on the front panel.  The computed loop frequency is either 1/2 the expected value or infinite for the same reason: one iteration takes twice as long as expected and the next completes so quickly the tick count is zero.  Is this right?
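The alternating slow/instant iterations described above can be modeled without any hardware. The sketch below (plain Python, no DAQ involved) assumes the driver delivers samples two at a time, which is an illustrative guess at the transfer granularity, not a measured value:

```python
# Toy model: a driver FIFO the board fills two samples at a time, read back
# one sample per loop iteration.  Chunk size and rate are assumptions.

SAMPLE_PERIOD_MS = 500      # 2 Hz acquisition
CHUNK = 2                   # samples delivered per DMA transfer (assumed)

def iteration_times(n_iters):
    """Wall-clock time (ms) at which each 1-sample blocking read completes."""
    times = []
    for i in range(n_iters):
        # Sample i is only visible to the reader once its whole chunk has
        # arrived, i.e. when the last sample of that chunk was acquired.
        chunk_end = ((i // CHUNK) + 1) * CHUNK     # index just past the chunk
        times.append(chunk_end * SAMPLE_PERIOD_MS) # read blocks until then
    return times

t = iteration_times(6)
deltas = [b - a for a, b in zip(t, t[1:])]
print(t)       # [1000, 1000, 2000, 2000, 3000, 3000]
print(deltas)  # [0, 1000, 0, 1000, 0]
```

The inter-iteration deltas alternate between 0 ms (an "infinite" measured rate) and 1000 ms (half the expected 2 Hz rate), which matches the benchmark symptom: one iteration blocks for two sample periods, the next returns immediately from buffered data.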

 

This creates problems.  In the approach we've been using, we depend on reading a single sample from a range of AI channels across 2 devices, reading a GPS timestamp, and reading pulse counts from a counter.  These measurements are then combined into a 1D double array and passed out of the time-critical loop via FIFO to a normal-priority loop for writing to disk and network broadcast.  Loop timing was controlled by the AI sample clock.  But the problems reading data seem to have broken this timing approach, and it appears we can't rely on data arriving when we expect.

 

Can someone suggest an alternate programming approach here?  If the time-critical loop gets hung up by a Read call, this will seriously mess things up.  I haven't found many examples that seem directly relevant.

Message 11 of 14

The DAQ boards have a small FPGA that DAQmx compiles to.  They can't be accessed directly through anything but the DAQmx API.

 

What kind of accuracy do you need for your application?  Do you have a concrete number that you must meet?

 

If you are reading more than two samples, or one sample from two or more channels on one board, then the DMA transfer will take place automatically and you won't see this problem.  There will be a small delay between the time you read the samples and the time you read the GPS timestamp, but it will be very small, especially on a real-time system.
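The small hardware-read-to-software-timestamp skew mentioned above can be pictured with a plain-Python stand-in (the fake read below is an assumption standing in for the actual DAQmx call; only the bookkeeping is real):

```python
import time

# Toy illustration: AI samples are hardware-timed, but the GPS timestamp is
# read by software a moment later, so the stamp lags the samples slightly.

def read_with_timestamp(fake_read):
    t0 = time.perf_counter()
    data = fake_read()                 # hardware-timed samples arrive here
    t_stamp = time.perf_counter()      # software timestamp taken afterwards
    return data, t_stamp - t0          # skew between samples and stamp

data, skew_s = read_with_timestamp(lambda: [0.0] * 9)
print(skew_s >= 0.0)  # True; on an RT target this skew is typically tiny
```

The skew is always non-negative and, on a real-time target with a deterministic loop, is both small and roughly constant, so it can be calibrated out if needed.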

 

As I alluded to before, you can configure the data transfer request condition and data transfer mechanism to get around the behavior you were seeing with one sample, but I think this will only be necessary if you need to read just one sample from the board.

 

If you need ~ns accuracy, you will probably need to use a hardware trigger; if ~µs to ~ms is enough, you can probably get away with the method you are currently using.

 

Jesse Dennis
Engineer
INTP
Message 12 of 14

Our application typically runs at 40 Hz.  The attached diagram shows timing for the two most important clocks.  There are 4 tasks to complete during each iteration: 1) update AO (one channel, one sample), 2) read and reset a pulse counter, 3) read a GPS timestamp, and 4) read a range of AI channels on two devices.  The 40 Hz master clock triggers the time-critical loop, the analog output, the counter gate pulse, and the analog input.  The gate pulse is configured for a 2 ms delay at the start of the iteration.

 

Tasks 1 & 2 MUST be completed during the 2 ms delay.  In practice, it has not been a problem to complete all tasks in the allotted time.  AI and AO are hardware triggered and occur instantly.  The counter task (read counts, stop and then start the task to reset the counter) and the timestamp read are software driven, but seem to execute fast enough for our purposes.
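The numbers above imply a per-iteration budget that is easy to check. This quick arithmetic sketch uses only the figures stated in the post (40 Hz master clock, 2 ms gate delay):

```python
# Sanity check of the timing budget described above (numbers from the post).

MASTER_HZ = 40
period_ms = 1000 / MASTER_HZ        # time available per iteration
gate_delay_ms = 2.0                 # AO update + counter reset must finish here
remaining_ms = period_ms - gate_delay_ms

print(period_ms)     # 25.0 ms per iteration
print(remaining_ms)  # 23.0 ms left for the GPS read and the AI reads
```

So tasks 1 and 2 share a 2 ms window, and tasks 3 and 4 have the remaining 23 ms, which is why the software-driven reads have not been a bottleneck so far.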

 

In the LV 7.1 / legacy DAQ version, loop timing was controlled by waiting for the rising edge of the AI sample clock (AI Single Scan.vi).  That functionality doesn't seem to work with DAQmx, but other approaches do seem to work: I can use the master clock to trigger a timed loop structure.
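The idea of a clock-driven timed loop can be sketched in plain Python for readers not familiar with the LabVIEW Timed Loop structure. This is only a rough software analogue (the real application ties the loop to the 40 Hz hardware master clock; here a monotonic-clock deadline stands in for the clock edge):

```python
import time

# Rough software analogue of a timed loop driven by a fixed-rate clock:
# each iteration runs its body, then sleeps until the next deadline.

def timed_loop(period_s, n_iters, body):
    next_deadline = time.monotonic()
    for i in range(n_iters):
        body(i)
        next_deadline += period_s
        sleep = next_deadline - time.monotonic()
        if sleep > 0:
            time.sleep(sleep)       # wait for the next "clock edge"

ticks = []
timed_loop(0.005, 4, lambda i: ticks.append(i))
print(ticks)  # [0, 1, 2, 3]
```

Scheduling against absolute deadlines (rather than sleeping a fixed interval after the body) keeps the loop from accumulating drift, which is the same property a hardware-triggered timed loop provides.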

 

It's critical, however, to get data from all the AI channels within the 25 ms period.  We're reading 18 channels, but an ODD number from each of two devices.  If the DMA only moves two samples per transfer, will this leave one sample from each device behind?  I still need to test this.  I suppose I can sample an even number of channels if necessary to get the data on time.
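The worry above is easy to state as a toy model. The two-sample transfer granularity is an assumption carried over from the earlier discussion; DAQmx may well behave differently in practice:

```python
# Toy model: if the bus moved samples strictly in pairs, would an odd
# per-device channel count strand one sample per scan?  (Assumed transfer
# granularity of 2 -- an assumption, not a measured DAQmx behavior.)

def samples_available(channels, transfer_size=2):
    """Samples readable after one scan if only whole transfers are delivered."""
    complete_transfers = channels // transfer_size
    return complete_transfers * transfer_size

print(samples_available(9))   # 8  -> one of 9 samples lags a scan behind
print(samples_available(10))  # 10 -> an even count drains cleanly
```

Under this assumption, 9 channels per device would leave one sample stranded until the next scan's transfer pushes it out, while padding to an even count would drain cleanly, which is exactly the workaround considered above.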

 

If I still have problems, I'm sure I'll need to investigate configuring the data transfer, as you suggest.  We are not pushing a large amount of data, so transfer efficiency isn't an issue, but we do need the data to arrive promptly.

 

Thanks for all your help.  I'll see if I can work this out and get back to you if I still have problems.

Message 13 of 14

Sounds good.

 

If it was working in Traditional DAQ, it will almost certainly work in DAQmx.  I look forward to hearing whether you can get everything working.

Jesse Dennis
Engineer
INTP
Message 14 of 14