Multifunction DAQ


Is AIconv_Rate the new way of interval scanning?

Hello,
    I'm substituting an M Series card for an E Series card.  I have heard several people say that interval scanning is not supported on M Series devices, and that only round-robin scanning will work on M Series (using NI-DAQmx).  However, it looks like using DAQmxSetAIConvRate in C++ you can change the inter-channel timing (convert clock) to be fast enough that it behaves like interval scanning.  Could someone confirm this for me?  I have listed an example below to verify that I understand the concept.
    What are the units on DAQmxSetAIConvRate?  If I put in 250000, would my inter-channel delay be 1/250000 s?

    Also, say I am scanning at 2000 samples per second across 63 channels (this is my scan rate/sample clock), and I want the 63 channels to be sampled at a speed of 250kS/s.  This would mean all 63 are sampled starting on the rising edge of the sample clock, with 1/250000 s between channels.  The A/D would then wait to acquire the next set of samples on the next rising edge of the sample clock, a little less than 1/2000 s away.

Thank you

Hi sparckis-

M Series cards are in fact capable of performing interval scanning.  I suspect that you might have run across some old info about the way early versions of NI-DAQmx defaulted to round-robin scanning.  The most up-to-date information is available here.  In short, NI-DAQmx 7.4 and later by default chooses a convert period equal to the minimum convert period the board supports (typically, the period of the fastest single-channel scan rate) plus 10uSec of padding for settling between channels.  The only exception is when the 10uSec padding won't fit, in which case the convert rate falls back to round-robin scanning.

Of course you also have the option to override the default NI-DAQmx convert clock rate.  Since you mention 63 channels at 250kHz, I'll assume you're using a PCI-6225.  If you want to scan as quickly as possible you can adjust the convert rate to 250kHz and the behavior of the device will be as you described (one sample clock pulse every 1/2000 seconds which triggers a train of 63 convert clock pulses spaced 1/250k seconds apart).  However, to achieve the accuracy specifications of the device you need to (at least) allow for the 7uSec settling time prescribed by the specifications for a worst-case full-scale step settling of 1LSB.
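As a sketch of the configuration described above (not verified against hardware; the "Dev1" device name, channel range, and finite sample count are placeholder assumptions, and error checking is omitted), the NI-DAQmx C calls for a 2kHz sample clock with an explicit 250kHz convert clock would look something like:

```c
#include <NIDAQmx.h>

/* Hypothetical helper: 2000 scans/s per channel, with the convert
   clock overridden to 250 kHz (units of DAQmxSetAIConvRate are Hz,
   so 250000.0 gives a 1/250000 s = 4 uSec inter-channel delay). */
int32 configure_timing(TaskHandle task)
{
    int32 err;

    /* Sample (scan) clock: 2000 Hz, finite acquisition of 2400 scans */
    err = DAQmxCfgSampClkTiming(task, "", 2000.0, DAQmx_Val_Rising,
                                DAQmx_Val_FiniteSamps, 2400);
    if (err < 0) return err;

    /* Convert (inter-channel) clock: 250 kHz */
    return DAQmxSetAIConvRate(task, 250000.0);
}
```

Each sample clock edge then triggers a burst of convert clock pulses, one per channel in the scan list, spaced 4uSec apart.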

Hopefully this helps-

Tom W
National Instruments
Hi Tom,
    Thank you for getting back to me so quickly.  You are correct in assuming I am using the 6225.  Actually, it is replacing the old 6061E device that used Traditional NI-DAQ (Legacy).  I was trying to match up the parameters used in the C++ code and I found that it used the Scan_to_disk function.  It used all the parameters I listed in the example, including a hard-coded 250,000.0 sample rate.
    So, is DAQmxSetAIConvRate the correct function to set the convert clock?  Also, are the units plain Hz, meaning the second argument would be 250000.0 for 250kS/s?

    After reading through the link you posted, I am even more confused about how the old card worked.  The 6061E had a sampleRate of 250,000.0 and a scan rate of 2000 over 63 channels.  As far as I can tell, this defaulted to round robin, is that correct?  Since the scans are 1/2000 = 500uSec apart and we are attempting 63 conversions with 10uSec (padding) + 1/250000 s (4uSec) = 14uSec separation between channels, we end up with 14uSec * 63 = 882uSec > 500uSec.  Is this right?  If so, with round robin, what is the inter-channel delay?  Is it 10uSec of padding or still 14uSec?

Please help, I'm trying to get the new card to behave just like the old one.  Here are the parameters from the Scan_to_disk call for reference:
Scan_to_disk(1, 63, arrayorder[], gainarray[], filename, 151200, 250000.0, 2000, 0)

Is it OK to crank up the sampling rate and lose some precision, since the new card is 16-bit and the original was 12-bit?

Finally, what is the point of calling a card 500kS/s or 250kS/s if the recommended settling time between channels is ~7-10uSec (i.e. only ~100-143kS/s)?  Is it that the card can handle up to 500 or 250kS/s on a single channel, where it doesn't need to switch between channels?

Thanks again

Hi sparckis-

After reading through the link you posted, I am even more confused on how the old card worked.  The 6061e had a sampleRate of 250,000.0 and a scan rate of 2000 over 63 channels.  As far as I can tell, this defaulted to Round Robin, is that correct?

Yes.  In order to fit 10uSec of padding on top of the 1MHz maximum update rate of the 6071E you would have to operate the convert clock rate at ~91kHz.  For 63 channels, that rate would correspond to a scan period of 63 / 91k = 0.7mS.  A scan rate of 2000Hz requires a scan period of 0.5mS, so the convert clock would need to run faster than what would be required for 10uSec.  In the case of DAQmx, this means that round-robin sampling would be used, and that it would be faster than 10uSec padding but slower than the maximum possible rate.
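The fit check Tom describes can be sketched as a small calculation (hypothetical helper names; the 1MHz maximum rate and 10uSec padding are taken from the discussion above):

```c
#include <assert.h>

/* Convert period implied by adding padding_s of settling time on top of
   a board's maximum convert rate (e.g. 1 MHz on the 6071E). */
static double padded_convert_period_s(double max_rate_hz, double padding_s)
{
    return 1.0 / max_rate_hz + padding_s;
}

/* Does a scan of n_channels at the padded convert period fit into
   one sample-clock period? */
static int padding_fits(int n_channels, double sample_rate_hz,
                        double max_rate_hz, double padding_s)
{
    double scan_period_s =
        n_channels * padded_convert_period_s(max_rate_hz, padding_s);
    return scan_period_s <= 1.0 / sample_rate_hz;
}
```

For 63 channels at 2000Hz the padded scan takes 63 * 11uSec = 0.69mS, which exceeds the 0.5mS sample-clock period, so DAQmx falls back to round-robin scanning.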

 

To [hopefully] sum up the rest of your questions, in order to get the same behavior (as before with Traditional NI-DAQ) you need to set your AI.Conv Rate to 250kHz.  According to the 6225 specs, a settling time of 4uS (corresponding to a convert rate of 250kHz) would result in a full-scale step accuracy of +-6LSB (worst case).  The 6071E specs say that for a settling time of 3-5uS you can expect an accuracy of +-0.5LSB.  Keeping in mind that the 6225 is a 16-bit board and the 6071E is a 12-bit board (where the width of a 16-bit LSB is 16x "smaller" than a 12-bit LSB) I would expect better accuracy on the 6225 than you saw on the 6071E.
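The LSB comparison above can be made concrete with a one-line helper (hypothetical name; a +-10V input range gives a 20V span):

```c
/* Width in volts of one ADC code (LSB) for a converter with `bits`
   of resolution spanning span_v volts. */
static double lsb_volts(int bits, double span_v)
{
    return span_v / (double)(1L << bits);
}
```

On a 20V span, a 12-bit LSB is about 4.88mV while a 16-bit LSB is about 0.305mV, so a 16-bit LSB is exactly 16x narrower, which is why +-6LSB on the 6225 can still beat +-0.5LSB on the 6071E in absolute terms.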

The M Series device is physically able and allowed by DAQmx to sample at the 250kHz rate, though the overall accuracy might suffer slightly.  This ability is the motivation for spec'ing the minimum convert interval to achieve 1LSB (i.e. 16-bit) settling in addition to the spec for accuracy at the maximum scanning rate.

Tom W
National Instruments

After reading through the link you posted, I am even more confused on how the old card worked.  The 6061e had a sampleRate of 250,000.0 and a scan rate of 2000 over 63 channels.  As far as I can tell, this defaulted to Round Robin, is that correct?

Yes.  In order to fit 10uSec of padding on top of the 1MHz maximum update rate of the 6071E (I used the 6061E) you would have to operate the convert clock rate at ~91kHz (or 83.33kHz for the 500kS/s on the 6061E).  For 63 channels, that rate would correspond to a scan period of 63 / 91k = 0.7mS (or 63 / 83.3k = 0.76mS).  A scan rate of 2000Hz requires a scan period of 0.5mS, so the convert clock would need to run faster than what would be required for 10uSec.  In the case of DAQmx, this means that round-robin sampling would be used (What about in the case of the Traditional NI-DAQ driver: was the card using round robin or interval scanning?), and that it would be faster than 10uSec padding but slower than the maximum possible rate.

 

To [hopefully] sum up the rest of your questions, in order to get the same behavior (as before with Traditional NI-DAQ) you need to set your AI.Conv Rate to 250kHz.  According to the 6225 specs, a settling time of 4uS (corresponding to a convert rate of 250kHz) would result in a full-scale step accuracy of +-6LSB (worst case).  The 6071E specs say that for a settling time of 3-5uS you can expect an accuracy of +-0.5LSB.  Keeping in mind that the 6225 is a 16-bit board and the 6071E is a 12-bit board (where the width of a 16-bit LSB is 16x "smaller" than a 12-bit LSB) I would expect better accuracy on the 6225 than you saw on the 6071E.  (would this still be true based on the correct old card? Could you compare the Scan_to_disk function and verify that those settings are 1-1 with the Ni-daqmx settings?)

The M Series device is physically able and allowed by DAQmx to sample at the 250kHz rate, though the overall accuracy might suffer slightly.  This ability is the motivation for spec'ing the minimum convert interval to achieve 1LSB (i.e. 16-bit) settling in addition to the spec for accuracy at the maximum scanning rate.

Message Edited by sparckis on 09-18-2007 03:09 PM

  

#include <stdlib.h>
#include <NIDAQmx.h>

int main(void)
{
   TaskHandle  taskHandle = 0;
   int32       read;
   int         Gain = 10;
   int         SampleRate = 2000;              // Changed in database and setup screen
   float       prescan = 0.2f, period = 1.0f;  // Default 1.2 = 0.2 + 1
   float       Period = prescan + period;
   int         SamplesPerChannel = (int)(Period * SampleRate);  // Default = 1.2 * 2000 = 2400
   int         Sample_array_size = SamplesPerChannel * 63;      // 63 is number of channels
   float64    *data = (float64 *)malloc(Sample_array_size * sizeof(float64));  // 151200 samples: too large for the stack

   /*********************************************/
   // DAQmx Configure Code
   /*********************************************/
   DAQmxCreateTask("", &taskHandle);
   // Note: "ai0:ai62" is 63 channels ("ai0:ai63" would be 64)
   DAQmxCreateAIVoltageChan(taskHandle, "Dev1/ai0:ai62", "", DAQmx_Val_RSE,
                            -1.0 * Gain, 1.0 * Gain, DAQmx_Val_Volts, NULL);
   DAQmxCfgSampClkTiming(taskHandle, "", SampleRate, DAQmx_Val_Rising,
                         DAQmx_Val_FiniteSamps, SamplesPerChannel);
   DAQmxSetAIConvRate(taskHandle, 250000.0);   // 250kHz convert clock = 4uSec between channels
   DAQmxStartTask(taskHandle);
   DAQmxReadAnalogF64(taskHandle, SamplesPerChannel, 10.0, DAQmx_Val_GroupByChannel,
                      data, Sample_array_size, &read, NULL);
   DAQmxClearTask(taskHandle);
   free(data);
   return 0;
}

Message Edited by sparckis on 09-18-2007 03:26 PM


HI sparckis-

Sorry for the confusion there.  I looked back at the AT E Series specs and the accuracy numbers look the same, so my earlier comments should still hold true.

The default behavior in Traditional NI-DAQ, if you only specified the scan rate, would have been 10uSec of padding if it fit (which it wouldn't in your case) or round-robin if not (which is what you would have gotten by default).  Of course, since you were setting both the scan/sample clock rate and the channel/convert clock rate, the default behavior does not apply.

In short, your settings for 2k sample clock and 250k convert clock in DAQmx will give the same behavior you had in NI-DAQ Traditional with the Scan_to_Disk function and your parameters.

Tom W
National Instruments
Thanks!!!!!
Hi Tom,
    Just to make sure I am clear on the timing: setting the AI Conv rate to 250,000 and the sample clock to 2000 would give me samples spaced 1/250k = 4uSec apart.  Over 63 channels this totals 63 * 4uSec = 252uSec, leaving 248uSec (1/2k - 252uSec) to spare before the next sample clock edge.  Is this correct?  And was this the behavior of the old card?
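In code form, the timing budget described above works out as follows (a sketch with hypothetical helper names):

```c
#include <math.h>

/* Time spent converting one scan of n_channels at convert_rate_hz. */
static double scan_busy_s(int n_channels, double convert_rate_hz)
{
    return n_channels / convert_rate_hz;
}

/* Idle time left in each sample-clock period after the scan finishes. */
static double scan_idle_s(int n_channels, double convert_rate_hz,
                          double sample_rate_hz)
{
    return 1.0 / sample_rate_hz - scan_busy_s(n_channels, convert_rate_hz);
}
```

With 63 channels, a 250kHz convert clock, and a 2kHz sample clock, the scan is busy for 252uSec and idle for 248uSec of each 500uSec period.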

Thanks again (hopefully for the last time!)

Hi sparckis-

Yes, your calculations seem correct.  Since you mention 63 AI channels and M Series, I assume you're using either an NI 6225 or NI 6255 device.  You should be aware that a 4uSec convert interval may sacrifice some accuracy due to the required settling time(s) for multi-channel scanning.

For instance, the NI 6225 device specifications call for a 7uSec convert interval for +-1LSB accuracy in response to a full-scale input step.  A 4uSec convert interval would yield an approximate full-scale step response accuracy of +-6LSB.  Alternatively, the NI 6255 specifications prescribe settling times of 1.6uSec to 2.5uSec for +-1LSB full-scale step accuracy on most ranges.

So, to achieve the accuracy you are looking for you might need to tweak your convert rate slightly, depending on the device you're using.

Tom W
National Instruments