Multifunction DAQ


Official Position on E-Series [vis-a-vis S-Series] Multi-Channel Sampling?

What is the official position on E-Series PCI card [vis-a-vis S-Series PCI card] multi-channel sampling?

Apparently the S-Series PCI cards have one ADC per channel, so they can perform true simultaneous sampling [and guarantee time stamps].

However, the different channels on the E-Series cards appear to share a single ADC, so there is some overhead involved when switching from one channel to the next.

Do you have any documentation on this overhead? I'm looking for something fairly mathematical in nature, enough that I can decide whether the overhead in E-Series multichannel sampling is so large that the timestamps will be too inaccurate for our purposes. [Or, conversely, whether NI can guarantee that E-Series multi-channel timestamps will fall within windows small enough that the E-Series cards will be adequate for our purposes.]

A link to a PDF file or an HTML file would be great.

Thanks!
Message 1 of 2
You are correct about how the E Series and S Series boards operate. The S Series boards are among National Instruments' high-end multifunction DAQ boards and have a dedicated ADC for each analog input. This allows all of these channels to be sampled simultaneously and at a faster rate.

In contrast, the E Series devices have only one ADC onboard. To accommodate multichannel sampling, the board uses a multiplexer to scan the different channels. Therefore, there is naturally a small delay between when, for example, channel 0 and channel 2 are sampled. This delay is referred to as the interchannel delay.
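
To make the effect concrete, here is a purely illustrative sketch (in Python, with placeholder numbers rather than the specs of any particular board) of how the interchannel delay translates into per-channel time offsets within one scan:

```python
# Illustrative only: per-channel sample-time offsets on a multiplexed
# (single-ADC) board, assuming a fixed convert (channel) clock rate.
num_channels = 4            # channels in the scan list (placeholder)
convert_rate = 100_000.0    # convert clock in Hz (placeholder)

interchannel_delay = 1.0 / convert_rate   # seconds between adjacent channels

# Offset of each channel's conversion relative to the start of the scan.
offsets = [ch * interchannel_delay for ch in range(num_channels)]

# Worst-case skew within one scan: first channel vs. last channel.
worst_case_skew = (num_channels - 1) * interchannel_delay

print(f"interchannel delay: {interchannel_delay * 1e6:.1f} us")
print(f"worst-case skew across the scan: {worst_case_skew * 1e6:.1f} us")
```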

All E Series devices have two clocks that are used for any analog input operation. The sample (or scan) clock controls when each scan is initiated, and the convert (channel) clock controls when each individual channel within the scan is sampled (the names of the clocks depend on whether you are using Traditional NI-DAQ or NI-DAQmx). So, to determine the time between the sampling of channels in Traditional NI-DAQ, you need to know the convert (channel) clock rate being used.
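
If it helps, the two-clock model can be written out as a small timing sketch. This assumes ideal, jitter-free clocks and that channel 0 is converted right at the scan clock edge, which is a simplification, not a specification of the hardware:

```python
# Sketch of the two-clock timing model described above (assumptions:
# jitter-free clocks, channel 0 converted at the scan clock edge).
sample_rate = 1_000.0       # scan (sample) clock in Hz (placeholder)
convert_rate = 100_000.0    # convert (channel) clock in Hz (placeholder)
num_channels = 4
num_scans = 3

def sample_time(scan_index, channel_index):
    """Nominal timestamp of one conversion, relative to the first scan."""
    return scan_index / sample_rate + channel_index / convert_rate

for scan in range(num_scans):
    times = [sample_time(scan, ch) for ch in range(num_channels)]
    print(f"scan {scan}: " + ", ".join(f"{t * 1e3:.3f} ms" for t in times))
```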

Traditional NI-DAQ selects the fastest channel clock rate possible. To allow adequate settling time for the amplifier and any unaccounted-for factors, it then adds an extra 10 µs to the interchannel delay (channel clock period). If the scan rate is too fast for Traditional NI-DAQ to apply the 10 µs delay and still sample every channel before the next scan clock edge, the delay is not added. Likewise, if the user manually specifies a rate for the channel clock, the delay is not added.
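
The selection rule above can be sketched as follows. This is not NI's driver code, only an illustration of the rule just described; the fastest conversion period is a placeholder, and your board's actual maximum aggregate rate determines the real value:

```python
# Sketch of the channel-clock selection rule described above. This is not
# NI's driver code; the fastest conversion period below is a placeholder.
def traditional_nidaq_channel_period(scan_rate, num_channels,
                                     fastest_conversion_period=1e-6,
                                     settling_pad=10e-6):
    """Return the channel clock period (s) the rule above would pick."""
    padded = fastest_conversion_period + settling_pad
    # The padded delay is only used if every channel still fits in one scan.
    if padded * num_channels <= 1.0 / scan_rate:
        return padded
    return fastest_conversion_period

# At a slow scan rate the 10 us pad fits; at a fast one it is dropped.
print(traditional_nidaq_channel_period(scan_rate=1_000.0, num_channels=8))
print(traditional_nidaq_channel_period(scan_rate=50_000.0, num_channels=8))
```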

Using Traditional NI-DAQ, you can manually set your channel clock rate with the interchannel delay input of the AI Config VI, which calls the Advanced AI Clock Config VI to actually configure the channel clock. This information can also be found at the following sites:

How is the Convert (Channel) Clock Rate Determined in NI-DAQmx and Traditional NI-DAQ?

What Is the Difference Between Interval Scanning and Round Robin Scanning?
How Is the Channel Clock Rate Determined in My Data Acquisition VI Using Traditional NI-DAQ?

NI-DAQmx behaves slightly differently: it selects the slowest convert clock rate possible in order to allow for more settling time, unless the user manually specifies a convert clock rate using the DAQmx Timing property node.
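
For readers driving NI-DAQmx from a text-based language rather than the LabVIEW property node, a rough equivalent might look like the sketch below. It assumes the nidaqmx Python package, a device named "Dev1", and that the driver exposes the convert clock through the ai_conv_rate timing property on your hardware; treat it as an outline, not a verified recipe for your board:

```python
# Rough text-based counterpart of the DAQmx Timing property node usage
# described above. Assumes the nidaqmx Python package, a device named
# "Dev1", and driver support for ai_conv_rate on this hardware.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")

    # Scan (sample) clock: one scan of all four channels every 1 ms.
    task.timing.cfg_samp_clk_timing(
        rate=1_000.0,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=100,
    )

    # Convert (channel) clock: override the driver's default choice.
    task.timing.ai_conv_rate = 100_000.0   # 10 us between channels

    data = task.read(number_of_samples_per_channel=100)
```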

The following link takes you to a webpage that discusses the minimum and maximum values for interchannel delay and includes some links and an example that may be useful:

http://digital.ni.com/public.nsf/websearch/9AE87416C8792FC286256D190058C7D3?OpenDocument

With this information, you should be able to calculate the interchannel delay for your specific application and for the clocks being used, and then check it against your timing requirements, as in the sketch below.
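
As a final illustration, here is a small check tied back to the original question about timestamp windows. All values are placeholders for your own channel count, convert clock rate, and tolerance:

```python
# Illustrative go/no-go check: will the E Series timestamp skew fit inside
# the window your application requires? All values are placeholders.
num_channels = 4
convert_rate = 100_000.0            # chosen or driver-selected, in Hz
required_window = 50e-6             # your application's tolerance, in s

worst_case_skew = (num_channels - 1) / convert_rate

if worst_case_skew <= required_window:
    print(f"OK: skew {worst_case_skew * 1e6:.1f} us fits in "
          f"{required_window * 1e6:.1f} us")
else:
    print(f"Too slow: skew {worst_case_skew * 1e6:.1f} us exceeds "
          f"{required_window * 1e6:.1f} us; consider an S Series board")
```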

Regards,
Michael
Applications Engineer
National Instruments
Message 2 of 2