I get different results in AI voltage depending on whether I sample the data with an internally generated clock signal (DAQmxCreateCOPulseChanFreq) or an external clock signal, in this case an encoder signal on PFI8. This effect happens with both an M-Series 6220 and an X-Series DAQ card. I measure differentially in the ±1 V range. When the maximum signal amplitude is ca. 20 mV, there seems to be an offset of ca. -1 mV on the signal. The sampling rate is 40 kHz. I need to use the external clock because otherwise I have problems with the capacity of the data buffers. In addition, the quadrature encoder is counted.
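For reference, the two timing setups can be sketched roughly like this in DAQmx C/C++ (a hardware-configuration sketch only; device and channel names such as "Dev1/ai0" and the buffer size are my assumptions, not taken from the actual system):

```cpp
#include <NIDAQmx.h>

// Sketch: configure an AI task either with the onboard sample clock
// or with the encoder's phase A signal on PFI8 as an external clock.
int configureAI(TaskHandle *task, bool useExternalClock)
{
    DAQmxCreateTask("", task);
    // Differential measurement in the +-1 V range, as described above.
    DAQmxCreateAIVoltageChan(*task, "Dev1/ai0", "",
                             DAQmx_Val_Diff, -1.0, 1.0,
                             DAQmx_Val_Volts, NULL);
    // Internal timing: empty source string selects the onboard clock.
    // External timing: PFI8 drives the sample clock; the 40 kHz rate
    // argument is then only an estimate used for buffer sizing.
    const char *clkSrc = useExternalClock ? "/Dev1/PFI8" : "";
    return DAQmxCfgSampClkTiming(*task, clkSrc, 40000.0,
                                 DAQmx_Val_Rising,
                                 DAQmx_Val_ContSamps, 10000);
}
```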
Can anybody explain this strange behaviour to me?
It might help if you can show us some data.
1. How similar are the frequencies when using internal and external clocks? If there are significant differences, settling time issues may be a factor. How much jitter is on the encoder? Over what range does the encoder frequency vary while the acquisition is running?
2. How clean is the encoder signal? Some encoders produce signals with logic-level high and low voltages but rather slow transitions. Many clock circuits do not work well with slow transitions. A Schmitt trigger buffer can help.
3. Can you measure the offset with a voltmeter or oscilloscope connected in parallel to the DAQ device? Does it show the effect?
4. What is the source impedance for the signal? Are multiple channels being acquired? If so, are some of the signals much larger than others?
It's a Hall-sensor measurement. Over 360° you get two poles in this case. The curve is similar to a sine curve and is the basis for various analyses.
The encoder is from Heidenhain with additional electronics (IBV), so that there are 50000 ticks over 360°. One turn takes ca. 2 seconds. The number of data points I get is correct.
The signal is noisy (some mV). The analysis software (smoothing and so on) is the same in both situations. When measuring with the internal clock, the rate is set to 60 kHz, so I get ca. 3 data points per encoder position. The final value in this case is the mean value.
The complete measuring stations are installed at our customer's site, therefore it's difficult to run other tests. I can give you an example:
Internal clock: max signal 21.6 mV, min signal -21.1 mV - this is correct, checked against a system that uses an E-Series card (external clock via a 6601).
External clock: max signal 20.8 mV, min signal -21.8 mV - result with M-Series + 6601 and with X-Series.
I'm programming with DAQmx and C++.
What happens if you connect the encoder to an unused PFI input and use the internal clock for the measurement? I am wondering if you may have a ground loop between the encoder and the rest of the system.
I've tried different versions. In all cases where I use an encoder signal for timing, I get an offset (only on low signals - signals above ca. 50 mV seem to be okay).
1) AI measured by the 6220, clocked via PFI8 (encoder phase A); encoder counted on the 6601 -> offset ca. -1 mV
2) Counter and AI on the 6220; clock generated on the 6601 (DAQmxCreateCOPulseChanTicks) and routed via RTSI -> offset ca. +2 mV
3) Only AI on the 6220, clocked via PFI8 (encoder phase A) -> offset ca. -1 mV (no use of the 6601)
In general: the encoder is wired to both the 6220 and the 6601.
The problem is that a similar system (located some thousand km away) with an X-Series card shows the same effect - AI and encoder on the X-Series, clock via PFI13, encoder "double wired".
What else can I do?
I doubt anyone will be able to resolve that here on the Forum. It may require on-site evaluation of the situation.
I would start by calling the NI rep for your area and asking them to take a look at your system to see if anything is apparent about the connections or signals. Although the offsets are significant as a percentage of the signal, they are rather small in absolute terms.
Sorry that I cannot offer any better ideas at the moment.
One more thought: Does the Hall sensor have an internal amplifier? Possibly not, based on the voltages you are measuring. Some DAQ devices have hard to quantify effects when measuring signals with high source impedances. The few hundred to few thousand ohms that you may be getting from the sensor might be enough to create the effects you see. Do you have a buffer amplifier that you can put between the sensor and the DAQ? If you use a small battery with a voltage divider (~1000 ohms) to get a 20 mV signal in place of the sensor, do you see the offsets? Do you have any way to generate a 20 mV signal with low source impedance for testing purposes?
Thanks for your ideas. All tests that require hardware changes are difficult, because all components are built in (it's a QA system in a production line).
The sensor has an amplifier, and I can change the gain with a switch - factor 10, so I would get 200 mV instead of 20 mV.
I think these problems would then not appear, but it also reduces the measuring range by a factor of 10, and that is the next problem. I must discuss with our customer whether he really needs the large range.
The amplifier is an OEM product with an analog output (±3 V), and at the moment I don't know whether a buffer amplifier is integrated or what its output impedance is.
Can you explain in simple terms why the type of timing has an influence on the measured AI signal?
Since your sensor has an amplifier, its output impedance should be low enough not to be a problem.
Search the Forums for "ghosting" to learn more about how source impedance can affect measurements. Because the source impedance limits the current to charge the input capacitance of the DAQ device, it creates a time constant for that charging. That time constant must be much shorter than the interval between samples to allow the capacitances to fully charge before the sample is taken.
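The settling argument can be put into rough numbers. Assuming a first-order RC model with, say, 100 pF of effective input capacitance (an illustrative figure, not a spec for these devices), the input settles as 1 - exp(-t/RC):

```cpp
#include <cmath>

// Fraction of the final value reached after settling time t
// on a first-order RC input stage (R = source impedance,
// C = effective input capacitance of the multiplexed DAQ input).
double settledFraction(double rOhms, double cFarads, double tSeconds)
{
    return 1.0 - std::exp(-tSeconds / (rOhms * cFarads));
}
```

At 40 kHz the sample interval is 25 µs. With a 1 kΩ source and 100 pF, RC is 0.1 µs, so the input sees about 250 time constants and settles completely; with a 1 MΩ source, RC is 100 µs and the input reaches only about a quarter of the final value. This is why a buffered, low-impedance source matters.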
If the customer needs both the range and the low level accuracy, a somewhat expensive option might be to add a second sensor. Set one to each range. Use the reading from the high gain sensor for small signals and the reading from the low gain sensor for large signals. There will probably be a substantial overlap region. In that region use the signal with the best accuracy or take a weighted average to slightly reduce noise.
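A sketch of that two-range combination (the overlap limits and all names are example values I chose, not figures from the actual system):

```cpp
#include <cmath>

// Combine a high-gain (small-range) and a low-gain (large-range) reading,
// both already scaled to the same physical units. Inside the overlap
// region [loVal, hiVal] the two readings are cross-faded linearly.
double combineRanges(double highGain, double lowGain,
                     double loVal = 0.15, double hiVal = 0.25)
{
    double mag = std::fabs(lowGain);
    if (mag <= loVal) return highGain;  // small signal: trust high gain
    if (mag >= hiVal) return lowGain;   // large signal: high gain saturates
    double w = (mag - loVal) / (hiVal - loVal);  // weight toward low gain
    return (1.0 - w) * highGain + w * lowGain;
}
```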