If the DSA cards use a Delta-Sigma ADC, there must be a speed versus effective number of bits (ENOB) table or curve, but I have not found it.
Modern 24-bit ADCs drop to roughly 18 bits at 100 kHz sample rates. I want to know what the sample-rate-versus-ENOB curve for the DSA cards looks like.
At a 100 kHz sample rate, for example, what ENOB, and therefore usable accuracy, can I expect from the DSA cards?
Where can I find this curve, so I can balance speed against usable accuracy in my application? Thanks.
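For anyone following along: ENOB is normally derived from a measured SINAD figure using the standard conversion. A minimal sketch of that calculation (the 110 dB SINAD below is an illustrative assumption, not a DSA spec):

```python
def enob(sinad_db: float) -> float:
    """Convert SINAD (in dB) to effective number of bits using the
    standard relation ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# Example with an assumed SINAD of 110 dB (not a published spec):
print(round(enob(110.0), 1))  # -> 18.0
```

This is why an ENOB curve, where published, is really just the SINAD-versus-sample-rate curve in different units.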
Actually, it does not change: the Delta-Sigma converter runs at a constant rate, and the sampling rates you configure are decimated down from the converter's base clock rate.
For instance, on the 4464 the ADC modulator runs at a constant 6.64 MHz irrespective of your sampling rate. That 6.64 MHz data stream is then decimated to the sampling rate you requested.
This is further explained in the user manual.
So, unless the datasheet specifies otherwise, all the specifications apply irrespective of the sampling rate.
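To make the decimation idea concrete, here is a deliberately simplified sketch. The 6.64 MHz and 100 kHz figures come from this thread; the 1-bit stream and the single boxcar average are toy assumptions — real DSA hardware uses multi-stage digital filters, not a plain average:

```python
import random

MOD_RATE = 6_640_000          # 6.64 MHz modulator rate (from the post)
SAMPLE_RATE = 100_000         # requested 100 kHz output rate
factor = MOD_RATE // SAMPLE_RATE   # ~66 modulator samples per output sample

random.seed(0)
# Fake 1-bit modulator output for a DC input at 0.25 of full scale:
# +1 with probability 0.625, -1 otherwise.
bits = [1 if random.random() < 0.625 else -1 for _ in range(factor * 1000)]

# Boxcar-average each block of `factor` bits into one output sample.
decimated = [sum(bits[i:i + factor]) / factor
             for i in range(0, len(bits), factor)]
print(sum(decimated) / len(decimated))  # close to 0.25
```

The point is simply that the modulator always runs flat out; choosing a lower sample rate only changes how much filtering and averaging happens before you see the data, which is why the headline specs do not depend on the configured rate.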
Thank you - wow, so it really is 24 bits at whatever speed I choose...
I presume there is almost always flicker in the last bit or two, even in a "clean", sensible installation with everything shielded, etc. I want to time-integrate signals of around 70 microvolts peak, so my peak values will use only a small fraction of the total +/- 0.316 V range; but since I want to integrate, it would be great to get as many usable bits as possible for the smaller values during the rise and fall times of the signal.
Is it reasonable to assume a usable 22 bits, whatever sample rate I choose? Thanks, Matthew
I would go with the full 24 bits usable and consider the accuracy specifications of that range.
Thank you very much for your reply - the key for me was your last phrase: "consider the accuracy specifications of that range".
It was not so easy to find the accuracy specs - until I found this document:
When I put my range and signals into the calculation, the absolute accuracy is somewhat less than that of a 12-bit device...
For me the 100 µV offset was the problem.
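In case it helps a future reader, here is a sketch of the arithmetic behind that conclusion. The ±0.316 V range and 100 µV offset are the numbers from this thread; treating the offset as the whole error budget is a simplifying assumption (the real accuracy spec also includes gain error and noise, which push the effective resolution even lower):

```python
import math

span_v = 2 * 0.316            # full input span for the +/-0.316 V range
offset_v = 100e-6             # assumed dominant error term: 100 uV offset

lsb_24 = span_v / 2**24       # ideal 24-bit code width, ~37.7 nV
effective_bits = math.log2(span_v / offset_v)
print(round(effective_bits, 1))  # ~12.6 bits from the offset alone
```

So the 100 µV offset alone limits absolute accuracy to roughly 12.6 bits on that range, and once gain error and noise are added the total comes out below a 12-bit device, as described above.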
Right at the end of the NI article there is a note that says it all, ending with ... "When we are looking at a difference in voltage, the absolute accuracy becomes less important." So these devices really are intended for voltage difference measurements, with very accurate timing built in...
So I think I can overcome most of this by measuring in the forward and reverse directions - normal and with the connections reversed - and averaging the two to give me a computed zero level (normal good practice anyway). This assumes the internal offset remains the same for both measurements. Certainly it seems the device input itself cannot give me a zero level to the accuracy I had hoped for.
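A minimal sketch of that polarity-reversal trick, using the numbers from this thread (the stable 100 µV offset is the stated assumption; if the offset drifts between the two readings, the residual error is half the drift):

```python
# With the leads normal:   v_fwd = +v_true + offset
# With the leads reversed: v_rev = -v_true + offset
# Half the difference recovers v_true with the offset cancelled,
# provided the offset is the same for both measurements.
def offset_cancelled(v_fwd: float, v_rev: float) -> float:
    return (v_fwd - v_rev) / 2.0

offset = 100e-6               # assumed stable internal offset
v_true = 70e-6                # 70 uV signal from the post
v_fwd = v_true + offset       # what the device would read, leads normal
v_rev = -v_true + offset      # what it would read, leads reversed
print(offset_cancelled(v_fwd, v_rev))  # ~7e-05, the 70 uV signal
```

Note that averaging the two readings directly, (v_fwd + v_rev) / 2, gives the offset itself, which is the "computed zero level" mentioned above.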
DSA devices focus on preserving frequency content and relative accuracy; they are not intended to replace a DMM, which is good at keeping absolute accuracy high. Hence these DSA devices are used for Sound and Vibration applications, where the absolute value matters less than the frequency content and the relative values themselves.