03-22-2016 10:17 PM
We have a requirement for AC RMS voltage measurement resolution. I'm measuring a repetitive AC signal with 512 samples over 1 cycle using the full scale range. I then calculate the RMS value from this data. What is the effective RMS measurement resolution? Shouldn't 512, 12-bit samples (signal varies for each sample) produce higher resolution than one 12-bit measurement? How can the effective RMS bit resolution be calculated?
03-23-2016 05:54 PM - last edited on 06-09-2024 05:40 PM by Content Cleaner
The RMS bit resolution is directly proportional to the resolution of the signal under measurement: Vrms = Vp/sqrt(2). If you are sampling above the Nyquist rate for the signal under measurement, then the resolution will be the voltage range (+/-10 V = 20 V) divided by the number of ADC codes (20 V / 4096 = 4.88 mV). The specifications document for your PCI 6110 (https://www.ni.com/en-us/support/model.pci-6110.html) provides absolute accuracy for DC measurements, but I could not find any for AC measurements, so this resolution calculation will have to do.
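A quick sanity check of that step size (assuming the +/-10 V range and 12-bit converter discussed in this thread):

```python
# Assumed setup from the thread: +/-10 V input range on a 12-bit ADC.
full_scale = 20.0           # volts spanned by the range (-10 V to +10 V)
codes = 2 ** 12             # 4096 quantization levels for 12 bits
lsb = full_scale / codes    # size of one code step
print(f"LSB = {lsb * 1e3:.2f} mV")  # -> LSB = 4.88 mV
```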
03-23-2016 11:38 PM
I don't think this is true for an AC signal. Since the signal is continuously changing, you can't assume the LSB error for each sample is always the same. Some samples could be exactly correct while others could be off by as much as 1 LSB. This should come down to some statistical averaging related to the number of samples (the error is not a static bias). Also, the first sample may be slightly late, so each 1/4 cycle isn't an exact mirror copy.
03-24-2016 08:01 AM - last edited on 06-09-2024 05:40 PM by Content Cleaner
Please read this document as it outlines the different metrics of an analog to digital converter.
03-24-2016 11:44 AM
Nyquist is not an issue here because the sample rate (2.5 MSPS) is so much higher than the signal I'm measuring (60 Hz).
I think the key in that document is figure 10. In my case, the dithering effect is intrinsic because each sample is not an integer multiple of the LSB. If each sample has a different LSB error (some high, some low) and given enough samples, does this imply the LSB error gets nulled out when performing the RMS calculation (N discrete consecutive samples converted to 1 RMS value)? How can this error be calculated for a specified number of samples (e.g. 1 cycle captured by 512 samples producing 1 RMS value)?
Going back to the example shown in figure 10, if the undithered waveform was captured with 8-bits resolution, what equivalent resolution would you get if you dithered 512 waveforms (8 + 9 bits)? Would the dithered resolution be the same for each point in the waveform regardless of the amplitude? The RMS calculation introduces a similar dithering effect, but it’s not simple averaging (quadratic mean).
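The dithering question can be explored numerically. This sketch (my own illustration, not from the linked document) quantizes a sine to a 12-bit grid, then averages 512 captures taken with added Gaussian dither; the residual error of the average shrinks by a large factor, though the exact gain depends on the dither amplitude rather than being exactly sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 20.0 / 4096                          # 12-bit step over a +/-10 V range
t = np.arange(512) / 512
clean = 7.3 * np.sin(2 * np.pi * t)        # arbitrary amplitude, not code-aligned

def quantize(v):
    return np.round(v / lsb) * lsb         # ideal mid-tread quantizer

# Error of a single undithered capture vs. the average of 512 dithered captures
single_err = np.std(quantize(clean) - clean)
captures = [quantize(clean + rng.normal(0.0, lsb, clean.size))
            for _ in range(512)]
avg_err = np.std(np.mean(captures, axis=0) - clean)
print(single_err / avg_err)                # several-fold improvement
```

With 1 LSB of dither the improvement here is a few times, not the full 22.6x, because the dither itself adds noise that the averaging then has to remove; smaller dither gives less linearization but a larger net gain.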
03-25-2016 11:15 AM
Are you measuring a sine wave?
03-28-2016 02:10 PM
It's a periodic waveform but not necessarily a sine wave. It will have a THD of less than 12%. We always capture exactly 1 full cycle consisting of 512 samples. We would also like to determine the effective RMS resolution with a smaller number of samples (always a power of 2).
03-29-2016 03:56 PM
The resolution for the calculated RMS voltage will be directly proportional to the amplitude error. Please refer to the section outlining "amplitude error" in the document I provided earlier.
RMS, calculated from a discrete signal, is the square root of the sum of the squared voltage values divided by the number of samples: Vrms = sqrt((v1^2 + v2^2 + ... + vN^2) / N)
Since the RMS calculation is directly dependent on the accuracy of the voltage readings, this will be your "RMS resolution".
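For reference, a direct implementation of that discrete RMS, checked against the Vp/sqrt(2) rule for one full cycle of a sine sampled at the 512 points mentioned in this thread:

```python
import math

def rms(samples):
    """Square root of the mean of the squared values."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# One full cycle of a 10 V peak sine in 512 samples: Vrms should be Vp/sqrt(2)
peak = 10.0
sine = [peak * math.sin(2 * math.pi * k / 512) for k in range(512)]
print(rms(sine))  # ~7.0711, i.e. 10/sqrt(2)
```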
03-30-2016 09:47 PM
I don't see how this takes into account the benefits obtained with varying LSB errors between samples.
The best I can determine via Google is that the resolution increase for a simple average is the square root of the number of samples. So 512 samples would give an improvement of about 22.6 times (roughly 4.5 extra bits). The improvement for a quadratic mean is probably different (less), but my tests here with actual hardware are in that ballpark. Google also revealed that some applications intentionally add a small amount of random noise (dither), which results in a dramatic improvement in the calculated resolution.
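A Monte Carlo sketch along these lines (my own illustration, assuming random phases and amplitudes so the per-sample quantization errors decorrelate) shows the spread of the quantized-RMS error shrinking as the sample count grows, roughly as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(1)
lsb = 20.0 / 4096                         # 12-bit step over +/-10 V

def rms_error_spread(n_samples, trials=2000):
    """Std. dev. of (quantized RMS - true RMS) over many random waveforms."""
    errs = []
    for _ in range(trials):
        amp = rng.uniform(5.0, 9.0)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        v = amp * np.sin(2 * np.pi * np.arange(n_samples) / n_samples + phase)
        q = np.round(v / lsb) * lsb       # quantize to the ADC grid
        errs.append(np.sqrt(np.mean(q ** 2)) - np.sqrt(np.mean(v ** 2)))
    return float(np.std(errs))

for n in (32, 128, 512):
    print(n, rms_error_spread(n))         # spread drops as n grows
```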