We are using an NI 5734 digitizer + PXIe-7966R FPGA combination to monitor an ultra-low-noise sensor. These sit in a PXIe-1082 chassis with a PXIe-8133 controller. Our noise budget is presently dominated by the DAQ itself. Using terminated inputs (no external device attached) and sampling (AC coupled) for a few tens of microseconds at 120 MS/s, we observe an rms noise of 0.2 mV in the blank waveforms. There are no significant spikes in the frequency spectrum of this noise, which is well described by a 1/f spectrum. To reach our goals we would need to improve the stability of these blank waveforms by just a tad (a factor of two or so). We have seen no improvement from removing other modules from the chassis or from moving the digitizer to slots away from the controller. Is this level of waveform stability typical of this digitizer and chassis combination? If so, can anything be done to improve it (a lower-noise chassis, perhaps)?
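For anyone reproducing this measurement offline, here is a minimal sketch of how we compute the rms figure quoted above from a blank (terminated-input) trace. The trace here is simulated white noise just to make the snippet self-contained; in practice you would load your own digitizer samples (in volts) into the array.

```python
import numpy as np

def rms_noise(samples):
    """RMS of a blank waveform with the mean (DC offset) removed."""
    samples = np.asarray(samples, dtype=float)
    return np.sqrt(np.mean((samples - samples.mean()) ** 2))

# Simulated blank trace: 0.2 mV rms Gaussian noise (stand-in for real data)
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.2e-3, 4096)
print(rms_noise(trace))  # close to 0.2e-3 V
```

A power spectral density of the same trace (e.g. via `numpy.fft.rfft`) is what we used to check for spikes and the 1/f shape.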
Thanks in advance.
You could probably improve it a little by making better use of your full dynamic range. What are the min/max values of your expected waveform? You will want to increase the gain on the ADCs, which gives you a smaller input range and therefore better resolution and noise performance. See page 16 of the specifications:
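To make the trade-off concrete, here is a quick back-of-the-envelope sketch of how front-end gain shrinks the volts-per-code width. The 16-bit / 2 Vpp figures are assumptions for illustration; check the actual values in the NI 5734 specifications.

```python
def lsb_size(full_scale_vpp, bits, gain_db):
    """Volts per ADC code after applying front-end gain.

    Gain shrinks the usable input range referred to the input,
    so each code spans a proportionally smaller voltage.
    """
    effective_range = full_scale_vpp / (10 ** (gain_db / 20))
    return effective_range / (2 ** bits)

# Assumed figures: 16-bit ADC, 2 Vpp nominal input range
print(lsb_size(2.0, 16, 0))   # ~30.5 uV per code
print(lsb_size(2.0, 16, 12))  # ~7.7 uV per code, roughly 4x finer
```

The same factor of ~4 shows up in the noise measured in codes only if the dominant noise source sits in front of the gain stage; noise added after the gain (e.g. by the ADC itself) is not reduced.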
I hope this helps.
Hi Nathan. Thanks for your quick response. Yes, that would help sufficiently, but we are facing another unresolved conundrum when changing the gain. See the postings by my student Bjorn at
Thanks in advance for any help you can offer.
What are the waveform characteristics of the signal you're measuring? Voltage, Vpp, repetitive, frequency, source, etc.
We are measuring two different signals right now for the sake of troubleshooting.
The first is the output of a function generator (Agilent 33521A) producing a pulse signal with a pulse height of 100 mV, a duration of 100 us, rise and fall times of 10 ns, and a repetition rate of 1 Hz. The second is a terminated channel. We are taking data at 50 MS/s. Our traces are usually 20,000 samples long (i.e. 400 us).
As suggested, we have changed the gain on the NI 5734 to lower our noise. This works as expected: the noise really dropped, by a factor of ~4 at 12 dB gain. We also verified that the gain has actually changed by measuring the 100 mV output of the Agilent. So thanks again for pointing this out.
But now we are facing a different problem. Once I change the gain on the NI 5734, we see a new kind of noise: ripples/bunches in the traces that were not there before. During those noisy periods, which are usually about 60 us long, the digitizer noise doubles from about 30-40 ADC counts to about 60 counts on a terminated channel. The ripples are usually on the order of 300-400 us apart within a single channel, and the ripple in one channel is usually about 100 us ahead of or behind the ripple in another channel.
The situation gets worse once we acquire traces from the function generator (or any other source). Then the excursions are no longer confined to ~60 counts; we see spikes in the signals that are 10,000 counts or more. It's usually only a single sample that is out of order, while the samples before and after look normal. But we can have many (20+) of those spikes in a 5,000-sample trace.
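For completeness, this is roughly how we count those isolated outliers offline: a sample is flagged only if it jumps far from both of its neighbors, which is the signature of a single-sample glitch rather than a real feature of the pulse. The helper and threshold here are ours, not anything from NI; the trace is a synthetic ramp with two injected glitches.

```python
import numpy as np

def find_single_sample_spikes(trace, threshold):
    """Indices of samples that differ from BOTH neighbors by more
    than `threshold` -- isolated single-sample glitches."""
    trace = np.asarray(trace, dtype=float)
    d_prev = np.abs(trace[1:-1] - trace[:-2])   # jump from previous sample
    d_next = np.abs(trace[1:-1] - trace[2:])    # jump to next sample
    return np.where((d_prev > threshold) & (d_next > threshold))[0] + 1

# Clean ramp with two injected single-sample glitches
t = np.arange(100, dtype=float)
t[30] += 10000
t[70] -= 10000
print(find_single_sample_spikes(t, 5000))  # [30 70]
```

This only detects the glitches after the fact, of course; it doesn't explain where they come from, but it gives a reproducible count per trace while troubleshooting.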
I am certain that this is not a problem with the FIFOs, the general structure of my FPGA code, or the host application, since everything works fine as long as I don't change the gain. Could it be that the attenuator is corrupting the signal, or that the gain control puts out a 10 kHz signal to verify the digitizer is still in the right gain setting, which disturbs the digitization?
Thank you very much,
Thank you for all of the information! Is there any way that you could post a screenshot of your data and/or any other helpful screenshots as we continue to troubleshoot this issue?
Thank you for letting me know. Let's follow that other forum thread to consolidate this issue into one post!