
DAQ input resolution

I am using the 12-bit PCI-6115 DAQ board to acquire noisy, low-voltage signals (less than 100 mV). I have set up a voltage range of +/- 0.500 V in MAX. With 12 bits, in a perfect world this should give me quantization steps of about 0.25 mV.
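For reference, here is the arithmetic behind that figure as a quick Python check (a minimal sketch; the 12-bit depth and +/- 0.5 V range are the settings described above):

    # Ideal LSB (quantization step) of a 12-bit ADC over a +/-0.5 V range
    bits = 12
    v_min, v_max = -0.5, 0.5
    lsb = (v_max - v_min) / 2**bits
    print(f"Ideal step: {lsb * 1e3:.3f} mV")  # -> Ideal step: 0.244 mV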

My problem: when I measure a signal around 0 V with noise on it, the reading fluctuates between only 2 or 3 distinct voltages, all well beyond 0.25 mV apart (more like 2-5 mV). I understand there are most likely other sources of error at work here, but it seems that with a noisy signal I should see much finer variation than this.
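A quick way to quantify the symptom is to count the distinct levels in an acquired buffer. Here is a sketch with NumPy; since I can't attach real data, `data` below is a simulated stand-in (0 V signal, 2 mV RMS noise, quantized to 2.5 mV steps) that reproduces the behavior:

    import numpy as np

    # Simulated stand-in for the acquired buffer (hypothetical, see note above)
    rng = np.random.default_rng(0)
    step = 2.5e-3
    data = np.round(rng.normal(0.0, 2e-3, 10_000) / step) * step

    levels = np.unique(data)
    print(f"{levels.size} distinct levels")
    # Smallest gap between adjacent levels ~ the effective quantization step
    print(f"Effective step: {np.diff(levels).min() * 1e3:.3f} mV")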

I have set up voltage ranges in MAX. Is it necessary to do the same in the Traditional NI-DAQ AI Config.vi? Is there something else I'm missing? Or is this the best I can hope for?

---
BC
Message 1 of 4
Hello BC. Thank you for contacting National Instruments. I took a look at the specifications for the PCI-6115. The absolute accuracy is 0.71 mV at the +/- 0.5 V input range. At +/- 0.2 V, the accuracy improves to 0.39 mV, which might be a better fit for your low-level signal.

There are two things I would like you to try to improve your signal. First, increase your scan rate; you might not be taking in enough points. Second, make sure the Data Mode is set to Continuous. If it is in Strip Chart Mode, the data might look skewed. If these settings don't improve the data, please check your configuration.

Once you get a good signal in MAX, you should use the same setup in LabVIEW. Unless you are scaling your data, set the input limits in AI Config.vi; the driver uses the limits to select the gain for the input range, which improves the effective resolution of the signal. I hope this answers all of your questions. Have a great day!
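If you are on the newer NI-DAQmx driver instead of Traditional NI-DAQ, the analogue of wiring input limits into AI Config.vi is the min_val/max_val pair when you create the channel. A minimal sketch with the nidaqmx Python package (the device name "Dev1" and the sample rate are assumptions, not from your setup):

    import nidaqmx

    with nidaqmx.Task() as task:
        # Explicit input limits, analogous to wiring limits into AI Config.vi;
        # the driver uses them to choose the input range/gain ("Dev1" assumed)
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0",
                                             min_val=-0.5, max_val=0.5)
        task.timing.cfg_samp_clk_timing(rate=10_000, samps_per_chan=1000)
        samples = task.read(number_of_samples_per_channel=1000)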

Marni S.
National Instruments
Message 2 of 4
Thank you for the suggestions.

I have found that the problem lies in AI Config.vi. If I set up my channels in MAX but do not wire input limits to AI Config.vi, the output looks like what I described above. However, when I DO wire input limits directly to the VI, I get the resolution I expect.

This is troubling, since according to the AI Config.vi documentation, the default behavior (when limits aren't explicitly wired to the VI) is to use the range declarations defined in MAX. But, as stated, that is not what happened.

Any thoughts? Is there something else I need to be doing to link MAX to LabVIEW?

---
BC
Message 3 of 4
When you say that you set up your channels in MAX, are you configuring the channel in the test panel, or are you creating a Traditional NI-DAQ Virtual Channel? Configuration done in the test panel is not saved; you can see this if you close the test panel and re-open it. If you configure a virtual channel, you then have to connect that virtual channel to AI Config.vi for LabVIEW to use it. To do this, click the pull-down menu for channels (on the front panel) and select the virtual channel instead of 0, 1, 2, etc.

If you take a close look at the documentation, it says, "If you use channel names, select the best limits for your channel configuration." This refers to using 0, 1, 2, etc. as the channel name instead of a virtual channel from MAX: with bare channel numbers, the MAX range declarations are not applied, so you must wire the limits yourself. I hope this helps. Have a great day!

Marni S.
National Instruments
Message 4 of 4