
gats

Member

06-04-2009 07:49 AM

Hi,

Could someone enlighten me on the minimum voltage that can be read by the PCI-6024E DAQ card? I read in the manual that the range is ±0.05 V to ±10 V, with a sensitivity of 0.008 mV and 12-bit input resolution. Does this mean that the minimum voltage is 0.05 V?

Thanks.

AdnanZ

Active Participant

06-04-2009 08:25 AM


06-04-2009 08:37 AM


Hi, thanks for the reply. I am sorry, but I meant: what is the closest voltage to zero that I can read reliably? If I understand the data sheet correctly, I can set the range to -0.5 to +0.5, with a sensitivity of 8 µV and an absolute accuracy of 0.106 mV. I am a bit confused, as I don't know how to interpret the absolute accuracy...

Many thanks.

Julien_31

Active Participant

06-04-2009 09:12 AM


Have a look at the note under the table in your document:

> The Absolute Accuracy at Full Scale calculations were performed for a maximum range input voltage (for example, 10 V for the ±10 V range) **after one year**, assuming 100 points of averaged data.

What I understand is that the accuracy will be around 0.106 mV after one year (for a 0.05 V input signal). But it's not very clear even to me! Can anyone explain it? What's the difference between accuracy and resolution? Resolution = range / (2^nbits - 1), but what about accuracy?
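The resolution formula quoted above can be sketched numerically. This is only an illustration of the ideal LSB step; the function name is mine, and note that the result for the ±0.5 V range differs from the 8 µV sensitivity figure quoted earlier in the thread, since a spec-sheet sensitivity can account for averaging and dither rather than just the raw bit depth.

```python
# Sketch of the resolution formula from the post:
# resolution = range / (2^nbits - 1).
# Bit depth (12) taken from the thread; adc_resolution is a hypothetical helper.

def adc_resolution(v_min, v_max, n_bits):
    """Smallest ideal code step (1 LSB) of an ADC over the given input range."""
    return (v_max - v_min) / (2 ** n_bits - 1)

# 12-bit converter on the -0.5 V to +0.5 V range discussed above:
lsb = adc_resolution(-0.5, 0.5, 12)
print(f"1 LSB = {lsb * 1e3:.3f} mV")  # ~0.244 mV ideal step
```

So the ideal 12-bit step on a 1 V span is about 0.244 mV, which is why resolution alone does not answer the accuracy question.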

Julien_31

Active Participant

06-04-2009 09:19 AM


Julien_31

Active Participant

06-04-2009 09:20 AM


...input range offset error after one year under the specified conditions, of course...

06-04-2009 10:12 AM


Hi,

Yes, I had a look at the note... Hmm, it is still a bit confusing to me. Let's leave aside the environmental conditions and time for a moment, and say that for the range of -0.5 to +0.5 the sensitivity is 0.008 mV but the absolute accuracy is 0.106 mV. I assume this is the standard error. So if the accuracy is 0.106 mV, then I think that measuring signals smaller than this would be complete nonsense. Am I right?
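The point above can be made concrete with a small sketch: with a fixed absolute accuracy, the relative uncertainty grows without bound as the signal shrinks toward that accuracy figure. The 0.106 mV value is the one quoted in this thread; the signal levels below are arbitrary examples of mine.

```python
# Illustration only: relative error of a measurement with a fixed
# absolute accuracy (0.106 mV, as quoted in the thread).

ABS_ACCURACY_V = 0.106e-3  # absolute accuracy in volts

for signal_v in (0.5, 0.05, 0.001, 0.0001):
    rel_err_pct = ABS_ACCURACY_V / signal_v * 100
    print(f"{signal_v * 1e3:8.3f} mV signal -> +-{rel_err_pct:.2f} % uncertainty")
```

For a 0.5 V signal the uncertainty is a few hundredths of a percent, but at 0.1 mV the uncertainty exceeds the signal itself, which supports the intuition that readings near or below the absolute-accuracy figure are meaningless.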