Multifunction DAQ


NIDAQmxBase does not use differential signals?

Solved!

I recently upgraded NI-DAQmx Base to the latest and greatest version, but now I am getting horrible corruption of my signals.  I have checked with a scope and the signals are correct, but there seems to be horrendous crosstalk between channels.  The only thing I can think of is that the board is not truly doing differential switching beyond the first 8 channels.

 

I am using a 64-channel board, so there are 32 differential channels; I am only using the first 16.  I am using "Dev1/ai0:7,Dev1/ai16:23" as my channel specification.  The hardware works, since the default "data logger" example shows correct data, but I am guessing that the data logger was built with an earlier version of NI-DAQmx Base.  The problem seems to occur when using more than the first 8 channels.  This VI used to work with the previous version of NI-DAQmx Base.
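For reference, the channel string above uses NI's colon range syntax, and skips ai8:15 because on these high-channel-count E Series boards differential channel i is paired with channel i+8 within each bank. A minimal sketch of how that string expands into 16 physical channels (plain Python for illustration, not an NI API call):

```python
def expand_channels(spec):
    """Expand an NI-style channel list such as "Dev1/ai0:7,Dev1/ai16:23"
    into individual physical channel names."""
    names = []
    for part in spec.split(","):
        dev, chan = part.strip().split("/")
        prefix = chan.rstrip("0123456789:")   # "ai0:7" -> "ai"
        span = chan[len(prefix):]             # "ai0:7" -> "0:7"
        if ":" in span:
            lo, hi = (int(n) for n in span.split(":"))
        else:
            lo = hi = int(span)
        names += [f"{dev}/{prefix}{n}" for n in range(lo, hi + 1)]
    return names

chans = expand_channels("Dev1/ai0:7,Dev1/ai16:23")
print(len(chans))   # 16 differential channels total
```

Note that "Dev1/ai8" never appears in the expansion; in differential mode it serves as the negative input for ai0.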

 

Is this a known problem?

 

Here is the code for initializing the NI-6033 board

 

DAQ Init Code

[screenshot of the initialization VI; not shown]
Mac OS X 10.4.11 PPC/ LV 8.5.1 / NIDAQmx Base 3.2.0 / VISA 4.4.0

Message Edited by sth on 01-29-2009 08:56 AM

LabVIEW Champion

Message 1 of 8

Hi Scott-

 

To answer your question, no, this is not a problem that I know of.  I looked through the configuration code for the high-channel E Series boards and everything looks to be in order.  What sampling rate do you specify in your test VI?  What do the signals look like from channel to channel, relative to each other?  In other words, are there lots of steps up or down on the signal levels as you scan through the channels?  Do you know the approximate output impedance of your source signals?

 

Thanks-

Tom W
National Instruments
Message 2 of 8

Hi Tom,

I knew you would see this immediately!  It is odd, and I haven't had time to trace it down, but I thought I also had this problem with a previous application.  In this application I am reading some DC voltages (0-10 V) and then some incoming AC power phases (two phases of 12.5 kV input power), stepped down through a transformer and then an isolation amplifier.  Then I am reading 8 output voltages on the order of 500 V, also stepped down through an isolation amplifier.  So the first 4 channels are slowly varying voltages up to 7 V, the next 4 are 60 Hz AC voltages from -8 to 8 V with varying phase, and the remaining 8 are slowly changing voltages in the 0-10 V range.  Of course I want to catch fast events when they happen, so I am recording on the millisecond scale.  This is a total of about 33.6 kHz, which is well below the board's rated 100 kHz and still lower than the 80 kHz recommended for 1 LSB settling time.
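As a quick sanity check on those numbers (plain arithmetic, using only the figures quoted in this thread):

```python
# Sanity-check the aggregate sampling rate described above.
channels = 16
scan_rate_hz = 2100                 # per-channel scan rate
aggregate_hz = channels * scan_rate_hz
print(aggregate_hz)                 # 33600 samples/s aggregate

board_max_hz = 100_000              # board's rated maximum, per the post
full_settling_hz = 80_000           # rate quoted for 1 LSB settling
assert aggregate_hz < full_settling_hz < board_max_hz
```

So the configuration leaves comfortable headroom below both limits, which is why crosstalk at this rate was surprising.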

 

All the signals should be low impedance, as the output from the isolation amplifiers goes directly to the SCB-100.  The SCB-100 has 100 kΩ resistors to ground.  Thus the signals should be low impedance and I shouldn't be seeing channel-to-channel crosstalk.  I am reading 16 channels at a 2100 Hz scan rate.  I am continuously dumping to files, with a new file every 15 minutes, and I have a background task that deletes old files so I keep about a 24-hour record.

 

I haven't made any hardware changes, but I may have updated some of the software (i.e., GPIB drivers, VISA drivers, DAQ drivers).  It was working in early December, and after I updated it over the holidays it did not work.  I had our electronics shop check the signals at the board; they claim they are all correct and it is just my bad programming.

 

I will post  some examples of the waveforms when I have a moment.

-Scott

 

Message 3 of 8

Tom,

It is a totally different problem and the subject is misleading.  Something is very, very wrong and I mis-diagnosed it.  The problem is that in my loop to read the DAQ I am only getting 16 data points instead of 210!  In the code snippet above the scan rate is 2100 Hz.  I try to set it up so that there is a 100 ms read of 210 scans, and set that as the number of "Samples per Channel" in the DAQmx Base Timing VI in the code above.

 

When I do the read, I use that same 210 (1/10 of the scan rate) to retrieve samples.  I consistently get a 16 × 16 array of data out, which is not 210 × 16, and of course the data looks really funky.  Here is the code I use to retrieve the data from the DAQ system.
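The expected read size follows directly from the numbers quoted above (a plain-arithmetic sketch, not NI code):

```python
# Expected array shape per read, from the configuration described above.
scan_rate_hz = 2100
read_period_s = 0.1                 # one read every 100 ms
channels = 16

samples_per_channel = round(scan_rate_hz * read_period_s)
print(samples_per_channel)          # 210

expected_shape = (samples_per_channel, channels)   # 210 x 16 per read
observed_shape = (16, 16)                          # what actually came back
assert expected_shape != observed_shape
```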

 

There are also two probes: one shows that I am asking for 210 data points, and the other shows the highest point in the array that is filled.  If I make either index 16 there is no data.

 

The data is passed to a queue so that it can be displayed and written to a data file.  This worked under LabVIEW 8.2.1 and an earlier version of NI-DAQmx Base.  As I said, I was updating systems over the break so that all my systems are on a consistent version of the OS, LabVIEW, and drivers.

 

Message 4 of 8

Stupid Lithium timed out while I was trying to edit the message...  sheesh!

 

Anyway, the data seems to be collected at about 10 Hz, but only as a 16 × 16 array.

 

PS Monitor Read code

[screenshot of the read loop; not shown]

 

Here is the code and the two probes showing the data.  If I make either index in the data probe 16, it shows an empty cell in the array.  I think I am going to try to re-install DAQmx Base.

Message 5 of 8
Solution
Accepted by topic author sth

Hi Scott-

 

I agree that re-installing is the most appropriate next step.  I tried this here with a very similar setup, and I was able to retrieve the correct number of samples during my read, and the data looked accurate.  Please let us know your results after the re-installation is complete.  Thanks-

 

Ed

Message 6 of 8

We are working on the same time scale.  I did a re-install of NI-DAQmx Base 3.2.0 and all is well.  Sorry about the false alarm.  I had a worry about differential mode from a previous problem that I never fully diagnosed.  It was Friday; obviously, attacking the problem on a Monday morning leads to a more methodical approach.

 

I almost never have to re-install, and I am very curious as to what went wrong.  But since all the files have been updated to recompile for PPC, there isn't much chance of tracking down a changed file in my configuration.  This is one of my few remaining systems that is not upgraded to OS X 10.5, so there are only a few other systems to compare it to.  Those systems seem to behave correctly, but I had better check.

 

For the 6033 board I keep the total sample rate low enough that the board settles to the full resolution; I believe that is an 80 kHz total rate.  However, is this still true if one of the channels goes into saturation?  Even when the board is connected to low-impedance signals, it seems that several channels following a single saturated channel also saturate.  Do I need to put Zener diodes across all my inputs just to make sure that the system is robust?
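To put that headroom in numbers (plain arithmetic, assuming the 80 kS/s aggregate full-settling figure quoted above; saturation recovery is a separate analog effect this does not capture):

```python
# Per-channel headroom against the full-settling aggregate rate.
full_settling_hz = 80_000           # aggregate rate quoted for 1 LSB settling
channels = 16
per_channel_limit_hz = full_settling_hz / channels
print(per_channel_limit_hz)         # 5000.0 scans/s per channel

scan_rate_hz = 2100                 # actual per-channel rate in use
assert scan_rate_hz < per_channel_limit_hz
```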

Message Edited by sth on 02-02-2009 11:54 AM

Message 7 of 8

Just to follow up as to where the corruption was: I was bitten by my own bug fix!  A re-install gets it to work, but the result is so slow as to be useless.

 

See messages at:

LV threading 8.5 much worse than 8.2? 

 

That is the thread where I had fixed a problematic slowdown in the code.

 

It turns out that, due to a problem in my version control, the original broken fix was installed.  The fix at the end of that thread works and clears up the problem, as well as giving a factor of 1000 in speed!

Message 8 of 8