
AI-16XE-50 replaced by DAQCard-6036E

Hello,
I have an application that was programmed long ago by an outside vendor.  It used AI-16XE-50, which has now been rendered obsolete.  The replacement provided by NI was DAQCard-6036E.  We purchased this card and installed it.  It fired up without even a hiccup and seemed to run just fine.  After looking more closely at it, it seems that the sample rate is not being set correctly.  Are the sample rates on these cards set the same way?  If not, what are the differences?  BTW, I'm using LabVIEW 7.0.
Thanks.
Message 1 of 7
You must be using the Traditional NI-DAQ device drivers and their functions in LabVIEW 7.0.
 
You set the scan rate for both cards in the same way.
The function 'AI Start' is used to set the scan rate on both cards.
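Just to illustrate the call order in text form, here is a rough Python-style sketch with stand-in functions named after the Traditional NI-DAQ VIs (not a real driver binding): the scan rate is wired into AI Start the same way regardless of which board AI Config was pointed at.

def ai_config(device, channels, buffer_size):
    # Stand-in for AI Config.vi: selects the device/channels and reserves a buffer.
    return {"device": device, "channels": channels, "buffer_size": buffer_size}

def ai_start(task, number_of_scans, scan_rate):
    # Stand-in for AI Start.vi: arms the acquisition. scan_rate is the scan clock,
    # and it is the same input whether the task points at the AI-16XE-50 or the 6036E.
    task["number_of_scans"] = number_of_scans   # 0 would mean continuous, buffered acquisition
    task["scan_rate"] = scan_rate
    return task

task = ai_config("DAQCard-6036E", channels=[0, 1], buffer_size=4000)
task = ai_start(task, number_of_scans=0, scan_rate=200.0)   # 200 Hz scan clock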
 

After looking more closely at it, it seems that the sample rate is not being set correctly.
How did this show up? Can you please elaborate on what you observed?
 
 
Message 2 of 7
The software pulls the data from the buffer and stores them to disk.  At the same time, the data are placed on a strip chart on the screen.  The software has a timer that runs off the computer's real-time clock, so if you tell it to record for 2 minutes, it will run until 2 minutes have passed by the real-time clock.  At the end of 120 seconds, both the graph and the data file contain about 112 seconds' worth of data.

As I watch the display updating during a test, I notice that the time values on the graph gradually fall behind the real-time clock.  Just to make sure, I compared the time against my watch; the real-time clock is correct.  The effect is not seen at low sample rates.  Anything below 30 Hz seems OK, but as the rate gets higher, the error becomes much worse.  The situation I described above takes place at a 200 Hz sample rate.  I've been monitoring the scan backlog, and it appears that the program is keeping up and not allowing the buffer to fill all that much.

It's as if the sample rate timer in the card has poor resolution, and the less time it has to count, the coarser the setting is.  If that's the case, it seems the driver ought to be able to compensate for it, unless there just isn't a closer setting.  I find that hard to believe.  The software uses AI Config and AI Start to get the sampling started.  When I read the "actual sample rate" from AI Start, it is nearly dead on.  I'm stumped.
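To put rough numbers on it (a quick Python back-of-the-envelope, nothing to do with the actual application code):

nominal_rate = 200.0   # Hz, requested scan rate
wall_time = 120.0      # s, run length according to the real-time clock
data_time = 112.0      # s of data actually in the file / on the graph

effective_rate = nominal_rate * data_time / wall_time
print(effective_rate)                     # about 186.7 Hz worth of samples captured
print(100 * (1 - data_time / wall_time))  # about 6.7% of the expected data missing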
Message 3 of 7
Hi rickford66 -

I'm sorry to say that the Traditional DAQ driver is no longer fully supported, as it was replaced by the DAQmx driver years ago.  I'll try to help out as much as possible, but my experience with the older driver is pretty limited.  Here's a shot in the dark, in case it helps:

It sounds like your application is starting the card at some specified sampling rate and then running a loop to read from the buffer, based on the system timer.  When the timer says that time is up, it stops the loop.  What should actually be done is to set the card up for a finite acquisition of the specified duration, then to read from the buffer periodically inside the loop (while monitoring the available samples per channel).  When the available samples drop to zero, it means the clock on the HW has stopped and you have all the samples. 

You might be running into performance issues in getting data across the PCMCIA bus (via interrupts) and just not reading the last batch of data, since your feedback on when to stop the loop is completely independent of the DAQ card's operation.  If you don't want to change anything else, you might just break the loop on the timer and call AI Read once more with the samples to read set to "all available" (or the equivalent).  This should flush the end of the buffer.
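Here's a minimal sketch of that pattern in Python, purely a simulation (the buffer and read logic are stand-ins, not driver calls), just to show the structure: keep reading in chunks inside the loop, and once the stop condition fires, do one last read of everything left so the tail of the buffer isn't dropped.

import time

def acquire(duration_s, scan_rate, read_chunk):
    buffer = []          # stand-in for the driver's circular buffer
    produced = 0.0       # fractional samples "generated by the hardware"
    data = []            # what the application has actually pulled out
    start = time.time()
    last = start

    while time.time() - start < duration_s:       # stop condition: software timer
        now = time.time()
        produced += (now - last) * scan_rate       # simulate the card filling its buffer
        last = now
        while produced >= 1.0:
            buffer.append(len(data) + len(buffer)) # fake sample value
            produced -= 1.0
        if len(buffer) >= read_chunk:              # periodic chunk read, like AI Read inside the loop
            data.extend(buffer[:read_chunk])
            del buffer[:read_chunk]
        time.sleep(0.01)

    data.extend(buffer)   # final flush: one more read of "all available" after the loop
    buffer.clear()
    return data

samples = acquire(duration_s=2.0, scan_rate=200.0, read_chunk=20)
print(len(samples))       # close to 400, because the buffer tail was flushed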
David Staab, CLA
Staff Systems Engineer
National Instruments
Message 4 of 7

I found the problem.  It would take 3 pages to explain what the problem actually was, but it had nothing to do with the drivers or the card.  This must have been an issue all along.  I found a section in the code where, under certain conditions, data that was already read from the buffer would be discarded instead of being sent to the graph/disk.  I fixed this little loophole and the problem disappeared.

Thanks anyway.  :O)

 

Message 5 of 7

I found a section in the code where, under certain conditions, data that was already read from the buffer would be discarded instead of being sent to the graph/disk.  I fixed this little loophole and the problem disappeared.

Good to hear that.
 
Just for academic interest, could you please share what these conditions were and how you fixed it?
Message 6 of 7
Going into the details would be quite lengthy, but to make it as short as possible, the original programmer just made a mistake.  The program allowed certain channels to be "turned off," so the code was written to replace the disabled channels with NaN.  Another feature of the code was that it let the user average a user-determined number of readings, so the buffer was read in chunks the size of the number of readings to average.  The code tested whether the buffer was full enough to read a chunk and waited if it was not.  If it was full enough to read enough for 2 averages, it would take both chunks.

There is the problem.  When the buffer filled just enough for it to take multiple chunks, and when some of the channels were turned off, the code would discard the extra chunks.  This is because the code that inserted the NaNs was designed to supply only the number of readings equivalent to the size of a single average.  Later, when the data were processed, the extra chunks read from the buffer were discarded.  I fixed it by setting the number of NaNs supplied equal to the number of readings taken from the buffer.
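If it helps to see the shape of it, here is a stripped-down Python sketch of the mistake (the names and structure are my own, not the original LabVIEW code):

import math

def read_chunks(buffer, avg_size):
    # Take as many whole averaging-sized chunks as are currently in the buffer.
    n_chunks = len(buffer) // avg_size
    taken = buffer[: n_chunks * avg_size]
    del buffer[: n_chunks * avg_size]
    return taken

def nan_fill_buggy(taken, avg_size):
    # Bug: always supplies exactly one average's worth of NaNs for a disabled
    # channel, no matter how many readings were actually taken.
    return [math.nan] * avg_size

def nan_fill_fixed(taken, avg_size):
    # Fix: supply as many NaNs as readings taken from the buffer,
    # so nothing gets discarded when two or more chunks are read at once.
    return [math.nan] * len(taken)

avg_size = 10
buffer = list(range(25))                      # enough samples for two full chunks
taken = read_chunks(buffer, avg_size)         # 20 readings pulled from the buffer
print(len(nan_fill_buggy(taken, avg_size)))   # 10 -> the second chunk is later thrown away
print(len(nan_fill_fixed(taken, avg_size)))   # 20 -> both chunks survive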
Message 7 of 7