Multifunction DAQ


Legacy DAQ Board - Data Collection Cutting Off

Solved!

I'm a graduate student and I've inherited an older DAQ system and LabVIEW program for the experimental apparatus I'm using:

 

OS: Windows XP 32 Bit
RAM: 3GB
LabVIEW Version: 7.1
DAQ Card: AT-MIO-16E-2

 

It reads data from 7 analog channels at 30 kHz. Each data set is 360,000 samples per channel. Sampling is externally triggered by a rotary shaft encoder on an engine.

 

The LabVIEW VI is set up in a stacked sequence. First it initializes the DAQ and begins the data collection (reading data into the buffer), then header information is written to a file, and once the collection is finished all the data is written to an output file and displayed on a waveform chart on the front panel. The data files are typically around 16MB in size. I can't get screenshots right now but can upload some later if it would be helpful.
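
For reference, the numbers roughly check out: a quick back-of-the-envelope calculation in Python (the characters-per-value figure is just a guess, used only to sanity-check the file size):

```python
# Rough consistency check of the numbers above.
channels = 7
samples_per_channel = 360_000
sample_rate_hz = 30_000

capture_seconds = samples_per_channel / sample_rate_hz   # 12.0 s per data set
total_values = channels * samples_per_channel            # 2,520,000 values

# If the values are written out as text at roughly 6-7 characters each,
# the file size lands right around the ~16 MB we see.
approx_file_mb = total_values * 6.5 / 1e6
print(capture_seconds, total_values, round(approx_file_mb, 1))  # 12.0 2520000 16.4
```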

 

This setup was working fine until a week ago, when the data collection started getting cut off around 75% of the way through the sample. The cutoff point isn't exactly the same every time, but it's always around 75% of the way through. It happens regardless of the test conditions, so it seems to be a problem internal to the DAQ, the LabVIEW VI, or the computer.

 

There are no errors that come up in LabVIEW. The data is missing from the VI itself (the waveform chart is truncated too), so it's not just an issue with the file writing. I've run the "test resources" function in NI-DAQ and trialled all the channels for extended periods with the test panels and haven't found any issues.

 

Could this be caused by the onboard memory of the DAQ card failing? Building a new DAQ system isn't really an option given my time constraints, so I'm hoping there are further diagnostics we could run to confirm whether the card is failing; if it is, we could try to source a replacement.

 

Any help you could offer or ideas of things to check would be very much appreciated!

Message 1 of 9

Can you post the code?   Or at minimum, a screencap of how the DAQ task is configured and how the data is read from it?

 

The first fork in the road is finding out whether the system is based on DAQmx or legacy NI-DAQ.   You have an E-series board which would have been supported by either one back in the day, and DAQmx was first introduced alongside LV 7.0.

 

If the system is based on the obsolete NI-DAQ driver, I think you'd be hard-pressed to find a compatible replacement DAQ device any more.  But to start with, your symptoms don't make me put the device very high on the list of suspects.

 

Can you experiment with different sample rates, different total # of samples to acquire, different # of channels?   Right now you ask for 360k samples from each of 7 channels while sampling at 30 kHz.  So 12 seconds worth of capture.   What you seem to be *getting* is about 270k samples * 7 channels during about 9 seconds worth of capture.  Some experiments might tell you what the limiting factor is.  It could be the total capture time, it could be the # of samples, it could be a buffer defined by (# samples)*(# channels).  So do some experimenting to see what you can learn.
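
To keep track of those trials, something like this little Python table might help (a sketch of the bookkeeping only -- the rows other than your failing case are placeholders for whatever you actually measure):

```python
# For each trial, record what you asked for and what you actually got,
# then see which quantity is constant at the cutoff: capture time,
# samples per channel, or samples * channels (total buffer size).
trials = [
    # (rate_hz, channels, requested_per_ch, received_per_ch)
    (30_000, 7, 360_000, 270_000),   # the failing case from the post
    # ... add rows here as you vary rate / channels / sample count ...
]

for rate, n_ch, req, got in trials:
    print(f"rate={rate} Hz  ch={n_ch}  "
          f"cutoff_time={got / rate:.2f} s  "
          f"cutoff_samples_per_ch={got}  "
          f"cutoff_total_samples={got * n_ch}")
```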

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 9

I believe it uses NI-DAQ, but I'd have to double check. I'll try to get some screenshots of the code to post up. Right now the lab is closed for asbestos removal so I can't get in until later in the week.

 

I'm leaning towards a hardware issue somewhere in the chain since the (unchanged) code had been working completely fine prior to this. Nothing changed in the hardware or software configuration to set it off. I was thinking about bottlenecks in the PC itself but want to narrow it down more before I blame anything.

 

That's good advice though, thanks. I will create a minimum working example of the VI and experiment with different sample parameters to see what the effects are.

Message 3 of 9

I've found that people don't always mean the same thing when they use the term "trigger", so here's another important question:

 

Is the encoder signal really used as a start trigger (where the first pulse starts the acquisition, and the task then takes samples at constant time intervals of 1/30 kHz)?   Or is it used as a sample clock (where each pulse causes a sample to be taken, giving you samples at constant *distance* intervals, with the combination of engine speed and encoder resolution setting the effective rate)?
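
If it helps to see the distinction spelled out, here's roughly how the two cases look in the modern DAQmx Python API (nidaqmx) -- purely an illustration, since your E-series board is on an older driver, and the device/terminal names are placeholders:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:6")

    # Case 1: encoder pulse as a START TRIGGER -- one pulse starts the task,
    # then samples are taken at fixed TIME intervals from an internal clock.
    task.timing.cfg_samp_clk_timing(rate=30_000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=360_000)
    task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")

    # Case 2: encoder pulses as the SAMPLE CLOCK -- every pulse takes one
    # sample, so samples land at fixed shaft-ANGLE intervals and the actual
    # rate follows engine speed. (Configure this instead of Case 1.)
    # task.timing.cfg_samp_clk_timing(rate=30_000,            # max expected rate
    #                                 source="/Dev1/PFI1",    # encoder pulse train
    #                                 sample_mode=AcquisitionType.FINITE,
    #                                 samps_per_chan=360_000)

    data = task.read(number_of_samples_per_channel=360_000, timeout=20.0)
```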

 

Also note: the DAQ device doesn't accumulate the samples.  It captures them, puts them into a fairly small onboard buffer, and then the board and driver deliver them up to PC memory.  This process works fine for the first several seconds, which is why I don't especially suspect the DAQ device itself to be the culprit.

 

Are you really *really* sure the code is unchanged?   Is it an executable or a VI?   If it's a VI, can you check the file modification date and make sure?  Sometimes things get changed by accident...

 

 

-Kevin P

Message 4 of 9

So essentially the encoder puts out two TTL signals, meaning I guess it's both a trigger and a clock. The first signal goes high once every two revolutions of the crankshaft, and the second goes high once every 0.2 degrees of crankshaft rotation (1800 pulses per revolution); the two are in phase. The DAQ reads them on the digital input lines, triggers to begin sampling on the first signal, and then takes a sample at each rising edge of the second signal. We run at 1000 RPM, so the sample rate is 30 kHz. We have to sample based on the encoder because the engine speed isn't held to exactly 1000 RPM, so if we used a fixed time-based clock it would desynchronize from the engine position.


It's a 4-stroke engine (2 revolutions per cycle) and we're sampling 100 cycles at a time, so 360,000 samples in 12 seconds. Right now the data cuts out around cycle 75-77, but it's not in exactly the same place every time.
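
For reference, here's how those numbers work out (just restating the arithmetic above in Python):

```python
# Encoder / sampling arithmetic from the numbers above.
deg_per_pulse = 0.2
pulses_per_rev = 360 / deg_per_pulse             # 1800 pulses per revolution
rpm = 1000
sample_rate_hz = pulses_per_rev * rpm / 60       # 30,000 Hz

revs_per_cycle = 2                               # 4-stroke engine
cycles = 100
samples_per_channel = cycles * revs_per_cycle * pulses_per_rev   # 360,000
capture_s = samples_per_channel / sample_rate_hz                 # 12.0 s

# The cutoff around cycle 75 corresponds to roughly:
print(75 * revs_per_cycle * pulses_per_rev)      # 270,000 samples, ~9 s
```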


I'm certain the VI hasn't been modified. A few months ago I started with the VI that previous students had used on the same apparatus, and since then I've only made some minor cosmetic and quality-of-life changes. I kept a backup of that original version, though, and reverting to it gives the same issue with the data cutting off. I can check through the driver configuration to see if anything's amiss, but I haven't (knowingly) made any changes there.

 

Based on what you said about the memory, it does sound like the DAQ card is okay then, since it has no problem handling the data initially. That could suggest a problem either with the buffer allocation in RAM or with a fault in the RAM itself causing the data to get cut off. I don't think XP has a built-in memory test, but I can try to find a program to do it.

Message 5 of 9
Solution
Accepted by topic author Jenga_G

FWIW, I wouldn't expect a system RAM issue to cause such a relatively consistent cut-off.  From one run to the next, one session to the next, one day to the next, the particular block of memory getting allocated for the task would tend to shift around.   I'd also expect some further indication of trouble in such a case -- maybe a LabVIEW error, a Windows blue screen crash, etc.

 

Another specific detail question: what are all the ways that indicate early stoppage?

- some independent means of tracking engine cycles which indicates ~75?

- fewer total samples acquired (~270k) than expected (360k)?

- shorter capture duration (~9 sec) than expected (~12 sec)?

 

Here are two more wild speculations.

 

Suppose an encoder connection is loose and can make intermittent contact due to vibration.  You might get extra apparent edges so that you end early w.r.t. engine cycles and capture duration while still acquiring the full 360k samples.  (Seems fairly unlikely, though, given the consistency of the cutoff you see.)

 

Suppose the average encoder pulse rate is too low (slower engine speed, dirty or damaged optical wheel in the encoder, delay between DAQ start and engine start, etc).  The DAQ read functions have a default timeout value of 10 seconds under DAQmx, and may have been the same under the NI-DAQ driver.  So maybe the attempt to read times out and gives you only the first 10 seconds worth of data.   (This would have to imply that prior successful runs were regularly delivering 360k samples in less than 10 sec.  Also seems kinda unlikely.)
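
The arithmetic for that second scenario would look something like this (a sketch only -- the "slowest RPM" number should be whatever your real worst case is):

```python
# Whether a fixed read timeout can bite depends on the worst-case capture
# time, which with an external clock is set by the slowest expected engine
# speed. Numbers here are just the nominal ones from this thread.
samples_per_channel = 360_000
pulses_per_rev = 1800
slowest_rpm = 1000                                       # substitute the real worst case
worst_case_rate_hz = pulses_per_rev * slowest_rpm / 60   # 30,000 Hz
worst_case_capture_s = samples_per_channel / worst_case_rate_hz   # 12 s

read_timeout_s = 10.0                                    # DAQmx default
if read_timeout_s < worst_case_capture_s:
    print("read will time out before the capture completes")
```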

 

Still mysterious to me.  Please post the code, maybe I'll be able to spot something that makes it vulnerable to the symptoms you see.

 

 

-Kevin P

Message 6 of 9

So I was back in the lab yesterday and managed to figure it out. It turns out it was a settings error in the VI (so technically my fault, I guess, though I don't remember changing it when the issue first happened, and I'm not sure why reverting to the backup VI didn't solve the issue before).

 

I used a function generator to simulate the trigger and clock signals from the encoder, which let me play with different sample rates without having to run the engine. The issue showed up right away, which ruled out a problem with the encoder. By changing the clock rate I found that the cutoff happened a constant time after the program started, regardless of the sample rate.

 

After some more playing I found a setting in the VI for the trigger signal time limit. This is supposed to be the maximum time the DAQ will wait for the trigger signal before erroring out, but it seems to also put a hard limit on the total sample time, so increasing it allowed the sample size to increase. It was set to 10 s, shorter than the 12 s test duration we were running, which is why the data was cutting off. I increased it to 20 s to give some overhead and everything worked fine again. (On the previous VI it was 900 s; I must have reduced it when we were first starting out, since we were having issues with the trigger signal not getting through and I didn't want to wait that long.)
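
For anyone reading this later, here's roughly how I sized the new value (the margin and trigger-wait figures are my own guesses, not anything from the driver documentation):

```python
# Choosing the new time-limit value, with some margin for slower-than-nominal
# engine speed and the wait for the first trigger pulse.
samples_per_channel = 360_000
nominal_rate_hz = 30_000
nominal_capture_s = samples_per_channel / nominal_rate_hz   # 12 s

margin_factor = 1.5                      # arbitrary safety margin
trigger_wait_allowance_s = 2.0           # guess at worst-case wait for the trigger
time_limit_s = nominal_capture_s * margin_factor + trigger_wait_allowance_s
print(round(time_limit_s))               # 20 s
```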

 

Thanks for your help; it really helped to focus my troubleshooting and saved me a lot of time. I've attached some screenshots of the relevant parts of the VI and block diagram, as well as the entries from the help file. The block diagram chain is the DAQ initialization and parameter settings, then the beginning of data collection, and finally the read-out from the buffer. The timeout value is set on the final subVI in the chain.

Message 7 of 9

One thing that surprised me was the use of PFI0 for an Analog trigger.  I'd have expected to need to specify a Digital trigger for PFI0 (or specify a different terminal for an Analog trigger).
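
For comparison, here's how the two trigger flavors are specified in the modern DAQmx Python API (nidaqmx) -- again just an illustration: the legacy NI-DAQ trigger configuration differs, and the terminal names and trigger level are examples.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, Slope

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:6")
    task.timing.cfg_samp_clk_timing(rate=30_000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=360_000)

    # Analog edge start trigger: the source signal is compared against a
    # threshold, and acquisition starts when it crosses that level.
    task.triggers.start_trigger.cfg_anlg_edge_start_trig(
        trigger_source="APFI0", trigger_slope=Slope.RISING, trigger_level=2.5)

    # Digital edge start trigger on a PFI line -- what I'd have expected here:
    # task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
```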

 

Beyond that, I would suggest that you urge the equipment owner to work out an upgrade plan ASAP.   That whole setup is likely at least 15 years old, and the hardware may be even older.

 

The PC hardware, the DAQ device, the DAQ driver, the OS, and the LabVIEW version are all candidates for significant modernization.  Just be aware that these things aren't entirely independent of one another. 

You should also build the program into an executable to help avoid inadvertent changes.

 

If the equipment is still getting use after all these years, it should be worth some new $ investment to ensure its future.

 

 

-Kevin P

Message 8 of 9

I think 15 years is being generous! The DAQ card uses an EISA port on the motherboard, which from what I read was superseded by PCI in the mid 90s. Some of the sensors are even older than that but still work great (the manual for the encoder is dated 1982). I think these days it's only universities that will use things literally as long as they last.

 

We have alternate spare DAQ systems, but there would be a significant amount of work to switch everything over, and I need to focus on finishing my thesis so I can graduate. The way things look I may be the last student to use this apparatus too. We are looking at offloading some of the core functionality to a secondary DAQ system though, so that will provide some redundancy.

 

In the help file, using an analog trigger on PFI0 is listed as the default setting for this DAQ card. I don't fully understand it, but it sounds like the signal on PFI0 is compared against a set threshold and converted to a digital start signal when the value crosses it.

Message 9 of 9