LabWindows/CVI


dataLossError not cleared when counter is reset

Hi,

If my counter application generates a dataLossError (-10920), the error is not cleared when GPCTR_Control(,,ND_RESET) is called. When the counter is set up again, GPCTR_Watch() etc. immediately return this error again. I have to restart my app to clear the error.
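
For reference, the sequence that fails looks roughly like this (the device number, buffer size and so on are placeholders, and error checking is trimmed):

#include "nidaq.h"      /* Traditional NI-DAQ API */
#include "nidaqcns.h"   /* ND_* constants */

#define DEV    1                 /* placeholder device number */
#define CTR    ND_COUNTER_0
#define N_PTS  1000

static u32 buffer[N_PTS];

void RestartCounter (void)
{
    u32 readMark;
    i16 status;

    /* Reset the counter after the -10920 (dataLossError) has occurred */
    status = GPCTR_Control (DEV, CTR, ND_RESET);

    /* Set it up again for buffered period measurement on the 100kHz timebase */
    status = GPCTR_Set_Application (DEV, CTR, ND_BUFFERED_PERIOD_MSMT);
    status = GPCTR_Change_Parameter (DEV, CTR, ND_SOURCE, ND_INTERNAL_100_KHZ);
    status = GPCTR_Config_Buffer (DEV, CTR, 0, N_PTS, buffer);
    status = GPCTR_Control (DEV, CTR, ND_PROGRAM);

    /* This immediately returns -10920 again, even though the counter was reset */
    status = GPCTR_Watch (DEV, CTR, ND_READ_MARK, &readMark);
}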

I am using CVI 7.0 and Traditional NI-DAQ 7.1.0f1, but I do not recall this being a problem with CVI 6.0 / NI-DAQ 6.9.3.
Message 1 of 6
Hi Jamie,

That's an interesting situation; I'll have to try it on my system. That error is usually generated either when you are using interrupts for your buffered counter measurement, or when you are using DMA but the gate signal is 500 kHz or more.
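
If you want to check which data transfer mode the counter is actually using, something along these lines should do it in Traditional NI-DAQ (the device number is a placeholder, and I'm quoting the ND_* constant names from memory, so double-check them against nidaqcns.h):

#include "nidaq.h"
#include "nidaqcns.h"

/* Query counter 0's data transfer mode and, if it is using interrupts,
   request DMA instead (a free DMA channel must be available). */
void CheckCounterXferMode (void)
{
    u32 mode;
    i16 status;

    status = Get_DAQ_Device_Info (1, ND_DATA_XFER_MODE_GPCTR0, &mode);
    if (mode == ND_INTERRUPTS)
        status = Set_DAQ_Device_Info (1, ND_DATA_XFER_MODE_GPCTR0, ND_UP_TO_1_DMA_CHANNEL);
}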

Performing a device reset instead of a counter reset should clear the error. You can do this in Measurement & Automation Explorer by expanding Devices and Interfaces and selecting your device under DAQmx Devices; there will be a Reset button. Another option is to call the function Init_DA_Brds(); the NI-DAQ help describes its parameters.
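
In a CVI program that would look something like the following (device number 1 is just an example; check the returned status as usual):

#include "nidaq.h"

i16 deviceCode;
i16 status = Init_DA_Brds (1, &deviceCode);   /* full device reset for device 1 */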

Anyway, hope that helps. Have a good day.

Ron
Applications Engineering
National Instruments
Message 2 of 6
Ron,

The -10920 error also happens at all pulse frequencies when you are doing a simultaneous AI DAQ (in Traditional NI-DAQ, anyway), but I can't get anyone at NI to investigate this. If you fancy a challenge, I can give you chapter and verse on this problem.

Re. my original post: I am sure that you did not have to call Init_DA_Brds() to clear this error in 6.9.3 and earlier. Can you confirm?

Regards

Jamie Fraser
Message 3 of 6
Let's have it. Give me as much info as you can on the hardware, the program, and whether this occurs with the shipping examples or not. Hopefully we can resolve this.

Ron
Message 4 of 6
Hi Ron,

Thanks for your offer of help. I will give you as much information as I can, some of which will come in the form of attached e-mails, one of which is from a colleague of yours in the States.

The first thing I have to do is convince you that this problem is genuine and not due to bad programming on my part. I suppose the best way is to ask you to recreate an application that reproduces my set-up. I have experienced this problem with a PCI-MIO-16XE-50, a DAQCard-6062E, a DAQCard-6036E and (I think) a DAQCard-AI-16XE-50, so I think the common theme here is E Series cards. The problem occurs when doing a simultaneous analogue acquisition and buffered period measurement on a counter.

Today I am even more frustrated because, at NI's recommendation, I have upgraded to CVI 7.0 and have spent some time this week recoding my application to use DAQmx, only to find that the problem is virtually the same in DAQmx as it was in NI-DAQ 6.9.3. The only difference is that, rather than the -10920 error being reported by GPCTR_Watch(,,ND_READ_MARK/ND_WRITE_MARK), DAQmx just seems to stop counting pulses, without reporting a library error from the call to DAQmxGetReadAttribute(,DAQmx_Read_AvailSampPerChan,). So the discussion below applies to both Traditional NI-DAQ and DAQmx.

My app does a continuous 8-channel AI DAQ at a 1kHz scan rate (or 1kHz sample rate in DAQmx speak). It makes no difference whether the primary buffer is being read, or data is just being poured into the primary buffer continuously by the driver (as long, of course, as overwriting of unread data is allowed). The card is set to generate an interrupt on half-FIFO-full. One or other counter is then used to do a buffered period measurement using the 100kHz timebase. Note that under DAQmx I have allowed DAQmx to decide the primary buffer sizes, and have also played around with setting the buffer sizes directly, but nothing has helped.
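
To make that concrete, the DAQmx version of the set-up boils down to roughly the following (the device name, channel names, buffer sizes and the timebase terminal are placeholders for what my real app uses, and error checking is stripped out):

#include <NIDAQmx.h>

/* 8-channel continuous AI at 1kHz plus buffered period measurement on one counter */
int SetUpTasks (TaskHandle *aiTask, TaskHandle *ctrTask)
{
    DAQmxCreateTask ("", aiTask);
    DAQmxCreateAIVoltageChan (*aiTask, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                              -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming (*aiTask, "", 1000.0, DAQmx_Val_Rising,
                           DAQmx_Val_ContSamps, 1000);

    DAQmxCreateTask ("", ctrTask);
    DAQmxCreateCIPeriodChan (*ctrTask, "Dev1/ctr0", "", 0.00001, 1.0,
                             DAQmx_Val_Seconds, DAQmx_Val_Rising,
                             DAQmx_Val_LowFreq1Ctr, 10.0, 4, NULL);
    /* The real app selects the 100kHz timebase; this terminal name is an assumption */
    DAQmxSetCICtrTimebaseSrc (*ctrTask, "", "/Dev1/100kHzTimebase");
    DAQmxCfgImplicitTiming (*ctrTask, DAQmx_Val_ContSamps, 1000);

    DAQmxStartTask (*ctrTask);
    DAQmxStartTask (*aiTask);
    return 0;
}

/* Polling loop: after a while 'avail' simply stops increasing, but the
   attribute query never returns an error code. */
void PollCounter (TaskHandle ctrTask)
{
    uInt32 avail = 0;
    int32  err;

    err = DAQmxGetReadAttribute (ctrTask, DAQmx_Read_AvailSampPerChan, &avail);
    /* err stays 0 even once the counter has silently stopped */
}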

If you then provide, say, a 25Hz (yes, this is Hz, not kHz) input pulse signal, pulse counting will run for a time and then stop (under DAQmx) or give the -10920 error (under Trad DAQ).

Varying the AI DAQ frequency has some effect. If it is reduced, more pulses are usually obtained before the period measurement falls over, but there seems to be no direct correlation; rather, the probability of the process failing decreases. A similar change in probability occurs when the pulse frequency is varied.

My question then is why does this happen, and how can I stop it?

The fact that the failure seems probabilistic rather than deterministic makes me think it is due to some sort of race condition, probably based on the relative timing of the AI-FIFO and counter interrupts generated by the DAQ-STC chip.

I will stop at this point to avoid prejudicing your approach to the problem, but I have put forward some theories as to what might be happening, along with some additional facts about the duration of the assertion of the IRQ line when the failure occurs (it is double the usual length); these are included in the attached e-mails.

I don't know whether this problem is hardware-, firmware-, or driver-based, but I am convinced it is a genuine NI problem and not a user-level issue; i.e. I do not believe there is anything I am doing in the CVI ADE that could cause it.

In terms of platforms, I have seen the problem on NT4 SP6a and W2k SP4 machines with processor speeds ranging from 700MHz to 1GHz.

I very much look forward to your comments.

Regards

Jamie Fraser

PS. You can have my app code if you want it. Also, re. your question on shipping examples, I am not sure there is one that does simultaneous AI and pulse counting.
Message 5 of 6
Hi Jamie,

Sorry for the delay; I've been in Canada for the last week and a half. I believe I know the individual who was working with Sacha/Simon in the UK, and I will have them contact you directly to hopefully resolve this a little quicker. Once again, sorry for the delay.

Ron
Message 6 of 6