CTR works with PXI 8196, PXIe 8102, fails with PXIe 8100 - why?

My client has reported a problem.  

 

For years he has used a PXI 8196 RT controller with a PXI 6602 counter card, and my software has given good results.  They have 20+ of these systems and they have worked well.

 

Now they are moving to PXIe 810x controllers, for cost reasons.

 

With a PXIe 8102, the same code also works perfectly, measuring total counts over a period as well as instantaneous frequency.

 

With a PXIe 8100, the exact same code reports DIFFERENT answers. The reported frequency is always 1% HIGHER than actual (for example, a known 4500 Hz input is reported as 4500 Hz on the 8102, but as 4545 Hz on the 8100).

 

This happens on any channel, and swapping just the controller will make the problem come and go.

 

Here is the CONFIGURE code, where the channels are set up (again, this has worked for years).

[Image: CTR Config.PNG]

 

 

Here is the SAMPLE code:

[Image: CTR Sample.PNG]

 

 

Basically, the CONFIG code sets the counter up to count edges.  I do this because they need an accurate count over a 20-minute period, in addition to instantaneous frequency readings.
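
In case the PNGs don't come through, here is a rough text-only sketch of what the CONFIGURE step does, written against the nidaqmx Python API purely for illustration (the real code is LabVIEW DAQmx, and the device/counter name below is a placeholder, not the client's):

```python
# Illustrative sketch only -- the actual code is LabVIEW, and the
# "PXI1Slot2/ctr0" counter name is a placeholder.
import nidaqmx
from nidaqmx.constants import Edge, CountDirection

task = nidaqmx.Task()
task.ci_channels.add_ci_count_edges_chan(
    "PXI1Slot2/ctr0",                      # a counter on the 6602
    edge=Edge.RISING,                      # count rising edges of the input
    initial_count=0,
    count_direction=CountDirection.COUNT_UP,
)
task.start()                               # counter free-runs from here on
```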

 

The AVG TIME is a user-settable number defining how long a period to average over when showing the "instantaneous" frequency.

So, I create a buffer for N samples, corresponding to that period.

 

At SAMPLE time, I read the counter.  I replace the oldest value in the buffer with the newest, then compute newest minus oldest to get the total counts in the averaging period.

 

The PULSES PER COUNT item is a scale factor, to account for something like a 60-tooth wheel.
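
Putting the SAMPLE step together, the arithmetic looks like this (a sketch only; the real code is LabVIEW, and the rate, window, and scale values below are made-up examples, not the client's settings):

```python
# Sketch of the SAMPLE-time math; values are illustrative examples.
from collections import deque

SAMPLE_RATE = 1000.0           # Hz -- nominal scan rate
avg_time = 0.5                 # s  -- user-settable AVG TIME
pulses_per_count = 60.0        # e.g. a 60-tooth wheel

n = int(avg_time * SAMPLE_RATE)      # buffer spans one averaging window
buf = deque([0] * n, maxlen=n)       # appending pushes out the oldest value

def on_sample(raw_count: int) -> float:
    """Called once per scan; returns the 'instantaneous' frequency."""
    oldest = buf[0]                  # reading from n samples ago
    buf.append(raw_count)            # newest replaces the oldest
    edges = raw_count - oldest       # total edges over the window
    return edges / avg_time / pulses_per_count
```

The hardware counter itself never stops, so the raw count also gives the accurate long-term total over the 20-minute period.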

 

So, this same code has worked perfectly for years, until I plug an 8100 controller in.  Then the result changes by 1%, and EXACTLY 1%.  Why?

 

The CPU burden on the bad controller is 31%.

 

Any ideas?

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 1 of 6

Hello,

 

The PXIe 8100 has a single-core processor, whereas the PXIe 8102 has a dual-core processor.  This could potentially be the reason for a performance difference, since LabVIEW will try to parallelize code if it has the option.  Can you benchmark your code and see how long it takes each controller to run your main loop?  You can use a flat sequence structure and timestamps.  See the following link on doing this:

 

https://decibel.ni.com/content/docs/DOC-12892
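
The idea is just to timestamp the loop and compute a mean iteration time.  Here is a minimal sketch of that kind of measurement (shown in Python only for illustration; on the RT target you would use a flat sequence with timestamp or Tick Count nodes as described in the link):

```python
# Minimal loop-benchmark sketch; the loop body is a stand-in for
# the actual main loop under test.
import time

ITERATIONS = 10_000

t0 = time.perf_counter()
for _ in range(ITERATIONS):
    pass                     # <-- main-loop work goes here
t1 = time.perf_counter()

print(f"mean loop period: {(t1 - t0) / ITERATIONS * 1e6:.1f} us")
```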

 

Also, if you have the NI Real Time Execution Trace Toolkit, you should be able to see exactly which processes are being spawned on your real-time operating system, and how long it takes each one to execute.  This would help identify what the differences are between the execution on one controller vs. the other.  If you don't currently own the toolkit, you could download a trial version for now to see if it can help you identify the problem.  Take a look at the execution trace toolkit available here:

 

http://sine.ni.com/nips/cds/view/p/lang/en/nid/209041

 

 

Message 2 of 6

Well, the controllers are not in my own hands.  I have an 8196 controller and on that, the CPU time is between 2 and 4%.

But the 8100 and 8102 controllers are in my client's hands.

 

I haven't gotten any hard timing numbers, other than the 31% CPU figure I saw reported on the video monitor.

 

It's hard to believe that it would be EXACTLY 1% if it were CPU overburden.

 

My software includes a calibration facility; here is a run from the good 8102:

[Image: 8102.png]

 

 

Here is a run from the 8100:

[Image: 8100.png]

 

 

This was with a reference digital frequency generator.  You can see the one case where everything is within 0.1 Hz.

 

The other case has everything EXACTLY 1% higher.  My only explanation is that the Scan Engine is running 1% slower: if samples actually arrive 1% less often than the software assumes, each averaging window is 1% longer than nominal, so the newest-minus-oldest count, and therefore the computed frequency, comes out exactly 1% high.
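
The arithmetic bears that out (a sketch with a made-up window length; only the 1% ratio matters):

```python
# If the timebase runs 1% slow, the error is an exact ratio,
# independent of the input frequency.  Values are illustrative.
true_freq = 4500.0              # Hz, reference generator
nominal_window = 0.5            # s, what the software divides by
actual_window = 0.5 * 1.01      # s, if samples arrive 1% late

edges = true_freq * actual_window     # edges really accumulated
reported = edges / nominal_window     # what the software reports
print(reported)                       # 4545.0 -> exactly 1% high
```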

Steve Bird

Message 3 of 6

Steve,

 

For the problem I was pointing out earlier, the calibration report may not tell you what is happening.  Since LabVIEW inherently parallelizes code if it can, you may have 1% greater code optimizations applied when the code is run on the dual core processor.  It does seem somewhat unusual that it is exactly 1%.  Have you tried modifying the code at all, to see if it is ever off by more or less than 1%?  I would still suggest trying the Real Time Execution Trace Toolkit to really find out what the difference between the two might be.

Message 4 of 6

"you may have 1% greater code optimizations applied when the code is run on the dual core processor."


But:

#1) We're using the SCAN ENGINE to set the 1000 Hz sample rate.  PLEASE tell me that that doesn't depend on single vs. dual cores.

#2) The same code ran on an 8196, which (IIRC) is a much faster processor, indicating freedom from such relationships.  (Although on the 8196, we were also using a hardware AI clock for the sample rate.)

#3) If CPU overburden were a problem, wouldn't the average CPU LOAD show higher than 31%?

Steve Bird

Message 5 of 6

Steve,

 

Scan Engine refers to a particular bitfile which reads data from an FPGA.  Neither your counter card nor your RT controller has an onboard FPGA.  Where does the Scan Engine fit into this system?  Is there a piece of hardware that I am not seeing?

 

Also, it looks like your code is using DAQmx to pull in the signals.  If so, where does the FPGA fit into the system?

 

 

Message 6 of 6