
Serious Performance hit on RT (cFP 2120) using LV 8.5 compared to LV 8.2



@jimmie A. wrote:

Ok,

I finally had some time to test it myself on a less powerful target (cFP-2020):

I had one timed loop reading data from an analog input module where the timed loop ran with an iteration time set to 500 ms.

I had another loop (while loop) set to run with 750 ms loop rate.

Between these loops I used RT FIFOs for inter-process communication, and I used network-published shared variables to exchange data with the host.

The results when using the System Manager or the VI method to get an estimate of the CPU usage were exactly the same in both setups: around 75% in the normal-priority loops, with the rest spent in high-priority loops. The blue color was barely visible.

I will need someone to test it on a more powerful target and see if they can reproduce the issue you see, where the CPU workload differs between versions of LabVIEW RT.
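The two-loop structure described above is a classic producer/consumer pattern. As a rough illustration for readers outside LabVIEW, here is a minimal C sketch of the same shape, with POSIX threads standing in for the timed and while loops and a mutex-guarded ring buffer standing in for the RT FIFO; all names and timings are illustrative, not NI APIs:

```c
/* Illustrative C analogue of the test above -- not NI code.
 * A "timed loop" thread acquires a sample every 500 ms and a
 * slower "while loop" thread consumes every 750 ms; a small
 * mutex-guarded ring buffer stands in for the RT FIFO. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define FIFO_DEPTH 8

static double fifo[FIFO_DEPTH];
static int head = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *acquire_loop(void *arg)        /* ~ timed loop, 500 ms */
{
    (void)arg;
    for (int i = 0; i < 20; i++) {
        double sample = i * 0.1;            /* stand-in for an AI module read */
        pthread_mutex_lock(&lock);
        if (count < FIFO_DEPTH) {           /* like an RT FIFO: drop on overflow */
            fifo[(head + count) % FIFO_DEPTH] = sample;
            count++;
        }
        pthread_mutex_unlock(&lock);
        usleep(500 * 1000);
    }
    return NULL;
}

static void *publish_loop(void *arg)        /* ~ while loop, 750 ms */
{
    (void)arg;
    for (int i = 0; i < 15; i++) {
        pthread_mutex_lock(&lock);
        while (count > 0) {                 /* drain the FIFO */
            printf("publish %.2f\n", fifo[head]); /* stand-in for a shared-variable write */
            head = (head + 1) % FIFO_DEPTH;
            count--;
        }
        pthread_mutex_unlock(&lock);
        usleep(750 * 1000);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, p;
    pthread_create(&a, NULL, acquire_loop, NULL);
    pthread_create(&p, NULL, publish_loop, NULL);
    pthread_join(a, NULL);
    pthread_join(p, NULL);
    return 0;
}
```

A real RT FIFO is lock-free (that is the point of using it between a time-critical and a normal-priority loop); the mutex here just keeps the sketch short.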



Hello,
 
As long as you:

a) do not use the same hardware & timings (CPU load!) as we are running, and
b) do not run the example that we have spent time creating and that shows the problem,
 
I am not surprised that your test does not reveal anything.
 
We have taken the time to create an example; please use it.
 
Running at slower time rates may also mask the problem if there is a "time constant" overhead (e.g. a newly introduced ETS task that takes a fixed amount of time per iteration).
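To make the masking effect concrete (numbers purely illustrative): a fixed per-iteration overhead t_o contributes t_o / T to the CPU load of a loop with period T, so a slow loop hides what a fast loop exposes:

```c
/* Illustrative arithmetic only: a fixed per-iteration overhead t_o
 * contributes t_o / T to CPU load, so slow loops mask it. */
#include <stdio.h>

int main(void)
{
    double overhead_ms = 10.0;                  /* hypothetical fixed task cost */
    double periods_ms[] = { 500.0, 50.0, 5.0 };
    for (int i = 0; i < 3; i++)
        printf("period %5.0f ms -> +%5.1f%% CPU\n",
               periods_ms[i], 100.0 * overhead_ms / periods_ms[i]);
    /* prints +2% at 500 ms, +20% at 50 ms, +200% (saturation) at 5 ms */
    return 0;
}
```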
 
Also note: we are running LabVIEW 8.2.0, and not 8.2.1. If the performance problem was introduced in 8.2.1, you may not see any difference between 8.2.1 and 8.5.
 
 
To repeat: This is my test environment:
 
Setup 1:
LabVIEW 8.2.0 (NOT 8.2.1). Windows XP with SP2 installed.
 
cFP 2120 with the installed software displayed in this posting:
 
 
 
Setup 2:
LabVIEW 8.5. Windows Vista Ultimate.
 
 
cFP 2120 with the installed software displayed in this posting:
 
 
 
 

Geir Ove
Message 11 of 57

Hello,

Can NI please look into this problem more carefully?

We have upgraded to LabVIEW 8.5 to do the final deployment for a client using this version, but the performance problem we are experiencing is stopping us from wrapping up this project.

Thanks.

 

Geir Ove
Message 12 of 57
Hello Geir Ove,

What are the settings of your controller?  Unless you are using a non-default value for the "Pause (ms)" setting in MAX, the CPU usage will be at 100%.  This setting defines how long the controller sleeps after each poll of the I/O modules, and it usually defaults to 0 ms.  A higher value lowers the bank update rate but leaves more time for RT applications; at 0 ms the update rate is much faster, but the CPU is always fully consumed.  Please tell me your pause time so that I can replicate your issue as accurately as possible.
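For illustration, the scan behaves roughly like the following sketch (hypothetical loop, not actual FieldPoint driver code):

```c
/* Illustrative sketch only, not FieldPoint driver code: the scan
 * loop polls the I/O bank, then sleeps for "Pause (ms)". With a
 * pause of 0 it never yields, so CPU usage reads 100%. */
#include <unistd.h>

static void poll_io_modules(void) { /* stand-in for the bank scan */ }

static void io_scan_loop(int pause_ms)
{
    for (;;) {
        poll_io_modules();              /* refresh the I/O bank */
        if (pause_ms > 0)
            usleep(pause_ms * 1000);    /* time left over for RT application code */
        /* pause_ms == 0: the loop spins and starves everything else */
    }
}

int main(void)
{
    io_scan_loop(1000);                 /* e.g. a 1000 ms pause */
    return 0;
}
```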

Are you using the State Diagram tools on the host only, or do you also need them on the controller?

I apologize for the issues you are running into and hope that we will be able to get this resolved as soon as possible.
Regards,
Ching P.
DAQ and Academic Hardware R&D
National Instruments
Message 13 of 57

Geir,

I am sorry I couldn't test it with the exact same setup and hardware as yours. Since I didn't have access to a cFP-2120 controller, I took the closest I could find in terms of architecture and drivers. I cannot run your code with your settings at all, since the CPU in the 2020 is only 75 MHz while the one you used is 188 MHz. What I wanted to achieve with the test was to see if I could observe similar behavior; a reasonable assumption is that the issue you have encountered should be present on all targets running 8.5, not just on the cFP-2120.

This morning I ran your code with minor modifications (loop timings not as fast, due to the slower CPU in the controller, and a different cFP module to read data from), and the shared variables were deployed in the same manner. The workload was the same in both versions of LabVIEW, and even when I changed the loop timings the behavior remained the same in both versions.

Regarding XP vs. Vista – my office doesn't have access to Windows Vista Ultimate, since it is more of a consumer version of Vista. And while the underlying architecture of the networking capabilities has changed in Vista, most of those changes have been included in XP and will be in the forthcoming SP3 for XP later on.

Once again – I am really sorry that I couldn’t test it with the exact same setup and that you are experiencing these issues. Hopefully we will find the issue soon.

Regards,
Jimmie Adolph
Systems Engineering Manager, National Instruments Northern European Region

Message 14 of 57


@ching P. wrote:
Hello Geir Ove,

What are the settings of your controller?  Unless you are using a non-default value for the "Pause (ms)" setting in MAX, the CPU usage will be at 100%.


Hello Ching,

The pause is set to 1000 ms for both tests! And as you can see from the performance timings in my original post in this thread, the CPU load is not 100% in either test.

 

I sincerely hope NI can now use my example and exactly the same setup to test this. I consider this a serious issue.

Geir Ove
Message 15 of 57


@jimmie A. wrote:

Geir,

........

This morning I ran your code with minor modifications (loop timings not as fast, due to the slower CPU in the controller, and a different cFP module to read data from), and the shared variables were deployed in the same manner. The workload was the same in both versions of LabVIEW, and even when I changed the loop timings the behavior remained the same in both versions.

Regarding XP vs. Vista – my office doesn't have access to Windows Vista Ultimate, since it is more of a consumer version of Vista. And while the underlying architecture of the networking capabilities has changed in Vista, most of those changes have been included in XP and will be in the forthcoming SP3 for XP later on.


Hello Jimmie,

Sorry, I feel we are wasting time. Surely National Instruments must have the resources to reproduce the same environment the client is having problems with. We are talking about a) equipment that NI sells (cFP 2120) and b) Windows Vista Ultimate (also supported by LabVIEW).

The "devil is in the details" and we must get to the bottom of this!

Hopefully NI does not expect us to tell our client that NI does not have the equipment/resources to test for our problem?


Geir Ove
Message 16 of 57

We are also seeing serious performance issues with cFP-2120s and LV 8.5.

Jimmie and/or Ching, please contact me at

bar@dsautomation.com

and I will put you in contact with the engineer who is working on this project and has access to a pile of cFP-2120s (which may be part of the reason Jimmie is having trouble getting access to one ;-) ).

I think we just ordered another pile of them, so we should have plenty to test with.

Ben

Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI and Prepper
Message 17 of 57

Hi Geir Ove,

Thanks for the information.  Using your example and your set-up I did see an increase in CPU usage, but I did not get the same results you were getting.  My CPU increase was much smaller.  Some increase is expected, but the type of performance degradation you are seeing is not.

I think that it would be most beneficial to work with the program you are trying to deploy.  Can you please create a service request at www.ni.com/ask and call an Applications Engineer (AE) to speak with them?  Once you get in touch with an AE, refer them to this forum post and let them know that I am already working with you.  Have them send me your contact information so that I can contact you personally.  The first step in the troubleshooting process is to take a look at a trace using the LabVIEW Execution Trace Toolkit which gives a detailed look into the processes running on the controller.  Understanding this can tell us what is causing the performance issues you are seeing.  If you do not already have the LabVIEW Execution Trace Toolkit, I will be more than happy to grant you a temporary license when we speak.

Also, you may want to take a look at this article that discusses some advanced real-time programming techniques that may help optimize your real-time code.

We are taking your concern very seriously and hope to help you.  If you would like to bypass speaking with an applications engineer and do not mind posting your contact information, I will be more than happy to contact you that way as well.

Regards,
Ching P.
DAQ and Academic Hardware R&D
National Instruments
Message 18 of 57
Hi Ching,
 
Thank you very much for stepping in here to help. You wrote:
 
"... CPU increase...  Some increase is expected,..."
 
As I understand it, the big difference is that the shared variable support moved from UDP to TCP/IP. Since UDP is lossy but TCP/IP is not, I would expect the UDP implementation to require more work to compensate for the lossiness (did I just invent that word?) of UDP.
 
So....
 
Why should I expect an increase?
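That compensation would look roughly like the sketch below — illustrative types and names only, not the actual shared-variable engine:

```c
/* Illustrative only -- not the shared-variable engine. This is the
 * kind of per-packet bookkeeping a reliable layer over lossy UDP
 * must carry in application code; TCP does the same job (sequence
 * numbers, acks, retransmit timers) inside the network stack. */
#include <stdbool.h>

typedef struct {
    unsigned seq;          /* sequence number of the last packet sent */
    unsigned acked;        /* highest sequence number acknowledged    */
    long     sent_at_ms;   /* timestamp driving the retransmit timer  */
} udp_reliability_state;

/* Extra work the UDP publisher must do on every iteration;
 * a TCP-based publisher just writes and lets the kernel cope. */
static bool needs_retransmit(const udp_reliability_state *s,
                             long now_ms, long timeout_ms)
{
    return s->acked < s->seq && (now_ms - s->sent_at_ms) > timeout_ms;
}
```

Over TCP the same publish is just a stream write; sequencing, acknowledgment and retransmission happen in the kernel, which is why one would naively expect the TCP-based implementation to cost less application-level CPU, not more.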
 
Ben
Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI and Prepper
Message 19 of 57
Hello Ching,
 

@ching P. wrote:

Hi Geir Ove,

Thanks for the information.  Using your example and your set-up I did see an increase in CPU usage, but I did not get the same results you were getting.  My CPU increase was much smaller.  Some increase is expected, but the type of performance degradation you are seeing is not.


Ching and Ben: I think the clue to finding the root of this problem may be my test application, which I posted a link to in this thread on October 10. For convenience, here is the link again:  http://objective.no/gostemp/lv_8_5_probleml.zip (also see that posting for more details).
 
I also suspected the shared variables, which changed from a UDP to a TCP implementation. But I ran my test with and without shared variables on 8.2 and 8.5 and got the very same results: 8.5 shows a 30% higher CPU load whether the shared variables were included or not!
 
 
Since I am indeed seeing a 30% increase in CPU load running this very simplistic application, and Ching says he doesn't, what is the difference?
If we find that out, we have the solution to our problem, and maybe also to Ben's!
 
We have already done lots of optimization under 8.2 to get the system to run at all (one man-month spent on this!).
 
All of a sudden, without warning, with the release of a new LabVIEW version, we cannot use Compact FieldPoint for this kind of solution any more. It is in both our and NI's interest to find out why.
 
 
Is it the way we compile our application? 
 
The Advanced tab of the Build Specifications dialog for a cFP (real-time) project has changed completely from 8.2 to 8.5; could there be some crucial setting there that we are missing?  Under 8.2 we had to use "Remove As Much as possible", or the app would not compile!
 
 
Below are links to the different settings for the 8.2 and 8.5 projects.  Could the problem be buried in these settings somewhere?
 
http://objective.no/gostemp/8_2_Advanced_Settings.jpg
 
http://objective.no/gostemp/8_2_Source_File_Settings.jpg
 
 
 
http://objective.no/gostemp/8_5_Advanced_Settings.JPG
 
http://objective.no/gostemp/8_5_Source_File_Settings.JPG
 
http://objective.no/gostemp/8_5_Additional_Exclusions_Settings.JPG
 
Is it because we compile under Vista? (Far-fetched, but of course possible!)
 
 
Ben, what platform are you compiling your 8.5 apps under when seeing this problem?
 

 
Geir Ove
Message 20 of 57