LabVIEW 8 performance, FGVs very, very slow. Stuck, need a solution ASAP.

Hi,

 

I have a very STRANGE issue here. I migrated my working LabVIEW 7.1.1 RT code to LabVIEW 8.0 and noticed performance degradation beyond an acceptable level.

 

I use "RT Simple CPU Usage.VI" to check the CPU resources. I usually poll the FPGA cards in a time critical VI, (I have 2 FPGA cards). After polling I use a FGV to transfer it out into a normal priority thread and process it. In Labview 7.1 my CPU Usage never exceed 40%. However in labview 8 it’s a constant 100% . I have attached a screen shot of my 7.1 Vi's which have been migrated to 8.0.

 

The processing consists of comparing values, storing to disk, etc. However, the FGVs in LabVIEW 8 are hogging the resources and appear to be the bottleneck (I disabled every other loop/task in order to come to this conclusion). I also tried the so-called "shared variables with RT FIFOs enabled" and the result is still the same. If I disable the reading of the FGVs, my idle usage with polling is 20%. This doesn't make sense to me; in theory, what works in LV 7.1 should work in 8.0.
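
For context on the "shared variables with RT FIFOs enabled" alternative mentioned above: as I understand it, the FIFO option essentially puts a fixed-size buffer between the writer and the reader, so the time-critical side does not have to wait on the processing side. A rough Python analogue of that idea only (hypothetical, not LabVIEW's actual implementation, which avoids locks altogether):

from collections import deque
import threading

class RTFifoLike:
    """Loose analogue of an RT FIFO: a fixed-depth buffer between a
    time-critical writer and a lower-priority reader. When full, the
    oldest element is dropped so the writer never blocks."""
    def __init__(self, depth):
        self._buf = deque(maxlen=depth)
        self._lock = threading.Lock()   # only to keep this sketch correct

    def write(self, value):
        with self._lock:
            self._buf.append(value)     # overwrites the oldest entry when full

    def read(self):
        with self._lock:
            return self._buf.popleft() if self._buf else None

The buffer lets the polling loop keep writing at its own rate even if the reader empties it more slowly.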

 

Also, I cannot switch back to LV 7.1 once 8.0 is installed. Does anyone know the dependencies required to bring it back to 7.1 through the MAX "Add/Remove Software"?

 

 

Please advise.

Message 1 of 38

ashm01,

The answer to your second question is that to move from 8.0 Real-Time to 7.1 you will need to uninstall the Network Variable Engine and the Variable Client Support from your RT device. This should allow you to revert to Real-Time 7.1.

For the first question, it is very interesting that it works in 7.1.1 and not in 8.0. Since using shared variables with an RT FIFO also gives a CPU usage of 100%, it is most likely something specific to your code. You may try the following:

1) In the timed loop, set the source name to 1 kHz and the period to 1; this way you are not taking too much CPU and wasting the other 999 cycles.

2) In fact, go ahead and just use a regular while loop set to time-critical priority instead of a timed loop, with the loop rate as mentioned in step 1.

3) Also, in the normal-priority loop, increase the wait time a little and observe the CPU usage (see the rough numbers sketched after this list).

4) Use the shared variable with the RT communication wizard and do a simple read and write. Observe whether it goes to 100% or not.
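
As a rough way to think about suggestion 3 (hypothetical numbers, not measurements from this system): if one iteration of the normal-priority loop does T ms of work and then waits W ms, that loop alone occupies roughly T / (T + W) of the CPU, so lengthening the wait is the quickest way to claw back headroom.

def approx_cpu_load(work_ms, wait_ms):
    """Very rough duty-cycle estimate for a single loop: the fraction of
    time it spends working rather than sleeping."""
    return work_ms / (work_ms + wait_ms)

# Hypothetical numbers, only to show the shape of the trade-off:
print(approx_cpu_load(5, 5))     # 0.5   -> about 50% from this loop alone
print(approx_cpu_load(5, 25))    # ~0.17
print(approx_cpu_load(5, 100))   # ~0.05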

Also, you may want to refer to the following link, which recommends how to prevent the CPU usage from going to 100%.

http://digital.ni.com/public.nsf/websearch/F4D776187EFCC30986256EFC007FC922?OpenDocument

Hope this helps; please let me know if you have more questions. Good luck.

Thanks

Steven B.
Message 2 of 38
Hi Steven,
 
I am seriously contemplating reverting to RT 7.1, considering I don't have much time for extra R&D. Anyway, to ensure that the problem was not in the code, I have spent three days without much progress changing the things listed below.
 
  • Yes, I did change the timed loop to a regular while loop with a "wait until next ms" of 1 ms, and I played around with the normal-priority VI until I saw the CPU usage come down. Basically, I started the normal-priority loop at 100 ms and gradually came down. The conclusion was that anything less than 25 ms caused the CPU to peak at 100%. This latency is simply not acceptable considering what is required.
  • Reinstalled LabVIEW 8 on a fresh PC to make sure that no version issues existed between 7.1 and 8.0; still no go.
  • Mass-compiled all VIs.
  • Took the same example above and replaced the FGV with shared variables; the result was the same.
  • Put in additional benchmarks to calculate the time required to execute each part of the code, by getting the tick count before and after the code executes inside a sequence structure (see the sketch after this list). The sad answer is that the difference was ZERO.
  • Within the timed loops, I also checked whether "Finished Late" ever turned on. Answer = NO.
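
A conceptual sketch of the tick-count benchmark in the list above (Python standing in for two Tick Count (ms) reads around the code inside a sequence structure; the measured section here is a hypothetical placeholder). One caveat: with 1 ms tick resolution, any section that completes in under a millisecond will legitimately report a difference of zero.

import time

def tick_count_ms():
    """Stand-in for LabVIEW's Tick Count (ms) function (1 ms resolution)."""
    return int(time.monotonic() * 1000)

start = tick_count_ms()
# ... the code section being measured goes here ...
elapsed = tick_count_ms() - start
print(elapsed)   # a section faster than 1 ms can report 0, as seen above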

Is there anything else I can do to investigate?

This one has really got me. Maybe I should have learned from prior experience that any x.0 version of LabVIEW is buggy; I prefer the versions that have revisions.
 
Regards,
Ash
Message 3 of 38

Steven,

I need to add to your comments on my second question. I uninstalled the Network Variable Engine and the Variable Client Support, but you also need to downgrade NI-RIO to 1.4; only then will it allow you to install LV 7.

However, this did not solve my problem because a 78433R was not recognized in MAX (it needed NI-RIO 1.3, which magically disappeared as an option). Hence, I uninstalled all NI products, 7.1/8.0, and reinstalled. A few hours later...

The same old code is perfectly stable in 7.1.1, allowing interthread transfer at 1 ms. So this may be an issue that someone may want to recreate on their own system.

Until this problem is solved, I cannot migrate to LV8.

Regards,

Ash

Message 4 of 38

Hi Ash,

Could you please post enough code for us to take a closer look?

I cannot guarantee a solution that will help you, but if I see something that will help, I will share it with you.

Otherwise, I may be able to figure out what is wrong, pass the issue to support, etc., so that LV 8.0.01 may be able to fix this issue.

Ben

Message 5 of 38
Ben,
 
I have attached the VIs associated with the polling activities. However, this is LV 7.1.1 code, since I don't have much time to reinstall LV RT 8.0 on the PXI controller.
 
You will have to recompile the FPGA VIs and recreate the embedded projects for them to work; I doubt that the resources will be available. Anyway, you may look at the code. It's a bit messy, but it captures the core requirement of interthread transfer.
 
Regarding new releases of LabVIEW, I don't have many complaints except for issues like this.
LabVIEW 8 has done a wonderful job in terms of integrating my targets and creating projects. The only two bad things I can say about it are:
 
1- it feels bloated/sluggish, and some rework was needed for control methods/properties for listboxes, etc.;
2- this interthread problem.
 
I appreciate your assistance,
Regards,
Ash
Message 6 of 38

Ash,

Thanks for posting your code. I took a look at it and noticed that you are using the RT CPU Usage VIs that were developed for LabVIEW 7.1. I believe the VIs you are using came from the following link:

Programmatically Monitoring the CPU Usage of a LabVIEW Real-Time Target (ETS Only)
http://sine.ni.com/apps/we/niepd_web_display.display_epd4?p_guid=BEC1E4CCD3E15E28E034080020E74861&p_...

These VIs have most likely not been tested with LabVIEW 8.0 and could be the cause of the behavior you are seeing. I would suggest removing them from your code and instead using the Real-Time System Manager in LabVIEW 8.0 to monitor your CPU usage. You can find the RTSM tool by going to Tools > Real-Time Module > System Manager.

Let me know if you get different results using the RTSM. 

Steven B.

 

Message 7 of 38

Thanks for stepping in Steven!

Ash, please keep us updated.

Ben

Message 8 of 38

Steven, Ben.

Yes, the link is correct. However, I had tried this before: I removed the VIs that were polling the RT CPU usage and enabled the Real-Time System Manager. The results were the same, and the interthread transfer did not improve.

Within the RTSM, my CPU also spiked and hovered around 70-80%, which is too much.

From prior experience, the RTSM adds extra overhead on the target to update the memory and CPU usage displays. Disabling the memory checkbox yields slightly better usage, but still nowhere near the level I accepted in 7.1.1.

I am really stumped on this problem and haven't done any further experiments with LV 8.
Regards,
Ash

Message 9 of 38

Ash,

You are correct that the RTSM will add some overhead to monitor the CPU usage and memory, but that is expected. I tested one of the example programs distributed with LabVIEW RT 7.1 on LabVIEW RT 8.0 with the RTSM and found that both gave the same CPU usage (about 6%). I would be curious to see if this result is the same on your RT device. I have attached the project I used in LabVIEW 8.0; the example can be found in the LabVIEW 7.1 Example Finder by going to Toolkits and Modules > Real Time > Communication > Functional Global Communication.vi.

Thanks,
 
Steven B.


Message 10 of 38