
TestStand 2014 Very Slow HTML Report Generating

Hi

 

We've recently moved to TestStand 2014 for one of our production tests. We modified ReportGen_html to write the critical failure stack info to a custom data type, which we use to write the failure info to a separate database. From time to time the report writing takes very long, sometimes close to a minute when it should take a few seconds. To make sure it wasn't something we had done to the ReportGen_html file, we ran with the stock file for a while and experienced the same issue. However, when running with the DLL report generator it works perfectly, and when we ran the modified ReportGen_html in TestStand 2010 it also ran perfectly.

 

The problem almost seems to be random, in that some days it works fine and other days it's very slow - even after a Windows reboot. 

 

I can't see any Windows-related issue that might be slowing this down, like a virus scanner, and I also don't think it's anything hardware related, like an HDD failing.

 

Has anyone experienced similar issues? 

 

We are currently running:

TestStand Version 2014 SP1 (14.0.1.103)

TestStand Engine Version 2014 (14.0.1.103) 32-bit

 

Eventually we are going to move away from the report file and generate reports purely by querying the database. However, for now, I wouldn't mind understanding this issue.

 

Thank you for your time. 

 

Regards

Wesley

Message 1 of 7

Hi Wesley,

 

Just to check, where is the database you are writing to located? Also, when this slowdown does occur, how long does it last? Unfortunately, if this is something that comes and goes over a scale of days, the DLL or TestStand 2010 configuration would need to run for a few days to rule out the possibility that the slowdown simply hasn't happened yet on that setup.

 

A few other things that might help: do you have on-the-fly reporting enabled, and do you have any information on the CPU/memory usage of the TestStand execution thread?

 

Regards,

 

GiantDeathRobot

William R.
Message 2 of 7

Hi GiantDeathRobot

 

The database is located at the factory, so there is minimal delay writing to it. However, I don't believe that writing to the database is the problem. The structure is as follows:

- The ReportGen_html file was modified to write any critical failure stack information to a custom data structure that we then reference in the Sequence Callback "TestReport".

- We then modified "TestReport" to write to our failure database only if the custom data structure holds any values (a rough sketch of that logic is shown below).
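
As a rough sketch of what that conditional write looks like (purely illustrative Python; the failure_stack list, connection string, table and column names are made up for the example, and in practice this happens inside the TestReport callback rather than a script):

# Minimal sketch: only touch the failure database when the custom
# data structure actually holds failure information.
# 'failure_stack' and the table/column names are hypothetical.
import pyodbc

def log_failures(failure_stack, conn_string, serial_number):
    """Write the critical failure stack to a separate failure database,
    but only if there is something to write."""
    if not failure_stack:          # nothing failed -> skip the DB round trip
        return

    with pyodbc.connect(conn_string) as conn:
        cursor = conn.cursor()
        cursor.executemany(
            "INSERT INTO failure_stack (serial_number, step_name, status, report_text) "
            "VALUES (?, ?, ?, ?)",
            [(serial_number, f["step"], f["status"], f["text"]) for f in failure_stack],
        )
        conn.commit()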

 

When I managed to do some investigating on a problematic jig, I found that the delay happened while the ReportGen_html file was running and not during any of the steps in "TestReport".

 

While using TestStand 2010 with the custom ReportGen_html file we never had this issue; we ran with that setup for about 2 years.

 

I just checked with the factory, and a problematic jig I had switched to DLL report generation has been going fine for the past 2 days. I'll leave the DLL option selected until the end of their run, which is early next week, and ask for feedback.

 

I've never used On-The-Fly Reporting. 

 

I have even tried running the test sequence in the development environment to see if it made any difference. No luck.

 

The jig that is now running the DLL is specced as:

NI PXI-8101 Controller

2 GHz CPU

2 GB RAM

32-bit Windows 7

 

How would I be able to get the info for resource use of the TestStand execution thread? 

 

I'm not disregarding that this could be a Windows issue, but then I'm not sure why the DLL works better than the ReportGen_html sequence file, unless the ReportGen_html sequence file uses more resources because of how it has to be threaded alongside whatever else TestStand is doing at the end of a sequence. That said, I have configured TestStand not to use separate threads for either report generation or database logging.

 

I'm sorry I can't supply more debug information. I'm happy to put anything in place that will help, so long as it doesn't affect test times too much.

 

If we can't find a solution to this, we are going to have to capture the failure information a different way.

 

On a side note: is it possible to capture the Critical Failure Stack from within the Sequence Callback "TestReport" without needing to run the ReportGen_html sequence? I'd imagine that information is somehow passed, or made available, to the ReportGen_html sequence. I understand if I need to move this question to a new thread.

 

Thanks for the help. 

 

Regards

Wesley

Message 3 of 7

Okay, just checking some of the common sources of slowdown that we know of! (Connecting to remote servers and on-the-fly reporting.) The odd thing is that the stock report generation sequence shows this behavior too. My initial thought was that running a TS 2010 file in TS 2014 could carry unexpected performance hits, since the process models changed significantly, but that shouldn't impact the stock file.

You can typically get resource information from the Windows Task Manager. Another useful tool is the TestStand Trace Utility: https://decibel.ni.com/content/docs/DOC-26574
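
If watching Task Manager live on the production jigs is awkward, a small script could log the usage for you. Here is a rough sketch using the third-party psutil package; the process name is an assumption, so point it at whatever executable hosts your execution (the sequence editor or your operator interface):

# Rough sketch: periodically log CPU and memory usage of the TestStand
# host process so a slow report-generation run can be correlated with
# resource usage. The process name below is an assumption.
import time
import psutil

PROCESS_NAME = "SeqEdit.exe"   # adjust to your operator interface executable

def monitor(interval_s=1.0, duration_s=120):
    procs = [p for p in psutil.process_iter(["name"])
             if p.info["name"] == PROCESS_NAME]
    if not procs:
        print(f"{PROCESS_NAME} not found")
        return
    proc = procs[0]
    end = time.time() + duration_s
    while time.time() < end:
        cpu = proc.cpu_percent(interval=interval_s)      # % over the interval
        rss_mb = proc.memory_info().rss / (1024 * 1024)  # resident memory
        print(f"{time.strftime('%H:%M:%S')}  CPU {cpu:5.1f}%  RSS {rss_mb:7.1f} MB")

if __name__ == "__main__":
    monitor()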

Typically the DLL is slightly more optimized and executes faster than a sequence file, but this sounds unusually slow. I am curious about the CPU/RAM usage while this is running. If the 8101 has a traditional hard drive and is having to page, that could definitely cause a slowdown; I would expect one of the bigger optimizations of the DLL to be in memory usage.

On your side note, I'm guessing you did something like this: http://www.ni.com/tutorial/4563/en/ ? The critical failure stack looks to be generated in PutOneResultInReport.seq, which is part of reportgen_html.seq. So while you must run the report generation sequence to capture the failure stack as-is, you could also look at the implementation of the stack and do something similar that does not depend on the report generation sequence. If you have more questions about that, then yes, it would probably be a good idea to start a new thread for them.
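
As a generic illustration of that idea (this is not the TestStand API, just the shape of the logic: walk the result hierarchy and keep the chain of failing steps), assuming the results were available as a nested name/status/children structure:

# Generic illustration only: collect the chain of failing steps from a
# nested result structure, similar in spirit to how the HTML report
# sequence builds its critical failure stack. The result layout here
# (dicts with "name", "status", "children") is an assumption, not the
# TestStand ResultList format.
def critical_failure_stack(result, path=None):
    """Return the list of step names leading to the deepest failure."""
    path = (path or []) + [result["name"]]
    if result.get("status") != "Failed":
        return []                         # this branch did not fail
    deepest = path                        # at least this step failed
    for child in result.get("children", []):
        child_stack = critical_failure_stack(child, path)
        if len(child_stack) > len(deepest):
            deepest = child_stack         # a sub-step failed deeper down
    return deepest

# Example:
# result = {"name": "MainSequence", "status": "Failed", "children": [
#     {"name": "PowerUp", "status": "Passed", "children": []},
#     {"name": "RF Tests", "status": "Failed", "children": [
#         {"name": "TX Power", "status": "Failed", "children": []}]}]}
# critical_failure_stack(result) -> ["MainSequence", "RF Tests", "TX Power"]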

William R.
Message 4 of 7

Morning.

 

Thank you for the feedback. Something interesting happened yesterday, which I want to investigate further today: two of our 2010 test jigs started running slowly too, which throws the "TS2014 only" theory out the window. The obvious difference between those 2 jigs and the others on the line was that both had Windows Updates waiting to be installed on the next shutdown. Since Windows had a chance to shut down and install the updates, the report writing has been quick again. Windows Update does irk me a bit; I've tried to disable it, but it keeps re-enabling itself. In an ideal world I'd prefer our jigs to be locked down, with no updates, once everything is stable and vetted. But I digress...

 

With regards to CPU usage: while the test is running the CPU runs flat out, and I'm guessing that when Windows is busy with background tasks, like Windows Updates, it slows things down even more.

 

The hard drive in the 8101 is the stock drive shipped with the controller. We took it upon ourselves to upgrade all our controllers from 1 GB to 2 GB of RAM. There is one controller we've installed a test SSD in and it goes like a bomb, especially startup times. We'll leave it in the field and start comparing test times with its duplicate jig that runs in parallel.

 

I'll play around, in the office, with the TestStand Trace Utility and see how we can use it. 

 

For now I'm going to try to solve this case by case. I want to blame Windows, so the Windows Update scenario will be my first go-to. Failing that, an overnight defrag may be in order; I deleted 7.9 GB of test report files off one jig yesterday.

 

Thank you for the help. I'll report back as I find things; perhaps it will prove useful to someone else in the future.

 

Regards

Wesley

 

 

Message 5 of 7

Hey Wesley,

 

Another option is to create the report after your test is finished using the TestStand Offline Result Processing Utility. This shouldn't slow down your test times while you troubleshoot this behavior.

 

http://zone.ni.com/reference/en-XX/help/370052N-01/tsref/infotopics/offline_processing_utility/ 

 

You should also try disabling the other report plug-ins you have enabled to make sure they are not contributing to this slowdown.

 

Also, could you elaborate on how you are logging to the database? Is it part of the HTML report sequence, or is it in your own sequence? Is the database logging plug-in enabled?

 

Thanks

Message 6 of 7

Hi guys

 

Sorry for the long delay. Other projects, leave and public holidays meant I couldn't spend too much time on this.

 

I had a look at the Offline Result Processing Utility. It's not something we would use at the moment: we need relatively real-time data and wouldn't want the data to become available only at a later stage. As units move through the production line our dashboard is updated, and when failures occur we need to be able to show why the boards are failing and take action on that: dirty test probe, faulty component, etc.

 

To make sure it wasn't any of our modifications causing the slowdown, I tried using the stock ReportGeneration_html.seq file and it didn't make a difference. The only thing that made a difference was switching to the DLL report generator.

 

The failure database logging happens in the TestReport sequence. We only use the ReportGen_html.seq file to help build the critical failure stack information, which is put into an array and then written to the database. TestStand database logging is enabled; however, the failure information is written to a separate database. The reason we chose to do that is that it took a very long time to query the main results database to find out why a unit failed. This way the critical failure stack is written to a separate database that is quick to read and analyse.
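
For what it's worth, the idea behind the separate database is nothing more than a small dedicated failure table, roughly like the sketch below (Python's built-in sqlite3 stands in for the real database, and the table and column names are invented for the example):

# Illustration only: a small dedicated failure table makes the
# "why did this unit fail?" query cheap, compared with digging the
# answer out of the full step-result tables. Table/column names are
# invented; sqlite3 stands in for the real database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE failure_stack (
        serial_number TEXT,
        failed_at     TEXT,
        step_path     TEXT,   -- e.g. 'MainSequence/RF Tests/TX Power'
        status        TEXT
    )""")
conn.execute("CREATE INDEX idx_failure_serial ON failure_stack(serial_number)")

conn.execute(
    "INSERT INTO failure_stack VALUES (?, datetime('now'), ?, ?)",
    ("SN0001", "MainSequence/RF Tests/TX Power", "Failed"),
)

# The dashboard query stays a single indexed lookup per unit:
for row in conn.execute(
        "SELECT failed_at, step_path, status FROM failure_stack "
        "WHERE serial_number = ?", ("SN0001",)):
    print(row)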

 

We are looking to move away from report generation altogether and are busy trying to optimise our TestStand database to allow quick querying of failed units. The upside of that optimisation is that it will also let us fairly quickly plot various measurements on our dashboard to analyse trends and provide feedback to hardware.

 

Regards

Wesley

Message 7 of 7