10-01-2010 04:14 AM
Well then, I'm fighting with the same thing right now, and whether we call this a bug or not is beside the point, since it prevents me from using the web-publishing features anyway. What's the status of this problem under LV2010? Has anybody tried that?
If NI hasn't come up with something to improve the situation, this surely is buried deep down there...
Martin
10-01-2010 04:28 AM
Hi Martin
We've been busy developing a new version of our code designed for a new HW platform, so we have yet to move from 8.6.1. We're trying to keep the number of variables in development fixed, so the upgrade to 2010 will not happen before a major release in about a week or so. The solution posted here using the process handler works like a charm on both XP and Win7, and we have 200+ machines around the globe running with memory consumption under control, but I do not know whether changing to the 2010 run-time engine will break that. This is one of the first issues that needs to be checked upon upgrading.
10-01-2010 08:44 AM
I've recently tried LV2010 and was under the impression that the problem had been solved, but my testing was not extensive. I could not find any specific statement about the resolution in the NI resources mentioned earlier, and did not want to rush the upgrade on systems where web serving is critical. And no, where I tried it with LV9, resizing the process working set was ineffective, so I consider it ultimately an unnecessary hack.
Enrico
10-02-2010 04:14 PM
Hi,
very sorry to be back in this thread so late, I've been busy, but I feel I have to make some clarifications since I posted the processhandler.dll here.
First of all, this phenomenon has nothing to do with memory leaks or anything like that. The problem is more subtle and involves the mechanism by which the OS trims (cleans) the working set of applications.
Just to be clear, maybe most of you already know this: since Win2000 or NT (not sure which), a mechanism of automatic working set trimming was introduced to avoid the page file thrashing that can occur when available RAM is not enough. So when a user minimizes an application, the OS tries to trim the app's working set (the pages that are touched frequently by the app), meaning that memory is paged out to disk.
If this mechanism did not exist, some applications would consume all available RAM, resulting in the computer freezing or the app becoming unresponsive. So the memory consumption of the application stays the same; only the location is different, part in RAM and part on the hard disk.
Interestingly, the Task Manager, by default, shows the memory working set under the column "Mem Usage"! This is why the user who posted earlier in this thread saw a difference between Task Manager and Performance Monitor.
Is this an important matter? It can be the difference between life and death for some applications. This is why Microsoft, in one of their KB articles, says that a developer will sometimes need to implement the working-set trimming routines himself, if the OS can't handle it.
What can be the reason for such problems? All I can think of is UI rendering. The best-known app with such a problem is Firefox, because of the way it renders just about every visual component and element of the browser: trimming the memory used by the browser forces Firefox to reallocate and re-render all visual elements of the browser as well as the loaded web page, causing some grief and possible hard drive thrashing.
Maybe this is why trimming memory does not work with the LabVIEW web server: the UI must be rendered even if the front panel is minimized.
I was forced to use programmatic trimming because one of my applications, which runs 24 hours a day all year, was consuming all RAM after 2 or 3 days. On the same computers, three LabVIEW apps run in parallel, and only one was creating this problem. This was the only solution, and it has been working for 2 years now. The problem occurs when the computer is not used frequently by a user (process controllers), so the user interface is not manipulated on a daily basis.
cosmin
07-19-2011 03:12 AM
So I have had the same problem with TestStand. I run this VI just once a day, and the problem goes away. So far I haven't had any system crashes.
Thanks for your inputs!
07-19-2011 08:28 AM
Hi Rolf,
I know you are not a developer of Windows, but you seem to be able to answer my questions 😄
Suppose I allocate some memory for my application, say in LabWindows/CVI, with malloc(). If it is successful, I can use that part of memory until I explicitly free() it. The moment I free() it, Windows should be aware that my application doesn't need that part of memory anymore. So why is this memory block still assigned to my application after I've released it?
I haven't really experienced this in CVI; I've only written it because it is 100% clear to me (or just 80%?) how allocating and deallocating memory works. But I have the same issue with TestStand 2010. If I click on the minimize button, the memory usage is all right again.
07-22-2011 04:50 AM
Hi mitulatbati,
Could you make me a copy of the VI in LabVIEW version 8.2?
Thxs a lot !!
Regards,
Oriol
07-22-2011 05:44 AM
Yes, I can do it. But please read the complete thread; this is not a 100% solution.
07-22-2011 05:58 AM
Thanks!!
I had the same problem with TestStand. Today I solved it.
Here is what I did:
Step 1: In all steps: "Run Options" --> disable "Record Results".
Step 2: In call-sequence steps: Step 1 plus "Sequence Call Trace Setting" = "Disable tracing in sequence".
Step 3: "Configure" in the main TestStand sequence editor window --> "Report Options" --> "Contents" --> activate "Disable Report Generation"; then "Database Options" --> "Logging Options" --> activate "Disable Database Logging".
I'm not sure whether all of these steps are required to solve the problem.
07-22-2011 06:27 AM
Well, memory management is a bit more complicated than you think. First, if the system updated global memory tables at every malloc and free, you could not use LabVIEW anymore, as it would be mostly busy waiting for the system to clean up its memory allocation tables. So an application gets a heap at startup from which the individual memory blocks are allocated. This heap can grow if necessary, but it doesn't automatically decrease in size each time a chunk is freed by the application. That usually means that once allocated, memory stays allocated to that application. A new malloc can reuse such memory, but only if it fits into a currently available contiguous block of memory. Otherwise the memory manager requests another set of heap pages from the OS to satisfy the request.
Supposedly the aforementioned function, among other things such as paging out memory that hasn't been accessed for a long time, also collects freed memory blocks and returns them to the OS, effectively reducing the number of heap pages assigned to the application. Basically it is a bit like garbage collection in other environments: memory that the application doesn't currently use, but that the system still has assigned to the process, is freed and returned to the system. This can also reduce fragmentation of memory pages, so that the OS can again satisfy heap requests that were previously not possible anymore.
So the function to reduce the working set of an application can solve some problems, but it is not a catch-all solution, and it can certainly cause other problems, such as performance degradation or collisions with memory requests made by the application while the working set reduction is in progress. That is why Windows only calls this function at specific moments, such as when the application is minimized.
Minimizing is a user operation, so it does not happen frequently (such as several times a second); the user usually doesn't expect the application to go into a whirl of activity at the moment he minimizes it; and the fact that it is synchronous with the Windows event handling more or less rules out the possibility that the application is in the middle of lots of memory-intensive operations that could conflict with the working set reduction.
You can call that function explicitly in your application, but you need to make sure you don't produce any of the above-mentioned problems, such as calling it in a loop or otherwise frequently, or doing lots of other things at the same time that might cause memory reallocations. And it won't fix memory leaks. A memory leak is when the application or a system library allocates a memory block and then forgets about it. A growing working set, however, is not a memory leak. It's simply an optimization that prevents the system from coming down to a crawl when lots of memory allocations and deallocations are done inside an application. The memory is properly accounted for in the memory manager and can be returned to the OS with this method.