Disadvantages of enabling ‘Low Fragmentation Heap’ (LFH) on Windows Server 2003? [closed]

Tags: windows, windows-server-2008, windows-server-2003, iis, memory
I’ve been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation.
One suggestion for ameliorating this came from a Stack Overflow question (How can I find why some classic asp pages randomly take a real long time to execute?): flip a setting in the site’s global.asa file to ‘turn on’ the Low Fragmentation Heap (LFH).
The following code (with a registered version of the accompanying DLL) did the trick.
Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
LFHObj.TurnOnLFH()
application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)
(Really the code isn’t that important to the question).
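For context, the TURNONLFH DLL is presumably a thin wrapper around the Win32 HeapSetInformation call, which is the documented way to request LFH on a heap. A minimal sketch of the same idea via ctypes (the constants are the documented Win32 values; the wrapper function name is my own, and this is illustrative only, not the DLL’s actual implementation):

```python
import ctypes
import sys

HEAP_COMPATIBILITY_INFORMATION = 0  # HeapSetInformation information class
HEAP_LFH = 2                        # value that requests the Low Fragmentation Heap

def enable_lfh_on_process_heap():
    """Ask Windows to enable LFH on the default process heap.

    Returns True on success, False on failure (and always False
    when not running on Windows).
    """
    if sys.platform != "win32":
        return False
    kernel32 = ctypes.windll.kernel32
    heap = kernel32.GetProcessHeap()
    info = ctypes.c_ulong(HEAP_LFH)
    return bool(kernel32.HeapSetInformation(
        heap,
        HEAP_COMPATIBILITY_INFORMATION,
        ctypes.byref(info),
        ctypes.sizeof(info),
    ))
```

On Server 2008 and later this call is largely redundant because LFH is the default policy, which is part of what prompted the question above.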
An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008.
So, naturally, this left me a little concerned:
- Why is this setting not enabled by default on 2003, or
- If it works in 2008 why have Microsoft not issued a patch to enable it by default on 2003?
I suspect the answer to the above is the same for both (if there is one).
Obviously, we’re testing it in a non-production environment, and running an array of metrics and comparisons to determine whether it actually helps us. But aside from this, I’m really just trying to understand whether there’s any technical reason we should do this, or whether there are any gotchas we need to be aware of.
In Windows, heaps become fragmented when a poorly written application allocates and frees memory in a pattern that leaves the free space scattered across many small, non-contiguous pieces. When an application’s heap is badly fragmented, it can no longer satisfy large allocations, because no single contiguous block of free memory in the heap is big enough for the request. It’s all in little fragments, even if the sizes of all those little fragments add up to more than enough to satisfy the allocation.
Emphasis on poorly written application.
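To make that concrete, here is a toy first-fit allocator over a fixed arena (purely illustrative, not how the Windows heap is implemented): fill it with small blocks, free every other one, and a larger request fails even though half the arena is free in total.

```python
class ToyHeap:
    """Simulated first-fit allocator over a fixed-size arena."""

    def __init__(self, size):
        self.size = size
        self.allocs = []  # list of (offset, length) for live blocks

    def free_runs(self):
        """Yield (offset, length) of each contiguous free gap."""
        cursor = 0
        for off, length in sorted(self.allocs):
            if off > cursor:
                yield (cursor, off - cursor)
            cursor = off + length
        if cursor < self.size:
            yield (cursor, self.size - cursor)

    def alloc(self, length):
        """First-fit: return the offset of the first gap that fits, else None."""
        for off, run in self.free_runs():
            if run >= length:
                self.allocs.append((off, length))
                return off
        return None  # no contiguous run is large enough

    def free(self, off):
        self.allocs = [(o, l) for o, l in self.allocs if o != off]

heap = ToyHeap(1024)
blocks = [heap.alloc(64) for _ in range(16)]  # fill the arena with 64-byte blocks
for off in blocks[::2]:                       # free every other block
    heap.free(off)

total_free = sum(run for _, run in heap.free_runs())
print(total_free)       # 512 bytes free in total...
print(heap.alloc(128))  # ...but a 128-byte request still fails: None
```

The free space totals 512 bytes, but it sits in eight separate 64-byte gaps, so nothing larger than 64 bytes can be allocated. The LFH mitigates exactly this pattern by serving small allocations from dedicated buckets of uniform sizes.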
Sysinternals’ VMMap is good for viewing the address space of a process and checking it for this sort of fragmentation issue.
Do note that ASLR was also introduced in Server 2008, which exacerbates this fragmentation issue to a degree. I imagine that had some bearing on the decision to enable LFH by default in that operating system. Also, an LFH policy tends to require more memory up front, AFAIK, which may have been more of an issue in the 2003 era than it was in the 2008 era.
To get a more definitive answer than that as to why Microsoft decided to change this policy in 2008, you’ll probably have to ask the engineers at Microsoft.