Discussion:
User Memory Quotas
Bodger
2004-12-22 22:05:36 UTC
User Memory Quotas

We are trying to track down a problem that is plaguing us. It seems
that even though we have a ton of memory and a large pagefile.sys, our
programs start to fail as if there is no memory in which to run.

We think there might be a quota on our user account, and that we are
hitting some maximum allowed memory limit.

http://www.microsoft.com/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/Default.asp?url=/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/541.asp

We have allowed the user and administrator to adjust the quotas, but we
cannot find out a way to query if a quota exists or how to set the
quota itself.

How do you find out if a quota exists? And how does one change it if
it does exist?

I cannot find any gui in our system that allows us to access it. Only
a local policy that allows the user to change quotas.

We are not a domain, only a workgroup of exactly one computer in a
remote site.

Details:

We have Windows 2003 Server in a remote data center, it is a web server
among other things. To support the services provided by the web
server we run some home grown programs in a logged in user account
running in the task bar. We have 4 gigs of memory (we rarely use over
700 megs) and 6 gigs of pagefile.sys.

When our programs have been running for a long time the memory does
creep up, but the total between them is no more than 100 megs. We run
ActiveState Perl programs as well that come and go as necessary.
Herb Martin
2004-12-22 22:18:06 UTC
Post by Bodger
User Memory Quotas
We are trying to track down a problem that is plaguing us. It seems
even though we have a ton of memory and large pagefile.sys our programs
start to fail as if there is no memory in which to run.
We think there might be a Quota on our User that we are hitting some
maximum allowed memory quota.
http://www.microsoft.com/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/Default.asp?url=/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/541.asp
Post by Bodger
We have allowed the user and administrator to adjust the quotas, but we
cannot find out a way to query if a quota exists or how to set the
quota itself.
Usually it's set in the program code.
Post by Bodger
How do you find out if a quota exists? And how does one change it if
it does exist?
Such quotas normally exist on a PER PROGRAM
basis but are typically left at the defaults by programmers,
except for clear cases like SQL Server, Exchange, and
other such "big systems."
Post by Bodger
I cannot find any gui in our system that allows us to access it. Only
a local policy that allows the user to change quotas.
There are programmer tools (IIRC) that will show/set
the quotas on executables. You might query the Visual
Studio or VC lists.
Post by Bodger
We are not a domain, only a workgroup of exactly one computer in a
remote site.
Irrelevant probably.
Post by Bodger
We have Windows 2003 Server in a remote data center, it is a web server
among other things. To support the services provided by the web
server we run some home grown programs in a logged in user account
running in the task bar. We have 4 gigs of memory (we rarely use over
700 megs) and 6 gigs of pagefile.sys.
When our programs have been running for a long time the memory does
creep up, but the total between them is no more then 100 megs. We run
Active State Perl programs as well that come and go as necessary.
Have you done the obvious and just looked in Task Manager
and sorted by the various memory columns, including
Paged and NON-PAGED pool?

I don't recall any specific ActiveState Perl issues and I
run it on multiple machines daily. (It doesn't matter
if no Perl program is actually running).
--
Herb Martin
Ken Schaefer
2004-12-23 01:07:45 UTC
In addition to Herb's comments, perhaps you have some other issue with
your app.

Tools like memmonitor, memtriage, etc. might help:
http://www.microsoft.com/resources/documentation/WindowsServ/2003/all/techref/en-us/Default.asp?url=/Resources/Documentation/windowsserv/2003/all/techref/en-us/rktools_overview.asp

Cheers
Ken
Roger Abell
2004-12-23 07:12:02 UTC
Also, the user right policy you have mentioned governs
the ability to adjust quotas used by the processor scheduling
algorithms. AFAIK it is not involved in the adjustment of
memory usage.
--
Roger Abell
Microsoft MVP (Windows Security)
MCSE (W2k3,W2k,Nt4) MCDBA
Herb Martin
2004-12-23 08:23:19 UTC
Post by Roger Abell
Also, the user right policy you have mentioned governs
the ability to adjust quotas used the processor scheduling
algorithms. AFAIK it is not involved in adjustment of
memory usage.
There are rights involved in changing Working
Set -- which is the programmer term for the memory
allocation of a single process/program.
--
Herb Martin
Roger Abell
2004-12-23 09:23:53 UTC
Post by Herb Martin
Post by Roger Abell
Also, the user right policy you have mentioned governs
the ability to adjust quotas used the processor scheduling
algorithms. AFAIK it is not involved in adjustment of
memory usage.
There are rights involved in changing Working
Set -- which is the programmer term for the memory
allocation of a single process/program.
Well, yes, but I believe that is not this quota privilege.

For example, from
http://www.microsoft.com/technet/security/topics/issues/w2kccscg/w2kscgcc.mspx
and repeated in
http://www.microsoft.com/technet/security/prodtech/win2000/win2khg/appxb.mspx
http://www.microsoft.com/Resources/Documentation/windowsserv/2003/all/techref/en-us/ntrights_remarks.asp
etc
Increase quotas (SeIncreaseQuotaPrivilege)

Allows a process that has Write Property access to another process to
increase the processor quota that is assigned to the other process. This
privilege is useful for system tuning, but it can be abused, as in a denial
of service attack.


AIUI this allows an account to change the process's quanta, which is used
by the scheduler algorithm when it comes time for the adjusted process to
get a time-slice so that the process gets a larger/smaller than normal
slice.
--
Roger
Post by Herb Martin
Post by Roger Abell
Post by Bodger
User Memory Quotas
We are trying to track down a problem that is plaguing us. It seems
even though we have a ton of memory and large pagefile.sys our programs
start to fail as if there is no memory in which to run.
We think there might be a Quota on our User that we are hitting some
maximum allowed memory quota.
http://www.microsoft.com/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/Default.asp?url=/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/541.asp
Post by Herb Martin
Post by Roger Abell
Post by Bodger
We have allowed the user and administrator to adjust the quotas, but we
cannot find out a way to query if a quota exists or how to set the
quota itself.
How do you find out if a quota exists? And how does one change it if
it does exist?
I cannot find any gui in our system that allows us to access it. Only
a local policy that allows the user to change quotas.
We are not a domain, only a workgroup of exactly one computer in a
remote site.
We have Windows 2003 Server in a remote data center, it is a web server
among other things. To support the services provided by the web
server we run some home grown programs in a logged in user account
running in the task bar. We have 4 gigs of memory (we rarely use over
700 megs) and 6 gigs of pagefile.sys.
When our programs have been running for a long time the memory does
creep up, but the total between them is no more then 100 megs. We run
Active State Perl programs as well that come and go as necessary.
Roger Abell
2004-12-23 10:03:43 UTC
Permalink
This is curious Herb, but based on the Whidbey era doc
http://whidbey.msdn.microsoft.com/library/en-us/dllproc/base/setprocessworkingsetsizeex.asp
SE_INC_BASE_PRIORITY_NAME is the needed privilege
to use SetProcessWorkingSetSizeEx to increase either the minimum
or maximum WS size to greater than the then-current values.
This privilege is in the GUI "Increase Scheduling Priority"
i.e. SeIncreaseBasePriorityPrivilege

So, to impact the time-slice results from the scheduler algorithm
one needs the "quota" priv, but to alter the mem allocation algorithms
one needs the "scheduling" priv.
Figures.
--
Roger Abell
Bodger
2004-12-23 17:19:52 UTC
Thank you for your replies.

Following most of your descriptions, it seems the quotas are on a
per-process basis. If that is true, I think this is not my problem. The
problem can best be described as the system hitting a brick wall.
Programs that are running continue to run but behave oddly, and new
programs will not start (they fail out of the starting gate). I have
started a performance log to monitor various counters, such as
available memory, every 2 minutes. But I find it hard to believe we
have run out of physical and virtual memory, since between them we have
10 gigs. Our best post-mortem analysis indicates that this happens when
one or both of the following is true: we have a high number of Perl
processes running, and/or our main programs are using a larger amount of
memory than usual. Our programs do have a memory creep, but each one is
using only about 25 megs and we have 5 of them running, so that does
not seem too taxing for this machine.

So assuming that quotas are not my problem, could the logged in user be
limited on the overall number of handles that are open, or overall
memory consumed or overall resources in general consumed?

Bodger
Herb Martin
2004-12-23 19:12:14 UTC
Post by Bodger
Thank you for your replies.
Following most of your descriptions, it seems the quota's are on a per
process basis. If that is true, I think this is not my problem. The
problem can best be described as the system hits a brick wall.
We were sort of just free chatting while waiting
for you to report something concrete we can try to
analyze ....

If you notice we aren't even 100% sure what you mean
by memory quotas.

Especially now that you say the "system hits a brick wall".

You need to report the exact symptoms and something about
what you see in TaskMgr.

Now, that might not make it clear -- I know of one program
that SEEMS to cause something that might be described this
way, when it runs the system out of Handles or GDI objects.

(The particular program displays hundreds or thousands
of icons and pictures.)

When the problem occurs, there is memory to spare and no
obvious limit for the objects above is reached, but other
programs like Outlook and even Explorer itself will have
trouble opening new windows, etc.
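That failure mode -- programs already running keep working, while anything that needs a fresh allocation fails outright -- can be sketched as a fixed-size shared pool. This is purely illustrative: the class and the numbers below are invented for the sketch, not the real Windows heap or handle table.

```python
# Illustrative only: models a fixed-size shared pool (in the spirit of a
# handle table or desktop heap), NOT the actual Windows allocator.

class FixedPool:
    def __init__(self, capacity):
        self.capacity = capacity  # total units available, system-wide
        self.used = 0

    def allocate(self, units):
        # A new allocation fails outright once the pool is exhausted,
        # even though plenty of ordinary RAM may still be free.
        if self.used + units > self.capacity:
            return False
        self.used += units
        return True

    def release(self, units):
        self.used -= units

pool = FixedPool(capacity=100)

# Long-running programs slowly creep upward ("memory creep")...
assert pool.allocate(90)

# ...until a new program cannot even start ("fails out of the gate"),
# while the programs already holding allocations continue to work.
assert not pool.allocate(20)

# Releasing resources (the "storm" passing, or a reboot) clears it.
pool.release(50)
assert pool.allocate(20)
```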
--
Herb Martin
Bodger
2004-12-23 21:04:46 UTC
First, we have 4 gigs of RAM and 6 gigs of virtual memory in 2
pagefile.sys files (one on the C drive and one on the E drive). We have
2 CPUs and 273 gigs of disk.

Software installed:

- Sun Webserver 6.1 (it is current).
- MySQL (not sure version)
- MailTraq (current version) is handling our SMTP and POP3 needs.
- ActiveState Perl (handling CGI requests from Sun Webserver).

- And a homebrew application (with 5 instances) that uses TAPI heavily
for dialing out.

- We run the homebrew application in a logged in user and can see the
applications in the task bar.

The situation is as follows:

99% of the time everything works smoothly. But periodically all heck
breaks loose, and the best way to describe it is to say it is like we
hit a brick wall.

When the failure happens the following is true:

- TAPI fails (we wrote the TAPI application so we are intimate with
what is failing). If you are familiar with TAPI the function
lineInitialize (very first step in using TAPI) returns an error.

- New programs do NOT start, they just fail at execution.

- Mailtraq does not seem affected it continues to function.

- The Webserver cannot start any perl programs to handle CGI forms.

- Our homebrew application cannot start any programs (we have it start
a program to email us if there is a failure with TAPI)

- Already running perl programs crash.

Sometimes this "storm" goes away after time and things go back to
normal by themselves, but most of the time we do a reboot to clear the
situation.

When a storm is happening either or both of the following are true:

- We have a higher volume of perl programs running at the time

or

- Our homebrew application has higher than normal memory usage.

Now, our homebrew application does suffer from memory leaks, but the
problem starts occurring when the instances are at 25 megs (not very
high), so times 5 we are talking a max of 125 megs for all 5 programs.

The current theory we are working under is that there is some
maximum amount of memory or handles or some other resource that can be
open at any given time for a logged-in user. We feel that all our
programs added together are hitting that artificial limit.

We are nowhere near filling up physical RAM, much less physical and
virtual put together, so we feel this is some artificial limit imposed
by the OS.

I have NEVER seen the Commit Charge Peak greater than 800 megs.

But during the storms the Task Manager does not respond, so I cannot
confirm the Commit Charge/Peak value. The CPU must be pegged, or the
Task Manager would have responded.

Are there any artificial limits to the max amount of memory, handles
etc that a logged in user can use?
Any other ideas on what to do or look for?

Thank you

Julian Brown
Roger Abell [MVP]
2004-12-23 22:12:08 UTC
OK, first a couple of prefatory remarks:
1. I did not follow the link in your initial post until after the
exchange about what the quota policy did, so I was running
on an assumption about which policy it was, rather than seeing
from the link that it was the new W2k3 one.
2. On an x86 system, having such large memory does not
mean much once you get to the two gigs available to user space,
although you can throw the three-gig switch.

It sounds like you may be running out of heap for system objects.
You might want to review the following KB and related info:
http://support.microsoft.com/?id=184802
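For reference, the KB article above centers on the SharedSection=xxxx,yyyy,zzzz portion of the Windows value under the Session Manager\SubSystems registry key. Here is a small sketch of pulling those sizes out of such a value string; the sample string below is only an example, so check the value on your own machine rather than trusting these numbers.

```python
import re

def parse_shared_section(windows_value):
    """Extract the SharedSection sizes (in KB) from the SubSystems\\Windows
    registry value string described in KB 184802.

    Returns a 3-tuple; per the KB, the entries are the system-wide heap,
    the interactive desktop heap, and the non-interactive desktop heap.
    The third number may be absent in older configurations."""
    m = re.search(r"SharedSection=([\d,]+)", windows_value)
    if not m:
        raise ValueError("no SharedSection= parameter found")
    parts = [int(p) for p in m.group(1).split(",")]
    # Pad to three entries; the non-interactive size is optional.
    while len(parts) < 3:
        parts.append(None)
    return tuple(parts[:3])

# Example value string (illustrative only; inspect your own registry):
value = (r"%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows "
         r"SharedSection=1024,3072,512 Windows=On SubSystemType=Windows")
print(parse_shared_section(value))  # -> (1024, 3072, 512)
```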
--
Roger Abell
Bodger
2004-12-23 22:31:31 UTC
Roger

Wow, that might be the answer. We are reviewing it now. If we make
this change we will do it next week, can't afford to risk it over
Christmas vacation.

Thanx

Julian
Bodger
2004-12-23 22:45:00 UTC
Hmm, something bothers me. Although this is likely our problem, it
seems hard to believe that there is a fixed-size "system-wide 48 MB
buffer" that all desktops are allocated from.

Can this 48 MB buffer be increased?

I am afraid if I tinker with it, some of my services will not properly
run.

Thanx

Julian
Roger Abell
2004-12-23 23:33:31 UTC
That is exactly the gotcha here. AFAIK the limit is hard-wired
and was set back when memory was much more limited and
expensive, and when systems were single-user machines.
The issue is how many winstation desktops there are on your
running system, as it seems you may need to increase this
(or to figure out why the app is consuming so much heap and
adjust the design so objects are released at the earliest
opportunity), but that increase in the size of each impacts the
total number of allocations you have available. I would be glad
to find out there is a documented way to adjust this 48 meg size,
but I have not run across that info.
--
Roger Abell
Microsoft MVP (Windows Security)
MCSE (W2k3,W2k,Nt4) MCDBA
Herb Martin
2004-12-23 23:40:56 UTC
Post by Roger Abell
That is exactly the got ya here. AFAIK the limit is hard-wired
and was set back when memory was much more limited and
expensive, and when systems were single user machines.
The issue is how many winstation desktops are there on your
running system as it seems you may need to increase this
(or to figure out why the app is consuming so much heap and
adjust design so objects are released at earliest opportunity)
but that increase in size of each impacts the total number of
allocations you have available. I would be glad to find out
there is a doc'd way to adjust this 48 meg size, but I have not
run on that info.
Did I misread, or isn't this what your referenced article
describes? (How to change these heaps...)

http://support.microsoft.com/?id=184802

This may actually help my 1 GIG system that has trouble
with the program which allocates 10,000 GDI and Handle
objects.
--
Herb Martin
Roger Abell
2004-12-24 00:29:56 UTC
It is pretty easy to misread. This reg key controls the limits
for a session, that is, a winstation. The 48 meg is a systemwide
pool from which all desktops get their piece, as controlled by
this reg key.
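A back-of-envelope sketch of the trade-off: assuming a fixed 48 MB pool and per-desktop heap sizes like those in the SharedSection parameter (the specific figures here are illustrative), raising the per-desktop heap shrinks how many desktops the pool can hold.

```python
# Back-of-envelope only: assumes a fixed 48 MB systemwide pool from
# which each desktop carves out its heap, per the KB discussion.

POOL_KB = 48 * 1024  # 48 MB pool, expressed in KB

def max_desktops(desktop_heap_kb, pool_kb=POOL_KB):
    """Upper bound on how many desktops of a given heap size the pool
    can hold (ignores other fixed overheads, so the real number is lower)."""
    return pool_kb // desktop_heap_kb

# With a 3072 KB interactive-desktop heap (one commonly cited value):
print(max_desktops(3072))  # -> 16

# Raising the per-desktop heap to 8192 KB buys headroom per desktop
# but cuts how many desktops fit in the same pool:
print(max_desktops(8192))  # -> 6
```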
--
Roger
Herb Martin
2004-12-24 02:50:21 UTC
Post by Roger Abell
It is pretty easy to misread. This reg key controls the limits
for a session, that is a winstation. The 48 meg is systemwide
pool from which all desktops get their piece as controlled by
this reg key.
Do we think such problems are due mostly to the
system-wide limit or to the individual winstation?

(Most people on a non-TS machine would be using
one WinStation and one desktop -- OK, technically
a couple of desktops, since the logon screen and,
I believe, the screensaver desktop are technically
two others.)

I haven't written any WinStation code for years but
my guess would be that TS sessions are run in
separate WinStations or at least separate desktops.
--
Herb Martin
Roger Abell [MVP]
2004-12-24 04:17:53 UTC
Post by Herb Martin
Post by Roger Abell
It is pretty easy to misread. This reg key controls the limits
for a session, that is a winstation. The 48 meg is systemwide
pool from which all desktops get their piece as controlled by
this reg key.
Do we think such problems are due mostly to the
system wide limit or the individual winstation?
Indeed. That is the key question the OP's team must face --
assuming we have found their root issue, that is.
It would seem they should be able to reason out which is
operative, exhaustion of the pool or of an account's allocation,
based on what they know of their "wall" behaviors.
--
Roger
Herb Martin
2004-12-24 05:00:05 UTC
Permalink
Post by Roger Abell [MVP]
Indeed. That is the key question the OP's team must face,
assuming we have found their root issue that is.
It would seem they should be able to reason out which,
exhaustion of the pool or of an account's allocation, is
operative based on what they know of their "wall" behaviors.
I don't think they are even looking at counters yet --
all requests for detail get more stuff about software
and hardware models not the memory usage and other
counter stuff I (we, I presume) would be looking at
were we there.
Roger Abell [MVP]
2004-12-24 05:38:17 UTC
Permalink
Post by Herb Martin
Post by Roger Abell [MVP]
Indeed. That is the key question the OP's team must face,
assuming we have found their root issue that is.
It would seem they should be able to reason out which,
exhaustion of the pool or of an account's allocation, is
operative based on what they know of their "wall" behaviors.
I don't think they are even looking at counters yet --
all requests for detail get more stuff about software
and hardware models not the memory usage and other
counter stuff I (we, I presume) would be looking at
were we there.
Well, one key piece of info that would be quick is how many
accounts show up as process contexts under the User column
in Task Manager. If there are not very many, then they are likely
dealing with the limit impacted by the second number in the parameter
of the key discussed in the first KB about user32/kernel32 (again, if
this is their culprit), but the question is whether to address it in the
app design/quality or by increasing the parameter value, or both.
I know I have increased it on my workstations just to keep IE from
fudging out on me after too many windows are open, but then there
are usually only about 8 or 10 different identities owning processes
(mostly due to SQL, IIS, and SFU).
--
ra
Bodger
2004-12-24 17:10:00 UTC
Permalink
Let me know what parameters and other things you want me to report on
and I will be glad to report on it.

Regarding the Task Manager, today it is quiet (Christmas Eve)

Our user: 14 processes (this will be steady)
LOCAL SERVICE: 1 process
NETWORK SERVICE: 2 processes
SYSTEM: 32 processes

Note the webserver and the perls that support the web server run in the
SYSTEM process, and from our performance logs, we start having troubles
when we get upwards of 50 perls per minute running. Typically those
perls come and go quickly but a few stick around so I do not have an
analysis as to how many are active at one time.
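Per-account counts like the ones above can be tallied quickly from `tasklist /v /fo csv` output. A small Python sketch, assuming that CSV column layout; the sample rows are invented for illustration, not captured from this server:

```python
import csv
from collections import Counter
from io import StringIO

# Illustrative `tasklist /v /fo csv` output (invented sample rows):
SAMPLE = '''"Image Name","PID","Session Name","Session#","Mem Usage","Status","User Name","CPU Time","Window Title"
"perl.exe","1200","Console","0","8,120 K","Running","NT AUTHORITY\\SYSTEM","0:00:01","N/A"
"perl.exe","1244","Console","0","7,904 K","Running","NT AUTHORITY\\SYSTEM","0:00:00","N/A"
"myapp.exe","1312","Console","0","22,048 K","Running","SERVER\\bodger","0:00:12","N/A"
'''

def processes_per_account(csv_text):
    """Count how many processes each account owns."""
    return Counter(row["User Name"]
                   for row in csv.DictReader(StringIO(csv_text)))

print(processes_per_account(SAMPLE))
# Counter({'NT AUTHORITY\\SYSTEM': 2, 'SERVER\\bodger': 1})
```

Piping real `tasklist /v /fo csv` output into a file and reading it with the same function gives the account breakdown without watching Task Manager by hand.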
Thanx

Julian a.k.a Bodger
Roger Abell
2004-12-24 19:20:56 UTC
Permalink
That makes it sound like it is the per-context allocation
rather than the system-wide pool.
I am really, really tending to believe that you need to
look at what might be leaking from the perl routines.
If you are not seeing other principals, then you are having
these all run by IIS in a worker process pool of a single
principal.

Have you looked at the features of IIS 6 that let you
cycle the worker pools on different triggers, including
how long they have lived ??
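For anyone who is on IIS 6, those recycling triggers can be set from the stock admin scripts. A sketch assuming the default %SystemDrive%\Inetpub\AdminScripts location and a pool named DefaultAppPool (substitute your own pool); PeriodicRestartTime is in minutes, PeriodicRestartRequests is a request count, and PeriodicRestartMemory is virtual-memory KB:

```
REM Recycle the pool after 29 hours of uptime (minutes):
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/AppPools/DefaultAppPool/PeriodicRestartTime 1740

REM ...or after 10,000 requests:
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/AppPools/DefaultAppPool/PeriodicRestartRequests 10000

REM ...or once the worker's virtual memory passes ~500 MB (value in KB):
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/AppPools/DefaultAppPool/PeriodicRestartMemory 512000
```

(As it turns out later in the thread, the OP runs Sun WebServer rather than IIS, so this applies only to readers who are on IIS 6.)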
--
Roger Abell
Microsoft MVP (Windows Security)
MCSE (W2k3,W2k,Nt4) MCDBA
Post by Bodger
Let me know what parameters and other things you want me to report on
and I will be glad to report on it.
Regarding the Task Manager, today it is quiet (Christmas Eve)
Our user 14 processes (this will be steady)
LOCAL SERVICE 1 process
NETWORK SERVICE 2 processes
SYSTEM 32 processes
Note the webserver and the perls that support the web server run in the
SYSTEM process, and from our performance logs, we start having troubles
when we get upwards of 50 perls per minute running. Typically those
perls come and go quickly but a few stick around so I do not have an
analysis as to how many are active at one time.
Thanx
Julian a.k.a Bodger
Herb Martin
2004-12-24 20:38:05 UTC
Permalink
Post by Bodger
Note the webserver and the perls that support the web server run in the
SYSTEM process,
That would be "System account" -- there is no
system process per se.
Post by Bodger
and from our performance logs, we start having troubles
when we get upwards of 50 perls per minute running.
I have previously mentioned memory usage by process,
and both Paged and Nonpaged Pool, especially the latter.

(If you use the TaskMgr to see these, then just the top few
users when you sort by each column will likely be of
interest.)

You might also note the leading GDI objects and Handles
processes.
Post by Bodger
Typically those
perls come and go quickly but a few stick around so I do not have an
analysis as to how many are active at one time.
You might also try this problem on the IIS group --
although I use Perl regularly, I am NOT allocating
so many instances through the web or other servers.

I am not sending you away, just pointing out where
more expertise may lie.
--
Herb Martin
Post by Bodger
Let me know what parameters and other things you want me to report on
and I will be glad to report on it.
Regarding the Task Manager, today it is quiet (Christmas Eve)
Our user 14 processes (this will be steady)
LOCAL SERVICE 1 process
NETWORK SERVICE 2 processes
SYSTEM 32 processes
Thanx
Julian a.k.a Bodger
Bodger
2004-12-24 22:27:45 UTC
Permalink
We use the Sun WebServer 6.1 not IIS. We talked about the Paged and
Non-Paged Pool earlier, I will start recording those parameters as well
as I can.

Thanx

Julian
Herb Martin
2004-12-24 23:05:28 UTC
Permalink
Post by Bodger
We use the Sun WebServer 6.1 not IIS. We talked about the Paged and
Non-Paged Pool earlier, I will start recording those parameters as well
as I can.
Fine, contact those who use Sun, but my thinking
was along the lines of asking people who regularly
run large numbers of Perl processes on Windows
servers.

Recording those parameters may not be as important
as just seeing which process(es) has (have) the most
of those resources in use.

Recording is good, but getting a feel of what is
happening may be enough.
Roger Abell
2004-12-26 05:54:36 UTC
Permalink
Just keep in mind that in the post with multiple KB citations,
the one indicates that one cannot monitor desktop heap use,
so perfmon will apparently only get you so far relative to
that aspect. I imagine you have to hook up the debugger.
--
Roger
Post by Bodger
We use the Sun WebServer 6.1 not IIS. We talked about the Paged and
Non-Paged Pool earlier, I will start recording those parameters as well
as I can.
Thanx
Julian
Roger Abell
2004-12-24 00:23:12 UTC
Permalink
Hey, check this ancient KB out
http://support.microsoft.com/default.aspx?scid=kb;en-us;156484
as it indicates that (then) perl multithreaded apps were known to
show desktop heap leaks if processes terminated prematurely.

Anyway, I had to poke around, as its being hard-wired seemed
unlikely - undoc'd maybe, but hard-wired no. I think this little
bit of info published for terminal server may be the light . . .
http://support.microsoft.com/default.aspx?scid=kb;en-us;840342
I would be pretty cautious about fooling with this until you have
determined that it is not actually a misbehavior that is causing
excess heap consumption, and until you have figured out the answer to
the next . . .

However, notice that your issue may be the number of desktops
consumed, making you run out of the buffer pool, rather than any
one desktop running out of its allocation. In fact, this might
seem more likely given some of what you have said about that
wall you run into.
For IIS specific issues take a look at
cgi apps in IIS pre-v6
http://support.microsoft.com/default.aspx?scid=kb;en-us;217202
conserving desktop consumption in IIS 6
http://support.microsoft.com/default.aspx?scid=kb;en-us;831135

Other related KBs uncovered :
http://support.microsoft.com/default.aspx?scid=kb;en-us;142676
which indicates use of the third SharedSection parameter to limit
size of a non-interactive desktop heap allocation
http://support.microsoft.com/default.aspx?scid=kb;en-us;318677
mentioned if only because it confirms that "You cannot check the
usage status of the desktop heap while an application is running"
http://support.microsoft.com/default.aspx?scid=kb;en-us;126962
very old, mentions how NT 3.5 had a much smaller heap allocation
than did NT 3.1 (the same as what is now current) in order to "increase
performance". My gosh, did they walk this as a linked list !!
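For concreteness, the SharedSection numbers those KBs keep referring to sit inside the (much longer) Windows value of the SubSystems key. A trimmed, illustrative fragment follows; in practice you edit only the three SharedSection numbers in place and never replace the whole value:

```
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems]
"Windows"="... SharedSection=1024,3072,512 ..."
```

The first number is the system-wide shared heap (KB), the second the interactive desktop heap, and the third -- the one KB 142676 discusses -- the non-interactive desktop heap.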
--
Roger Abell
Microsoft MVP (Windows Security)
MCSE (W2k3,W2k,Nt4) MCDBA
Post by Bodger
Hmm, something bothers me. Although this is likely our problem, it
seems hard to believe that there is a fixed-size "system-wide 48 MB
buffer" that all desktops are allocated from.
Can this 48 MB buffer be increased?
I am afraid that if I tinker with it, some of my services will not run
properly.
Thanx
Julian
Bomber
2005-01-04 17:19:10 UTC
Permalink
I read all the documents that you listed. I also have an issue with the
memory management. My issue revolves around article 184802, cause 2.

My server is Windows 2000 (all versions).

The application I run enables me to reduce the heap value from the default
512 to 128, which enables 160+ applications to be run. However, this
cripples other applications such as our network backup.

It seems the solution is to change the 'system-wide buffer of 48 MB
(user32.dll)'; how can this be modified?

Regards

Richard
Post by Roger Abell
Hey, check this ancient KB out
http://support.microsoft.com/default.aspx?scid=kb;en-us;156484
as it indicates that (then) perl multithreaded apps were known to
show desktop heap leaks if processes terminated prematurely.
Anyway, I had to poke around, as its being hard-wired seemed
unlikely - undoc'd maybe, but hard-wired no. I think this little
bit of info published for terminal server may be the light . . .
http://support.microsoft.com/default.aspx?scid=kb;en-us;840342
I would be pretty cautious about fooling with this until you have
determined that it is not actually a misbehavior that is causing
excess heap consumption, and until you have figured out the answer to
the next . . .
However, notice that your issue may be the number of desktops
consumed, making you run out of the buffer pool, rather than any
one desktop running out of its allocation. In fact, this might
seem more likely given some of what you have said about that
wall you run into.
For IIS specific issues take a look at
cgi apps in IIS pre-v6
http://support.microsoft.com/default.aspx?scid=kb;en-us;217202
conserving desktop consumption in IIS 6
http://support.microsoft.com/default.aspx?scid=kb;en-us;831135
http://support.microsoft.com/default.aspx?scid=kb;en-us;142676
which indicates use of the third SharedSection parameter to limit
size of a non-interactive desktop heap allocation
http://support.microsoft.com/default.aspx?scid=kb;en-us;318677
mentioned if only because it confirms that "You cannot check the
usage status of the desktop heap while an application is running"
http://support.microsoft.com/default.aspx?scid=kb;en-us;126962
very old, mentions how NT3.5 had a much smaller heap allocation
than did NT 3.1 (same as what is now current) in order to "increase
performance". My gosh, did they walk this as a linked list !!
--
Roger Abell
Microsoft MVP (Windows Security)
MCSE (W2k3,W2k,Nt4) MCDBA
Post by Bodger
Hmm, something bothers me. Although this is likely our problem, it
seems hard to believe that there is a fixed-size "system-wide 48 MB
buffer" that all desktops are allocated from.
Can this 48 MB buffer be increased?
I am afraid that if I tinker with it, some of my services will not run
properly.
Thanx
Julian
Roger Abell
2005-01-05 07:04:12 UTC
Permalink
Nope, but good question.
I have not found public info on this, but the mention of the
http://support.microsoft.com/default.aspx?scid=kb;en-us;840342
(TS in the W2k3 edition at least) reg values in the
CCS\Control\Session Manager\Memory Management
reg key sounds real close to the ones controlling the heap.

You may need to open a support incident with PSS if this
has high impact on your environment.
--
Roger Abell
Microsoft MVP (Windows Security)
MCSE (W2k3,W2k,Nt4) MCDBA
Post by Bomber
I read all the documents that you listed. I also have an issue with the
memory management. My issue revolves around article 184802, cause 2.
My server is Windows 2000 (all versions).
The application I run enables me to reduce the heap value from the default
512 to 128, which enables 160+ applications to be run. However, this
cripples other applications such as our network backup.
It seems the solution is to change the 'system-wide buffer of 48 MB
(user32.dll)'; how can this be modified?
Regards
Richard
Post by Roger Abell
Hey, check this ancient KB out
http://support.microsoft.com/default.aspx?scid=kb;en-us;156484
as it indicates that (then) perl multithreaded apps were known to
show desktop heap leaks if processes terminated prematurely.
Anyway, I had to poke around, as its being hard-wired seemed
unlikely - undoc'd maybe, but hard-wired no. I think this little
bit of info published for terminal server may be the light . . .
http://support.microsoft.com/default.aspx?scid=kb;en-us;840342
I would be pretty cautious about fooling with this until you have
determined that it is not actually a misbehavior that is causing
excess heap consumption, and until you have figured out the answer to
the next . . .
However, notice that your issue may be the number of desktops
consumed, making you run out of the buffer pool, rather than any
one desktop running out of its allocation. In fact, this might
seem more likely given some of what you have said about that
wall you run into.
For IIS specific issues take a look at
cgi apps in IIS pre-v6
http://support.microsoft.com/default.aspx?scid=kb;en-us;217202
conserving desktop consumption in IIS 6
http://support.microsoft.com/default.aspx?scid=kb;en-us;831135
http://support.microsoft.com/default.aspx?scid=kb;en-us;142676
which indicates use of the third SharedSection parameter to limit
size of a non-interactive desktop heap allocation
http://support.microsoft.com/default.aspx?scid=kb;en-us;318677
mentioned if only because it confirms that "You cannot check the
usage status of the desktop heap while an application is running"
http://support.microsoft.com/default.aspx?scid=kb;en-us;126962
very old, mentions how NT3.5 had a much smaller heap allocation
than did NT 3.1 (same as what is now current) in order to "increase
performance". My gosh, did they walk this as a linked list !!
--
Roger Abell
Microsoft MVP (Windows Security)
MCSE (W2k3,W2k,Nt4) MCDBA
Post by Bodger
Hmm, something bothers me. Although this is likely our problem, it
seems hard to believe that there is a fixed-size "system-wide 48 MB
buffer" that all desktops are allocated from.
Can this 48 MB buffer be increased?
I am afraid that if I tinker with it, some of my services will not run
properly.
Thanx
Julian