How much RAM is enough per client? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Could anybody tell,
How much RAM is enough per client? Thanks.

Can you be more specific? If I had to guess, I would assume you are asking about the "Minimum Memory Per Query" setting (it is in the server properties window in SQLEM, Memory tab, or use sp_configure). It is 1024KB by default, but a good performance-boosting trick is to increase it to a more reasonable level. It is the actual memory reserved as an initial value per connection. Just be careful not to exceed the physical memory!
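For reference, a sketch of checking and raising that option via sp_configure ('min memory per query' is an advanced option in SQL Server 7.0/2000; the value 2048 below is just an example, not a recommendation):

```sql
-- Expose advanced options so 'min memory per query' is visible.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Current value (default is 1024 KB).
EXEC sp_configure 'min memory per query'

-- Raise the per-query floor to 2048 KB; keep the total well under physical RAM.
EXEC sp_configure 'min memory per query', 2048
RECONFIGURE
```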
Sorry about misunderstanding. The question is really about physical memory.
I wonder whether 2GB is enough for 200-300 concurrent OLTP clients. (NT4, SQL7)
Thanks.
You can’t use more anyway on this platform. 2GB is the max NT4/SQL7 can use.
Stanley is correct, so unless you can upgrade your software versions, you will have to live with a maximum of 2GB. Unfortunately, it is very hard to predict how much RAM is needed per client. It depends on so many different factors that coming up with a figure is very difficult. If your system is already in production, watch the buffer cache hit ratio. If it is above 99%, then memory is probably not a major issue for the time being. If you are planning a new system, the only way to know for sure is to simulate the load on a test system and see what happens. What we do to resolve this question (and while it works, it is expensive) is always order our SQL Servers with 4GB RAM minimum, use Windows 2000 Advanced Server, and use SQL Server 2000 Enterprise. This way, if the 4GB is not enough, we can still add more if need be once production starts. Anybody got any better, less expensive ideas?
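If you want a quick number without opening Performance Monitor, the ratio can also be read from master..sysperfinfo on SQL 7/2000. A sketch (note the raw counter has to be divided by its base counter to get a percentage):

```sql
-- Buffer cache hit ratio = raw counter / base counter * 100 (SQL 7/2000).
SELECT 100.0 * r.cntr_value / b.cntr_value AS buffer_cache_hit_ratio_pct
FROM   master..sysperfinfo r
JOIN   master..sysperfinfo b
       ON b.object_name = r.object_name
WHERE  r.object_name LIKE '%Buffer Manager%'
  AND  r.counter_name = 'Buffer cache hit ratio'
  AND  b.counter_name = 'Buffer cache hit ratio base'
```

A value consistently above 99% suggests the working set fits in cache, per the rule of thumb above.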
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
If UFOBIX wants to stick with the NT4 environment, and is certain that 2GB isn’t enough, then he can upgrade the RAM to 4GB and install a second instance of SQL Server 7. That way both servers will take advantage of the whole RAM installed. However, this will involve some re-development, as part of the functionality will have to move to the second instance. This is also a good setup (unless you have unlimited HW resources :)) for separating the distributor in a replication environment.

Cheers,
Stanley
My experience tells me about 500KB of memory is required per client connection. In your case, with 200-300 users, you’ll need 100 to 150 MB of memory for client connections alone. The size of your database will determine how much DB cache you’ll need. 2GB is not the max memory that can be allocated to SQL 7 on NT. You can use the /3GB boot.ini switch to allocate 3GB of RAM to SQL. Check out http://www.sysinternals.com/ntw2k/info/bootini.shtml for more details.
To use the /3GB switch, and for it to work, you must have the Enterprise edition of both NT 4.0 and SQL Server 7.0. See: http://www.sql-server-performance.com/awe_memory.asp
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
FYI, the usage of memory and other resources by the OS depends on the MAX WORKER THREADS option. For example, if the maximum number of user connections to your SQL Server box is 50, you can set the ‘max worker threads’ option to 50. This frees up resources for SQL Server to use elsewhere. If the maximum number of user connections to your SQL Server box is 500, you can set the ‘max worker threads’ option to 500, which can improve SQL Server performance because thread pooling will not be used. HTH Satya SKJ
satya, this is new information to me. Would you happen to have a link to a source for this info? It was my understanding that MAX WORKER THREADS is a limit on the number of threads SQL will create before pooling is used. So for a MAX WORKER THREADS setting of 255 and 50 client connections, SQL would only create 50 connection threads.
thanks…
Check Books Online for this option and other configuration information. HTH Satya SKJ
The MAX WORKER THREADS option is one that I am not an expert on, so I did some research in Kalen Delaney’s book, Inside Microsoft SQL Server 2000. Here is a summary of what she has to say. The default setting for this option is 255. What this means is that SQL Server will allocate only as many worker threads as needed, not all 255 of them. For example, if there are 100 connections to SQL Server, then only 100 worker threads will be allocated. But if more than 255 connections are made, then worker threads will be shared among the existing connections. She also recommends generally leaving this setting alone, even if there are thousands of connections, as most of them will most likely not be active at the same time. Of course, this is a general rule of thumb, and each situation is different. In Ed Whalen’s book, Microsoft SQL Server 2000 Performance Tuning, he agrees with Kalen that this setting should only be changed under very heavy loads, and even then trial-and-error testing should be done to find the ideal setting. Books Online says that reducing the number may help performance in cases where there are few connections, but this conflicts with Kalen’s discussion, where she says that only as many threads are used as needed. If that is the case, then lowering the value shouldn’t make any difference in performance. Anyone got any good test figures on how this setting affects performance?
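For anyone who wants to run their own before/after tests, ‘max worker threads’ is also an advanced sp_configure option. A sketch (the value 500 is only an illustration, echoing the earlier post, not a recommendation):

```sql
-- Expose advanced options so 'max worker threads' is visible.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Default is 255; SQL Server only creates threads as connections demand them.
EXEC sp_configure 'max worker threads'

-- Example change for a heavily loaded server; measure before and after.
EXEC sp_configure 'max worker threads', 500
RECONFIGURE
```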
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
Hello, I am a new member of the forum. I have one urgent question:
we have SQL 2000 and Windows Advanced Server. We’ve just activated the /3GB option (in boot.ini). With which command, or in which Windows option, can we check/verify that the /3GB option is indeed activated? Thanks for any reply,
Fijnje
@Fijnje, use Task Manager on the server and check how much "mem usage" sqlservr.exe has. In my experience it will build up to 2.8 GB or so.
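Besides Task Manager, you can ask SQL Server itself how much memory it has committed. A sketch against master..sysperfinfo (SQL 2000):

```sql
-- How much memory SQL Server has actually committed, in KB.
-- With /3GB working, this can grow well past the usual 2GB ceiling.
SELECT counter_name, cntr_value
FROM   master..sysperfinfo
WHERE  object_name LIKE '%Memory Manager%'
  AND  counter_name LIKE 'Total Server Memory%'
```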
Also capturing the h/w counters using SYSMON. _________
Satya SKJ
Moderator
SQL-Server-Performance.Com

Thanks for the quick reply, guys. I’ve checked and compared the mem usage to another system where the /3GB option is not set yet. On 2 systems /3GB is activated and the systems rebooted. (All 3 have 3.9 GB RAM.) Mem usage on all 3 is currently approx. 1,600,000 K. (I couldn’t find the h/w counters using SYSMON.) So I am now not sure whether I activated the /3GB option the right way:
I added the entry in boot.ini:
=====================================
previously
=====================================
[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect
C:\CMDCONS\BOOTSECT.DAT="Microsoft Windows 2000 Recovery Console" /cmdcons
=====================================
=====================================
after adding /3Gb entry
=====================================
[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /3GB
C:\CMDCONS\BOOTSECT.DAT="Microsoft Windows 2000 Recovery Console" /cmdcons
=====================================
Thanks for any reply,
Fijnje
The AWE Memory SQL Server Performance Tuning Tips page says:
If you use AWE memory, you can track its performance characteristics using some special Performance Monitor counters found under the SQL Server Buffer Manager object. They include:

AWE Lookup Maps/Sec: This counter measures how many times a specific database page was requested by SQL Server, was found in the buffer pool, and then was mapped into AWE memory (the server’s virtual address space). This is a combination of the AWE Stolen Maps/Sec and AWE Write Maps/Sec counters described below.

AWE Stolen Maps/Sec: This counter measures how many times a free database buffer was taken by SQL Server and mapped as AWE memory.

AWE Write Maps/Sec: When SQL Server runs out of free buffers to map to AWE memory, it has to write to a dirty buffer instead, which hurts performance because a disk write has to occur to clean the dirty buffer before it can be used. This counter measures the number of times SQL Server has to map a dirty buffer. If this figure is high, more memory should be considered.

AWE Unmap Calls/Sec: Sometimes SQL Server will unmap buffers from AWE memory (because they have not been used lately). This counter measures how many times SQL Server calls for an unmap operation, which can affect one or more buffers at the same time.

AWE Unmap Pages/Sec: Closely related to the above counter, this one specifically measures the number of SQL Server buffers that are unmapped.
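These counters can also be pulled with a query instead of Performance Monitor. A sketch (SQL 2000):

```sql
-- All AWE-related Buffer Manager counters in one shot.
SELECT counter_name, cntr_value
FROM   master..sysperfinfo
WHERE  object_name LIKE '%Buffer Manager%'
  AND  counter_name LIKE 'AWE%'
```

If this returns no rows, AWE is most likely not enabled on the instance.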
HTH
_________
Satya SKJ
Moderator
SQL-Server-Performance.Com

Hi Satya, is the modification in my boot.ini (adding the /3GB line), as shown in my previous post, correct, and is the syntax correct? Thanks for your time,
Fijnje
I think there is an error: you need to use the switch on the same line as /fastdetect. So the entry should be multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect /3GB HTH. Gaurav
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

@Gaurav – that’s my take as well.
Having two lines in boot.ini means two options will be shown to the user when the system boots up. When you select either of them, you lose the switch on the other line (/fastdetect or /3GB). Gaurav
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
