Hardware upgrade recommendations | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Hardware upgrade recommendations

Hello, my company will in the near future launch an advertising campaign for our website that will hopefully increase our traffic 6-10 times over what we have now. For the time being our servers are doing just fine, but with an increase like this we will have to do some upgrades. We will be using pros to do the proper T-SQL optimizations and database tuning, but the need for further hardware is unavoidable, I believe. What we have now is a stand-alone server with dual AMD 2400+ 2GHz processors, 1GB RAM and RAID1. At present the server is running on average at about 20% processor capacity. What I'm wondering about is which options I have, and what are best practices when upgrading like this. I would assume that adding more processors and RAM would do the trick, but I would like to know more about it before a decision is made. Maybe you have some links where I can read more about it...? —
"Real programmers don’t document, if it was hard to write it should be hard to understand"
More processors and more RAM will be fine.
If you have good performance now, I suggest staying with the same RAID architecture, even if you are planning to upgrade the disks too.
How many processors can you add, and which SQL Server edition do you have? Remember that Standard Edition can only use up to 2GB of RAM.
Luis Martin …Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
Bertrand Russell
To be honest I don't know how many processors are supported by the motherboard, but I seriously doubt that it supports more than two (we rent dedicated servers from a hosting provider). And we run Standard Edition... —
"Real programmers don’t document, if it was hard to write it should be hard to understand"
If your average CPU is 20%, what are the peaks during heavy usage periods? 40% or more?
If your application is architected well, then you should get fairly linear throughput versus utilization up to 80%, i.e., you can expect about a 4X increase in average throughput on your current system.
However, you probably have less than 4X headroom for peak usage.
Also, above 70% CPU, individual query response times will become noticeably longer. A simple formula for predicting SMP performance in the absence of serious locking problems is:
P(n)/P(1) = S**(log2(n)), where n is the number of processors, S is ~1.5 for 32-bit processors, and P(n) is the performance for n processors; hence P(n) is specified relative to one processor.
There is plenty of public information on disk and memory. Note that disk and memory are not distinct requirements, but rather a balancing act: more memory means fewer disks, so use the cost of memory and disks to determine the best balance.
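Joe's formula can be sketched in a few lines of Python; the scaling factor S ≈ 1.5 is his estimate for 32-bit processors, not a measured constant for any particular box:

```python
import math

def smp_speedup(n_processors, s=1.5):
    """Predicted throughput of an n-processor SMP system relative to a
    single processor, using Joe's rule P(n)/P(1) = S**log2(n)."""
    return s ** math.log2(n_processors)

# Every doubling of processors multiplies predicted throughput by S:
for n in (1, 2, 4, 8):
    print(n, smp_speedup(n))
```

In other words, going from 2 to 4 processors buys roughly a 1.5X gain, not 2X, and each further doubling compounds the same factor.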
Refer to the information in the articles listed on this website. Satya SKJ
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.

The requirement for extra RAM will also depend on the size of your database. If it is really small and will pretty much entirely fit in 1GB, then adding RAM is not going to gain you anything. Typically the network would come first in the order of likely problem areas.

If the additional traffic is read-intensive and reading the same data (like a website which just gets more hits), then the disk is not likely to become stressed, as it won't have more work than it does now. The reads will come from RAM in this case, unless you are having RAM issues now.

If your application can do with some tuning, then depending on how it tunes up, you may not need any additional hardware. Something as simple as adding an index can take a query down from seconds to milliseconds. It is not uncommon to get a 10-, 100- or even 1000-fold improvement in performance.

Cheers
For more detail on what Joe means, read his article: http://www.sql-server-performance.com/jc_processor_performance_update.asp
Luis Martin …Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
Bertrand Russell
Joe: in peak hours the average is about 40%, with maximums of 90-100% on a few occasions. It seems like processor utilization is almost linear with the number of users. I will have to dig up my old books on mathematics to understand that formula of yours, but I'll manage, thanks a lot ;)

Twan: my database is really small at the moment, only about 140MB, so for the time being it fits just fine in RAM (however, Windows Task Manager says that sqlservr.exe consumes some 625MB of RAM; do you know why this is?). I'm unfortunately not too good with indexes myself, but I have received help from a SQL Server MVP recommended by Satya or Brad (can't remember which of them) and he sure worked some magic on my database. I will get some more help from this guy in the near future...

Luis: thanks for the link! I have been going through the list of articles but it must have slipped through, I guess...

--
Frettmaestro
"Real programmers don’t document, if it was hard to write it should be hard to understand"
Don't bother with the math book; the simple interpretation is that every 2X increase in processors nets a ~1.5-1.6X performance gain.
The performance of your 2P system is referenced to 1.0:
2P -> 1.0
4P -> 1.6
8P -> 1.6*1.6 = 2.56
The 90-100% spikes are probably from a table or index scan; you can use Profiler to chase that down.
If your peak-hours CPU is 40% and you want to support a 10X gain in traffic, your current system can support 2X. A hardware upgrade to 4P will net another 1.6X, for a total of 3.2X, which means you need to net ~3X from software tuning (or 5X from software without a hardware change).
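Joe's capacity arithmetic can be checked with a short Python sketch; the traffic target and utilization figures are the ones discussed in this thread, not general constants:

```python
target_gain = 10.0       # want to handle 10X current traffic
current_headroom = 2.0   # ~40% peak CPU today leaves roughly 2X headroom
upgrade_gain = 1.6       # going from 2P to 4P nets ~1.6X (every 2X in CPUs)

# Total gain available from hardware alone: existing headroom * upgrade.
hardware_total = current_headroom * upgrade_gain           # 3.2X

# Remaining gain that must come from software tuning.
tuning_with_upgrade = target_gain / hardware_total         # ~3.1X
tuning_without_upgrade = target_gain / current_headroom    # 5X

print(hardware_total, tuning_with_upgrade, tuning_without_upgrade)
```

The point of the exercise: even with the 4P upgrade, most of the 10X target has to come from query and index tuning, not hardware.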
Hi ya,

SQL Server by default takes as much memory as it can, until the operating system has about 4-5MB free, which is the point at which the OS would normally start to push stuff to the page file.

If your database is 140MB, then you should be able to tune it to work pretty efficiently. Profiler is a great tool; I normally set it to capture TSQL:BatchCompleted, TSQL:StmtCompleted, SP:RPCCompleted and SP:StmtCompleted, filtering to anything taking longer than 100ms in duration or CPU time. This usually gives a short-ish list of queries that need to be tuned.

Cheers
Twan
All right, thanks a lot fellas... I have gotten an idea of what I'm facing, and although quite scary, I think I'll manage somehow. I do have another question though, just out of curiosity: what happens when the number of users on your system exceeds the hardware limitations of one single server? Is it possible to cluster, say, 10 SQL Servers together...? Will the boost in performance per CPU still be about 1.6? This will of course not be the case for me for quite a while (probably never), but I'm just curious. The reason I'm asking is that my server is running at about 20% capacity with only about 30 simultaneous users, and this is not a lot for a website. The app is quite intense... —
"Real programmers don’t document, if it was hard to write it should be hard to understand"
Hi ya,

If your usage is going to exceed the capacity of a single database server (and you can afford a 32- or 64-CPU box with 512GB of RAM), then you may be in trouble...

Seriously, it depends on your application. If you have a read-only database (or the database can easily be split into a read-only part and an updateable part), then you may be able to load-balance multiple web servers, each with their own read-only database and sharing the updateable database.

Application tuning/database design is the first step though. Almost all the time this yields huge performance gains. The 2-way 2.4GHz box should easily be able to cope with a 140MB database, pretty much irrespective of the number of users, as long as the application is tuned and designed for speed ;-)

Cheers
Twan