
SQL Server Performance Forum – Threads Archive

SQL Server 2005 on consolidated hardware platform

Dear Gurus,
I have had a good read of the articles on this forum about hardware setups for SQL Server 2005, and the posts have been very interesting. I am looking for advice on running SQL Server 2005 in a consolidated environment. By way of background, I have mainly been working on Oracle enterprise systems using SANs, where my only real involvement in hardware is to ring up the Unix/SAN administrators and moan about performance. So I am a little rusty.
Here's the deal though. We have many SQL Server 2000 boxes (DL380s and DL385s), each running one SQL Server instance. The applications all tend to be "small": normally 10 to 20 GB in size, running on the servers' internal drives in RAID 5, and they don't pressure the CPUs at all. They are workgroup apps, not massive OLTP systems or warehouses. This setup is all fine until you have to license the instances, the monitoring software and the software to back up to off-site tape drives, all of which are licensed per CPU, and then pay FTEs to maintain and patch the ever-growing number of servers.
So for SQL Server 2005 we plan to run 16 of these small apps, each in its own instance, on lumpier hardware. I have spoken to Microsoft, who say SQL Server "plays nice". HP have been mooting discounts on Integrity hardware (rx6600 with Itanium 2 chips). I rather like the Integrity idea because of the amount of RAM you can add using 2 GB DIMMs, and databases love RAM. Anyway, the current (default) plan is to purchase HP DL585 G2 servers, each with 32 GB of RAM, with drives attached via an MSA50 or MSA60. The disks would be mainly 72 GB (either 10k 2.5" or 15k 3.5", depending on the enclosure) on a Smart Array P800 controller.
Now, I can see what will happen: the bean counters (managers) will want the smallest number of MSA enclosures per server, so that the attached disks are utilised as much as possible. I know from a performance point of view that this would be fairly crazy.
Have you any advice on running this sort of consolidated environment on this sort of hardware? Is it even sensible? I am thinking of things like multiple database logs on the same disks, tempdbs, patching, security and administration.

Other notes:
I am planning on RAID 1 or RAID 10, but I know RAID 5 will be pushed.
I am planning on limiting the memory of each instance (see the sketch below).
I am not planning to mix SQL Server 2000 and 2005.
I am not planning to allocate instances to specific CPUs.

Any advice welcome.
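For the memory caps, this is roughly what I have in mind per instance. The 1800 MB figure is only a placeholder; the real value would be 32 GB minus OS and other overhead, divided across however many instances share the box.

    -- cap this instance's buffer pool (run once per instance)
    -- 1800 MB is a placeholder value, not a recommendation
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 1800;
    RECONFIGURE;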
Simon

normally, the advice is that each db with a very active log should get its own dedicated disks for the log.
the problem in a consolidation environment is that this adds up to a lot of log disks. i would say each db that generates more than 200 log writes/sec definitely needs its own dedicated log disk (a pair in RAID 1).
any db generating fewer than 100 log writes/sec could probably share.
what exactly to do between 100 and 200 is a case-by-case call.
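to see where each db falls, something like the query below gives a rough rate once you are on 2005 (sys.dm_io_virtual_file_stats is 2005-only; on the existing 2000 boxes the perfmon counter SQLServer:Databases - Log Flushes/sec tells you much the same thing). the numbers are cumulative since instance start, so snapshot twice and divide the delta by the interval in seconds.

    -- cumulative write counts against each db's log file
    -- snapshot twice, N seconds apart, and divide the difference by N
    SELECT DB_NAME(vfs.database_id) AS db,
           vfs.num_of_writes,
           vfs.num_of_bytes_written
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    WHERE mf.type_desc = 'LOG';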
start with 4-6 external enclosures and tell the bean counters to quit bitching
they would bitch a lot harder if they didn't have a job

Thanks for the advice, Joe.
Can I ask a couple of follow-up questions?
(Let's say I get 4 enclosures.)
Would you then try to span each database over all four enclosures, or would you put databases a, b, c & d on enclosure 1, databases e, f, g & h on enclosure 2, and so on?
Would you favour putting log files on internal drives or on drives in the enclosures?
Simon
each high activity db's data files should be spread across as many physical disks as possible.
ex.
create 1 or 2 luns (or disk arrays) per external unit; say 4 ext units with 2 luns each, for a total of 8 luns.
then each db would have 8 data files per filegroup.
if a db has more than 1 filegroup, then each filegroup gets 8 files.
no preference on log location, other than that logs may not need a lot of disks, so the internal bays are a convenient place to find 2-8 drives for the logs.
note, a good place for the low activity db logs is the system drive
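a sketch of that layout, assuming the 8 luns are mounted as drives E: through L: (db name, file paths and sizes are all placeholders). sql server fills the files in a filegroup proportionally, so the data spreads itself across all 8 luns.

    -- one data file per lun, all in the primary filegroup
    CREATE DATABASE AppDb
    ON PRIMARY
        (NAME = AppDb_01, FILENAME = 'E:\data\AppDb_01.mdf', SIZE = 2GB),
        (NAME = AppDb_02, FILENAME = 'F:\data\AppDb_02.ndf', SIZE = 2GB),
        (NAME = AppDb_03, FILENAME = 'G:\data\AppDb_03.ndf', SIZE = 2GB),
        (NAME = AppDb_04, FILENAME = 'H:\data\AppDb_04.ndf', SIZE = 2GB),
        (NAME = AppDb_05, FILENAME = 'I:\data\AppDb_05.ndf', SIZE = 2GB),
        (NAME = AppDb_06, FILENAME = 'J:\data\AppDb_06.ndf', SIZE = 2GB),
        (NAME = AppDb_07, FILENAME = 'K:\data\AppDb_07.ndf', SIZE = 2GB),
        (NAME = AppDb_08, FILENAME = 'L:\data\AppDb_08.ndf', SIZE = 2GB)
    LOG ON
        -- log on its own spindles (internal bays or a dedicated RAID 1 pair)
        (NAME = AppDb_log, FILENAME = 'M:\logs\AppDb_log.ldf', SIZE = 4GB);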
Thanks again.