SQL Server Performance

Hardware Recommendation

Discussion in 'SQL Server 2005 Performance Tuning for Hardware' started by RJonesUSC, Oct 23, 2006.

  1. RJonesUSC New Member

    First off, thanks for this great resource. I've already learned so much from the articles and forum discussions.

    We're about to undergo a major migration. Our current sales/accounting software runs on a Progress database on SCO UNIX. The next upgrade of the software is currently being written for MSSQL 2005. I've received hardware recommendations from our vendor but after reading quite a bit here I am questioning whether or not those recommendations are the way to go.

    Here's a bit of info on our databases:
    - 2 databases of 10GB and 7GB. While testing, the 10GB database grew to 18GB when moving from Progress to MSSQL. I expect the database size to grow up to 4x over the next 5 years.
    - 100 users with up to 2 applications connecting to the database(s) at a time.
    - 200 record reads per second, 3 record updates per second, 2 record creates per second, 30 DB reads per second, 10+ DB writes per second. (These figures are from the SCO server.)
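    As a rough sanity check, those record rates translate into a fairly light random-IO load. This is only a back-of-envelope sketch using the figures above; the per-spindle IOPS value is an assumed planning number, not a measurement from the new hardware:

    ```python
    # Worst-case random-IOPS estimate from the workload figures above.
    # Assumptions (planning values, not measurements):
    # - each record read/update/create costs roughly one random disk IO
    #   in the worst case (zero buffer-cache hits)
    # - a single 15K spindle sustains ~150 random IOPS (conservative)

    record_reads_per_sec = 200
    record_updates_per_sec = 3
    record_creates_per_sec = 2

    worst_case_iops = (record_reads_per_sec
                       + record_updates_per_sec
                       + record_creates_per_sec)

    iops_per_15k_spindle = 150
    spindles_needed = -(-worst_case_iops // iops_per_15k_spindle)  # ceiling division

    print(worst_case_iops)   # 205
    print(spindles_needed)   # 2 -> only ~2 data spindles even with no caching
    ```

    In practice the buffer cache will absorb most of those reads, so this is an upper bound.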

    Here's the hardware recommendations from our hardware vendor:
    - HP DL380, dual 3.4 GHz Xeon
    - 8 GB RAM
    - MSA1000 SAN to store databases with 4 146GB 15K drives
    - SQL Server 2005 Standard
    - Windows Server 2003 Ent.

    After all the information I've learned from this site, here are some hardware changes I'm considering:
    - Move to dual-core CPUs, possibly 64-bit
    - Replace MSA1000 with MSA500G2
    - Use all 14 drive bays in the MSA to increase disk I/O.
    - Use 72GB disks in the following setup:
    -- Use a single RAID 10 array for logs on 4 disks
    -- Use a single RAID 10 array for both databases on the remaining 10 disks
    - SQL Server Ent.
    - 64-bit Windows and SQL

    Now for the questions:
    - What hardware changes would you recommend?
    - What RAID configurations would you recommend?
    - How much performance would be gained from moving to 64-bit?
    - How much performance would be gained from moving to SQL Ent.?
    - Anything else to add?

    Thanks in advance,
    Rick
  2. joechang New Member

    your vendor is clueless on performance, ignore their rec
    it is also an older config

    on the new, if both db are transactional and active at the same time
    1 RAID 1 array (2 disks) for each log may be better
    but for light load, the 4 disk RAID 10 might be ok,
    if your database shows high disk activity, make provisions for a 2nd rack of external storage, and move one of the logs to the new rack, using the remaining disks for data

    before accepting the upgrade purchase, i.e., sending them the check
    make sure they provide a Profiler trace and Performance Monitor logs of their new version running on SQL Server to make sure there are no serious performance gaffes
  3. RJonesUSC New Member

    Thanks for the quick reply.

    Would replacing the MSA1000 with the MSA500 be a good idea? How do they compare when it comes to throughput? Would a different external storage solution be a better choice?
  4. joechang New Member

    the MSA1000 has a max realizable sequential bw of approx 160MB/sec (2Gbit/sec FC, to disk, not cache)
    versus 480MB/sec for the MSA500 with 2 U320 channels, or at least that is what you can do with U320
    I am not sure if the MSA500 can actually do this

    if you do not need cluster support, just go with the MSA50 SAS or MSA30 SCSI solutions

  5. RJonesUSC New Member

    Thank you very much for the help. As far as the MSA30 and MSA50 go, how do they compare performance-wise? Is one preferred over the other? The servers I'm going with have the same SATA/SAS drives used in the MSA50, so I was leaning toward that solution...unless the MSA30 outperforms the MSA50 by a large degree.
  6. joechang New Member

    the MSA30 is older U320 technology
    the MSA50 is newer SAS, but uses SFF disks, which are more expensive and better suited to high density requirements
    SAS can support more sequential bandwidth
    both are about the same in random IO (10K SAS vs 10K SCSI), but check when 15K SAS SFF disks are available.

    I would prefer to go with SAS for new deployments
    but I am not convinced on going with SFF (2.5" disks) exclusively
    plan on a mix of SFF and LFF(3.5" disks) as appropriate

    the only real reason to continue with MSA30 and SCSI is interchangeability with older equipment
  7. Tom Metzie New Member

    But note that 3.5" SAS drives have much larger latency than equivalent 2.5" SAS drives.

    I checked the 72GB 10K rpm 2.5" against the 72GB 15K rpm 3.5", and the latency figures (from HP) were 3 times higher for the 3.5" disks.

    Easy choice; I went with the MSA50 and 2.5" Small Form Factor (SFF) SAS.

    Not too much difference between the 36 and 72GB drives, but the 146GB drives take a hit in comparison.

    HP suggested that 3.5" Large Form Factor (LFF) SAS is for low-end use, i.e. bulk storage, and that 2.5" SFF SAS is the one to go for if you want performance.

    So for me: I have legacy MSA30 with U320 drives, but from now on it's the MSA50 and SFF SAS.

    (There is an MSA70 being announced in the new year, as listed at the TPC council, but what that will do is anyone's guess.)

    And finally, if I understand things right, the SAN option doesn't offer nearly the speed of direct attached storage.

    If you look at TPC, you won't find any of the high performance Intel/AMD systems there using SAN technology. :)
  8. joechang New Member

    not sure what you are talking about on latency,
    for Seagate 10K Cheetah 3.5in: r/w seek 4.6/5.2 millisec
    10K Savvio 2.5in: 3.8/4.4

    if you only use 1/2 the capacity of a 3.5in drive, you will probably be in the same range
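    those seek figures plus rotational latency give a rough per-spindle random IOPS ceiling. A back-of-envelope sketch; the half-revolution rotational latency is the standard approximation, and queuing effects are ignored:

    ```python
    # Rough per-spindle random IOPS from average seek time + rotational latency.
    # Seek figures are the Seagate read-seek numbers quoted above.

    def random_iops(avg_seek_ms: float, rpm: int) -> float:
        # on average the target sector is half a revolution away
        rotational_latency_ms = 0.5 * 60_000 / rpm
        return 1000 / (avg_seek_ms + rotational_latency_ms)

    # 10K Cheetah 3.5in, 4.6 ms read seek -> ~132 IOPS
    print(round(random_iops(4.6, 10_000)))
    # 10K Savvio 2.5in, 3.8 ms read seek -> ~147 IOPS
    print(round(random_iops(3.8, 10_000)))
    ```

    so the 2.5in advantage is real but on the order of 10%, not 3x.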
