SQL Server Performance

Storage Upgrade

Discussion in 'Performance Tuning for Hardware Configurations' started by bburnett, Sep 24, 2004.

  1. bburnett New Member

    I'm researching an upgrade for our SQL Server cluster. We currently run an active/active cluster on dual Xeon 2.8 GHz machines with 4GB RAM. We utilize a 512GB PowerVault 660F as our external storage solution, which direct-connects to our servers via a 1Gbps fibre link. We're using the 32bit/33MHz QLogic cards as HBAs. Our systems are hosted in a colocation facility with UPS backup power.

    We need to increase our storage capacity to 2 or 3TB. The PowerVault 660F can be upgraded using a daisy-chained PowerVault 224F. This option seems attractive, but we have several problems with the PowerVault 660F. First, we can't upgrade to Windows 2003 on the database servers due to an incompatibility with the PowerVault. Second, the 660F itself is older and we're worried about losing more and more drives in the unit. The same is true for a 224F expansion, which we'd have to buy used. Finally, the unit only supports 1Gb fibre, and if we're going to spend money we'd like to get a faster interconnect for future scalability.

    Some clarification on our requirements. Our purchasing budget is just under $18K, but obviously we'd like to get something as inexpensively as possible. Our timeline is to buy in less than 6 months. Our DB servers are used 24/7, but we only worry about maintaining high availability from 7am - 9pm. Our usage is pretty evenly distributed between writes and reads. Whatever solution we use must support clustered servers. We also need a solution with at least RAID 0+1, but preferably RAID 10.
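    For anyone following along on the sizing math: a RAID 10 array gives you half the raw capacity (mirrored pairs, striped together), so hitting 2-3TB usable takes twice that in raw disk. A quick back-of-the-envelope sketch, where the 146GB drive size is just an illustrative assumption and not a spec of any unit mentioned here:

```python
# RAID 10 sizing sketch: usable capacity is half the raw capacity,
# since every drive has a mirror partner. Drive sizes below are
# illustrative assumptions, not specs of any unit in this thread.

def raid10_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    """Usable capacity of a RAID 10 array (mirrored pairs, striped)."""
    if drive_count % 2 != 0:
        raise ValueError("RAID 10 needs an even number of drives")
    return (drive_count // 2) * drive_size_gb

def drives_needed(target_gb: int, drive_size_gb: int) -> int:
    """Smallest even drive count whose RAID 10 usable capacity meets target_gb."""
    pairs = -(-target_gb // drive_size_gb)  # ceiling division
    return pairs * 2

# Example: ~3TB usable with hypothetical 146GB U320 drives.
print(drives_needed(3000, 146))    # drive count required
print(raid10_usable_gb(42, 146))   # usable GB from 42 drives
```

    The point being that the drive count (and therefore enclosure slots and controller channels) roughly doubles compared to a non-mirrored layout, which matters against an $18K budget.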

    In our research we've found two possible options. The first option is DAS SATA RAID enclosures. Adaptec makes a unit called the FS4500. It's a 3TB unit that operates over 2Gb fibre. I'm concerned about the performance impact of SATA drives, but I only find general statements about SATA being slower than SCSI and FC. The only thing I can actually see is that SATA drives have about twice the latency of a SCSI drive. I've asked Adaptec support for SQLIO and iometer results for comparison with our current unit, but I'm still waiting for those results. I'm also concerned because the storage reseller I'm using said they haven't sold many of these units.
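    To put the "twice the latency" figure in context, you can roughly estimate a single drive's random IOPS from its average seek time and rotational latency. The spec numbers below are assumed typical figures for 7200rpm SATA and 15k SCSI drives of this era, not measured results; real SQLIO/iometer runs on the actual hardware trump any such estimate:

```python
# Rough single-drive random-IOPS estimate from seek time plus average
# rotational latency (half a platter rotation). Seek times below are
# assumed typical figures, not measurements of any specific drive.

def random_iops(seek_ms: float, rpm: int) -> float:
    """Approximate random IOPS: 1 / (avg seek + half-rotation latency)."""
    half_rotation_ms = 0.5 * 60000.0 / rpm  # avg rotational latency in ms
    service_time_ms = seek_ms + half_rotation_ms
    return 1000.0 / service_time_ms

sata_7200 = random_iops(8.5, 7200)   # hypothetical 7200rpm SATA drive
scsi_15k = random_iops(3.5, 15000)   # hypothetical 15k SCSI drive
print(round(sata_7200), round(scsi_15k))
```

    Under these assumptions a 15k SCSI drive delivers a bit over twice the random IOPS per spindle, which is consistent with the "about twice the latency" rule of thumb; sequential throughput is a different story and much closer between the two.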

    Our second option is more of a custom build. We use fibre today, but not as a fabric. We don't anticipate putting anything but DBs on our storage, and we don't really require the high availability of multi-pathing in a fabric. Given this, it seems that an external SCSI solution might work for us. Specifically, we're considering using the LSI MegaRAID 320-2X PCI-X adapters. These adapters have an optional battery backup allowing us to use write-back caching. The adapters support U320 drives, contain 128MB cache, and can utilize two external buses. It seems that using these adapters and an external JBOD enclosure would give us better performance than today and be cheaper than the Adaptec unit.
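    On why the battery-backed write-back cache matters: with write-back enabled, the controller acknowledges a write as soon as it lands in cache and destages it to disk later, so the host sees cache latency instead of a full disk service time. A toy model of the blended latency, where all the numbers are illustrative assumptions and not LSI MegaRAID 320-2X specs:

```python
# Toy model of write-back caching: acknowledged write latency drops to
# roughly the cache write time for writes the cache can absorb, while
# the rest wait on a full disk service time. All figures below are
# illustrative assumptions, not specs of any controller in this thread.

def avg_write_latency_ms(cache_hit_fraction: float,
                         cache_ms: float = 0.1,
                         disk_ms: float = 6.0) -> float:
    """Blend of fast cache acknowledgements and slow direct-to-disk writes."""
    return cache_hit_fraction * cache_ms + (1 - cache_hit_fraction) * disk_ms

print(avg_write_latency_ms(0.0))    # write-through: every write waits on disk
print(avg_write_latency_ms(0.95))   # write-back absorbing 95% of writes
```

    The battery is what makes this safe: without it, a power failure loses the acknowledged-but-not-destaged writes, which is exactly what you can't afford under a database log.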

    I'd love to hear if anyone else has thoughts or products they could recommend.
  2. joechang New Member

    I think SATA is very good for certain applications. The issue is the desktop hard drives themselves, which are typically rated for only 2 years at a 20% duty cycle, while SCSI drives are typically rated for 5 years at 100%.
    Hence my preference is to use SATA for ETL and development functions, where an occasional failure is not a big deal.
    I would go for SCSI as being able to generate the most IO/$ when factoring in product life.
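    Joe's IO/$ point can be sketched as a quick calculation: fold the rated service life and duty cycle into the total IOs a drive can deliver, then divide by price. Every figure here (prices, IOPS, life ratings) is an illustrative assumption, not a quote:

```python
# Sketch of the IO-per-dollar argument: total IOs delivered over the
# rated life, per dollar of drive cost. Prices, per-drive IOPS, and
# life/duty-cycle ratings below are illustrative assumptions only.

def ios_per_dollar(iops: float, years: float, duty_cycle: float,
                   price_usd: float) -> float:
    """Total IOs delivered over rated life, per dollar of drive cost."""
    seconds = years * 365 * 24 * 3600
    return iops * seconds * duty_cycle / price_usd

sata = ios_per_dollar(iops=80, years=2, duty_cycle=0.20, price_usd=150)
scsi = ios_per_dollar(iops=180, years=5, duty_cycle=1.00, price_usd=500)
print(scsi > sata)   # SCSI wins under these assumptions
```

    Even though the SATA drive is far cheaper per GB, the combination of lower IOPS, shorter rated life, and a low duty-cycle rating can make SCSI the better value for a 24/7 database workload.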
  3. bradmcgehee New Member

    I agree with Joe. SCSI or Fibre Channel are the best options for any mission-critical application.

    -----------------------------
    Brad M. McGehee, MVP
    Webmaster
    SQL-Server-Performance.Com
  4. simondm New Member

    I know you've said you don't like this idea - and I agree with you. But I would like to share my experience with you to ensure you don't do it!

    We used 660F's and decided to chain 224's off the back of them. Big mistake!

    We have had massive problems with disks just "disappearing" from Windows. Also terrible disk errors / back-end fibre dead messages. The worst problem is that not only do you have to reboot the server to get the disks back, but you have to power cycle the PowerVaults as well - not good when you use an external data centre that takes 30 minutes to do anything!

    It is worth pointing out that we use PowerEdge 8450's, which we have also had big problems with. The disk problems we've had may be associated with the server rather than the disks. However, Dell have never been able to solve the problems, and notably the 8450's have been withdrawn (apparently for low sales(??)).

    As you say, you cannot upgrade to Win2003 if you use the 660's either, which is a real pain.

    To be honest, after the above experiences caused so many problems, we are currently getting rid of all Dell PowerEdge and PowerVault kit and replacing it with HP ProLiants on an EMC SAN. The main driver here is the failure of the 660's and Dell's inability to fix it.

    In your position I would base any new purchases on moving away from your current 660's.
  5. joechang New Member

    my recollection is that Dell quickly abandoned the internally developed 660 and offered credits to replace them with the EMC products
