Recommended Hardware Configurations for SQL Server | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Recommended Hardware Configurations for SQL Server

Dell PowerEdge 68x0 withdrawn, 6950 substituted
I am now withdrawing the Dell PowerEdge 68x0 as one of the recommended systems at 4 sockets, and replacing it with the PowerEdge 6950 based on the Opteron processors.
I do not think there is a meaningful performance difference between the Xeon 7140 and Opteron 8220: the Intel posts a better TPC-C (OLTP), and the AMD does better on TPC-H (DW/DSS).
The primary reason is that the PowerEdge 6950 has 7 PCI-E slots, compared with 4 for the PowerEdge 68x0.
I am not concerned about the theoretical IO bandwidth, but I am concerned about the capability of current PCI-E SAS controllers to actually utilize a x8 PCI-E port.
I have not actually used the Dell Opteron systems yet, so this is purely a specification-driven decision.

The same does not apply to the HP 4-socket Intel/Opteron choice, because the ML570G4 has 6 PCI-E x4 slots.

Dell 15K 73GB disk price
Dell has recently dropped the price of the 15K 73GB drive from $379 to $299, only $50 more than the 15K 36GB drive, so I consider this a very good deal. But still, get lots of disks; there is no substitute.

Notes from Intel IDF 2007 Spring

Intel Xeon 7300 series with Core 2 micro-architecture
Processor code name is Tigerton, chipset: Clarksboro, platform code name is Caneland.
Expected in Q3 2007, probably meaning late September.

The processor socket is designed for an MCM with two dual-core 65nm dies, i.e. a quad-core product, so there are a total of 16 cores in the 4-socket system.
There will be a dual-core SKU for memory bandwidth intensive applications.
Unlike Tulsa, which had a shared 16M L3 cache on top of the desktop cores, Tigerton is just a pair of the standard 65nm Core 2 dies (2 cores and 4M shared L2), like the other current quad-core products, meaning the large cache options of the Xeon MP line are no more for now. But hopefully this will mean Intel can turn the 4-socket platform around more quickly, instead of trailing 2-socket by 1-2 years.
Hopefully this means the 4-socket system will support the Penryn procs soon, 1Q 2008?

The new chipset North Bridge or Memory Controller Hub has 4 FBD channels, common with the 2-socket 5000 chipset.
The new system will have quad independent processor busses (QID?), up from dual (DIB) in the 8501, meaning FSB should be 1333MHz.
Also, a 64MB snoop filter cache, similar to the 5000X, which I will talk about below.
Max memory support is 256GB with 32 DIMM sockets and 8GB FB-DIMMs, not the 64 sockets I was hoping for.
IO is the same 5 x PCI-Express x4, plus 1 ESI and 2 x4 for the ESB, all Gen 1, not Gen 2, as on the 5000 chipset.
There is an option for a PCI-E expander, presumably to support more slots but not extra bandwidth. This is still useful, so I will not complain loudly.

Intel 5000X chipset
Most recent 2-socket Intel server systems use the 5000P chipset.
The Intel 5000X was billed as a workstation variant (motherboards listed on the Intel and SuperMicro websites & the Dell Precision 490) (1600MT/s FSB to be supported later?).
The 5000X features a 16M snoop filter to improve performance with 2 independent processor busses.
Various slide sets and one part of the 5000X data sheet say this is for workstation apps:
"One of the architectural enhancements in Intel 5000X chipset is the inclusion of a Snoop Filter to eliminate snoop traffic to the graphics port. Reduction of this traffic results in significant performance increases in graphics intensive applications."

Does this imply that the snoop filter is not important for server applications?
While the statement may be true, it should be important for server apps too.
The 8870 SPS (crossbar for Itanium) has snoop filters.
The new 4-socket chipset has a snoop filter.

In fact, further on in the 5000X data sheet describing the Snoop Filter, the purpose is to improve usable bandwidth for systems with multiple processor busses, nothing about the graphics port.

Another slide from IDF has the following:
Snoop Filter Overview
The Snoop Filter is a cache tag structure stored in the chipset
It keeps track of the status of cache lines in the processor caches
It contains only the tags and status of cache lines, not the data
It filters all unnecessary snoops to the remote bus
The Snoop Filter decreases FSB utilization
It forwards only requests that need to be snooped to the remote bus:
- cache lines that could potentially be present in a dirty state on the remote bus
- cache lines that need to be invalidated
It filters all other processor snoops and a large fraction of IO snoops
All I/O bound applications benefit automatically

Intel Seaburg chipset / Stoakley platform
The next generation 2-socket chipset:
24M snoop filter, 1600MHz FSB
128GB max memory (still FBD)
4 x8 PCI-E Gen 1 or 2 x16 PCI-E Gen 2
(PCI-E Gen 2 is 5Gbit/sec per lane, so 8GB/s in each direction on x16)
+ ESI + x4 PCI-E for the ESB

Intel San Clemente chipset / Cranberry Lake platform
DDR2 memory, ICH9; may be an entry 2-socket?

Intel Core 2 and Penryn improvements
Core 2 (Xeon 5100 and 5300 lines) featured the following:
- Wide Dynamic Execution: 4 IPC vs 3, 14 stage pipe, micro + macro fusion, enhanced ALUs
- Advanced Smart Cache
- Smart Memory Access: HW memory disambiguation, loads can pass stores, improved prefetchers
- Advanced Digital Media Boost: single cycle 128-bit SSE
- Intelligent Power
Penryn adds the following:
Fast Radix-16 divider, faster OS primitives, Enhanced VT
(Radix-16: 4 bits per cycle vs 2, fp and int, square root)
(Super Shuffle)
(OS synchronization: spin locks, interrupt masking, time stamp - RDTSC 3X)
Larger cache, 12MB (6M per die), 24-way vs 16-way
Split load cache enhancement
Improved store forwarding, higher bus speed
SSE4 super shuffle
Enhanced Dynamic Acceleration

Aside from the larger cache, the faster OS primitives (spin locks, interrupt masking) should help server ops; can't wait to get one.
__________________________________________________________________________

Without considering the details of each specific application's usage characteristics, I will make the following general hardware configuration recommendations:

1. No favoritism for vendors
Prices are from vendor websites (Oct 10-19, 2006) except as noted.
I have bought several Dell systems so I have a reasonable understanding of their pricing system.
If anyone has bought HP systems recently, please advise of any significant discrepancies, particularly if meaningful discounts apply to small quantity server purchases. There are equivalent IBM systems, but I do not have time to track these, nor do I have contacts with IBM on the server side.
The Dell 4-socket price is much lower than the HP.
I understand that HP will discount for their big customers, but this makes it difficult for me to buy just a few of their systems for performance testing.
On vendors in general, I would focus on the technical competence of the personnel they provide to assist, rather than the features of their platforms.

Selected TPC-C transaction processing results
|Vendor|System|Processor|Memory|Disks|tpm-C|
|HP|DL380G5|1 Intel X5355 QC 2.66GHz|32GB|386-15K+14-15K|138,979|
|Lenovo| |2 Intel Xeon DC 3.73GHz|32GB|504-15K+14-15K|125,954|
|HP|ML370G5|2 Intel 5160 DC 3.0GHz|64GB|506-15K+8-15K|147,293|
|HP|ML370G5|2 Intel X5355 QC 2.66GHz|64GB|528-15K+24-15K|240,737|
|HP|DL385G2|2 Opteron 2220 2.8GHz|32GB|400-15K+16-10K|139,693|
|HP|ML570G4|4 Intel 7140 DC 3.4GHz|64GB|600-15K+202-10K+16-15K|318,407|
|HP|DL585G2|4 Opteron 8220SE 2.8GHz|128GB|528-15K+24-10K|262,989|
|HP|RX6600|4 Itanium 9050 DC 1.6GHz|192GB|756-15K+8-10K|344,928|
|IBM|x3950|8 Xeon 7150N DC 3.5GHz|128GB|1568-15K+24-15K|510,822|
|Unisys|ES7000/one|8 Xeon 7140M DC 3.4GHz|256GB|1080-15K+26-15K|520,467|

Selected TPC-H 100GB Data Warehouse results
|System|Processor|Memory|Disks|Power|Throughput|QphH|Scale Factor|
|PE2900|2 Intel X5355 QC 2.66GHz|48GB|180-15K+4-10K|20,587.9|12,009.1|15,723.9|100GB|
|DL580G4|4 Intel 7140 DC 3.4GHz|64GB|84-15K+2-10K|22,401.0|13,084.4|17,120.3|100GB|
|DL585G2|4 Opteron 8220SE 2.8GHz|128GB|84-15K+2-10K|25,040.9|14,910.8|19,323.0|100GB|

2 socket system examples

Dell PowerEdge 2900 (2007 Jan 18 pricing)
 a. 2 x 3.00GHz Dual Core Xeon 5160, 4M cache, 1333MHz FSB $4,976
 b. 2 x 2.66GHz Quad Core Xeon X5355, 2x4M cache, 1333MHz FSB $6,126
 4x2GB 667MHz DIMMs (additional 4x2GB $1,790)
 Internal SAS RAID controller
 Redundant power supply
3-4 x PERC 5/E SAS RAID adapters, PCI-Express $799 ea
 a. 8 x 36GB SAS 15K hard drives (internal) $249 ea
 b. 8 x 73GB SAS 10K hard drives (internal) $299 ea
 c. 8 x 73GB SAS 15K hard drives (internal) $299 ea
 d. 8 x 146GB SAS 10K hard drives (internal) $369 ea
4 x 2GB memory $1,700

3-4 PowerVault MD1000 external storage units
 w/ 15 x 36GB SAS 15K hard drives $5,576 each (currently discounted, normally $7,435)
 w/ 15 x 73GB SAS 10K hard drives $6,146 ea
 w/ 15 x 73GB SAS 15K hard drives $7,046 ea (or $6,926 for 146GB 10K)
Notes: 48GB max mem, 1 x8 + 3 x4 PCI-E, 2 PCI-X
 PowerVault MD3000 external storage, single port
 w/ 15 x 36GB SAS 15K hard drives $8,555 each (currently discounted, normally $10,694)
Dual port RAID option for cluster support + $1,200

HP ProLiant ML370 G5
ProLiant ML370 G5 SAS
2 x 2.66GHz Dual-Core Intel 5150 $7,300?
or
2 x 2.33GHz Quad-Core Intel E5345 $7,700?
2 x 2.66GHz Quad-Core Intel X5355 $7,900 (based on TPC-C report)
4x2GB PC2-5300 (667MHz)
2nd memory board
2nd SAS SFF 8-bay drive cage
16 x 36GB 10K SAS SFF hard drive $279 ea
16 x 72GB 10K SAS SFF hard drive $329 ea *
16 x 146GB 10K SAS SFF hard drive $439 ea *
16 x 36GB 15K SAS SFF hard drive $369 ea **
16 x 72GB 15K SAS SFF hard drive $519 ea
3-6 x Smart Array P800/512MB SAS controller $1,300 ea
3-6 MSA-60 storage enclosure $3,250 ea (or $6,850 w/ 12 LFF disks)
24 x 36GB 15K SAS LFF hard drive $269 ea
24 x 72GB 15K SAS LFF hard drive $379 ea
Notes: 64GB max mem, 6 x4 PCI-E, 2 PCI-X
* I think these are good values, not too expensive
** best performance at a reasonable price; the 72GB 15K is just too expensive

ProLiant DL385 G2 $5,139
 2 x AMD Opteron 2218 2.6GHz Dual Core
 4GB memory (4x1GB)
 Smart Array P400 SAS RAID controller
8 x 72GB 10K SAS SFF hard drive $329 ea
3 x Smart Array P800/512MB SAS controller $1,300 ea
3-6 MSA-60 storage enclosure $3,250 ea (or $6,850 w/ 12 disks)
24 x 36GB 15K SAS hard drive $300 ea
Notes: 16GB max mem, 3 x8 + 1 x4 PCI-E

4 socket system examples
The Dell PowerEdge 6950 is recommended over the 6800/6850 for the better PCI-E configuration.

Dell PowerEdge 6950
4 x Opteron 8220SE, 2.8GHz/2x1M L2 cache $12,127 ($13,727)
 16GB DDR2-667MHz (8x2GB)
Internal disks
6 x PERC 5/E SAS RAID adapters, PCI-Express $799 ea
6-12 PowerVault MD1000 ext storage w/ 15 disks ~$7K ea
Notes: 64GB max mem, 7 PCI-E (2 x8 + 5 x4), 0 PCI-X?

Dell PowerEdge 6800 or 6850
4 x Xeon 7140M, 3.4GHz/16M L3 cache $17,166
 16GB DDR2-400MHz (8x2GB)
2 x 5 split backplane
1 x PERC 4/eDC RAID controller? $799
4 x PERC 5/E SAS RAID adapters, PCI-Express $799 ea
8 x 146GB U320 SCSI 10K hard drives (internal) $369 ea
4-8 PowerVault MD1000 ext storage w/ 15 disks ~$7K ea
Notes: 64GB max mem, 4 PCI-E, 3 PCI-X

ProLiant ML570G4 $25,192
 4 x Dual Core Xeon 7140 3.4GHz/16M L3
 16GB memory (8x2GB)
 1 x Smart Array P600 SAS RAID controller
10 x 72GB SAS SFF 10K hard drive $329 ea
6 Smart Array P800 SAS RAID controllers $1,300 ea
6-12 MSA-60 ext. storage w/ 12 disks $6,850 ea
Notes: 64GB max mem, 6 x4 PCI-E, 4 PCI-X

ProLiant DL585 G2 $19,800
 4 x AMD Opteron 8220SE 2.8GHz Dual Core
 16GB memory (16x1GB)
 Smart Array P600 SAS RAID controller
4 x 72GB 10K SAS SFF hard drive $329 ea
6 x Smart Array P600 SAS RAID controller $800 ea
6-12 MSA-60 ext. storage w/ 12 disks $6,850 ea
Notes: 64GB (128GB?) max mem, 3 x8 + 4 x4 PCI-E, 2 PCI-X
ProLiant DL585 G2, 4 x 2.6GHz, 16GB: $15,400

HP Integrity rx6600
 4 x 1.6GHz/24M Dual Core Itanium 2 $43,845
 4x4GB DDR2 $18,977
MSA1000 $6,995
MSA30 $2,829
36GB 15K U320 disk $269 ea

Unisys ES7000/one - NUMA
Unisys just released an 8-socket Xeon MP result for their ES7000/one.
From what I can find on their website, max sockets is 32 and max memory is 512GB, so you would think the max memory for an 8-socket config is 128GB, not 256GB.
Anyways, it is interesting to compare with the IBM x3950 result at 128GB memory.
The tpm-C scores are close enough: IBM had 128GB memory and 1568 data disks, Unisys 256GB memory and 1080 data disks.
The overall gain from the 4-socket ProLiant is 1.63X, not bad for an 8-core to 16-core scale-up. 1.6X was considered very good for the 4-to-8 core scale-up, and each step up is more difficult. As with all NUMA systems, achieving good performance for your application beyond 1 NUMA node requires special analysis of your app's characteristics, understanding NUMA features in SQL 2005, and particularly network interrupt affinity, port affinity, etc.

IBM x3950 and other NUMA systems
IBM just published an 8-socket Xeon 7150N (NetBurst core) TPC-C result of 510K.
The architecture of this system, along with some earlier models, is derived from technology IBM acquired from Sequent several years ago.
That is, they are NUMA systems with a high NUMA ratio (remote node memory access time relative to local node memory access time), unlike the Opteron architecture with its low NUMA ratio.
It is possible to get decent scaling on these systems, but this requires special skills (me for example).
A couple of years ago, a DBA at a very large firm was loaned the then-current system to benchmark against an HP quad Opteron DC.
Without special precautions, it showed no meaningful scaling beyond 1 NUMA node, possibly even negative (8 sockets in 2 nodes performing less than 4 sockets in 1 node).
Apparently the IBM account rep could not provide access to the required skill, or did not know he was supposed to.
The same thing happened in a Unisys situation.
So if anybody is considering an 8+ socket box, make sure you have access to the required skills.

2nd generation PCI-E SAS/RAID controllers
Many of the first generation SAS RAID controllers used the Intel IOP 80333 engine, which had:
an XScale core between 500-800MHz
64-bit 333MHz internal bus
dual port 64-bit memory controller, DDR-333 or DDRII-400

Intel recently launched the next generation 8134x line with:
2 XScale cores 667-1200MHz
128-bit 333-400MHz internal bus
multi-port SRAM controller
multi-port DRAM DDRII-533

Without time for exhaustive analysis, my suspicion is that the first gen could drive about 800MB/sec per controller.
When I get one of the new controllers, I will test again.

HP ProLiant Quad Core (2007 Feb 11)
The HP website allows configuring the ProLiant ML370G5 with the Intel Quad Core Xeon E5345 2.33GHz, but not the Xeon X5355 2.66GHz.
Someone should complain to HP that they were obligated to make the ML370G5 available with the X5355, because they published a TPC-C result listing system availability on Feb 1, 2007.
If not, then they should withdraw the result.

Update: SAS cluster support (2007 Jan 18)
The Dell website now lists the MD3000 SAS external storage unit, which supports clustering.
In order to do this, a RAID controller is integrated into the unit, so the server system uses just a plain SAS adapter, with the RAID handled in the MD3000.
The single port version has 1 x4 SAS channel in from the server, and 1 x4 SAS channel for expansion.
The dual port version has 2 x4 SAS channels in from the 2 clustered servers, and 1 x4 SAS for expansion.
The implication is that the expansion units are MD1000s, which are significantly less expensive than the MD3000.
Now normally, I would like to have each controller connected to a single storage unit until I have used all the PCI-E slots in my system, before adding expansion units, that is, 2 or more units chained to a single adapter. Because of the higher price of the MD3000 over the MD1000, it is cheaper to buy 1 MD3000 and 1 MD1000 connected to a single adapter in each server, but 2 MD3000s, each connected to its own adapter per server, should have better sequential bandwidth. It would be nice if Dell made these units available for performance testing.

General notes:
PCI-Express and SAS are preferred going forward, even though this is still relatively new technology at the complete system level.
While I would like to strongly advocate quad core procs, I am not convinced all code will run well with a high number of cores or logical procs.
I believe this is a more serious problem with SQL Server 2000, and less so with SQL 2005, but at least one member has reported strange problems with the 4-socket, 8-core, 16 logical (HT) Xeon 71xx system and 64-bit SQL 2005.
Some serious testing is required for a successful transition.
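Following up on the note above about code that does not scale cleanly to a high number of cores or logical (HT) processors: one knob worth exercising during that testing is the parallelism cap, which can be changed from T-SQL without a restart. This is only a minimal sketch under assumed values (the 4 and 2 are placeholders, and dbo.BigTable is a hypothetical table), not a tuning recommendation from this post.

-- cap parallel query plans at 4 schedulers instance-wide while testing scaling behavior
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
-- a single query can also be capped without touching the server-wide setting
SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 2);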
HP MSA and Smart Array controllers for SAS drives
MSA-50, 1U, 10 SFF drives (2.5"), $1,899
MSA-60, 2U, 12 LFF drives (3.5"), $2,999
MSA-70, 2U, 25 SFF drives (2.5"), 1 x4 SAS in, 1 out, $3,199

Smart Array SAS RAID controllers/HBAs
P800: PCI-Express card,
2 x4 external, 2 x4 internal (I really would like a 4 x4 external option)

P600: PCI-X card,
1 x4 external, shared with 1 x4 internal
1 x4 internal dedicated

P400: PCI-Express,
2 x4 internal only

On PCI-E x4 versus x8 slots
I do not yet have hard evidence on this, but I am inclined to favor more x4 slots over fewer x8 slots. The new Intel chipsets (5000P & E8501) have 3 x8 and 1 x4 PCI-Express ports, where each x8 port can be configured as 2 x4 ports.
I am not convinced any current IO adapters can really use the full bandwidth of a x8 port, which is 2GB/sec in each direction, so I would like to have the extra x4 ports.
6 x4 PCI-E slots in a 2/4-socket server allow configuring 6 SAS adapters, which I think is good for most heavy load applications (along with 6+ racks of external storage).

Clovertown/Kentsfield update (2007 Jan 02)
The Dell website now shows the Intel Quad Core Xeon X5355 for about a $600 premium per socket over the Dual Core Xeon 5160 3.00GHz.
I consider this a very good step up, but I am not sure I would want the Quad Core at the same price point as the Dual, i.e. the 1.86GHz, as not all apps can benefit from the additional cores without substantial rework.
Intel officially announced the launch of the Quad Core line (14 Nov 2006).
The HP ML370G5 with 2 X5355 2.66GHz & 64GB mem hit 240,737 tpm-C, which is not far off the Opteron 4-socket score with 128GB mem.

Opteron w/ DDR2 update
HP just posted 4-socket TPC-C and TPC-H results for the new 2.8GHz DC Opteron processor with DDR2.
Both results are impressive, 262,989 tpm-C and 19,323 QphH@100GB, being much higher than what frequency scaling over the 2.6GHz DC suggests.
Is it the improved memory system? The new TPC-C result shows that HP is using the port affinity feature in S2K5 (see my High Call Volume article). This trick was probably first used by the HP Itanium team(?).
The onus is now on Intel to produce a 4-socket TPC-C for SQL Server.

Tulsa update
Tulsa has launched as the Xeon 7140 (or 71xx line) on Aug 29. The Dell web site shows a preliminary ship date of 9/13, vs 9/8 for the 7041 procs.
There is a very good SQL Server TPC-H result for the 7140, but surprisingly not for TPC-C at this time.
There is a good IBM result for the Xeon 7140 on Linux/DB2 of 314,468 tpm-C, but IBM also achieved 273K tpm-C on the Xeon 7040, while the best SQL Server result on the 7040 was 188K.
There is an HP ML570G4 TPC-C in review at 318,407 tpm-C for a 4-socket Xeon 7140, slightly off the expected 340K, but the system has 64GB.

Woodcrest update
The Dual Core Intel 51xx line is officially launched.
The Xeon 51xx line is the default recommended processor/system for most situations.
If you were considering a 4-socket system, I suggest getting the 2-socket Woodcrest, except under certain extreme situations.
I will provide actual performance measurements soon after taking delivery of my new servers with the 5140 procs.
I just got my Dell PowerEdge 2900 with 2 x 2.6GHz procs.
So far, I have only done LiteSpeed backup compression tests: 2 x 3.2GHz Xeon can compress 275MB/sec for data with a 2.7 compression ratio; the 2 x 2.6GHz dual core does 600MB/sec.
More performance info to follow.

13 Oct 2006 Note
I am removing the older Xeon 50xx and 70xx from new purchase consideration, in favor of the newer 51xx and 71xx (and 53xx). Anyone, please let me know when the Opteron 22xx and 82xx are actually shipping. HP lists P800 SAS RAID controllers in their TPC reports, but only the P400 on their website; same with the MSA60 external storage.

2. Processor architecture. Opteron vs Xeon, Itanium etc.
My last reference point was that an Opteron 2.0GHz was about equivalent to a Xeon 3.2GHz averaged over a broad range of SQL read operations, with specific ops favoring one or the other.
Given that the fastest dual-core Opteron is now 2.6GHz, and the Xeon is at 3.73GHz, this probably gives the Opteron an approx 11% advantage over the Xeon 3.73GHz, but I have not personally calibrated the newest processors.
On the TPC-C benchmark, which is very write intensive, the 2 x Xeon DC 3.73GHz is 10.5% higher than the 2 x Opteron DC 2.6GHz.
I am guessing based on the above that the 3.2GHz Xeon 5060 will probably be comparable on TPC-C to a 2.4GHz Opteron.
At the quad level, the 4 x 2.6GHz DC Opteron is 13% better than the 4 x 3.0GHz DC Xeon 7041.
3. Whitebox systems
Some people prefer white box systems (build it yourself).
My preference on whitebox is for Supermicro over Intel.
Supermicro has a much broader selection of motherboards and chassis to fit specific needs.
However, because my database servers need external storage, I was spending too much time troubleshooting the storage configuration, so now I just buy complete systems when external storage is required.
I think it is OK to whitebox for single box solutions (all storage in the system chassis).
This is probably best suited for turn-key solution providers, but not worth the learning curve effort for a one-time build.

The Itanium 2 1.6GHz/9M (Madison core, 130nm) is really old and barely on par at the single processor core level with the 90nm Opteron and Xeon processors, and not competitive with the dual-core Opterons and Xeons.
The new dual core Itanium 2 9050 (Montecito, 90nm) produced a very impressive TPC-C result, 344,928 tpm-C at 4 sockets. The SPEC CPU 2000 integer is slightly higher than Madison, so the TPC-C gain could be due to either the very large cache or Hyper-Threading. HT on Xeon was very problematic for SQL, but this may be fixed on the Itanium line.
Given the higher platform costs, it is probably more suited to high-end systems at >4 sockets. HP wants to advocate Itanium for 4 sockets and up.
If the vendors will provide access to equipment, I can give a much more accurate assessment.

I am not entirely happy with the system level performance of the Intel 704x (and the earlier Potomac) processor lines for 4-socket systems, partly because of the low frequency, and second because of the inability to effectively use a large on-die cache.
Intel is claiming a 1.7X gain for the successor, Tulsa, which is very impressive.
Some significant performance bottleneck was probably resolved.

Storage clustering support notes:
The Dell PowerVault 220 external SCSI storage units support clustering; I am not sure if extras are required. The Dell PV MD1000 external SAS storage unit does not support clustering. Presumably a later SAS storage model will.
The HP MSA-30 external SCSI has a Multi Initiator (MI) option, but this does not work with ProLiant.
Not sure on HP SAS storage.

Disk IO:
I don't care what the size of your database is: get enough adapters and disks to handle the difficult queries that generate table scans, or high queue depth random IO.
Unless you enjoy stress, make sure your storage system has the ability to withstand a crisis (Krisenfestigkeit). My standards are:
1. A large query should not "severely" impact transaction processing throughput.
2. A checkpoint should not "severely" impact transaction processing.
3. A transaction log backup should not "severely" impact transaction throughput.
The definition of "severely" depends on how important this is to your app.
(If there are many concurrent bad queries, no storage system will save you.)

My expectation is that a configuration with 23 drives (8 internal disks + 15 external) at RAID 0 or JBOD will sustain 1.7GB/sec sequential transfer (78MB/sec+ per disk), and 150 random IOPS per disk over the full disk at low queue depth.
It is possible to get as much as 400-500 random IOPS per disk by using only a small fraction of the disk at higher queue depth, with only a moderate penalty in latency (hence, disregard capacity considerations).
I have tested SCSI disk systems, and I have reason to believe PCI-Express/SAS can deliver, but the hardware vendor should provide tests to prove this, or make their systems available for independent verification.

I am thinking of standardized ratings for storage systems; for data it would be something like the following.

Class, Seq Read, Random IOPS (<10ms / >20ms latency)
AA: 10GB/sec, 40K / 100K
A: 3GB/sec, 12K / 30K
B: 1000MB/sec, 4K / 10K
C: 300MB/sec, 1200 / 3K
D: 100MB/sec, 400 / 1,000
E: 30MB/sec, 200 / 300
F: <30MB/sec, <150 / <300

I provided scripts for SQL based IO testing in the post below:
http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=16995

Operating system and SQL Server preferences
My preference is that all this hardware be set up to run
Windows Server 2003 64-bit
SQL Server 2005 64-bit

If you cannot be on SQL Server 2005 for some reason, strongly consider running 32-bit SQL Server 2000 SP4 + hotfix 2187 on 64-bit Windows.
If you cannot be on SQL Server 2000 SP4, then be on SP3 + hotfix 1031 or later.

If you are currently not on SQL Server 2005, I suggest first establishing a performance baseline for your current situation.

See my instructions on www.qdpma.com:
http://www.sql-server-performance.com/qdpma/inst_3_pmlogs.asp

Get the new hardware set up.
Install Windows Server 2003 64-bit + SP1 + hotfix.
Install SQL Server 2000 32-bit.
Get your database running on the new system.
Establish a new performance baseline.
Look for any significant differences.

After verifying expected behavior, install SQL Server 2005 and check performance again.
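As a lightweight companion to the Performance Monitor logs described at the link above, a few headline counters can also be sampled from T-SQL on SQL Server 2005 (so this applies to the new system, not the SQL 2000 baseline). A minimal sketch; the counter list here is only an illustrative assumption, not the set from the linked instructions.

-- snapshot a few counters (SQL 2005+); per-second counters are cumulative,
-- so sample twice and take the difference over the interval
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec', 'Buffer cache hit ratio',
                       'Page life expectancy', 'Transactions/sec')
ORDER BY object_name, counter_name, instance_name;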
Thanks Joe. That looks like a great configuration with a lot of spindles, and the price seems much more reasonable than I would have thought. I am still a bit fuzzy on how the controllers, the drives, and the database files on them relate... Can you give more detail on how the drives are attached and controlled by the RAID adapters? (i.e. how many disks to each channel, in what RAID configuration, and what files are on each logical drive?) Louder+Harder+Faster = Better
for U320 SCSI
I recommend dual channel controllers, good for 250MB/sec per channel or 500MB/sec total
quad channel adapters cannot hit 1GB/sec for various reasons, so dual channel is preferred. 3 15K drives or 4 10K drives per U320 can saturate the channel, but you might consider 4 15K or 5 10K per channel, because it's not always possible to get pure sequential ops, and to amortize the cost of the enclosure. RAID adapters usually let you create array groups from disks on each channel
but this still means 1 array minimum per adapter, so be prepared to split your data into multiple files. data from multiple databases can be shared on a common pool of disks
only logs get dedicated disks. RAID level depends on the activity your server gets, and is discussed adequately elsewhere
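As a concrete illustration of splitting the data into multiple files, one per array/adapter, here is a minimal sketch. The database name, drive letters, and sizes are assumptions for illustration only; SQL Server stripes new allocations across the files of a filegroup in proportion to their free space.

-- one data file per RAID array (E: and F: assumed to be arrays on separate adapters),
-- log on its own dedicated disks (L:)
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data1, FILENAME = 'E:\sqldata\SalesDB_data1.mdf', SIZE = 50GB),
    (NAME = SalesDB_data2, FILENAME = 'F:\sqldata\SalesDB_data2.ndf', SIZE = 50GB)
LOG ON
    (NAME = SalesDB_log, FILENAME = 'L:\sqllog\SalesDB_log.ldf', SIZE = 10GB);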
the more important item is to verify the actual performance of your configuration
in terms of random IOPS at low and high queue depth, for read and write
and sequential read/write performance
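One crude way to check the sequential side from inside SQL Server, assuming a large table (dbo.BigTable is a placeholder) that does not fit in memory: flush the buffer pool, force a full scan, and divide the pages read by the elapsed time. This is only a sketch; dedicated IO tools and the scripts linked earlier in this thread are more thorough.

-- test systems only: DROPCLEANBUFFERS discards all clean pages and needs sysadmin
DBCC DROPCLEANBUFFERS;
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.BigTable WITH (NOLOCK, INDEX(0));  -- INDEX(0) forces a scan of the base table
-- MB/sec ~= (physical reads + read-ahead reads) * 8KB / elapsed seconds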
Wouldn't this make for a great new forum on its own rather than "just a thread"?
Something like "Joe's Hardware Corner" or some more serious equivalent. :)

--
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Heute schon gebloggt? http://www.insidesql.de/blogs
Something like "Ask Tom" on Oracle site [<img src=’/community/emoticons/emotion-1.gif’ alt=’:)‘ />]
i do think the SQL 2000 & 2005 hardware sections could be consolidated,
but otherwise there is no need to change the name.
this could have been an article, but it seems everybody comes here first, so let's just put it on top. Also, let's get readers buying new hardware to press vendors for hard information on configuration,
not the ridiculous marketing stuff or benchmark configurations that are excessive for normal use.
when they realize they do not have anything, they can ask me
Just posted a news link for this post, that helps too.

quote: Originally posted by FrankKalis
Wouldn't this make for a great new forum on its own rather than "just a thread"?
Something like "Joe's Hardware Corner" or some more serious equivalent. :)
Hi Joe,
I’m currently debating whether to purchase a dual processor PE2950 with Woodcrest or quad processor PE6850 for our SQL server. Your comment below suggested the Woodcrest. Would you recommend that over the quad xeon processor, and if so, why?
p.s. I’m new at this, so any help you can provide is greatly appreciated! Lho
quote: Originally posted by joechang
Without considering the details of each specific application's usage characteristics,
I will make the following general hardware configuration recommendations:

1. No favoritism for vendors
Prices are from vendor website (June 26, 2006) except as noted.
The Dell 4 socket price is much lower than the HP. I understand that HP will discount for their big customers, but this makes it difficult for me to buy just a few of their systems for performance testing.
On vendors in general, I would focus on the technical competence of the personnel they provide to assist, rather than the features of their platforms.

2. Processor architecture. Opteron vs Xeon, Itanium etc.
My last reference point was an Opteron 2.0GHz was about equivalent to a Xeon 3.2GHz averaged over a broad range of SQL read operations, with specific ops favoring one or the other.
Given that the fastest dual-core Opteron is now 2.6GHz, and Xeon at 3.73GHz, this probably gives the Opteron approx 11% advantage over Xeon 3.73GHz, but I have not personally calibrated the newest processors.
On the TPC-C benchmark, which is very write intensive, the 2xXeon DC 3.73GHz is 10.5% higher than the 2xOpteron DC 2.6GHz.
I am guessing based on the above that the 3.2GHz Xeon 5060 will probably be comparable on TPC-C to a 2.4GHz Opteron.
At the quad level, the 4×2.6 DC Opteron is 13% better than the 4×3.0 DC Xeon 7041. The current Itanium 2 1.6GHz (Madison core) is really old and probably not up to par at the single processor core level compared with Opteron and Xeon.
With the upcoming July launch of Montecito, adding dual core, hyper-threading and other enhancements, it is expected to be reasonably competitive processor to processor, but given the higher platform costs, is probably more suited to high-end applications at >4 sockets.
HP wants to advocate Itanium for 4 sockets and up.
If the vendors will provide access to equipment,
I can give a much more accurate assessment.
I am not entirely happy with the system level performance of the current Intel 7041 processor line for 4 socket systems, partly because of the low frequency, second because of the inability to effectively use a large on-die cache.
Intel is claiming a 1.7X gain for the successor, Tulsa, which is very impressive.
Some significant performance bottleneck was probably resolved.

Woodcrest update
The Dual Core Intel 51×0 line is officially "launched".
The delivery dates on the Dell website are for early Aug delivery, approx the same for HP.
So June 26 was really just a paper launch.
The Xeon 51×0 line becomes the default recommended processor/system when you can actually get it.
If you were considering a 4-socket system, I suggest getting the 2 socket Woodcrest, except under certain extreme situations.
I will provide actual performance measurements soon after taking delivery of my new servers with the 5140 procs. If you cannot wait, then the Xeon 5080 or dual core Opteron are good choices.

2 socket system examples
Dell PowerEdge 2900 $4,074
2 x 2.33GHz Dual Core Xeon 5140, 1333MHz FSB
4x2GB 667MHz DIMMs
Internal SAS RAID Controller
Redundant power supply
2 x PERC 5/E SAS RAID adapters, PCI-Express $799 ea
8 x 73GB SAS 10K hard drives (internal) $299 ea
4 x 2GB memory $1,700
2 PowerVault MD1000 external storage units
w/ 15 x 36GB SAS 15K hard drives $6,765 each
w/ 15 x 73GB SAS 10K hard drives $7,515 ea
w/ 15 x 73GB 15K or 146GB 10K ~$8,700 ea

HP ProLiant ML370 G5
Proliant ML370 G5 SAS base unit $4,298
2 x 2.33GHz Dual-Core Intel 5140
2x1GB PC2-5300 (667MHz)
2 x Smart Array P800/512MB SAS Controller $1,300 ea
2 MSA-60 Storage enclosure $3,250 ea (or $6,850 w/ 12 disks)
24 x 36GB 15K SAS hard drive $300 ea

4 socket system examples
Dell PowerEdge 6800 or 6850 $13,945
4 x Dual Core Xeon 7041, 3.0GHz
8x1GB DIMMs?
2 x 5 Split Backplane
1 x PERC 4/eDC RAID Controller $799
4 x PERC 5/E SAS RAID adapters, PCI-Express $799 ea
8 x 146GB U320 SCSI 10K hard drives (internal) $369 ea
4 PowerVault MD1000 ext storage w/ 15 disks ~$7K ea

Proliant ML570G4 $25,196
4 x Dual Core Xeon 7041 3.0GHz
8 GB memory (8x1GB)
1 x Smart Array P600 SAS RAID controller
8 GB memory (8x1GB) additional $1,720
10 x 36GB SAS 10K hard drive $300 ea
4 Smart Array P800 SAS RAID Controllers $1,300 ea
4 MSA-60 Ext. Storage w/ 12 disks $6,850 ea

ProLiant DL585 $22,271
4 x AMD 880 Opteron 2.4GHz Dual Core
16 GB Memory (16x1GB)
Smart Array P600 SAS RAID controller
4 x 36GB 10K SAS SFF hard drive $300 ea
4 x Smart Array P600 SAS RAID controller $800 ea
4 MSA-60 Ext. Storage w/ 12 disks $6,850 ea
General notes:
get dual core procs
PCI-Express and SAS are preferred going forward, even though this is still relatively new technology at the complete system level.

Disk IO:
I don't care what the size of your database is,
get enough adapters, and disks to handle the difficult queries
that generate table scans, or high queue depth random IO
basically, you do not want your transaction processing to take a dive just because someone executed a bad query (if there are many concurrent bad queries, no amount of disks will save you).

My expectation is that a configuration with 23 drives (8 internal disks + 15 external) at RAID 0 or JBOD will sustain 1.7GB/sec sequential transfer (78MB/sec+ per disk), and 150 random IOPS per disk over the full disk at low queue depth.
It is possible to get as much as 400-500 random IOPS per disk by using only a small fraction of the disk at higher queue depth, with only a moderate penalty in latency (hence, disregard capacity considerations).
I have tested SCSI disk systems, and I have reason to believe PCI-Express/SAS can deliver, but the hardware vendor should provide tests to prove this,
or make their systems available for independent verification.

assuming you can wait till august.
let's look at the two choices, base system only, excluding disks and controllers, which will be the same:
PE 29x0, 2 x 3.0GHz 5160, 8GB, $4.8K: 169K tpm-C (based on HP result)
PE 68x0, 4 x 3.0GHz 7041, 8GB, $12.6K: 188K tpm-C (Fujitsu result)
That is $8K more for 11%, on the base system, excluding additional memory, controllers, and disks for each system.
The real kicker is in SQL Server licensing costs, assuming you are on Enterprise Edition, which you need in many cases.
I recall SQL 2K being $16K per proc; is it more for S2K5 (anybody)?
My recollection is Std Ed being around $3K per proc. So you have to pay for 2 extra procs on the 4-way.
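To put rough totals on that, using only the figures recalled above (which should be verified against current price lists): the 2-socket PE 29x0 at $4.8K plus 2 Enterprise Edition processor licenses at ~$16K each is roughly $37K, while the 4-socket PE 68x0 at $12.6K plus 4 licenses is roughly $77K, i.e. about double the cost for roughly 11% more tpm-C, before memory, controllers, and disks are added.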

Where might I find benchmarks on both of these? Thanks Joe!
quote: Originally posted by joechang
assuming you can wait till august.
let's look at the two choices, base system only, excluding disks and controllers, which will be the same:
PE 29x0, 2 x 3.0GHz 5160, 8GB, $4.8K: 169K tpm-C (based on HP result)
PE 68x0, 4 x 3.0GHz 7041, 8GB, $12.6K: 188K tpm-C (Fujitsu result)
That is $8K more for 11%, on the base system, excluding additional memory, controllers, and disks for each system.
The real kicker is in SQL Server licensing costs, assuming you are on Enterprise Edition, which you need in many cases.
I recall SQL 2K being $16K per proc; is it more for S2K5 (anybody)?
My recollection is Std Ed being around $3K per proc. So you have to pay for 2 extra procs on the 4-way.

www.tpc.org
Joe, what do you think of using Blade servers, such as the new Dell 1955, as the platform for high-performance SQL Server data warehouse? They’re attractive in that you can add more HDD as needed, but are you better off I/O performance-wise with dedicated RAID (0, 5, 10, etc) arrays as you describe above?
Thanks in advance.
blades are for web servers with almost no disk io requirements
the dinky FC port cannot do squat.
InfiniBand isn't bad, but a single 4x? You really need the 2900 with 1 x8 and 3 x4 PCI-Express ports;
this lets you plug in 4 PCI-E SAS controllers. I do not currently have good numbers for the max sustained IO on PCI-X SAS;
each dual channel U320 controller could get you 500MB/sec
in theory, an 8-port SAS on PCI-E could do 2GB/sec in each direction,
but I need to run the test myself. The 2950 has 2 x8 and 1 x4 PCI-E ports, but the question is whether a single adapter can sustain more than 1GB/sec. To do serious DW, you want your disk system to drive 2GB/sec;
I think you could do this with 30 disk drives over 3 PCI-E SAS adapters.
so why bother with a blade that might do 160MB/sec to an expensive FC storage sys
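As a rough check on that estimate, using the 70-78MB/sec per-disk sequential figure cited earlier in this thread: 30 drives at ~70MB/sec each is about 2.1GB/sec, or roughly 700MB/sec per adapter across 3 adapters, which is in line with the >800MB/sec per PCI-E SAS adapter reported elsewhere in this thread.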
Joe, We’re building up a new SQL System (2005) for use with a large amount of data (8 million inserts per day) that we then aggregate and use for reporting. What are your thoughts on using SATA-II Raptor drives (10k RPM, 150GB) with 16 drives or so across 4 controllers? Would you recommend RAID 5, RAID 1+0, and would you split the database files or use Windows striping? Or should we just go with SCSI? TIA!
I do not believe there is good reason today to go with a pure SATA solution
there is no significant cost difference between the 10K SATA and 10K SAS drives
so go with SAS
I have not seen NCQ work properly in SATA, although it is possible I do not have the right combination of controllers. For new systems, there is no reason to deviate from my rec of a
2 socket Xeon 51×0
4-8 internal drives
2 external units of 10-14 drives.
Feel free to substitute 4-6 of the big capacity SATA drives for backups.
I just bought a 2950 with the new dual core proc. But I have been setting up my SQL servers all wrong!!! I bought it with 4 300GB drives and created a RAID-10 container.
Looking at your post I don't have the ability for 8 drives, but I do have the ability for 6 drives. What would you recommend as far as equipment, and what type of containers would you set up with your suggestion below?

2 socket system examples
Dell PowerEdge 2900 $4,625
2 x 3.73GHz Dual Core Xeon 5080, 1333MHz FSB
or
2 x 3.00GHz Dual Core Xeon 5160, 1333MHz FSB
4x2GB 667MHz DIMMs
Internal SAS RAID Controller
Redundant power supply
2 x PERC 5/E SAS RAID adapters, PCI-Express $799 ea
8 x 73GB SAS 10K hard drives (internal) $299 ea
4 x 2GB memory $1,700
2 PowerVault MD1000 external storage units
w/ 15 x 36GB SAS 15K hard drives $6,765 each
w/ 15 x 73GB SAS 10K hard drives $7,515 ea
w/ 15 x 73GB 15K or 146GB 10K ~$8,700 ea

Thanks in advance
Rick
it really depends what you are trying to do
if you expect a heavy load,
considering that the 2950 is capable of handling a very heavy load
then your disk system should also be able to handle a heavy load,
even if the normal disk load is light and your database fits on 4 disks. As I said above, 2 SAS controllers & 2 PV MD1000's
(plus the embedded controller for internal drives) is a good balance for the 29×0
technically, the smaller drives provide more IOPS/$;
the 36GB 15K provides the best
but as you can see,
for only 1-2K more per enclosure, you can get the next step up
Great post.
One quick question: the HP ProLiant ML370 G5 has 16 internal bays – why go to external storage? TIA
Stephen

i think i missed this point when i used the on-line configuration tool for the HP ML370G5. The base system has 1 memory board with 8 sockets and 1 SAS drive cage with 8 SFF slots;
a second memory board can be added for a total of 16 sockets
a second drive cage can be added for a total of 16 internal drives. For a small database that essentially fits in memory, the 16 drives might be fine (12 for data, 2 for logs, 2 others),
but given that the Xeon 51xx line is very powerful, i think i would prefer to match it with the 16 internal drives plus 1 external MSA-50 unit with 10 additional drives.
i suspect this configuration will do over 1.5GB/sec on table scans
Joe, I've just bought a couple of PE2900s. Standard config is 8 drives, but you can add on an extra 2 drive bay. Both bays link to the internal RAID controller, so I think you could get 10 internal drives running RAID instead of 8. This is something I didn't notice in your recommendations, and isn't clear in the Dell specs! Afro

i do not dive into every last detail of each particular system
because the important point is to distribute IO load across multiple controllers
and multiple disks.
my preferred configuration for the Dell PE 2900 is the internal bays + 1-2 MD1000's with 15 drives each. So with the 1-2 MD1000's, whether you have 8 or 10 internal drives is not critical;
of course, it's always good.
you will notice there is still an open 5.25 bay after putting in the 2 drive bay,
you can get a non-hot-swap disk bracket for an eleventh drive
i put a big SATA drive here for misc storage
I have taken your advice on controllers; I specified the machines with an extra controller ready for expansion with MD1000's. Being able to run 10 drives on the RAID controller was a surprise: I read that the controller was 8-port and assumed that would limit it to 8 drives. However, both bays are connected, and this is very useful for our application. Good tip about the SATA; might try and fit two extra drives, one in the floppy bay, one as you suggest in the 5.25 bay.
I have put a config based on this article (among other sources) for my main SQL server.
Over time the plan is to add additional disks as I move additional SQL databases into this new server.
An HP DL585 G2, 4 x 2.8GHz Dual Core, 8GB mem, with SAS disks:
Channel 0: internal 8 x 72GB 15K RPM disks (connected to a P400 controller)
Channel 1: MSA50, 8 x 72GB 15K RPM, connected to a P600 controller with 512MB cache
Channel 2: MSA50, 8 x 72GB 15K RPM, connected to a P600 controller with 512MB cache
I have seen some references to a new controller called P800, but have not found any information about it on the web (it was listed in a TPC document). The plan is to use RAID 0+1 for speed throughout. There is little information available as far as comparisons between SAS MSA50 + P600 and U320 SCSI MSA30 + P64xx controllers go. Will I get better performance out of my proposed SAS solution than if I go for a dual channel U320? The design goal is a very fast SQL server that will cater for both transactional as well as intermittent large daytime queries. Comments and suggestions most welcome. Tom
Tom
SAS at 3Gbit/sec per port, 8-port controller offers nearly unlimited bandwidth
theoretically 2.4GBytes/sec per controller (in each direction?)
however, the PCI-X or PCI-E port might be limited to only 800MB/sec or so. In database apps, random IO is typically 8K per IO, which will not saturate either U320 SCSI or SAS at the adapter level; only a table scan of a non-fragmented table on non-fragmented disks will saturate U320 SCSI, but not SAS.
I can get 250MB/sec (250×10^6, not 250*2^20) per U320 channel, or 500MB/sec per dual channel adapter.
I am getting >800MB/sec on 10+ SAS disks on 1 PCI-E adapter (on the Dell PE2900 with the Intel 5000 chipset; not sure how the AMD chipset does, but it is supposed to be good). So SAS is really better for DW & large queries; no difference on random,
but SAS is the right choice for new systems going forward.
SCSI is still OK if interchangeability is desired with existing systems
I do not have any of the new HP systems with SAS controllers,
in which case I will only quote parts listed in both TPC-C and TPC-H reports,
which are the P600 & P800 controllers, and an MSA-60 external enclosure.
Not sure why HP has not released these yet, considering the P400 was released.
A few years back, Dell TPC-C reports used the Mylex RAID controllers instead of the PERC 3 (or 4?);
it turns out the PERC had horrible problems depending on the source,
and was still sold to customers.

I received some information from HP after asking for info on the P800 and MSA60. "the P800 is a future product due to be launched next month. To a large extent it will be the PCI-Express version of the P600 (which is PCI-x) therefore offering potential increased performance through the storage subsystem. I think it is a great option for the DL585 G2 as it adds significant additional performance potential to the disk subsystem to match the increases in performance in everything else (new processors, RAM, NICs etc etc) The P800 completes the PCI-express Smart Array portfolio – the P600 was the first card designed to support our first SAS drives in the previous generation servers. 15K disks are coming soon in the 2.5" SFF SAS space – probably around the turn of the year – the 10k 2.5" disks currently available sit somewhere between current 10K and 15K 3.5" disks when it comes to I/O capability. (Smaller form factor means less average latency for head movement). I think a single Smart Array controller chained to 2 x MSA is OK unless you have an extremely demanding requirement from a bandwidth perspective (large scale media-streaming?). (And if this is the case, wait for the P800 which offers much much more bandwidth to the processor/RAM)"
Tom
this is really helpful on the P800
as far as I can tell, the upcoming MSA60 holds 12 LFF (3.5in) SAS drives
the upcoming MSA70 holds 25 SFF (2.5in) SAS drives
compared with the current 1U MSA50 holding 10 SFF SAS drives.
While 2 MSAs to 1 Px00 RAID controller is OK,
based on the cost of a fully populated MSA, i would recommend starting with 1 MSA per Px00 RAID controller until the available PCI-X/PCI-E busses/slots are full,
unless more disks are required; then go to 2 MSAs per controller.
Hopefully Tom can do the SQL IO tests in the SQL 2005 HW section when he gets his new equipment.
Got the order in now for 1 x HP DL585 G2, 4 x Opteron Dual Core 2.8GHz, with 8GB mem
3 x HP Smart Array P800/512 BBWC controllers
3 x MSA50 with 30 x HP 72GB 10K SAS 2.5" hot plug hard drives
1 x HP MSL2024 with Ultrium LTO-3 (to get those backups done in a hurry), connected to an HP DL380 G5 with an MSA50 with 10 x 146GB 10K SAS 2.5" hot plug drives.
I was considering the new MSA60 (released 14 November) with the 3.5" drives; there are matching drives, but they come with three times the latency of the 2.5" drives, so that was not an option. I reckon the MSA60, which offers cascading 3-4 units for a total of 36-48 drives per controller, is more geared for slow bulk store than for speed. HP advises that the MSA60 is a "low end" product and that in any case 3.5" technology is coming to an end (in favour of 2.5" technology).
The new P800 controller was also released 14 Nov, so that is what the wait was all about. I am throwing some 146GB drives in for bulk storage as well. The system will be connected via multiple Gb NICs to an HP ProCurve 5406zl routing switch, and the users to HP 2810-48 switches. Oh, and the workstations are mostly HP DC7700 with Core Duo processors with Gb NICs. It will be very interesting to see the speed we will get out of the kit.
Tom
be careful in that sales reps tend to get confused
per our discussion in the other post,
the latest Seagate 10K 3.5in drive has 4.6/5.2 millisec avg read / write seek
compared with 3.8/4.4 for the 2.5in Savvio drive. But a 146GB 3.5in probably costs about the same as the 73GB 2.5in (please check this),
so if you just used the first 50% of the 3.5in drive, you would probably get similar or better seek times, plus you have the other 73GB for misc off-hours use.
Also, I am not a big fan of dedicated drives for backup (unless they are SATA);
if you have a mix of 73G and 146G SAS drives
use 2-6 for logs,
the remainder of the disks for data & temp
put the data on the lead partition, the temp on the second partition,
then use the remainder of the data disks for backup and misc use. This way, as many disks and controllers as possible are available for the surge loads.
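To make use of the tail of the data disks for backups as described, the backup itself can be striped across files on several of those drives so no single set of spindles becomes the bottleneck. A minimal sketch; the database name and paths are placeholders.

-- stripe the backup across three files on different disk groups
BACKUP DATABASE SalesDB
TO DISK = 'G:\backup\SalesDB_1.bak',
   DISK = 'H:\backup\SalesDB_2.bak',
   DISK = 'I:\backup\SalesDB_3.bak'
WITH INIT, STATS = 10;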

Hi, I'm working at a French company and find your post really interesting. I'm doing benchmarks on two different DL385 servers, which are:
server1: DL385 G1, 2 x Opteron DC 2.4GHz, 4GB PC3200, default controller + 128MB upgrade, and 5 x 146GB 15K SCSI disks in a RAID 5 array.
server2: DL385 G2, 2 x Opteron DC 2.6GHz, 4GB PC2-5300, P400 controller with 512MB, and 5 x 146GB 10K SAS disks in a RAID 5 array.
Server2 seems more powerful, but I've been testing several SQL 2005 requests and here are some results:

Request1:
server1 92 sec
Server2 150 sec

Request2:
server1 650 sec
server2 823 sec

I'm really disappointed with these results and don't understand why. I've got an open ticket with HP to try to understand why; all drivers and firmwares are up to date. I would also be interested in sharing benchmarks with you, as I've got a few different servers (IBM/HP). Thanks. Thomas
i would first determine whether this is a difference in the SQL execution plan or disk related
try running Profiler, or use SET STATISTICS TIME ON, to compare both the CPU and the duration, not just the duration, which by itself cannot be attributed to a cause
I'm a bit new in the SQL world. I noticed SQL could generate graphs telling how and where time is spent, but they are a bit difficult to read. Could you tell me more about this "SET STATISTICS TIME ON" and how to use it? Thanks.
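For reference, a minimal sketch of using those session options around the query discussed in this thread; both options report in the Messages output rather than the result grid.

SET STATISTICS TIME ON;   -- reports CPU time and elapsed time per statement
SET STATISTICS IO ON;     -- reports logical, physical, and read-ahead reads per table
SELECT COUNT(*) FROM my_base.dbo.indexsite GROUP BY site, idsection;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;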
Hi again, I managed to use these options (TIME and IO); here are the results:

server1:
(39697 row(s) affected)
Table 'IndexSite'. Scan count 5, logical reads 647203, physical reads 128,
read-ahead reads 568066, lob logical reads 0,
lob physical reads 0,
lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 57202 ms, elapsed time = 90189 ms.

server2:
(39697 row(s) affected)
Table 'IndexSite'. Scan count 5, logical reads 649018, physical reads 4,
read-ahead reads 640679, lob logical reads 0,
lob physical reads 0,
lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 39641 ms, elapsed time = 133460 ms.

So if I understand well, server2 spends less CPU time and does fewer physical accesses, but in the end takes much longer... so where does it spend its time??? (Sorry for the French results, and by the way, is this the right topic to post in?) Thanks for the help

the cpu numbers are about right: your new system is approx 20% faster than the old system, some from the higher frequency, the rest from the improved memory (DDR2 vs DDR1).
The time numbers could be explained by the disks, the 15K being about 40-50% faster than the 10K: so sys2 runs 20% faster when data is in memory, and sys1 runs approx 50% faster for disk accesses. I would still check that the disk controller settings are the same;
i suggest trying direct io, write thru
i might also suggest more disks, i.e. 2 disk controllers and 2 full racks of external drives, to take full advantage of your compute resources. That is, with a more powerful disk system, your query would run much closer to the 39 sec CPU time required, instead of 133 sec.

What do you mean by "direct i/o, write thru" ?
What you would recommend is this configuration:

ProLiant DL385 G2 $5,139
2 x AMD Opteron 2218 2.6GHz Dual Core
4 GB Memory (4x1GB)
Smart Array P400 SAS RAID controller
8 x 72GB 10K SAS SFF hard drive $329 ea
2 x Smart Array P800/512MB SAS Controller $1,300 ea
2 MSA-60 Storage enclosure $3,250 ea (or $6,850 w/ 12 disks)
24 x 36GB 15K SAS hard drive $300 ea
Notes: 16GB max mem, 3 x8 + 1 x4 PCI-E

with the system on the internal RAID 5 of 8 SAS disks,
the databases on one MSA,
and the log on the second MSA. Is this correct?
yes,
but your choice of the MSA 60 (12 3.5in drives)
or MSA 50 (10 2.5in drives)
the MSA60 is more economical
the MSA50 is better for physical space constraints.

On the IndexSite numbers above: the table is 647,203 pages; at 8K per page, that's 5.18GB. If it takes 90 sec (server1), that's 57MB/sec, which is pretty poor; a proper disk system should do this at 300MB/sec, but I would need to see the query & plan to make a proper assessment.
Direct IO and write-thru are settings on the RAID controller; most people do not know what they are talking about, and think cached IO and write-back are good settings, which is not true for a DB.

Made a new benchmark with Server1 and an MSA30 SB with a RAID 5 of 12 15K SCSI hard drives. I put my database on the MSA and here are the results (the controller for the MSA is a Smart Array 6400 with 192MB of memory): I now obtain 61 seconds, which is better, but if I follow your calculation I get 85MB/sec, which is still far from the "utopian" 300MB/sec. This is the type of query I use: select count(*) from my_base.dbo.indexsite group by site, idsection, on an 80GB database. As my database is on a "good" bank of disks with its own controller, do I need to also have a good controller for the local disks (currently a Smart Array 6i with 192MB)?

is the MSA30 with 1 SCSI channel or 2?
1 U320 SCSI channel can only do 250MB/sec. never mind,
recall on server 1, the cpu was 57 sec, so 61 sec duration is not bad.
on server 2 with this disk array you might be closer to 40 sec. how many rows are returned by the query?
i am kind of curious why your execution plan is not a parallel plan. also try
select count(*) from my_base.dbo.indexsite WITH (NOLOCK) group by site, idsection
"is the MSA30 with 1 SCSI channel or 2?
1 U320 SCSI channel can only do 250MB/sec" the MSA has 1 SCSI channel
(but i thought that with two channels, the MSA was split in two and I would have two banks of 7 disks instead of 1 of 14?) "never mind,
recall on server 1, the cpu was 57 sec, so 61 sec duration is not bad.
on server 2 with this disk array you might be closer to 40 sec" So if i understand correctly, disk/controller optimisation will only reduce the total time but not the cpu time. What can i do to optimize the cpu time? (threads/cache ... ?)
i'll put the msa on server2 this week and post the results here. "how many rows are returned by the query?
i am kind of curious why your execution plan is not a parallel plan" 39697 rows are returned; how would you parallelize this query? "also try
select count(*) from my_base.dbo.indexsite WITH (NOLOCK) group by site, idsection" => no change. The NOLOCK hint is used on all the servers in production, but as these servers are for testing, i don't use NOLOCK (i thought it could only slow the query)

improving CPU efficiency is part of my service, as detailed on www.qdpma.com
or
http://www.sql-server-performance.com/qdpma
try just
select count(*) from my_base.dbo.indexsite WITH (NOLOCK)
and see if this is a parallel plan
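One way to check (a sketch only; my_base.dbo.indexsite is the poster's table, the MAXDOP values are arbitrary) is to time the same query with parallelism explicitly capped and then allowed, with STATISTICS TIME on:

SET STATISTICS TIME ON;

SELECT COUNT(*) FROM my_base.dbo.indexsite WITH (NOLOCK)
GROUP BY site, idsection
OPTION (MAXDOP 1);   -- force a serial plan

SELECT COUNT(*) FROM my_base.dbo.indexsite WITH (NOLOCK)
GROUP BY site, idsection
OPTION (MAXDOP 4);   -- allow a parallel plan on up to 4 workers

If the two CPU and elapsed times are essentially the same, the optimizer is not producing a parallel plan for this query.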
hi, had other things on fire, will continue my tests next week. One more question: i'm wondering a bit about Intel vs AMD for sql server ... from what i observe with my tests, I would say i've got a preference for AMD, but is this true, or is it only a question of "what you do" with SQL?
i will go into this in more detail later,
but without having seen the latest AMD with DDR2 firsthand: at 2 sockets, the Xeon 5100/5300 lines are most impressive and are the clear winners,
the only reservation being that the quad core 5300 has not been released at 2.33GHz+ yet. At 4 sockets,
the Opteron is probably faster in certain tests, high call volume (30K/sec+);
Opteron also seems to be ahead in DW apps,
while the Xeon wins at transaction processing,
but both are close, so neither choice is wrong. not sure what you mean by "what you do" with SQL;
i suggest 64-bit OS if possible
SQL 2005 64-bit strongly preferred
but if currently SQL 2000, thoroughly test 2005, get a good profiler trace on SQL 2000 and 2005
I'm replacing an existing server and have a few questions:
1) Sample configurations and recommendations start at 2 sockets. I think 1 proc is enough for this box; is there something magical about 2 or more? What would you recommend for a single proc?
2) I see lots of talk about the DL385, but I'm also seeing lots of talk about Intel processors. Are people avoiding the DL380 for some reason, or does it just not happen to make it into examples?
3) HP has a 36GB 15K SFF SAS drive (431933-B21) but it's not in the compatibility list for the DL380. Is this just because it's new, or are there heat dissipation issues (or something)?
Here's what I'm thinking; anything jump out at you? ProLiant DL380 G5
1 x Dual Core Xeon 5160 3.0GHz
4 GB Memory (2x2GB)
Smart Array P400 SAS RAID Controller
8 x 36GB 15k SAS SFF HDD
Smart Array P600 SAS RAID Controller
MSA-50 Storage enclosure
10 x 36GB 15k SAS SFF HDD
(Redundant power supply, extra 2 port NIC) Backups would go straight to an iSCSI SAN (bad idea?)
This is primarily an OLTP environment (ERP)
2 main DB’s (about 20 GB) and a bunch of smaller secondary DB’s Thanks! MDD

on CPU resources only
given the capability of current procs, 1 socket is probably more than enough for most apps,
especially now that quad cores are available. however, there is usually a significant difference between 1 and 2 socket systems in terms of IO capability, memory performance and capacity, and internal drive bays,
all for minimal price difference. compare the 2 Dell systems (check the equivalent HP yourself):
PowerEdge 840: 1 Xeon 3060, 2.4GHz, 4GB mem, 80G SATA, $1807
4 DIMM sockets, 1 x8 PCIe, 4 internal non hot swap drive bays
PowerEdge 2900: 1 Xeon 5140, 2.33GHz, 4GB, 80G SATA, $2198
12 DIMM sockets, 1 x8 + 3 x4 PCIe (?), 8 internal hot swap drive bays
so if you really think 1 socket is sufficient, get the 2 socket system with 1 proc;
get the new quad core at 2.3GHz if you can. on the above, get 8GB mem if you can

concerning the quads: has anyone thought about 4 sockets vs 1 quad-core socket? The memory and I/O pins on the quad can't all go active at the same time for the different cores. I mean, with 4 sockets, the cores can all be accessing memory and running code,
while the quad has to arbitrate access off-chip to the outside world. sure, you get 4 cores, but they all share one set of pins. then i think the ServerWorks or similar SMP chipsets come into play. just something to think about.
it depends on the platform. for the older Intel platforms (ServerWorks chipset and E7500),
all processors/sockets shared the same bus to the memory controller;
the most recent Intel platforms (E8500 and 5000P) have 2 busses,
on the E8500, two sockets per bus; for the 5000, 1 socket per bus. on AMD, each socket has its own memory channels and has provisions for IO,
but i am not sure that vendors attach IO to more than 2 sockets. it's not possible to do a 4 single core to 1 quad test on the same platform, but it is possible to compare 2 socket dual core (1 bus for each socket) to 1 socket (populated) quad core, otherwise on the same chipset. in any case, the single socket quad (Core 2 at 2.66GHz) should match or beat the last Intel 4 socket single core (Xeon 3.66GHz); this is based on a published result of 141,504 tpm-C for the 4 x 3.66GHz Xeon
there is no single socket quad result, but extrapolating from 2 socket dual core (169,360) and 2 socket quad (240,737) and known scaling characteristics, the single socket quad core 2.66GHz should be right around 140K
Hi Joe, We are looking for a DL580G3 (4x 3.6 single core) replacement. Currently we are considering HP: DL580G4 (3.4GHz DC, 16MB L3, 800FSB) vs
DL380G5 (3.0GHz DC, 1333FSB) vs
DL380G5 (2.66GHz QC, 1333FSB). Our environment is critical on single-thread processing speed (few parallel processes and not a large cache requirement; 2.5GB is enough). In your opinion, how important is FSB speed? Should we go for the higher FSB (DL380), or otherwise choose more GHz and L3 cache and have more processing reserve (4 sockets)? TIA,
Ed
JoeChang said:
> direct io and write-thru are settings on the raid controller.
> most people do not know what they are talking about, and
> think cached io and write back are good settings, not true for db I’m guessing you are basing this on experience, but can you account for this claim? Why would write-caching not be optimal for db environments? When *would* write-caching be optimal?

reply to obrp
it's not really a matter of core GHz & cache size vs FSB bandwidth;
it's more a matter of the DL580G4 being based on the last of the Pentium 4 derived NetBurst cores versus the newer Core 2 derivatives. for peak performance regardless of cost,
the 4 socket DL580G4 wins in the benchmarks, but benchmark systems benefit from serious expert tuning.
big cache also helps high call volume apps, probably more than 10,000 SQL Batches/sec;
so does Hyper-Threading, which the newer G5 systems do not have. my own tests of non-tuned systems show the Core 2 architecture is better at single complex query performance. if you want a proper technical analysis, i could do it with a Profiler trace of your application and the cloned database (with indexes and statistics), but it can be cheaper to buy the 2nd or 3rd system; if it's good enough, stay with it,
if not, buy the 1st one (or then pay for the analysis)
reply to jboarman: yes, this statement was made based on hard measurements.
it comes from starting with disk controller cache enabled, because it sounded good,
then realizing that disk performance really seriously (REALLY & SERIOUSLY) sucked,
and i mean really, really, seriously, seriously sucked,
if there is any further doubt as to what i mean. disabling cache restored performance to what was expected based on the theoretical characteristics of the disk rpm, seek, etc. this is not just me; look closely at all the TPC-C and TPC-H benchmarks,
dig through them and see what the setting for disk controller cache is. the really sad thing is SAN vendors do not bother to do a proper test
and just assume that cache enabled is good,
so people buy a really expensive SAN
often end up with worse performance than a notebook 5400rpm drive

Note for HP DL585 G2 and DL380 G5: the DL585 comes with a P400 SAS controller with 512MB battery-backed cache by default. The DL380, however, comes with a P400 with 512MB cache too, but there is no battery pack provided by default, so write-back caching is not possible. Bought three HP P800 cards; they do come with 512MB cache with battery backup by default. Have to figure out what the best split is for the cache strategy:
0,25,50,75 or 100% read cache and
100,75,50,25 or 0% write cache for db and log files Note:
when you order MSA50 cages, you do get a nice shiny new SAS cable with the delivery.
Problem is that the cable is one for daisy chaining MSA50s, but it doesn't fit the P800 external connector, so you need to get a special cable to connect the MSA50 to the server's P800 card. Tom
Joe, I wanted to confirm (and ask) a few things that you have mentioned throughout this thread:
1. You mentioned earlier about partitioning the array for the database files and the tempdb files. So you would just format the disk into two separate partitions? Is this correct, or would you use the RAID controller to create virtual disks? Would both the tempdb data and log file reside on this partition?
2. Previously in my SQL setups I had multiple database files, including specific indexes, on dedicated RAID 1 containers, as well as the log file. What you are advocating is a larger RAID 0 (or 10) container that hosts multiple databases without the need to split up the databases into separate files (unless space or distribution across more disks and controllers is required)? Is this correct?
3. Log files should be located in individual RAID 1 containers per database?
4. If using RAID 10, what stripe size would you recommend when creating the array, for either the log or data files? I have traditionally used 128K but have read it should be 64K, in line with the 64K allocation unit formatting.
5. I have just purchased an MD3000 which has 4 ports and the ability to daisy chain MD1000 units (coming soon). I haven't fully tested it yet (and will report back with results), but Dell support has told me that if you have multiple controllers connected from the host server you don't get coupled performance; the multiple controllers are for redundancy only. According to their documents (and physical connectors) you can connect up to 4 separate hosts from 4 different servers, each with their own 3Gb/s link, so I can't understand why you can't have two hosts from one server acting independently? Cheers Chris

1. lets say you have 4 external enclosures with 60 disks
of which 56 are allocated to data, tempdb and backup. normally, i like to make at least 2 arrays in each enclosure, rather than make 1 big array, or an even bigger array spanning more than 1 enclosure.
the reason for this is that if i needed additional capacity, i could add an enclosure, only half populated if constrained by budget, then add 1 data file, and still have a balanced configuration (each file spread across an identical number of disks). now, on each array, i could just have 1 partition, with data, tempdb and backup files.
the problem with this is that the data and tempdb files could get fragmented as they grow,
so it is highly preferable that each gets its own partition. now the decision is whether each array is split into 3 partitions, or three separate arrays are made from each group of 7 disks.
if the raid level is the same, then there is no reason for multiple arrays.
potentially, one might want RAID10 for data & temp, and RAID 5 for backup
which does call for separate arrays. 2. data files from more than 1 database can share physical disks, as both are random loads,
just don't mix OLTP and DW workloads.
3. each heavily loaded log needs to be on its own set of disks; the number depends on behavior.
4. leave it at the default if the default is between 64-256K, particularly for HP Smart Array; they know their stuff. 5. I am annoyed by Dell's decision making in these matters;
i suspect their PM’s don’t understand the true objectives, and only get input on what is required to get good TPC-C and H benchmark results
Joe, Thanks for the reply. OK. I have this MD3000, and for my lab setup will have 10 x Fujitsu 146GB 15K SAS drives in a RAID 10 container for my data and tempdb partitions. This gives me a potential of 10 x 147MB/sec, which has the potential to saturate my PCI-X based HBA in any case. I don't have a server with a PCI-e slot, unfortunately. I will place the log file(s) on a separate RAID 1 container of 2 x 73GB 15K. A couple more questions: 1. Where do I place the tempdb log file? In the same partition as the tempdb?
2. I read earlier that you suggest turning off the controller's cache and just doing direct reads and writes? Is this right? If so, why would direct writes perform better than cached writes?
3. In regards to fragmentation in this setup, what is your recommendation for defragging? Do you defrag both from within SQL Server and at the OS level, or are they both a waste of time?
Cheers Chris

1. i have never seen lots of tempdb log writes, but i suppose it could happen.
if it does, i think you goofed your db architecture
so it should not matter. 2. i have never had someone not heavily involved in silicon and system design believe the explanation on this;
there are too many idiot writers out there talking about how great memory is relative to disk, so caching should be great.
just test it for yourself, but don't waste too much time wondering about it. 3. if fragmentation is a concern, use raw partitions to eliminate the need for OS defrag.
for SQL fragmentation, if you have one really big table that is most of the db and that you cannot defrag in a reasonable time,
then put the data part of it in its own filegroup;
the indexes and other tables can stay in the primary fg
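A sketch of what that looks like, with hypothetical names and paths, assuming SQL Server 2005 syntax and an existing clustered index named CIX_BigTable:

ALTER DATABASE MyDb ADD FILEGROUP BigTableFG;
ALTER DATABASE MyDb ADD FILE
    (NAME = BigTableFG_1, FILENAME = 'E:\sqldata\MyDb_BigTable_1.ndf', SIZE = 20GB)
TO FILEGROUP BigTableFG;
-- rebuilding the clustered index on the new filegroup moves the data rows there;
-- nonclustered indexes and the other tables stay in the PRIMARY filegroup
CREATE CLUSTERED INDEX CIX_BigTable ON dbo.BigTable (BigTableID)
    WITH (DROP_EXISTING = ON) ON BigTableFG;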
Joe, One thing you haven't spoken about is how you go about providing fault tolerance for the database server. I understand the "need for speed", high bandwidth access to storage, and that expensive SAN solutions don't necessarily provide it, but they do provide easy storage access for operating clustered solutions in the case of failure. The DAS solutions are great, but due to the small number of connections and limitations on the JBODs/arrays, they don't really lend themselves to connecting multiple HBAs as well as multiple hosts/servers. I know some of the vendor arrays allow for clustering, but this usually means a sacrifice on dual channels. So if clustering is not an option (at least to the same storage arrays), then how would you implement database redundancy: log shipping, replication? Chris
what do you mean "a small number of connections"???
did you see the HP Superdome configurations in TPC reports, there are 49 SCSI RAID controllers connected to almost a thousand disks
the HP ML370G5 report has 7 P800 controllers connected to 44 MSA60 enclosures and 528 disks
there is a reason DAS is used for benchmarks, not just cost. the old SCSI units used to offer cluster support, but i think vendors realized that cluster solutions were an easy up-sell to SAN, so they stopped offering cluster support. the new Dell MD3000 does offer cluster support for SAS; HP will at some point. the dumbest thing you could do is put your critical database on a SAN shared with other apps, especially non-critical apps.
i know someone who had an $800K SAN shared among several apps, including the QA test database on which they ran load tests.
the production db would experience periodic slowdowns, until the SAN engineer finally admitted that prod and QA shared physical disks. whatever you do, you must control your own disks; do not let someone else own them. in any case, with the same number of controllers and physical disks, DAS will outperform a SAN. long term, mirroring is probably the right solution, but we should let MS work out any issues over time
on the matter of SAN,
I will say my main complaint is the ridiculous cost.
DAS has an amortized cost of $500-700 per hard disk, depending on the exact model. by amortized, i mean including the contribution from the external storage unit and other equipment. so if an external storage unit with 12 disks costs $6000, the amortized cost is $500 per disk. for a SAN at list price, this could be $2000 to $4000 per disk.
in a database, there is no substitute for sheer number of spindles (physical disks) and IO channels.
furthermore, with low space utilization, you get better random IO.
SAN vendors tell you that by sharing space you get higher utilization, which then kills the benefits of low utilization. if you actually open up the SAN, you might find a standard Xeon server inside, probably an obsolete one at that, even though you pay less for a newer, higher performance Xeon/Opteron for your own server;
you might also get a SAN built on a pathetic RISC processor.
my second complaint is the horrible configuration advice you get from SAN vendors;
the SAN system itself might not be bad, if overpriced,
but the vendor recommended configuration is often the worst possible thing for a database.
that is why very few benchmark systems use a SAN;
when they do, read the configuration settings carefully:
all the features the vendor reps try to tell you are so good (to justify the cost) are turned off,
that’s because the person running the benchmark knows what he/she is doing

Joe, I do actually agree with you in regards to the SAN and cost. I did have an FC SAN and have moved away to DAS simply due to the ridiculous costs: $60,000-100,000 for a single NetApp array (15 disks), and this doesn't include the cost of HBAs and all the other bloated pricing of FC gear. What a joke! By "small number of connections" I meant per array/enclosure. I realise that you can (and for the benchmark tests in particular) connect vast numbers of arrays (and hence disks) to the database server, and that to maximise throughput, prevent channel saturation etc., you should connect at least 2 or more channels per array to the host. So while this is great for a TPC test, in the real world businesses also need to ensure some level of redundancy for their precious database. My point is (or was) that these arrays usually only have 2 connections/channels per storage unit, so that if you want to connect a secondary host in the case of failure or for clustering purposes you can't, because all the connections on the array have been taken up, even if they aren't active. In the case of a 15 disk SCSI array they usually only come with two physical channel connections even though, as previously stated, you can saturate the channel with 3-4 disks. This means it should have in the order of 4 to 5 channels to allow for throughput, and then that number again to allow for redundant connections. All I was trying to point out is that I can understand why businesses go to a heterogeneous SAN that allows for multiple connections to access LUNs, even if it's just for failover; that's certainly why we did. Otherwise, in a serious DAS solution you essentially have to duplicate the same equipment twice and mirror the data in real time, both problems (and costly) in their own right. Now I have just purchased an MD3000 which has a total of 4 connection ports, which was actually one of the main reasons I went for it. However, you start asking questions and digging into the specs and capabilities of the unit, and I am now getting back answers from Dell which say I can't team connections and that the clustering ability is limited. So obviously the vendors don't seem to be in step with current business requirements, and obviously haven't been for a while, as even the legacy gear (such as the Dell 220S) didn't supply enough channels to allow for throughput and redundancy. All of this leads back to: how would/do you deal with the DAS redundancy issues given the restrictions on this type of equipment? I completely agree with you on the price and performance advantage of DAS, but what about real world failure?
And while we are on this topic, how do they actually distribute the load across multiple HBAs and arrays in these TPC tests from a single server/host? Do they just simply split the database up into lots of files and filegroups that get placed individually on each of the array segments? (Which is fine for a one-off test, not so great for management purposes of a production database.) Do they have some other sort of mechanism, such as a software RAID solution (Windows dynamic disks) running over the top of the hardware RAID HBAs, which distributes the load across the arrays, such as in a RAID 0 configuration? Or do they use a multipath solution such as Windows Multipath I/O? What is your opinion on SSDs (solid state disks) such as the RamSan and the supposed numerous advantages they can offer over physical disks?
yes, but it so happens that 1 connection (SAS x4) from server/host to 1 storage unit (12-15 disks) is about right for DW.
in simple tests, you would like 2 x4 SAS per 12-15 disks, but most SQL ops cannot sustain this. if you are OLTP, you might go 1 x4 SAS to 2 storage units,
so the MD3000 is about reasonable, supporting 2 hosts in failover, + 1 MD1000 if desired. for DW, skip clustering and stick with the MD1000. for OLTP, start with 1 MD3000, add up to 3 more for a total of 4,
each connected to its own SAS HBA on each of 2 servers,
then add 1 MD1000 per MD3000. i think i explained filegroup theory somewhere:
split each major filegroup into multiple files,
1 file per data array.
it works pretty well, especially after an index rebuild
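A sketch of that layout with made-up names and paths, assuming G: through J: are four separate data arrays:

ALTER DATABASE MyDb ADD FILEGROUP DataFG;
ALTER DATABASE MyDb ADD FILE
    (NAME = DataFG_1, FILENAME = 'G:\sqldata\MyDb_data_1.ndf', SIZE = 50GB),
    (NAME = DataFG_2, FILENAME = 'H:\sqldata\MyDb_data_2.ndf', SIZE = 50GB),
    (NAME = DataFG_3, FILENAME = 'I:\sqldata\MyDb_data_3.ndf', SIZE = 50GB),
    (NAME = DataFG_4, FILENAME = 'J:\sqldata\MyDb_data_4.ndf', SIZE = 50GB)
TO FILEGROUP DataFG;
-- after an index rebuild, pages end up spread roughly evenly across the four files
-- (proportional fill), which is what gives each array the same share of the IO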

in general, SSD does not make sense for DB because the system memory is a data cache, so why duplicate memory for permanent storage.
the one area i think it makes sense is in database consolidation, where you have multiple databases but cannot allocate dedicated disks for the logs of each. with a good understanding of disk performance characteristics, it is possible to hit the desired performance goals at costs below that of SSD for now,
that is: 10-30K random IOPS, 2-3GB/sec sequential
HI All, I have a Compaq ML570 G2 with 4 x 2GHz Xeon, 4GB RAM, a Smart Array 6402/128, 8 x 36GB 15K U320
and 3 x 136GB 15K U320 drives, all in 3 x RAID 5. I have a few main DBs, 45GB, 25GB and a few smaller DBs, running Epicor. For some reason performance is so bad that I am thinking of moving the bigger DB to a new server. Please tell me: if i get a ProLiant ML570G4 with 4 x Dual Core Xeon 7140 3.4GHz/16M L3,
16GB memory (8x2GB), 1 x Smart Array P600 SAS RAID controller,
and 10 x 72GB SAS SFF 10K hard drives, how do i configure the HDDs and RAID?
while it is reasonable to replace an old system for the main line of business app
it is still very important to first determine the reason for any performance issues before blindly specifying a system and talking about disk configuration. follow my directions below on collecting a perfmon and profiler trace
http://www.sql-server-performance.com/qdpma/
http://www.sql-server-performance.com/qdpma/instructions.asp to do a proper analysis
Hi Joe, Thank you so much. As i said, we run Epicor eBackOffice, eFrontOffice and eDistribution systems plus OLAP on the same server. That's why we need to move eBackOffice (45GB DB) to a new server. Do we need SQL Ent or stay with Std? With this sort of database, is 2GB RAM enough to run SQL? And what is the best HDD and RAID configuration for my new server?
I do not know how you expect an intelligent recommendation without any performance data.
The really big price factor is SQL Server Standard Edition vs Enterprise Edition.
The next big price factor is a 2 or 4 socket system, which also impacts the above through per-processor licensing cost. The SQL Server per-processor license costs are approx:
SQL, Std Ed., Ent Ed.
2000: $5,000, $16,000
2005: $6,000, $23,400 (discounted prices, someone should check these). If you are stuck on SQL Server 2000, then Std Ed limits you to 2GB;
if you can move to SQL Server 2005, then Std Ed probably works for you.
Figure the ProLiant ML370G5 with 2 Quad Core Xeon E5345 is 4-6X more powerful than what you currently have; i find it hard to believe you will need more. so for a 4 socket SQL Server 2000 Ent Ed system, you are looking at $64K for the SQL license
plus $25K for a 4 socket system w/16GB, plus more for storage, probably $100K total. for a 2 socket system on 2005 Std Ed, you are looking at $12K for the SQL license, $10K for the server plus $10K for storage, or $32K total. if none of the above matters to you, send me a check for $100K and I will make sure you get something that works really well from one of the above options. of course, if these budget numbers are too low for you, get a SAN vendor involved, they will take the rest of your money

Hi Joe, Thanks. As per your reply i think it's better to go with the ProLiant ML370G5 with 2 Quad Core Xeon
E5345. Again, for the above server, what sort of HDDs should i buy and how many? And which RAID configuration?
Sorry, with 2 x Quad Core Xeon, do we need to buy only 2 processor SQL Server licenses?
my preference, without being able to see any meaningful numbers
except the size of your db: 1st preference:
4 racks of external storage,
your choice of SFF (2.5in) or LFF (3.5in) drives,
LFF is cheaper, unless you are almost out of space.
2nd choice: 3 racks of 12 disks min. i don't like to go into detail on the exact raid config;
i think too many people focus too much on this when they have only 8-10 drives;
when you have 4 racks, it's harder to screw it up. yes, you only need 2 proc licenses for SQL,
and you already have 4, sell the other 2 on ebay, and it may pay for your system
We are getting close to finalizing the hardware spec for our new servers. The latest white box spec is the Supermicro board, quad core Xeons, 3 P800 16-port SAS HBAs, 32GB RAM and 48 74GB 15K SAS drives in the 5U chassis; total cost around $33K. ~$33K is the sweet spot for us on the budget. To be diligent we looked at the Dell PowerEdge 6950 with quad dual core Opterons, 32GB RAM, and 3 MD1000 PowerVaults with 15 74GB 15K SAS drives each; total cost right around $34K. The Dell solution appears to be very competitive price-wise. Another interesting thing is that the motherboard has 7 PCIe slots! We are very close to just going with the Dell solution after pricing it all out. Being able to plug another PowerVault into one of those extra PCIe slots at some point in the future is very attractive. My only concern is how the PowerVault's performance compares to true directly-attached drives. Are there bottlenecks to worry about with the extra adapters and hardware involved with external storage like a PowerVault? I imagine it would take 2 SAS cables or 4 Infiniband cables to put each of the 15 drives on a single SAS port, yet the PowerVault is connected with just one SAS cable? Does it use SAS expanders to connect 2 or more drives per port? Any more than 2 would be a potential bottleneck, right? These are my only concerns with the Dell setup. With the white box setup I know exactly how the IO is configured.

I have used Supermicro frequently in the past with SCSI
When SAS parts became available, i tried cabling a Supermicro box to an external chassis with SAS drives, but had a real hard time getting exactly the right parts. since Dell occasionally offers 25% discounts (including right now, 30% on servers, 25% on PowerVault MD1000/3000),
this negated the price advantage of the whitebox. I am using 1 PE2900 with 10 internal SAS drives on the PERC5i + an MD1000 with 15 drives on 1 PERC5E. I think I am getting 700-800MB/sec out of the MD1000 and about the same out of the 10 internal (or 1500MB/sec combined), but i think 80MB/sec per disk should be possible, meaning 1200MB/sec.
I don't know if I am limited by the PERC, the PCI-e slot or the MD1000. assuming it is possible to get 3GB/sec out of 3-4 MD1000s, I think I would be happy. The current PERC5E is based on the Intel IOP 80333;
I am waiting for an IOP 8034x card, but I would still buy the system as is now, then get new cards when I can.
of course, switching cards in a current system is very involved. how many servers are you buying?
I think Dell might have some promo’s to drive PE6900 sales
talk to a Dell rep,
tell him what random and sequential IO you want to hit
and see if they can provide technical assistance
quote:Originally posted by joechang
The current PERC5E is based on the Intel IOP 80333
I am waiting for a IOP 8034x card, but I would still buy the system as is now, then get new cards when I can

Joe, Do you know anything about the newer Areca SAS controllers (based on Intel 800MHz IOP341 I/O processor)?
http://www.areca.us/products/sas_to_sas.htm What I am specifically looking for is a controller that can allocate volumes of greater than 2 TB. This Areca controller claims it can. The only prob I have is trying to find a supplier. It’s either too new or they are having production problems. So, does anyone know about this or another SAS controller that can allocate volumes greater than 2 TB (per 2004 specs: SCSI Block Commands-2 [SBC-2]). -Jonathan
The nice thing about this white box spec is that the chassis is a 5U, 48-drive, all internal. So, going white box, each SAS drive gets one SAS port. Each 16-port HBA connects directly to 16 drives. It's true DAS. With the PowerVaults, something just doesn't seem right about connecting 15 SAS drives with a cable that should only support 4. I also worry about the other pieces of the system being potential bottlenecks.
Your numbers are very interesting, though. 700-800MB/sec is an incredible number, higher than anything I thought possible on a 333 based IO controller. All of the benchmarks I've seen show the 333s maxing out around 500MB/sec (give or take 10% with differences in firmware etc). The only thing I've seen exceed that is the 341 based Areca SATA controller that's hitting ~800MB/sec. Are these numbers that you got from benchmarking your system, or are they estimated based on other tests? Once we finalize the spec we're going to be getting 4 or 5 of these. Our datacenter has a relationship with Dell so there are some other benefits in going with the Dell solution over the white box.
I am aware of a Supermicro SC836 holding 16 drives
not one for 48 drives?
are these 3.5in or 2.5in drives? my testing is actual, but with no RAID, not even 0,
just JBOD
one data file per disk
so in a RAID 10 or 5 array, 500MB/sec might be reasonable. each SAS port is 3Gbit/sec, but PCI-E is only 2.5Gbit/sec per lane,
so 1GByte/s is the max signalling bw on a x4 in one direction
(it is very hard to get simultaneous traffic in both directions,
except for a DB backup);
figure 800MB/sec is realizable for each x4. since each disk can do about 80MB/sec, 3-4 disks per port is not unreasonable,
considering that it is difficult to use the full max of each disk in many situations.
the new Seagate 15K.5 disks are rated at 125MB/sec (?) but I don't think Dell uses those yet. if your budget allows,
perhaps consider 10 disks per MD1000,
but considering that you have already paid for the MD1000,
the last 5 disks do not add a lot of cost,
and you will probably want the random IO capability for low incremental cost. if you are set on the Dell 6950,
I would like to suggest that you throw in one Dell PE 2900 with quad cores (or you could ask Dell to loan it for comparison testing). before configuring 3 MD1000s per system,
get 7 PERC5s and 7 MD1000s,
see how much sequential and random IO you can get out of the PE6950, then do the test with the PE 2900 with 4 PERC5s. send me an email when you get a chance
This is the chassis I would pair the supermicro board with:
http://www.rackmountnet.com/Rackmou…1350W-redundant-power-supply-RSC-5D-2Q1-RMC5D If we go Dell I definitely won’t miss the opportunity to hook all of the MD1000s up to one server and get some benchmarks.
I should be able to attach 14 of them.

Jonathan
sorry i missed your post; i probably saw gordy's as the last post and went straight to the end, not realizing you had posted in between.
i guess this thread is getting pretty long. the Areca is SAS to SAS,
meaning your system has a SAS HBA connected to a storage box,
which has this card, connected to the disks,
so it is really a product meant for other storage vendors. right now, their PCIe controllers do not support SAS;
i think people want to do more testing with SAS. Gordy:
the rackmount is only listed as SATA, not SAS.
you can put SATA drives on a SAS controller,
but not SAS drives on a SATA controller. my guess is, when they say SATA disks,
they don't have the extra pins for SAS dual port
Gordy
one more matter: rackmountnet didn't say how much their box weighs,
but i am thinking approx 40-50lbs for the empty chassis,
another 50lbs for the 4 1350W power supplies, maybe 10lbs for the motherboard and cpu heatsinks. then each 15K drive weighs 1.6lbs, much more than a SATA 7200rpm drive,
bigger magnets i think. i am guessing on most of these except the drive,
but you are looking at approx 200lbs for the complete unit. do you recall the 8-way Pentium III Xeon boxes?
probably over 150lbs,
it was a real back breaker to lift to get into the rack.
then the stupid jerk offs didn't give me clearance in the rack,
so i had to pull out the entire system to change a card. my PE 2900 with 10 drives + shipping box was listed at 153lbs;
i don't enjoy picking that up either,
and i do not have a young guy working for me that i can delegate this to. stick with the 15 or so disks per chassis,
even if it costs more; your company is paying.
who will pay when your back is broken?
Great topic Joe. Thanks for providing so much useful information. I have 2 questions. First, we’ve been using all Dell servers and want to give HP a try. However, I can’t seem to justify spending the extra money for HP. I have two quotes here for storage arrays. The Dell MD1000 with a Perc5/e Raid card, riser, rails, 2m cable, and 15 36GB 15k SAS drives is $5642 ($6080 including tax and shipping). The HP Smart Array 60 with the P800 controller, 2m cable, and 12 36GB 15k SAS drives is $8256. With HP you get fewer drives and have to pay 46% more. I don’t get it, is HP that much better? Also, I’d like to get a reality check on our plan to boost I/O performance. We have SQL 2000 Enterprise and one heavily used database that is currently set on three internal raid 1 arrays on a Dell PowerEdge 2850. One array is used for the log file, one for the data and one for the OS and program files. The plan with the MD1000, which would give us a total of 21 disks (6 internal + 15), is to create four raid 10 arrays, one each for the log, tempdb, data, and non-clustered indexes. Then we would do two raid 1 arrays, one for the OS and Program files and one for backups. Since our DB is only 5-10GB, we’re planning to use 36GB disks for speed. Does that sound like a good plan? Thanks in advance,
John

Hello, Thanks to Joe and all the other ones in this forum at first. I really enjoyed reading it and it provided quite a lot of information. I will be configuring an MSSQL 2005 server very soon and was wondering whether it would be better to go with (A) 2x 5160 Dual-Core 3.0GHz (4 cores @ 3.0GHz) or with
(B) 2x E5335 Quad-Core 2.0GHz (8 cores @ 2.0GHz). Does any one of you know which of these two configurations would be better for a standard MSSQL database (not specifically optimized for multiprocessor usage)? I am well aware that 2x E5355 Quad-Core 2.66GHz would outperform the configuration in (A) quite easily. But I am not sure about the comparison of (A) to (B). The problem is that we are tight on budget and (A) and (B) are available in an IBM x3650 package for almost the same price. It would be really great if any of you could comment on that, as this would help me a lot to make a decision within the next few days. Thanks a lot in advance! Best regards,
Kar-Wing
In a well tuned OLTP app, consisting of high volume small queries
one expects 4->8 core scaling of approx 1.6X,
alternatively, a 50% frequency scale (2.0->3.0GHz) should yield 1.4X. so in this case, B is somewhat better than A;
in a poorly tuned app, potentially you could get zero gain from 4 to 8 cores. so there you have it, yes and no.
if you can wait until July, Intel has a significant price drop coming on the quad cores. for a certain answer, you need expert analysis. now, if you control all the code, on the db and the client,
I would go with the quad because you should be working on getting your code to work better with multi-cores, even if it does not now
Thanks a lot for the very quick and informative reply! If I understood it correctly, then there's potentially around a 14.3% performance increase with (B) 2x Quad-Core 2.0GHz over (A) 2x Dual-Core 3.0GHz (1.6 / 1.4 = 1.143). As long as there is no (significant) performance loss with (B), even with a totally untuned database, then this would give us the green light to proceed. We only want to avoid experiencing a significant performance loss in comparison with (A) if we were using an untuned database at the beginning. I totally agree that the database needs some re-working to work better with multi-cores. The budget is quite tight on the hardware, but maybe I can convince the IT department to get some consultation onboard if the system is not performing as expected. One more question: the original configuration was to set up a RAID-5 array, but knowing about the write penalty I suggested going for a RAID-10 array instead. So the idea was to set up 6 HDDs in the following configuration:
RAID-1 array (2 HDDs)
- C: for System and Programs
- D: for Swap-file
- E: for *.ldf Log-files
- F: for tempdb
RAID-10 array (4 HDDs)
- G: for *.mdf
I know that it would be much better to split it up into more arrays to distribute the workload further and to stream from as many physical disks as possible. But budget constraints do not allow more than 6 HDDs at this point in time, as it is not a real core system with 'only' around 10 GB of data at this moment. Do you think the above configuration would be OK, or would you change it to something else (e.g. 3x RAID-1 arrays to have one separate array only for tempdb and/or log files)? I am also not sure how critical it would be to put the tempdb on the same array as the system and swap-file, but at least on a dedicated partition to minimize fragmentation. I do not expect any perfect advice without knowing our database system in detail yet, but any advice out of experience would already help a lot. Thanks!
I might consider moving the tempdb over to the other array. I think it’s generally good to have the log separate from all the random activity of both the user database(s) and tempdb.
Hi Joe, I want to buy a new SQL 2000 server and i have 2 x 40GB & 2 x 15GB databases to run on this server. Please help me to get the proper server. Should i go with an HP ProLiant ML370 G5 Quad 4GB RAM / HP ProLiant DL380 G5 Quad 4GB RAM, or can you recommend a better system?
1. How do i configure my storage for best performance? Data: RAID (?)
Log: RAID (?)
Backup: RAID (?) OS/Application/tempdb: RAID (?) 2. How big should the HDDs be for the above configuration? 3. How many HP Smart Array P400/512 controllers do i need? 4. Can i install Windows 2003 64-bit and SQL 2000 Std 32-bit on this server? 5. I have 100 users connecting to this server, so what sort of license should i use?
my standard recommendation, which means without technical analysis,
I prefer the ML370G5 over the DL380G5 because the ML370 has 16 DIMM sockets, meaning you can hit 32GB with 2GB DIMMs ($200 each from Crucial), and the ML370 has 6 x4 PCI-E slots versus 3(?) slots. now, with SQL 2000 Std Ed, you will not need this memory,
but you should have a plan for migrating to SQL 2005, or directly to SQL 2008, in the next year or so. yes, Windows Server 200x 64-bit is preferred even for 32-bit SQL,
but you need to test this first. if this is non-clustered, definitely go with the ML370G5,
fill it with 16 15K 36G disks
(unless you are a large customer, I am not inclined to think an HP reseller will give much of a discount unless you get a competitive quote). also buy 1 additional P400 SAS RAID controller:
8 disks on the included SAS controller
8 disks on the second SAS controller
if budget allows, 1 more SAS controller + 1 MSA 50 with 10 36G 15K drives for a total of 26 disks
2 disks RAID 1 for the OS
2 disks RAID 1 for the main db logs. on the remaining disks, create RAID 10 arrays on each of the SAS controllers.
On each array, create 3 partitions, 1 for data, 1 for tempdb, 1 for backup

HI Jo, Thank you so much. 1. Yes I can buy one additional P400 SAS RAID controller
8 disks on the included SAS controller
8 disks on the second SAS controller 2. if budget allows, 1 more SAS controller + 1 MSA 50 with 10 36G 15K drives for a total of 26 disks Why we need this please? 3. 2 disks RAID 1 for the OS – In SAS Controller A or B ?
2 disks RAID 1 for the main db logs -In SAS Controller A or B ? 4. on the remaining disks, create RAID 10 arrays on each of the SAS controllers
On each array, create 3 partitions, 1 for data, 1 for tempdb, 1 for backup" Controller A: 2 for OS (RAID1) then 6 drive (RAID 10)-> tempdb or backup Controller B: 2 for Logs(raid1) the 6 drive ( RAID 10)-> Data is that correct?
5. SAS or SATA II is faster or recommend? Thanks Again

let's assume the following:
controller A, 2 disks raid 1, Array 0
controller A, 6 disks raid 10, Array 1
controller B, 2 disks raid 1, Array 2
controller B, 6 disks raid 10, Array 3
OS - array 0
Log - array 2
on each of arrays 1 & 3,
create 3 partitions:
data will go on arrays 1 & 3, partition A
temp will go on arrays 1 & 3, partition B
backup will go on arrays 1 & 3, partition C
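In T-SQL terms, a sketch of that layout with hypothetical drive letters (say partition A of arrays 1 and 3 is mounted as D: and G:, partition B as E: and H:, and the log array as L:):

CREATE DATABASE MainDB
ON PRIMARY
    (NAME = MainDB_data1, FILENAME = 'D:\sqldata\MainDB_data1.mdf', SIZE = 20GB),
    (NAME = MainDB_data2, FILENAME = 'G:\sqldata\MainDB_data2.ndf', SIZE = 20GB)
LOG ON
    (NAME = MainDB_log, FILENAME = 'L:\sqllog\MainDB_log.ldf', SIZE = 10GB);

-- tempdb gets a data file on partition B of each RAID 10 array;
-- the moved/added tempdb files take effect after a restart
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'E:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'H:\tempdb\tempdb2.ndf', SIZE = 10GB);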

Hi Joe. I have 2 questions. We just bought a Dell MD1000 with 15 36GB 15k SAS drives to go with our Dell 2850 which has 6 internal drives and 3 RAID 1 arrays. Our current SQL Server 2000 Enterprise setup has the OS and Backups on one RAID 1 array, the log file on one RAID 1 array, and the data file on the third RAID 1 array. We have 1 database. Our plan was as follows:
On the 2850 (Controller 1):
Array 1 – RAID 1 (2 disks), OS, SQL and Backups
Array 2 - RAID 10 (4 disks), Log file
On the MD1000 (Controller 2):
Array 3 – RAID 10 (4 disks), Data 1
Array 4 – RAID 10 (4 disks), Data 2 (perhaps the non clustered indexes)
Array 5 – RAID 10 (4 disks), TempDB (log and data)
Array 6 - RAID 5 (3 disks), Spare
Question 1. Does that plan make sense? Reading your post above, I think your answer will be no. Should we split the data files between the 2 controllers? Should we use partitions? Question 2. Once we get the MD1000 set up with the appropriate arrays and partitions, how hard is it to reconfigure the database so that it uses all the new arrays? I did some searches for "reconfiguring sql server for multiple raid arrays" and didn't find any useful results. Thanks in advance for your help.
John

Q1, it is not right or wrong
it is: what is important to you?
if you consider your users to be a**holes, then let them suffer. for best data performance, get the data and temp across as many disks as possible. 2.
ALTER DATABASE ADD FILE, then:
DBCC SHRINKFILE
or
reindex the table
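A sketch of that sequence with made-up file and table names (the poster is on SQL Server 2000, so DBCC DBREINDEX is the rebuild option there):

-- add files on the new arrays
ALTER DATABASE MyDB ADD FILE
    (NAME = MyDB_data2, FILENAME = 'G:\sqldata\MyDB_data2.ndf', SIZE = 20GB),
    (NAME = MyDB_data3, FILENAME = 'H:\sqldata\MyDB_data3.ndf', SIZE = 20GB);

-- option 1: push the pages out of an existing (secondary) file into the new ones
DBCC SHRINKFILE (MyDB_olddata, EMPTYFILE);

-- option 2: rebuild the clustered indexes so the data is re-striped
-- across all the files in the filegroup
DBCC DBREINDEX ('dbo.SomeBigTable');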
Hi Jo,
Please have a look at which option is more appropriate; i need one for SQL and one server for virtualization as well, so i need to buy 2 systems.
Option 1 (Config 1)
x3850, Xeon Dual Core 7130N 3.16GHz/667MHz/4MB L3, 2x1GB, O/Bay HS SAS, DVD-ROM/CD-RW, 1300W p/s, Rack
Memory Expansion Card Option
2GB (2x1GB) PC2-3200 CL3 ECC DDR2 SDRAM RDIMM Memory Kit
IBM Server 73GB 15 K SFF HS SAS HDD
ServeRAID 8i SAS Controller
IBM MegaRAID 8480 SAS Adapter
NetXtreme 1000 E Single-Port PCI-E 1GbE
1300 Watt Power Supply Option
IBM SlimLine USB Portable Diskette Drive
3 Year 24×7 4 Hour Response
3850 Basic Hardware Install
External Storage
IBM System Storage EXP3000
IBM 73GB 3.5in 15K HS SAS HDD
IBM 146GB 15K 3.5in HS SAS HDD
3m MegaRAID SAS cable
3 Year Onsite Repair 24×7 4 Hour Response
Install Direct Attach Storage (EXP3000)
Option 2
x3650, Xeon Quad Core X5355 120W 2.66GHz/1333MHz/2x4MB L2, 2x1GB ChK, O/Bay 3.5in HS SAS, SR 8k-l, CD-RW/DVD Combo, 835W p/s, Rack
Intel Xeon Quad Core Processor Model X5355 120w 2.66GHz/1333MHz/8MB L2
2GB (2x1GB) PC2-5300 CL5 ECC DDR2 Chipkill FBDIMM Memory Kit
IBM 73GB 3.5in 15K HS SAS HDD
ServeRAID-8k Adapter
IBM MegaRAID 8480 SAS Adapter
Remote Supervisor Adapter II Slimline
NetXtreme 1000 E Single-Port PCI-E 1GbE
xSeries 835W Redundant Power Option
IBM SlimLine USB Portable Diskette Drive
3 Year 24×7 4 Hour Response
3650 Basic Hardware Install (IBM)
External Storage
IBM System Storage EXP3000
IBM 73GB 3.5in 15K HS SAS HDD
IBM 146GB 15K 3.5in HS SAS HDD
3m MegaRAID SAS cable
3 Year Onsite Repair 24×7 4 Hour Response
Install Direct Attach Storage (EXP3000)
Option 3
x3850, Xeon Dual Core 7130N 3.16GHz/667MHz/4MB L3, 2x1GB, O/Bay HS SAS, DVD-ROM/CD-RW, 1300W p/s, Rack
Memory Expansion Card Option
2GB (2x1GB) PC2-3200 CL3 ECC DDR2 SDRAM RDIMM Memory Kit
IBM Server 73GB 15 K SFF HS SAS HDD
ServeRAID 8i SAS Controller
QLogic 4Gb FC Single-Port PCIe HBA for IBM System x
NetXtreme 1000 E Single-Port PCI-E 1GbE
1300 Watt Power Supply Option
IBM SlimLine USB Portable Diskette Drive
3 Year Onsite Repair 24×7 4 Hour Response
3850 Basic Hardware Install
External Storage
IBM System Storage DS3400 Dual Controller
DS3000 1GB Cache Memory Upgrade
IBM 73GB 3.5in 15K HS SAS HDD
IBM 4-Gbps Optical Transceiver – SFP
DS3400 Software Feature Pack
DS3000 FlashCopy Expansion License
IBM 146GB 15K 3.5in HS SAS HDD
5m Fiber Optic Cable LC-LC
3 Year Onsite Repair 24×7 4 Hour Response
Install DS3000 and 2 Host Servers No EXP's
Option 4
x3650, Xeon Quad Core X5355 120W 2.66GHz/1333MHz/2x4MB L2, 2x1GB ChK, O/Bay 3.5in HS SAS, SR 8k-l, CD-RW/DVD Combo, 835W p/s, Rack
Intel Xeon Quad Core Processor Model X5355 120w 2.66GHz/1333MHz/8MB L2
2GB (2x1GB) PC2-5300 CL5 ECC DDR2 Chipkill FBDIMM Memory Kit
IBM 73GB 3.5in 15K HS SAS HDD
ServeRAID-8k Adapter
QLogic 4Gb FC Single-Port PCIe HBA for IBM System x
Remote Supervisor Adapter II Slimline
NetXtreme 1000 E Single-Port PCI-E 1GbE
xSeries 835W Redundant Power Option
IBM SlimLine USB Portable Diskette Drive
3 Year Onsite Repair 24×7 4 Hour Response
3650 Basic Hardware Install (IBM)
External Storage
IBM System Storage DS3400 Dual Controller
DS3000 1GB Cache Memory Upgrade
IBM 73GB 3.5in 15K HS SAS HDD
IBM 4-Gbps Optical Transceiver – SFP
DS3000 FlashCopy Expansion License
DS3400 Software Feature Pack
IBM 146GB 15K 3.5in HS SAS HDD
5m Fiber Optic Cable LC-LC
3 Year Onsite Repair 24×7 4 Hour Response
Install DS3000 and 2 Host Servers No EXP's
Or still go for HP?

The X5355 is a much better choice than the Xeon 7100 line
but later in September or October you should be able to buy the new quad-socket quad core servers based on the Xeon 7300 line
of course, by then, the Xeon 5300 line will probably have the 45nm processors with 6M cache per dual core,
I do not know anything about the recent IBM storage options
most of what I see are HP and Dell

HI Jo,
Looking at the price, HP has the better price. Still thinking which way to go; i just got the spec from HP. But i can't go for 15K drives with the ML370G5, is that ok?
Or should i go with the IBM x3650 from above?
Description, Quantity
HP DL380G5 - SQL Server Option A
HP DL380R05 E5345 Performance AP Svr, 1
HP 2GB FBD PC2-5300 2x1GB Kit, 1
HP Smart Array P800 Controller ALL, 1
36 GB SFF SAS 10,000 rpm Hard Drive (2.5"), 2
146GB 10K SAS 2.5 HP HDD, 6
Integrated Lights-Out iLO Advanced Pack, 1
HP 3y 4h 13x5 ProLiant DL380 HW Support, 1
HP StorageWorks Modular Smart Array 50 Enclosure, 1
146GB 10K SAS 2.5 HP HDD, 8
HP 3y 4h 13x5 MSA30 HW Support, 1
HP SAS to Mini 2m Cable ALL, 1
Total
HP ML370G5 - SQL Server Option B
HP ML370R05 E5345 SAS HPM AP Svr, 1
HP 2GB FBD PC2-5300 2x1GB Kit, 1
HP Smart Array P400/512 Controller, 1
36 GB SFF SAS 10,000 rpm Hard Drive (2.5"), 2
146GB 10K SAS 2.5 HP HDD, 14
Integrated Lights-Out iLO Advanced Pack, 1
HP ML370 G5 SAS SFF DRV Cage Kit, 1
HP 3y 4h 13x5 ProLiant ML370 HW Support, 1
Total
Hi Jo,
I just thought of looking at a blade solution for our SQL Server, because i need 3 servers soon. So what do you think about the IBM HS20-40 with a DS3400 or DS4000 and an FC connection? Or the new HP blade servers?

blades are for web and stuff like that
stay away from blades for db

HI,
Thanks. My current server does 600 transactions per minute.

Any specific performance issues with the current configuration & hardware?

HI Satya, Joe,
First i thought of buying 1 rack server and a SAN solution for SQL, and 1 rack server for VMware ESX for the rest.
Then i thought of going for a blade solution + SAN. That's why i said my current SQL Server is doing 600 transactions per minute, and asked whether it is ok to go ahead with a blade centre solution. We have only 125-150 users.
Anyway, i think i'm going to go with the x3650 rack server and 1 external storage unit with FC.
The IBM DS3400 has 12 drives and the server has 6 drives. I'm going to buy 1 server for VMware ESX and set up physical-to-virtual clustering with VMware.
So how do i configure the RAID for SQL? And also, can i use this for VM disks?
