SQL Server Performance
  1. merrillaldrich New Member

    Just got my new HP ML370 G5 2 x 4-core 2.33 GHz machine. I'll post my progress testing it here.
  2. joechang New Member

    Quick question: was the quad-core 2.66 GHz available?

    Not saying you should have gotten it,
    but rather that HP is obligated to make it available,
    and I did not see it on their web site.

    What will you do with all those cores?
    (Note: if you are still on SQL 2000, beware of the parallel merge join.)
  3. merrillaldrich New Member

    No, we didn't have the option of 2.66 GHz (or 3); we bought the fastest procs we could. I saw your other post about this issue, so I was looking :)

    We won't be using all the cores in the short term (SQL Server Std 32-bit is what I am aiming for), but we will save a processor license by using only one socket, which I plan to repurpose.

    Otherwise this is just room for future growth, maybe SQL Server 2005 64-bit. It's almost a pity not to crank this whole box up to its full capacity right away. I would have gotten just one proc, but it's a lease, and so our options to reconfigure it during the lease term are limited.
  4. merrillaldrich New Member

    OK, machine is built and configured. This is fun!

    Here's the setup:

    2 x 2.33 GHz quad-core Xeon procs, 8 GB RAM

    14 drives total, split this way:
    4 pairs of 72 GB 15k rpm drives, RAID 1+0, as the Data volume, holding data files and tempdb
    1 pair of 72 GB 10k rpm drives, RAID 1, for OS and apps
    1 pair of 72 GB 10k rpm drives, RAID 1, for Log
    1 pair of 72 GB 10k rpm drives, RAID 1, for Backups

    2 controllers:
    1 P400 controlling the 8 Data drives
    1 E200 controlling the 6 other drives

    Windows Server 2003 Std. 64-bit
    SQL Server 2000 Std. 32-bit


  5. merrillaldrich New Member

    Anecdotal testing gives me 400 MB/sec sequential read on the data volume (8 drives).

    Here are the test results I'm getting with Joe's handy test scripts (posted elsewhere)

    "In Memory Test"



    spid Calls ms CPUms CallsPerSec RowsPerCall RowsPerSec ReadsPerSec Read MB/sec DWritesPerSec D Write MB/sec Avg-sec/IO LWritesPerSec L Avg-sec/IO
    ----------- ----------- ----------- ----------- ------------- ------------- ------------- ------------- ------------- ------------- -------------- ------------- ------------- -------------
    52 448434 30000 937 14947.8 10 149478 0 0 0 0 0 0 0
    52 169540 30000 937 5651.333 30 169540 0 0 0 0 0 0 0
    52 54845 30000 938 1828.167 100 182816.7 0 0 0 0 0 0 0
    52 5390 30000 938 179.6667 1000 179666.7 0 0 0 0 0 0 0

  6. merrillaldrich New Member

    "Random Read Test"



    spid Calls ms CPUms CallsPerSec RowsPerCall RowsPerSec ReadsPerSec Read MB/sec DWritesPerSec D Write MB/sec Avg-sec/IO LWritesPerSec L Avg-sec/IO
    ----------- ----------- ----------- ----------- ------------- ------------- ------------- ------------- ------------- ------------- -------------- ------------- ------------- -------------
    55 425 30046 1 14.14498 10 141.4498 250.3162 1.958975 0 0 3.978593 0 0
    55 1291 30000 14 43.03333 20 860.6667 1129.1 8.836198 0 0 0.8624864 0 0
    55 1141 30000 13 38.03333 30 1141 1208.2 9.459896 0 0 0.8078133 0 0
    55 835 30013 16 27.82128 50 1391.064 1396.228 10.93406 0 0 0.6931393 0 0
    55 512 30000 19 17.06667 100 1706.667 1687.6 13.21771 0 0 0.5707119 0 0
    55 196 30016 29 6.529851 300 1958.955 1894.523 14.83636 0 0 0.5009672 0 0
    55 62 30016 30 2.065565 1000 2065.565 1991.205 15.59377 0 0 0.4764088 0 0
    55 22 30750 30 0.7154471 2999.818 2146.211 2028.976 15.89024 0 0 0.4697793 0 0
    55 12 31406 54 0.3820926 9998.25 3820.257 3699.484 28.9664 0.0636821 0.0004975164 2.022481 0 0

  7. merrillaldrich New Member

    "Table Scan Test"


    spid Calls ms CPUms dpages/s MB Table MB/sec ReadsPerSec Disk MB/sec Avg-sec/IO
    ----------- ----------- ----------- ----------- ------------- ------------- ------------- ------------- ------------- -------------
    56 1 137330 429 30249.49 32454.39 236.3241 963.2491 239.4274 0.9356037
    56 1 120170 568 34569.04 32454.39 270.0706 1099.035 273.6235 0.8090237

  8. merrillaldrich New Member

    Firing on all 8 :)

    Here's some good news: this machine has 8 cores and 2 sockets; I had some conflicting information about whether SQL Server Std would use all 8 cores, or whether that edition would limit itself to 4. But it's firing on all 8. Nice!
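    One quick way to double-check this from the server itself (a sketch for SQL 2000; xp_msver reports the hardware the engine sees, and a zero affinity mask means all CPUs are eligible for scheduling):

    ```sql
    -- What does the engine see? xp_msver reports the processor count, among other properties.
    EXEC master..xp_msver 'ProcessorCount'

    -- An affinity mask of 0 means SQL Server may schedule work on all available CPUs.
    -- (affinity mask is an advanced option, so it must be made visible first.)
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'affinity mask'
    ```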
  9. joechang New Member

    The numbers look reasonable.
    For in-memory, a Xeon 5150 at 2.66 GHz did 220K/s,
    so I would have expected 190-195K/s.
    I think that was SQL 2005,
    so we could just be looking at the overhead of SQL 2000 32-bit on a 64-bit OS,
    but since you have 8 cores, let's not worry about this.

    The random read test ranged from 860-2146 (excluding the high row-count run).
    This is reasonable for 8 disks in RAID 10:
    100+/disk at low queue depth, 250+ at higher queue depth.

    The table scan of 273 MB/sec is low for 8 disks, but good for 4 disks.
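    As a rough sanity check, the per-disk rule of thumb above multiplies out like this (a sketch; the 100 and 250 reads/sec/disk figures are the low- and high-queue estimates quoted above):

    ```sql
    -- 8 data disks; in RAID 10 a random read can be serviced by either side of a mirror,
    -- so all 8 spindles contribute to random read throughput.
    SELECT 8 * 100 AS LowQueueReadsPerSec,   -- ballpark for the ~860/s low-queue result
           8 * 250 AS HighQueueReadsPerSec   -- ballpark for the ~2146/s high-queue result
    ```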

    In theory, RAID 10 should not impose any read penalty:
    a large block read can be issued to either of the 2 disks that hold the data.
    In practice,
    the controller is not smart enough to recognize a sequential read
    and issue half of one read to one disk and the second half to the other disk.

    I think you could have followed my recommendation for disk layout:
    2 disks - OS & log
    12 disks - 3 partitions, 1 for data, 1 for temp, 1 for backup
    This should give you 75% more disk performance,
    but oh well.
  10. merrillaldrich New Member

    Joe - just for my education, how would I set up 12 disks in the two drive cages internal to the ML370? Can one controller hook into 8 from one cage and 4 from the other, and run them all together?

    (And thanks for taking the time to look at these numbers -- your insights are always helpful)
  11. joechang New Member

    Hmm,
    I am kind of surprised that HP does not offer a drive mounting kit for the 2 open removable media bays.
    It's a nice place to stick the 2 drives for the OS,
    or a couple of big 3.5in 500-750 GB SATA drives for backup.

    Oh well.

    Let's see, you said 14 drives total,
    so 8 in one bay and 6 in the second?

    1.
    If the E200 connects to the 8-disk bay
    and the P400 connects to the 6-disk bay:

    Make LUN 0 on the E200 with 2 disks for OS + logs,
    LUN 1 on the E200 with the other 6 disks,
    LUN 2 on the P400 with 6 disks.

    On each of LUN 1 and 2, make 3 partitions: the first for data,
    the second for temp, the third for backup.
    Be sure to use diskpart or whatever it's called (is this default in W2K3 R2 now?).

    The main database will now have 2 data files, one on each of LUN 1 & 2,
    same for tempdb data,
    and back up to 2 files, not 1, one on each of the 3rd partitions of LUN 1 & 2.

    2.
    6 disks in bay 0 on the E200,
    8 disks in bay 1 on the P400:

    LUN 0 on 2 disks of the E200 for OS + logs,
    LUN 1 on 4 disks of the E200,
    LUN 2 on 4 disks of the P400,
    LUN 3 on 4 disks of the P400.

    LUN 1, 2 & 3 are now for data, temp, and backup,
    so 3 data files for the main db, temp data, and backup.

    Option 2 has 8 disks on the P400 instead of 8 on the E200.

    Better would be if you could shove 2 disks into the removable bays attached to the E200,
    then have the data split between the two 8-disk bays,
    one on each channel of the P400.
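    Option 1 above translates into T-SQL roughly as follows (a sketch; the database name and the E:, F:, G:, H: drive letters are hypothetical stand-ins for the data and backup partitions on LUN 1 and LUN 2):

    ```sql
    -- Hypothetical paths: E:\ and F:\ are the data partitions on LUN 1 and LUN 2,
    -- G:\ and H:\ the backup partitions; the log stays on the OS + log mirror.
    CREATE DATABASE MainDB
    ON PRIMARY
        (NAME = MainDB_data1, FILENAME = 'E:\Data\MainDB_data1.mdf', SIZE = 10GB),
        (NAME = MainDB_data2, FILENAME = 'F:\Data\MainDB_data2.ndf', SIZE = 10GB)
    LOG ON
        (NAME = MainDB_log, FILENAME = 'D:\Log\MainDB_log.ldf', SIZE = 2GB)

    -- Backing up to two files, one per backup partition, writes both spindle sets in parallel.
    BACKUP DATABASE MainDB
    TO DISK = 'G:\Backup\MainDB_1.bak',
       DISK = 'H:\Backup\MainDB_2.bak'
    ```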
  12. merrillaldrich New Member

    Ah, got it. Thanks for the tips!
  13. merrillaldrich New Member

    Joe, if you are still watching this thread, for a different server:

    I see on HP's web site these two options for 24 disks:

    MSA 70            1 @ $3,199.00 = $3,199.00
    24 of 25 drives  24 @ $369.00   = $8,856.00
    Total: $12,055

    2 x MSA 60        2 @ $2,999.00 = $5,998.00
    24 drives        24 @ $269.00   = $6,456.00
    Total: $12,454

    Would I be missing anything by saving the $400 and going for the single 24-disk enclosure?
  14. joechang New Member

    My strong preference is to configure 1 external storage unit per x4 SAS port, and also to start with 1 PCI-E SAS adapter per external unit, even though the adapter has 2 x4 SAS ports.
    So the starting point is to have 1 SAS adapter for each external storage unit
    until the PCI-E slots are filled (4-7 with the recommended systems).
    Then you add a second external storage unit on each PCI-E SAS adapter.

    Since typical storage units are 10, 12, or 15 disks,
    I really consider the MSA 70 a double unit.

    So I would rather have 2 P400s and 2 MSA 60s, one on each P400,
    instead of 1 MSA 70 connected to a single P400.

    It's too bad the MSA 70 has only 1 x4 SAS in-port and 1 out-port.
    The HP ProLiant group is normally very astute about how real systems should be configured, so it is surprising that they did not offer 2 x4 SAS in-ports.
  15. merrillaldrich New Member

    Ah, so the combination of a P800 controller + 24 SAS disks in the MSA 70 + its one port would cause a bottleneck because of the port itself, and hence not perform as well as the two MSA 60s?

    In that case, you've got an excellent point - it's surprising that HP would not provide two ports. Seems obvious. Maybe they didn't imagine this in a max-throughput scenario? Strange.

    They also say that these can cascade, which I guess would aggravate that issue:

    "The MSA70 supports the cascading of shelves in a 1+1 configuration to allow a maximum of 50 drives in a 4U configuration behind each port on the Smart Array P800 or the Smart Array E500 controller for a total of 100 drives in 4 enclosures. (this single controller port incorporates four lanes for a total max throughput of 12Gb/s for SAS)"

    Still, 12 Gb/s is a lot of throughput :)
  16. joechang New Member

    That's 12 Gbit/sec, which is just the signaling rate.
    Figure a max of 300 MBytes per SAS 3 Gbit/sec port,
    so it's 1.2 GB/s theoretical.

    Can a single controller drive that?
    There is no published report on this matter.

    Then, if your controller is in a x4 PCI-E slot, figure 700-800 MB/sec realizable.

    Just assume most press numbers are selected by people who do not look at the complete environment.

    Stick with 2 P400 + 2 MSA 60.
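    Working through the arithmetic above (assuming SAS 8b/10b encoding, i.e. 10 signaling bits per data byte):

    ```sql
    -- Each SAS lane signals at 3.0 Gbit/s; 8b/10b encoding costs 10 bits per data byte,
    -- so a lane carries at most 300 MB/s of payload, and a x4 port 1.2 GB/s theoretical.
    SELECT 3.0E9 / 10 / 1E6     AS MBPerSecPerLane,     -- 300 MB/s per lane
           4 * 3.0E9 / 10 / 1E9 AS GBPerSecPerX4Port    -- 1.2 GB/s per x4 port
    ```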
  17. brentonbrown New Member

    Joe, you said, "My strong preference is to configure 1 external storage unit per x4 SAS port..."

    I have a 2950 connected with one x4 SAS to an MD1000. My arrays on the MD1000 are an 8-disk RAID 1+0 (OLTP) and a 7-disk RAID 5 (Reporting). Should I run the MD1000 in split mode and utilize the second x4 SAS connector on the single PERC5e?

    Any response appreciated!

    Brent
  18. joechang New Member

    I noticed that button on the MD1000;
    I just never got myself to read the Dell manual on this.

    While I like Dell servers, especially the pricing,
    if there is one thing Dell stinks at, it's putting out a manual that gets to the point.

    I will say this:
    I ran a few tests where the logs were on 2 disks,
    data on the other 13,
    1 PERC5 using 1 x4 SAS connector to the MD1000.

    A huge dump on the data disks appeared to severely depress the throughput on the log.

    I do not think the SAS x4 pipe was overloaded;
    I do think it was the controller that was overloaded.

    Just guessing, 8 + 7 disks will not overload the x4 SAS channel,
    but you could look into it:
    watch for surges to the reporting disks
    and whether they depress IO to OLTP.

    Then read the manual to see if it can be configured for split channel,
    and consider whether a second controller is appropriate if the above turns out to be relevant.
    It's also possible I was heavily distracted when doing the above tests (not to be discussed here).

    I am also anxiously awaiting the second-generation PCI-E SAS adapter based on the
    Intel IOP 8134x.

    Be sure to bug Dell when the PERC6(?) comes out.
