SQL Server Performance Forum – Threads Archive
HP DL585 W/ MSA1500 and 2 MSA30

Hi everyone, the server is finally bolted in and the storage is ready to be configured. As usual, some advice is more than welcome.
Yes, 146GB is not optimal but at least they’re not 300s… Hardware list:
DL585 32GB RAM (16X2GB sticks equally allocated per CPU) PC2700
Quad Dual core 2.2 GHz CPU
2 X 146GB U320 15K in the main chassis for the OS in RAID 1
28 X 146GB U320 15K in the two MSA 30
512 Cache for MSA 1500 (256 Read and 256 Write for now)
Split bus on both MSA30 U320
2 X HBA FC 2Gb

Software:
Windows 2003 R2 X64 Enterprise
SQL 2005 64 Bit Enterprise

Memory consideration:
After a long debate I decided that the happy medium was going to be 32GB running at 333MHz with potential to go up to 64GB maintaining the 333MHz capability. Beyond that the clock speed drops to 266MHz (128GB)
Here you can read about it: http://h18004.www1.hp.com/products/quickspecs/11902_div/11902_div.html

Physical Connections:
Each HBA is connected to the MSA1500 via FC (5 meters)
4 SCSI cables connect each of the four I/O modules of the MSA1500 to the two split buses behind the MSA30s.

Database Size:
500GB growing to 700GB over the next year.
Backup required on disk.

Potential Configuration 1:
MSA30 1 > 6 drives on CH A + 6 drives on CH B in RAID 10 plus 2 Hot Global spare Database DATA (876GB)
MSA30 2 > 4 drives on CH A + 4 drives on CH B in RAID 10 Database Tlog, tempdb (584GB)
MSA30 2 > 3 drives on CH A + 3 drives on CH B in RAID 5 Database Backup (730GB)
Note that an additional MSA1000 is available and will be added to handle the backup thereby freeing up space for Tlog, tempdb as well as being capable of backing up the full amount of data (876GB)
The above configuration achieves full redundancy except for the RAID 5 set: if one bus of the MSA30 were to fail, three drives would be lost at once, with loss of the array.
Theoretical performance should be equal for read and write (RAID 10 no parity needed)
but limited to 320MB/sec on the single output of one MSA30 bus (the other is just a mirror and I believe it would just write and read the exact same data as the other side).

Potential Configuration 2:
Database DATA (1,606GB)
MSA30 1 > 3 drives on CH A + 3 drives on CH B in RAID 5
MSA30 2 > 3 drives on CH A + 3 drives on CH B in RAID 5

Database Tlog, tempdb (438GB)
MSA30 1 > 1 drive on CH A + 1 drive on CH B in RAID 5
MSA30 2 > 1 drive on CH A + 1 drive on CH B in RAID 5

Database Backup (1,022GB)
MSA30 1 > 2 drives on CH A + 2 drives on CH B in RAID 5
MSA30 2 > 2 drives on CH A + 2 drives on CH B in RAID 5
2 hot global spares, one on each MSA30.

Note that an additional MSA1000 is available and will be added to handle the backup, thereby freeing up space for Tlog and tempdb, as well as being capable of backing up the full amount of data (1,606GB).

Considerations:
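For reference, the usable-capacity figures quoted in both configurations can be reproduced with a quick sketch (assuming 146GB per drive; note the 1,606GB figure implies one 12-drive RAID 5 spanning both MSA30s rather than two separate 6-drive sets):

```python
# Usable-capacity check for the two proposed layouts, at 146 GB/drive.

DRIVE_GB = 146

def raid10_usable(drives):
    return drives // 2 * DRIVE_GB   # half the drives hold mirror copies

def raid5_usable(drives):
    return (drives - 1) * DRIVE_GB  # one drive's worth of parity

# Configuration 1
print(raid10_usable(12))  # data: 876
print(raid10_usable(8))   # tlog/tempdb: 584
print(raid5_usable(6))    # backup: 730

# Configuration 2
print(raid5_usable(12))   # data: 1606
print(raid5_usable(4))    # tlog/tempdb: 438
print(raid5_usable(8))    # backup: 1022
```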
The above configuration (2) does not achieve full redundancy: if one bus of an MSA30 were to fail, both the DATA and the Backup would be lost, forcing a restore from tape.
Theoretical performance should be slower for writes compared to RAID 10, but by pooling the four I/O buses on the two MSAs a theoretical bandwidth of 1,280Mb could be achieved, provided there were enough spindles to generate such output.
While 1,280Mb is only 160MB per second, and far from the recommended 400MB/sec, I feel this is the best we can do at this point.
I also understand that on SQL 2005 there is no more limit to the amount of throughput attainable and such throughput is only limited by the storage subsystem. Now please calmly explain where I’m wrong and what I should do instead.
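One caveat on the bandwidth arithmetic above: U320 is 320 megabytes/sec per bus, and the post mixes Mb and MB. A small sketch showing both readings of the pooled figure:

```python
# U320 SCSI moves 320 MB/s (megabytes) per bus, so four buses pool
# to 1,280 MB/s, not 1,280 Mb. Only if the 1,280 figure is read as
# megabits does dividing by 8 give the 160 MB/s quoted in the post.
buses = 4
per_bus_MB = 320
pooled_MB = buses * per_bus_MB
print(pooled_MB)      # 1280 (MB/s, taking the per-bus figure as MB/s)
print(pooled_MB / 8)  # 160.0 (only if 1,280 were megabits)
```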
i have some complaints about the MSA1500, which has only a single 2Gbit/sec FC port(?)
and the backend using U160 SCSI – not just a matter of bandwidth vs U320, but U320 has better protocols. I think the MSA500G2 would have been a better choice,
or consider ditching the MSA1500 and connecting the SA 6402 RAID controller directly to the MSA30’s, which is a minimal equipment change.

Anyway, assuming we are stuck with the current equipment:
Data & Logs definitely must have separate physical disks,
in fact each very active DB must have its logs on its own set of physical disks,
if you only have 1 heavy-activity db, then 1 set of physical disks for the logs is ok.

Logs
2 disks in RAID 1 (on DA, not SAN) can handle 50MB/sec+ in sequential writes, which is more than most apps will generate. however, when you back up the log, throughput will drop to approx 150 writes/sec & 150 reads/sec.
so determine the typical write rate to the log for your app and use sufficient disks so that the T-log backup is reasonably short and non-disruptive; 4-6 disks may be adequate.
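As a back-of-envelope for that backup window (the aggregate read rates here are illustrative assumptions, not measurements):

```python
# Backup-window sketch: minutes to read a T-log of log_gb gigabytes
# at an assumed aggregate rate of aggregate_MB megabytes/sec.
def tlog_backup_minutes(log_gb, aggregate_MB):
    return log_gb * 1000 / aggregate_MB / 60

# A 40 GB log at an assumed 100 MB/s (2 disks) vs 300 MB/s (6 disks):
print(round(tlog_backup_minutes(40, 100), 1))  # 6.7 minutes
print(round(tlog_backup_minutes(40, 300), 1))  # 2.2 minutes
```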
Also consider two sets of 2+2 for the logs, and have 2 log files.

Data, Temp & Backup
while it is definitely a good idea to have Data, Temp and backup on different logical drives,
there is no strong reason for these to be on separate physical disks.

consider 2 RAID groups, perhaps 5+5 disks each, one on each MSA
(if you were planning on a hot spare in each channel, my preference is to have a spare disk on hand in a nearby storage bin; if a disk fails, have a tech replace it.
i really don’t see the point of a hot spare in the unit itself wasting a perfectly good slot.
disregard if you don’t trust your tech.)

on each RAID group, carve out 3 or more LUNs, potentially at different RAID levels if you desire,
or make one big LUN and use the OS disk administrator to make 3 or more logical disks:
1 logical disk for data (this should be the first LUN, which should reside on the fastest portion of the disk),
1 logical disk for tempdb data (temp log could reside here or with the main app log)
1 logical disk for backup – use compression software to reduce backup load. (I do work for Quest LiteSpeed on performance issues; I am claiming it has the best performance characteristics of the various products. Of course this could be seen as a biased opinion – I worked on it, so it has to be better than anything else.)

if your db has one or two very large tables, potentially you could have 2 or more filegroups:
put each of the very large tables & its indexes in its own file group,
or even put the large table in one file group, its nonclustered indexes in a second
and all the other tables and indexes in the primary filegroup. the purpose of this is not performance, but rather to eliminate the need to defrag the very large table.
each filegroup of course gets its own logical disk. the reason i suggested 2 RAID groups for this is that each filegroup then has 2 files, one on each RAID group.
should you need to expand capacity, get another MSA, create a RAID group, and add 1 file to each filegroup;
this is more granular.
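The grow-by-adding-a-file idea can be sketched as data; the names are illustrative, not SQL Server syntax:

```python
# Sketch of the file-per-RAID-group layout: every filegroup gets one
# file on each RAID group, so adding a new MSA/RAID group just adds
# one more file per filegroup. Filegroup and group names are made up.
def layout(filegroups, raid_groups):
    return {fg: [f"{fg}_on_{rg}.ndf" for rg in raid_groups]
            for fg in filegroups}

fgs = ["PRIMARY", "BigTable", "BigTableIdx"]
print(layout(fgs, ["RG1", "RG2"]))         # 2 files per filegroup
print(layout(fgs, ["RG1", "RG2", "RG3"]))  # after adding a new MSA
```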
one more item:
i don’t view anything as right or wrong (except for the claims and recommendations of many SAN vendors). its really a matter of strategy: separating random & sequential loads, isolating each sequential load,
how you want to administer your storage, planning for growth, etc.

considering that you have very powerful CPU capability in your system, and are on 64-bit & SQL 2005, if you are going to be doing large queries, reports, DSS etc, I would push for more sequential bandwidth to be added when budget allows.
Thank you very much for your thorough insight on this configuration.
Of course I’m not happy to find out that the SCSI I/O of the MSA1500 are U160 and while I am kind of stuck, I’m considering a different solution.
For the sake of conversation, however, let’s assume this is what we have to work with.

So you’re ok with data and backup on the same array. I assume that you have no problem with losing the array and having to go back to tape; the online backup is only good if the array is still alive and you only want to recover or restore some data.
I’ve always felt safer with the two on different arrays, but I see your point, and considering that I could eventually attach the spare MSA1000, this solution would make sense as it would be a very easy transition. I only have 1 very active DB, so 1 set of disks for the logs is ok.

T-Log
My log file hovers around 10GB max, and I’ve seen it hit 40GB under heavy re-indexing (I used to leave the DB in FULL, but now I switch to bulk-logged before re-indexing). Only once did it get out of hand, with an accidental reindex of the largest table in the DB on a clustered index, which won’t (should not) happen anymore.
Having said that, 4 drives (one per channel) in RAID 5 should work fine.

TempDb
I prefer your second option, where Temp shares the T-Log space, as tempdb is always small and hardly utilized.

As a recap:
Assuming hot spare on main chassis
Database DATA (1,606GB each array 3,212GB total)
MSA30 1 > 6 drives on CH A + 6 drives on CH B in RAID 5
DATA Filegroups, 1 out of 2 files for each FG
MSA30 2 > 6 drives on CH A + 6 drives on CH B in RAID 5
DATA Filegroups, 1 out of 2 files for each FG

Database Tlog, tempdb (438GB)
MSA30 1 > 1 drive on CH A + 1 drive on CH B in RAID 5
MSA30 2 > 1 drive on CH A + 1 drive on CH B in RAID 5
1 hot global spare in the DL585.

If the hot spare has to go in the MSA30, do you suggest having 4 hot spares? (That seems like a waste…)

Thank you very much once again; yours are all good points.
simultaneous multi-disk failures are rare, but check with HP on which disks are mirrored with which in RAID 10, which has more ability to tolerate multi-disk failures.
once, there was a variation of RAID 10 in which half of each disk was mirrored on the disk before and after it, so it could tolerate 2 consecutive disks failing, but i don’t know if anyone implements this today.

consider getting extra hard drives mounted in the DL585, so it can also get one copy of the backup.
i am also assuming you will make daily backups, copy to a QA/reporting system on the same LAN, or have a remote location, so restoring from tape will be rare; but still do this for archival purposes.

i think if you have 2 disks on each MSA for logs, that would be RAID 1;
you could go to 3 disks in RAID 5 plus 1 for the hot spare in each MSA
but still, i prefer to not waste slots with hot spares; just keep extra disks for manual replacement.

try to get a Smart Array 6402 and 1 MSA30 for the QA/reporting system; you will see a difference in the table scan performance (500MB/sec for the SA6402 + 1 MSA, vs 320MB/sec for the 2 MSA1500 + 2 MSA30).
additional note on the log file size:
every time SQL Server starts (this includes a cluster failover), the log file must be initialized to zero (presumably after committed tx are rolled forward and uncommitted ones are rolled back) (also check the validity of this).
if the log file is very large, for example 40GB, suppose 1 pair of physical disks in RAID 1, direct attached, will have an average init rate of 50MB/sec;
then that's 800 sec to init the 40GB log. so keep the log as small as reasonably possible, but do make sure it has plenty of room so that it does not have to grow during the day, and switch to bulk-logged for reindex as described above.
do periodic backups of the log so that it can rollover.
if the log must be large, then have a sufficient number of disks for fast init.
note SAN drives will have lower init rates per physical disk
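The init-time arithmetic above, as a one-liner:

```python
# Zero-init time for a log file: log_gb gigabytes at rate_mb_s MB/sec
# (the 50 MB/s figure is the post's estimate for a direct-attached
# 2-disk RAID 1 pair).
def init_seconds(log_gb, rate_mb_s=50):
    return log_gb * 1000 / rate_mb_s

print(init_seconds(40))       # 800.0 seconds, i.e. over 13 minutes
print(init_seconds(40, 200))  # 200.0 seconds with 4x the throughput
```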
Joe, When you suggested 5+5 on each MSA I thought you meant in RAID 5. It is now clear that you intended RAID 10. However I think that by pooling the resources of all 4 buses we would overcome the RAID 5 slow write effect and still double the speed on reads.
Just to be clear, the way I see it is that in a 5+5 MSA 1 and 5+5 MSA 2 RAID 10 with a FG with two files, I’m only using two I/O cards, as the other one is just a mirror and does not do any active work.
In a 6+6 MSA 1 and 6+6 MSA 2 RAID 5 with a FG with two files, I’m going to be able to use all four I/O cards, therefore theoretically doubling the throughput.
With regards to redundancy, as I understand it, in the first case (RAID 10) one could lose all of the drives (6) from the same bank without any data loss, but once a drive is lost on one bank, no other drives can be lost on the other bank.
In RAID 5, after one drive is lost, no more failures are allowed until the array is properly rebuilt.

With regards to getting more drives in the DL585, I sure can, but I only have two more slots available, as the first two are now used for the OS in RAID 1, and I could not fit the entire backup database unless I use some compression program like you suggested. I’m going to check LiteSpeed out… and WOW, I just did… I hope I can afford it!
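The second-failure comparison above can be made concrete with a small enumeration (this assumes adjacent disks are paired as mirrors, which, as noted earlier, is controller-dependent and worth checking with HP):

```python
# Which second-disk failures a 6+6 RAID 10 (six mirrored pairs)
# survives, vs a 12-disk RAID 5.
def raid10_survivable(pairs, first_failed):
    # after disk `first_failed` dies, only its mirror partner is fatal
    disks = range(pairs * 2)
    partner = first_failed ^ 1  # assumed pairing: (0,1), (2,3), ...
    return [d for d in disks if d not in (first_failed, partner)]

survivors = raid10_survivable(6, first_failed=0)
print(len(survivors))  # 10 of the 11 remaining disks can still fail

# RAID 5: any second failure before the rebuild completes loses the array.
```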
My current backup strategy is full backup weekly to disk; daily differentials and T-Log overwrite after the differential with appends every hour.
All of those files are then backed up to tape.
BTW, what is a "QA/Reporting system"? Quality Analysis? Like keeping the OLTP separate from the analysis? Sorry, but no… we feed new data hourly and query against the same DB.

With regards to the log, I was thinking about 1+1+1+1 in RAID 5 with one drive on each of the buses of both MSAs. This way all four I/O cards contribute to a fast throughput, and with 438GB of space I have plenty for T-Log and Tempdb plus some.

About the init time, I understand, but thankfully we don’t reboot very often.

PS. I’m working with the vendor to see if I can return the MSA1500 or if they have plans to release a U320 I/O card any time soon.
i left the RAID level up to you,
The MSA should let you mirror 5 drives in one channel with 5 from the second.
on raid controllers, i am pretty sure that the order in which you select the drives for the array affects which drives mirror which,
so selecting 5 drives on one bus, then 5 on the second, is different than selecting 1 drive on bus 1, 1 on bus 2, then a 2nd from bus 1, etc.
check the hp web site for all the recent articles on RAID with MSA’s,
they have many options. the swap options are:
1. replace just the MSA1500 with the Smart Array 6402, both can connect to the MSA30
2. replace each MSA1500 & MSA30 with the MSA500G2

are you dealing with HP directly? are you in the NYC area? if so, ask them to get advice from North Hinkle; he is an infrastructure architect at HP and really knows his stuff.
i seriously doubt he would have recommended the MSA1500.

if the dl585 currently has 2 drives in RAID 1, add a second set in RAID 1; each set should have 100GB+ free. move the page file or something to the second set to balance out if you want. depending on the compressibility of your data, i am inclined to think your 500GB DB should compress down to at most 200GB, which would fit, so just do a backup to 2 files. also, be ready to try a couple of different configurations.
i am putting together a set of scripts to stress the disk system
what is your time frame for going live? whatever you do, set the configuration before putting your SQL database on it, and use IOMeter to test the configuration. if you are not happy, try alternate configs,
then let it run at high load for a few days,
that should take care of infant-mortality problems in the disks.

another good test is to repeatedly reboot the system, which should shake out any flaky configuration issues.
All good, Joe. Let me know if the scripts you’re preparing will be available for me to try out.
Any IOMeter program you recommend? It can be from Quest… don’t worry… Andrea
the downloads will lead you to Source Forge
it is somewhat complicated to learn, but worth it
the MS sqliostress and others are old and need to be updated; watch for new updates.

if you have a Quest rep, complain that there is not a simple (push 1 button) tool to find out what your disks can handle.
do you do Exchange admin? it has a disk tester; why can’t there be one for SQL?
Quest does have a complicated tool, Benchmark Factory. be sure to ask Quest for a discount on LiteSpeed,
then ask what the extra discount is because i recommended it,
i want to see if the rep comes back to complain to me
Eheh…ok will do!
Hi Joe, I have read the other thread with a similar topic above this one, and I too invite markmin to read this thread and contact me at any time to share findings.

I have gotten new word from HP saying that the backplane on the 1500 is definitely U320; I wonder if the backplane is U320 but the I/O cards are U160.
This is the only reference I could find:
which indeed says U320, but it is not clear what it refers to.
The I/O cards are part number AA988A.

Whether that is the case or not, performance testing is so far disappointing, and I’ll be posting the results in a day or so.
The news from HP now is that I should DEFINITELY have 4 U320 cards, and they should be marked as such right on top of the connectors. Since I do not have such a U320 logo, they’re now investigating what exactly was shipped to me and why Compaq Insight Manager reports the I/O modules as being Ultra3 (which would be 160).

To add salt to the wound, I found out that the MultiPath software has not been released for X64, therefore I should purchase Secure Path or I have to eliminate the redundant HBA. HP is now saying that they do have MultiPath software available for X64, but so far I have gotten no links to download it.

Meanwhile I have solved the problem with the "phantom LUN" by disconnecting one of the fiber optic cables from the second HBA.

Lastly, it has been confirmed that the maximum throughput of the MSA1500 CS is 200MBps, so the 175MBps I have achieved in the test environment is pretty good. However I’m exploring other solutions like you suggested, where I would be able to achieve 400MBps or more. More to come on this screen…
It is possible to get approx 200MB/sec (decimal, 1MB = 1,000,000B)
or 190MB/sec (1MB = 1024*1024) over 2Gbit/sec FC.
but i think you will only get this from server to SAN cache,
when you are transferring from server to SAN to disk, this drops to the 165-170MB/sec that you are seeing.

please ask your HP rep if a Windows cluster is supported with the Smart Array 6402 connected directly to the MSA30;
it used to be the case that you could do clustering over SCSI, but i do not see any recent HP documentation on this.
it would be really annoying if they removed clustering support with direct attach storage
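The decimal-vs-binary conversion at the top of this post checks out:

```python
# ~200 MB/s (decimal, 1 MB = 1,000,000 B) over 2 Gbit/sec FC,
# expressed in binary megabytes (1 MB = 1024*1024 B).
decimal_rate = 200_000_000           # bytes/sec
print(decimal_rate / (1024 * 1024))  # ~190.7, i.e. the ~190 MB/sec figure
```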
I’m now looking into two 6402 dual-CH, or four 6402s (which support 512 cache instead of 256). But I wonder how I would need to configure them to get more throughput.
I know that for SQL server you recommend reaching 400MBps or more but it does not seem easy to do that…
each 6402 should be able to drive 250MB/sec on each of the 2 U320 channels, assuming 4+ disks per channel.
this is with direct IO, not cached IO;
with cached IO, each disk should get about 40MB/sec, so it will take 6 disks per channel to approach 250MB/sec.
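The per-channel arithmetic, spelled out (the ~40MB/sec per-disk figure is taken from the post above):

```python
# Aggregate throughput per U320 channel at ~40 MB/s per disk
# (cached IO), versus the ~250 MB/s channel ceiling.
per_disk = 40
for disks in (4, 5, 6):
    print(disks, disks * per_disk)  # 6 disks -> 240 MB/s, near the cap
```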
Joe, does it mean that if I have four 6402s, or two 6402 DCs, and I create 4 arrays of 6 drives each, I would have 240MBps on each array? And therefore, if I split my data into two files, one on array 1 and one on array 2, I could get upwards of 500MBps? (Theoretically.)
my thoughts are to create 1 array out of 10-12 disks on each controller, (5-6 from 1 channel and 5-6 from the other)
try both RAID 5 and RAID 10.

The only recent tests I have are for RAID 0 (it was a performance test only), demonstrating that 250MB/sec per channel was possible. In theory RAID 5 & 10 should have no read penalty, but this is not always the case.

if you can return the MSA1500’s, try to get a total of 4 6402’s and 4 MSA30’s;
otherwise 2 6402’s and 2 MSA30’s are marginally adequate for now.
Joe, I may be able to return the MSA1500, as the vendor suggested a configuration which is not supported by HP. However, if that is the case, I won’t be able to purchase two more MSAs, so with the two split-bus units I have, I would get two 6402 dual-channel cards, use one card per MSA, and split the left and right side.
I’ll have to "waste" four slots if I go RAID 10 for the online spares or risk it by having cold spares handy.
Either way I would have 6+6 on each MSA minimum which is about 850GB each LUN.
One I use for data and the other for log and tempdb.
Of course if I’m happy with RAID 5 I may just get a lot more space and for the sake of it I may even try ADG which writes double parity and can sustain a loss of two drives but I fear that write performance may be abysmal.
Thanks again for the advice; I’ll keep you posted.

PS. I got the quote for LiteSpeed… and asked for a discount and for a 1-click speed check just like you said… I’ll let you know the answer.
the idea is to make 2 array groups out of 10-12 disks each for data, temp and backup,
and 1 array group out of 4 disks for logs. create 3 logical drives on each of the big arrays: the 1st part for data, the second for temp, and the third for backup.

for testing purposes only, make one 12-disk array RAID 5 and the second RAID 10, then compare the performance to decide whether you want 5 or 10.
Joe, I’m totally on board with the plan but if I go RAID 10 I cannot make an array with more than 7+7 drives.
Remember I only have 28 drives total, and with two 6402s I cannot "merge" the two MSA30s, as they’ll be totally separate.
Let me give it some more thought and get back to you after some more testing.
Joe,

This is what I’m leaning towards:

First, return the MSA1500 and get two 6402 dual-channel with the cache upgraded to 512. I assume that the 512 is split into 128 read and write per channel. While this is less cache than the MSA1500, I will have two cards, so really a wash.

I’ll get three 300GB drives. Two of them will go in the server attached to the 5i controller for backup, and one will be an online spare. This is because after testing LiteSpeed I was able to compress a 120GB db down to 20GB. Yes, it took 13 hours, but the test server is a dual Xeon 700 with 512 RAM and a single disk. So I’m hopeful that with faster CPUs the compression will take much less time, and the writing to the drive will be better than 2.5MBps; at 50MBps it should take 39 minutes.

Now for RAID 5 vs. 10. No question that RAID 10 is faster in write and perhaps marginally also in read. My tests show RAID 10 to be 15 to 25MBps faster than RAID 5. So while I don’t want to drop any ounce of performance, I think it would be wise to use RAID 5, with the exception perhaps of write-intensive portions of the DB like the log.

I’ll post three possible scenarios: RAID 5, RAID 10 and hybrid.
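The ~39-minute estimate is easy to reproduce, assuming the ~50MBps applies to reading the full 120GB database (the compressed output being small enough not to be the bottleneck):

```python
# Backup-time estimate: reading a 120 GB database at ~50 MB/s,
# with the ~6:1 compression ratio seen in the LiteSpeed test.
db_mb = 120 * 1000
print(db_mb / 50 / 60)  # 40.0 minutes, matching the ~39 quoted
print(120 / 20)         # 6.0, the compression ratio (120GB -> 20GB)
```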
I just want to be sure that you are clear on the use of multiple files. I am assuming there will be 2 arrays on each RAID controller:
the first will have 10-12 disks in RAID 10 or 5
the second will be RAID 1 with the 2 remaining drives on that controller. the OS will then see 4 physical drives (not counting the internal drives):
2 of the big ones (RAID 5 or 10 arrays) to split into 2 partitions, one for data, 2nd for temp.
2 of the small ones for the logs.

in this case, your database will have 1 filegroup with 2 data files, and possibly 2 log files, one on each of the RAID controllers.

another option is as follows:
on each controller, make 1 RAID 10 array out of 50% of the space on 10-12 drives, then make a second array on the same set of drives in RAID 5 (you can split it in any percentage).
is this what you meant by hybrid? in this case, your database will have 2 filegroups, each file group will have 2 data files, one on each RAID controller. put the write intensive tables and indexes on the RAID 10 partitions, and the more static tables/indexes on the RAID 5 array
Thanks Joe, yes I’m clear on how to use multiple files. What I did not get until now is that you were considering RAID 1 for the log files. What I had in mind was 2+2 drives on one MSA in either RAID 5 or 10; of course I could not really have two files in either of those configurations. With regards to "hybrid", I meant to create the data drives in RAID 5 (4+5 drives) and the log in RAID 10 (2+2 drives), but what you’re saying is something totally new to me…

QUOTE: another option is as follows:
on each controller, make 1 RAID 10 array out of 50% of the space on 10-12 drives, then make a second array on the same set of drives in RAID 5 (you can split it in any percentage). END QUOTE

Are you sure that this can be done???
Joe, I just answered my own question… and yes, of course I can make a large array and then subdivide the space into two or more logical disks (appearing physical to the OS).
Joe, Now that I understand better the possible segregation of disk space, to finally piece it all together I need to pose another question. If you look at the first post, you mentioned the possibility of splitting the database data between large table(s), indexes and the rest of the data, and you also added that each filegroup should have its own drive.

In my database I really have three very large tables, so the way you suggested I would need to have 7 filegroups:
1) Large table 1
In my database I really have three very large tables so the way you suggested I would need to have 7 File groups. 1) Large table 1
2) Large table 2
3) Large table 3
4) Non Clustered indexes for large table 1
5) Non Clustered indexes for large table 2
6) Non Clustered indexes for large table 3
7) Rest of tables and indexes

or, like in your other suggestion, I could do:
1) Large table 1 and its indexes
2) Large table 2 and its indexes
3) Large table 3 and its indexes
4) Rest of tables and indexes

which would be simpler to manage.

I think I’m leaning towards this final configuration:

OS: RAID 1, 2 x 146GB 15K drives on the 5i controller
Backup: RAID 1, 2 x 300GB drives on the 5i controller, using Quest LiteSpeed to compress it
Array 1, LUN 0: 5+5 RAID 10, 146GB 15K drives, 730GB, divided into 3 logical drives at the OS level, one FG each
Array 2, LUN 1: 5+5 RAID 10, 146GB 15K drives, 730GB, divided into 3 logical drives at the OS level, one FG each
Array 3, LUN 2: 1+1 RAID 1, logs, 146GB 15K drives, 146GB, 1 logical drive
Array 4, LUN 3: 1+1 RAID 1, logs, 146GB 15K drives, 146GB, 1 logical drive

2 files for each filegroup, 1 on each MSA
2 files for logs, 1 on each MSA

Other considerations:
Two 6402 dual channel connected to two dual-bus MSA30
Two online spares on each MSA

I have considered three other options, but for this environment I think this achieves the perfect balance.
i don’t think you should go overboard with file groups,
one reason for a table to get its own filegroup, with indexes on another, is if you know that fragmentation is a problem and that a defrag takes too long.
but that is rarely the case
the other reason is to have different RAID levels,
but still, each data filegroup should be spread over all data disks, not one set of disks for each filegroup. only very active logs must have separate disks.

so, for example, if you needed 3 filegroups, and your disks are on 2 Smart Array controllers,
then each filegroup is split into 2 files, one on each controller.
also, if you want RAID 10 for one and RAID 5 for another,
use all data disks, just don’t use the entire space for each RAID level,
Joe, looking at all the members who post here, are you still asking why I said "wait for Joe"?

Luis Martin
Moderator
SQL-Server-Performance.com

Although nature commences with reason and ends in experience, it is necessary for us to do the opposite, that is, to commence with experience and from this to proceed to investigate the reason.
Leonardo Da Vinci

Never expect recognition from your children; that will come after your death.

All postings are provided "AS IS" with no warranties for accuracy.
i’d like to know what other people are thinking and why (which is always important).

very few people seem to follow the strategy of spreading each filegroup across all data disks;
its an adaptation of one of the strategies used in TPC-C & H benchmarks,
the better groups that do their own benchmarks (instead of getting one handed to them) have excellent analysis to support a particular configuration, but don’t bother to explain it, especially with adjustments for normal production environments.

note that the HP TPC-H benchmark on the DL585 used 8 P600 RAID controllers; i recommended 4 as a base in my article, but i did not explain some of the file placement strategies mentioned in this thread.
i suspect that it is not entirely clear, so i will try to add some pictures to my article.

I am also trying to convince some people at HP to do a technical configuration guide for SQL Server,
so anyone with HP account reps should bug them for this.