New Server for SQL 2005/AS/RS | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

New Server for SQL 2005/AS/RS

Hello,<br /><br />I am looking at purchasing a new server (SQL 2005 Enterprise x64) in the next two weeks. I have been busy reading all the forums and the many excellent posts. I am hoping by posting my plans here I might get some feedback/suggestions where my planning may not make the most sense.<br /><br /><br /><b>History:</b><hr noshade size="1"><br /> <ul><br /><li>Right now our production system is an in-house developed OLTP/reporting system with an Access ADP front end. It is roughly 25 GB and runs on a dual Xeon 2.8 GHz with just two RAID sets built on a total of 6 physical drives, with the first set being RAID 1 for the system disk and the balance RAID 5 for virtually everything else. <br /></li><br /><li>The application was developed over 5 years; the server above was fine for the first 3, and it's been slower than desired for the last 2 years.<br /></li><br /><li>Many of the activities this application demands are heavy reporting and heavy order calculation based on intensive table scans/joins/etc. <br /></li><br /></ul><br /><br /><br /><b>Today:</b><hr noshade size="1"><br /><ul><br /><li>A year ago we purchased a blade server as a development server, and it's running Win 2003 x64 with SQL Server 2005. <br /></li><br /><li>We developed a replacement application (still a work in progress) based on SQL Server 2005 Analysis/Reporting/Database Services as well as ProClarity (now Microsoft Performance Point Server). The front end is a .NET 2.0 application using an LLBLGen ORM business/data layer. 
<br /></li><br /><li>The new application was built to leverage the capabilities of the new platforms, as well as the huge advantage of being able to profile and review the existing database and redesign the schema accordingly.<br /></li><br /><li>I am starting to go into production with pieces of the new system and need to get off the development server and onto our production server.<br /></li><br /></ul><br /><br /><b>Important Points</b><hr noshade size="1"><br /><ul><br /><li>The new server will be SQL Server 2005 Enterprise Edition on Windows Server 2003 R2 Enterprise. Because it will be Enterprise Edition, I will now be able to do parallel index scans and use real partitioned tables, as well as partitioned cubes (to reduce the amount of data that has to be updated when new data comes in), so I am looking at leveraging multi-core technology.<br /></li><br /><li>I am looking to purchase an eight-core server and lots of disks.<br /></li><br /><li>Initially the entire SQL Server 2005 suite will be on the new server [SQL/Analysis/Reporting, perhaps Notification Services too]. ProClarity will be on another server. I completely realize that a year from now I will likely have more than one production server to host all these various services, but as we are still in development and training users on the new BI tools, we are not sure how many reports/application requirements will go on which service. It seems a high percentage of our canned reports on the old application are not even necessary with ProClarity and well-trained users (my goal). Therefore, I don't want to over-purchase on additional servers yet; I would rather have one fast server to host the new services to start and then scale out as the performance issues are identified.<br /></li><br /><li>There are numerous tables (10ish) that currently have 50 million+ rows and add roughly 15 million per year at our current rate. 
We are a merchandising/wholesale company and they are all transactional by date (sales, returns, shipments, tracking, servicing), and mostly only reported on for the current and previous year, thus are good candidates for partitioning (something we don't have in our old app).<br /></li><br /><li>The new design of the application keeps much more intermediate history due to new accounting requirements. Much of this history is not necessary for the OLTP application or normal reporting; it is only needed when researching problems with imported 3rd-party data where we have to do extensive matching. Thus, in addition to the main database, there are roughly 5 import history databases that may be 50-100GB each, do a ton of work when 3rd-party data comes in (a few times a day), but then sit idle the rest of the day. One of them does get a continuous stream of very small transactions (return scanning in the warehouse). These used to all be handled by tables in the main application database, but are now in separate databases to make them easier to manage and allocate resources for (including moving to other servers or VMs on our ESX VMware servers).<br /></li><br /><li>I also would like to do online backup to disk and then off to tape from the online disk backup (LiteSpeed has caught my eye for this to reduce my disk requirements for online backup).<br /></li><br /></ul><br /><br /><b>Current Thinking for new Server:</b><br /><hr noshade size="1"><br /><ul><br /><li>I want a lot of RAM: 32GB to start, with the option of going up to 128GB when 4GB DIMM prices come down in the future.<br /></li><br /><li>I am currently leaning towards a quad-socket, dual-core AMD solution, with the HP DL585 G2 and IBM x3755 as the candidate systems. 
I would consider a dual-socket, quad-core solution, but they all appear to actually cost more at this level of RAM, as they have half the DIMM slot count and often don't have as many PCI-Express slots to provide the I/O levels I may need to hit.<br /></li><br /><li>Between the IBM and HP options, the advantages for HP are 8 SAS SFF drives and more front-accessible components, but memory speeds drop to 533MHz from 667MHz if you use more than 4 DIMM sockets per memory board. The IBM has only 4 3.5” SAS drives, but does have the ability to drive 128GB at 667MHz with 8 DIMM sockets populated per board. Other than that they seem fairly similar.<br /></li><br /><li>Price-wise the HP and IBM are pretty close. I am still waiting on final special bid pricing, but assuming they are within 5%, I am currently leaning towards the HP, as there just seems to be much better technical info on it than the x3755, which I can find nothing significant on and which seems to be marketed as an HPC vs. SQL OLTP/OLAP platform (likely to keep the Intel/IBM relationship happy, in my opinion).<br /></li><br /></ul><br /><br /><b><u>To keep this simple, let's assume I go with the HP DL585 G2:</u></b><br /><ul><br /><li>The base server will have (7) 72GB SAS SFF 10K drives, + (1) 144GB drive<br /></li><br /><li>For external storage I am looking at 2-3 P800 SAS controllers, driving (4) MSA50 (wish the MSA70 were out) and (2) MSA60 storage shelves. Currently thinking 2-3 shelves per P800, depending on whether I get 2 or 3 P800s.<br /></li><br /><li>For drives, I will have (40) 72GB 10K RPM SFF SAS [10 per each MSA50] (wish 15K RPM SFF drives were out), and (20) 72GB 15K RPM 3.5” SAS [10 per each MSA60]. This leaves (2) open bays per MSA60 for possible future large SATA drives for online backup, which I will purchase a month or two down the road. 
<br /></ul><br /><br /><u><b>Based on all I have read here:</b></u><br /><ul><br /><li>System disks: RAID 1 (2 internal 72GB drives), partitioned as C: and D:<br /></li><br /><li>Paging file: dedicated 144 GB HD (32 GB RAM * 1.5 = 48 GB fits fine, and will still fit with 64 GB of RAM)<br /></li><br /><li>5 remaining drives carved up for LOG disks (not sure if I should do RAID 10 with 2+2, or two RAID 1 arrays and split log files).<br /></li><br /><li>The 60 drives I have in the external storage (40 10K RPM, 20 15K RPM) are my big question on how to set up for maximum performance.<br /></li><br /><li>I have 8 cores, so I should have 8 tempdb data files.<br /></li><br /><li>The main application database should have roughly 0.5 data files per core, so that is 4 data files.<br /></li><br /><li>I have the system databases and the Reporting Services database; I assume those can go on the physical disks that run the operating system (D:)<br /></li><br /><li>Then there are roughly 5 of those special databases that I have to support; I figure their data files can all share the same array.<br /></li><br /><li>Finally, I will have the Analysis Server cube data I have to put somewhere too.<br /></li><br /></ul><br /><br />Right now as a starting point for discussion, I figured all the external storage would be RAID 10, in the following configuration:<br /><ul><br /><li>MSA50 Shelf A+B = RAID Array 1 (20 drives total, for 720GB space) – On one P800<br /></li><br /><li>MSA50 Shelf C+D = RAID Array 2 (20 drives total, for 720GB space) – On another P800<br /></li><br /><li>MSA60 Shelf E+F = RAID Array 3 (20 drives total, for 720GB space) – On another P800 [Note this one is 15K RPM]<br /></li><br /></ul><br /><br /><b>Questions:</b><hr noshade size="1"><br /><ul><br /><li>Somebody else asked this too… Everything I read says more spindles mean more IOPS, but on this forum there are a lot of recommendations for 1-2 arrays per shelf. 
In my case I took 60 disks and made 3 arrays, but it's one array per two shelves. Given the large number of simultaneous things going on with this server, should I create smaller arrays and have more physical arrays to spread my data out, but with fewer IOPS per array? <br /></li><br /><li>I assume I should put my main application database on the 15K RPM array.<br /></li><br /><li>What file layout strategy, given all the files I have above, would benefit performance the most? I suspect I'll have a number of partitions per physical array too, so I can keep files contiguous and easily managed. For the less-used databases, should I bother splitting them into a file per 0.5 cores initially? I would lean towards monitoring their performance with just one or two files each and, if they need more, creating more.<br /></li><br /><li>Any thoughts on whether I need three P800s, or can I just do two?<br /></li><br /><li>Should I bother with the MSA60s? They give me 15K RPM, but take up twice as much room/power and generate considerably more heat. Is the 15K RPM worth it?<br /></li><br /><li>When Analysis Server is rebuilding its cubes it will be going against the main database on the same server, so I assume the arrays used by those two processes should be different. Further, should I consider using processor affinity to perhaps carve up the 8 cores so that Analysis Server and SQL Server have their own? <br /></li><br /><li>There are many stored procedures that make heavy use of temp tables that have millions of rows (thus we do not use RAM tables). However, we have considered purchasing enterprise solid state disk (the ones with hard drives in them for immediate backup of the data, and batteries too to keep the RAM alive long enough to back up to the hard drive in case of power failure). The access latency/bandwidth on these devices is slower than server RAM but hundreds of times faster than disk (microseconds vs. milliseconds). 
Any words of wisdom on using these for tempdb?<br /></li><br /><li>One thought I have is to perhaps be proactive and break out the array for Analysis Server separately on its own shelf, so if I purchase a new server for Analysis Server I can just hook this array up to the new server and re-scan. I suspect Analysis Server will be the first to come off the server, given the OLAP vs. OLTP nature of the two.<br /></li><br /><li>Some whitepapers I read say keep it simple: create a small RAID 10 array for your consolidated log files, and then create the largest RAID 10 array you can with as many spindles as possible and put all the consolidated SQL data files on this array (granted, it may have partitions for some organization). The articles say that more often than not your overall performance will be better than if you try to engineer your files into separate arrays, unless you have done extensive performance testing and know your exact I/O profile per database. As I don't know this yet, should I do this initially instead of splitting the disks into separate arrays? It has the side benefit of reducing controller needs too. However, it seems too good to be true from a simplicity standpoint. <br /></li><br /></ul><br /><br />Thanks for any opinions you can provide. I know that ultimately I need to performance test, and that once I am up and running I will be able to better gauge the impact of each database/service and retune, or buy a separate server and scale out. For now I am just looking to make my best educated guess that will get me through a year before my next major infrastructure change. 
Based on what I am coming from, I anticipate that it will be much faster than what I have now even if this is configured sub-optimally (especially considering the software redesign and move to SQL 2005 alone runs 4-5x faster for the same workload).<br /><br />Also, if anyone thinks I should consider the IBM x3755 over the HP DL585 G2 and would like to share their reasoning, I would love to hear it.<br /><br />Thanks!<br />Brett Thompson
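For what it's worth, here is a quick back-of-envelope sketch of the capacity arithmetic the plan above leans on (720GB per RAID 10 array, a paging file sized at 1.5x RAM). Nominal drive sizes are assumed, not formatted capacities, and the function name is just for illustration:

```python
# Sanity-check the capacity figures from the plan above.
# Nominal sizes only -- formatted capacity will be a bit lower.

def raid10_usable_gb(drives: int, size_gb: int) -> int:
    """RAID 10 mirrors every spindle, so usable space is half the raw total."""
    return drives * size_gb // 2

# Each proposed array is two shelves of 10 x 72GB drives = 20 spindles
print(raid10_usable_gb(20, 72))   # 720 -> matches the "720GB space" per array

# Paging file: the 1.5x RAM rule of thumb on the dedicated 144GB drive
ram_gb = 32
print(ram_gb * 1.5)               # 48.0 GB -- fits, and would still fit at 64GB RAM
```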
even though it is perfectly acceptable to place 2-3 external storage units onto 1 PCI-Express SAS controller, even 4,
i think it is better to start with 1 SAS controller per external storage unit until you fill the PCI-E slots,
then start adding additional storage units to the existing channels. also, no particular reason to split the SATA drives between MSA units,
i would probably put 12 SAS drives in one MSA, 6 SAS and 6 SATA in another
make 3 arrays of 6 disks (2 from the MSA with 12 SAS drives, and 1 from the MSA with 6)
for data
and one archival array from the 6 SATA drives

Joe, if there is one controller per external storage unit and I am using MSA50s with RAID 5, that is only 10 drives per array. Does the benefit of having additional controllers more than compensate for the loss of spindles per array on IOPS? I am still confused about this, as all the performance graphs I have on controller IOPS show an almost linear IOPS performance increase as you add drives. The P800 controller can handle 108 drives, so I would think having 30 on it wouldn't be pushing it to its limits. Thanks,
Brett Thompson
what dumb ass is feeding this narrow information on linear iops? yes, that's true for random io
but occasionally you do sequential to get on and off the disk in the first place
for that, 8 disks could probably saturate a x4 PCI-e. why does the cost of a P800 hurt so? whoever cited the linear scaling to 108 disks has a serious case of looking at narrow information
disregard that person’s info for future reference
forgive my venting above,
that was the same stupid mistake made in several early Intel server chipset designs,
only looking at the requirements to run the tpc-c benchmark,
not the other functions that occur in a database
such as a database backup (not required for benchmarks). narrow vision was one reason they lost the server chipset business to ServerWorks (later recovered by granting said party rights to a later bus design)
Joe,<br /><br />I may have misled you, or you may have assumed that the P800 was the one that was linear. A performance white paper for the IBM MegaRAID 8480E (the one used in the IBM x3755 platform I am also considering) shows numerous graphs, and they are near-linear for 10 through 40 drives, not 108 drives. I was just saying the P800 supports 108 drives and that I would imagine it would do close to linear through 40 drives, but never thought it would do it through 108.<br /><br />I can't find performance test graphs for the P800 yet. The cost of a P800 doesn't necessarily hurt any more than a grand spent anywhere else. My questions are less about the money related to a P800 than about the performance of random IOPS across many spindles vs. using a P800 per shelf and only having 10 spindles per array. I basically have about $60K, roughly $22K of which is on the server, and the balance is for storage; I want to get the most performance bang for that buck. If that is done by maximizing the controllers first, and then getting as many shelves/drives as I can within that budget, that's what I want to do. <br /><br />So based on your responses (here and elsewhere), would it be fair to say that you feel random IOPS would be faster with more spindles, but that alone is not enough; you also want solid sequential performance, and that requires fewer disks per controller, so there should be one controller per external storage shelf?<br /><br />The P800 has (2) external x4 connectors and is a PCI-e (x8) interface. So based on your comment of 8 disks likely saturating a x4 channel, would it be safe to say a P800 could be maxed out at 16 disks (8 per x4 connector)?<br /><br />Thanks,<br />Brett Thompson
random io barely taxes any controller, you could easily drive 20K IOPS on a single controller, which is only 160MB/sec at 8K per IO
not all database activity is random, sometimes you do sequential
so start with 1 controller to 1 external unit
until your PCI-e slots are full
then you can go to 2 ext units per controller, but don't start at 3 units per controller
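To put rough numbers on the random-vs-sequential point above: the per-disk and bus figures below are ballpark assumptions for 2006-era SAS hardware, not measurements.

```python
# Rough arithmetic behind the random-vs-sequential argument.
# Per-disk and bus figures are ballpark assumptions, not measured values.

KB, MB = 1024, 1024 * 1024

# Random I/O: 20,000 IOPS of SQL Server's 8KB pages
random_mb_per_sec = 20_000 * 8 * KB / MB
print(random_mb_per_sec)              # 156.25 -> roughly the "160MB/sec" figure

# Sequential I/O: assume ~80 MB/s sustained per SAS spindle
disks, seq_per_disk_mb = 8, 80
seq_mb_per_sec = disks * seq_per_disk_mb
print(seq_mb_per_sec)                 # 640 -> 8 disks already near a x4 link

# A x4 PCIe 1.0 link is 4 x 250 MB/s raw; call it ~800 MB/s usable
print(seq_mb_per_sec / 800)           # 0.8 -> sequential saturates channels first
```

So a controller that is nowhere near its random-IOPS limit can still be bandwidth-bound on sequential work like backups and table scans, which is the case for one controller per shelf to start.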
Guys this is the thing that gets me.
I fear the one-P800-controller-per-external-enclosure thing. Coming from a SAN background, the words "single point of failure" scream at me.
I was looking at a Dell MD3000 purely because of this. I am not concerned about active/active, but you can RAID your disks all you want; if your controller goes, surely you are stuffed.
Why is it that it doesn't seem to faze you?
what is more likely?
1. Equipment failure
2. operator error? i have seen many people talk single point of failure,
then resort to a complex multi-path solution, only to not understand the complexities, then oops. is the MD3000 out yet?
HP said they had cluster-capable SAS storage in the works for some time in 2007,
so lets pretend i didn’t say this
just ask the vendor when you can get it
ExOracle, I agree with you that eliminating single points of failure is ideal, and I have my own share of SANs here in which I have redundancy. However, I find that I pay a lot for it, and often the dollars spent on redundancy/SAN capabilities strip away from the money available for performance. In most cases reliability outweighs performance, but in this case I have very impatient users and they want a fast, screaming system, so I am going for 100% performance for THIS server. For redundancy I can mirror to the older SQL servers already on the SAN, and my users can just go back to the speed they are used to today. Further, as long as the controllers can support stacking external shelves (4 in this case) and I only do one shelf or two per controller, if I lose a controller I can always stack a shelf on another shelf and get by until I get a replacement controller (or have one in stock) [yes, with downtime for changing the hookup and dealing with data corruption/backup restore]. Granted, not ideal; heck, if we all had oodles of cash and free time we would all have no-tradeoff storage. I also agree with Joe that the complexities of multipath/SAN do reduce reliability in environments without strong SAN/system people and solid change management. I have also seen a few cases where SAN firmware or zoning mistakes have caused significant outages across numerous systems because, although it was all redundant, if there is a serious config or firmware mistake then it all goes to hell.
Brett Thompson
Joe & Brett I do agree with you.
It's all about getting the balance right. In my opinion SANs are one of the last bits of kit out there with massive margins for hardware suppliers, and they love them. SANs are great for redundancy, reliability and DR – not great for speed or cost.
I like the idea of trying to buy the same kit all the time – which these days is nearly impossible because the thing you bought last month is in the clearance dept now (OK, a little over the top, but you know what I mean). If you buy the same kit all the time, you can buy a spare one of "each", and as a percentage of the total cost it's not a big deal.
What's wrong with having a P800 sitting on the shelf doing nothing, or in the machine waiting for disaster?
in general, i prefer to purchase extra parts rather than pay warranty on the production parts.
the warranty model may have made sense in the old days, when a good-size company had 1 mainframe or mini-computer and the concept of a dedicated development environment was unheard of. going forward, i am just not happy with the SAN cost structure, and the slowness with which SAN vendors move to the best available hardware.
I think when database mirroring becomes more solid, over-priced SANs should no longer be considered
Joe, same here: I keep spare parts for most of my high-quantity items (blade server blades, switches, etc.). I just pay for the 4-hour response on my most critical servers where there is a strong cost/benefit argument. If you just have to pay for the server part and can deal with the next-day parts warranty that most stuff comes with by default, it's a lot, lot cheaper, and you can buy a lot of extra parts (they come in very handy to test with or to use when there is an ad-hoc need). As for SANs, I have a love/hate relationship with them. Originally the argument was that there was so much unused storage on separate servers that a SAN could share it better and you would realize savings. Unfortunately, the cost of Fibre Channel switches, HBAs, controllers, and licensing is so much more expensive than DAS for even HALF the storage that it doesn't make any sense for that particular reason. I only use SANs for applications that require two hosts to have access to the same physical LUNs, such as clustering and VMware virtualization. The value gained from being able to use VMware and VMotion in a dynamic environment is enormous — the staff/hardware reduction alone for those "low resource" servers is wonderful. It is not a great candidate for a large OLTP/OLAP server, however. The other reasonable use of enterprise SANs is for those organizations that must have a very high level of reliability and constant off-site mirroring of all data, such as banking and the various stock markets/etc. Again, there is big money at stake there, so the SAN premium is worth it. Hopefully Open Source will have some strong contenders for those of us who don't mind taking a more active role in our SAN design (vs. purchasing the turnkey system and paying through the nose for the tier 1 vendor margins). Thanks,
Brett Thompson
i do not always buy the high availability argument
but when the bosses say make sure it is reliable and i don’t care how much it costs,
make sure you spend a lot of money on a tier 1 vendor,
so you can blame it on the vendor when problems occur. if you do not spend a lot of money,
it may be hard to blame a problem on the vendor
when spending more money could have redirected the blame. priority 1: CYA (cover your anatomy; don't know how this translates, there is also a shorter word beginning with A that fits)

Update… It looks like HP has just released the SAS 2.5" SFF 15K RPM drives (at least the 72GB and 146GB versions). I tested the waters and requested a quote from a reseller and received one, although there may be a 3-4 week ship delay if you order these. Anyway, I had roughly 60 SAS SFF 10K RPM 72GB drives in my original plan for this server. With nearly the same budget I can drop one shelf and go with 48 SAS SFF 15K RPM drives. From posts around here I see that it's roughly somewhere between a 3/4 and a 3/5 ratio for 15K vs. 10K RPM performance (e.g. 3 15K RPM drives = 4 or 5 10K RPM drives). Assuming I go with the more conservative 3/4 ratio, 48 / .75 = 64 10K RPM equivalents. It looks like I am about equivalent, with less rack use/controller port use. Does anyone think it would be wiser to keep the 10K RPM drives around for any reason (e.g. even a split 15K/10K RPM setup)? Thanks,
Brett Thompson
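The spindle-equivalence math from the update above, as a small sketch; the 3/4 and 3/5 ratios are the forum rules of thumb being cited, not measured figures, and the function name is just for illustration:

```python
# 15K-vs-10K spindle equivalence used in the update above.
# The ratio is a rule of thumb from forum posts, not a benchmark result.

def equivalent_10k_spindles(n_15k: int, ratio: float = 0.75) -> float:
    """How many 10K RPM drives n_15k 15K RPM drives roughly replace."""
    return n_15k / ratio

print(equivalent_10k_spindles(48))        # 64.0 -> matches 48 / .75 = 64
print(equivalent_10k_spindles(48, 0.6))   # ~80 with the aggressive 3/5 ratio
```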
Btw, here are the part numbers for any other HP customers looking to purchase 15K RPM SFF drives. They are not listed on all online configuration screens yet, but you can get them if you call/etc.
431933-B21 HP 72GB 3G SAS 15K 2.5" SFF HDD HOT PLUG
431935-B21 HP 146GB 3G SAS 15K 2.5" SFF HDD HOT PLUG
Brett Thompson

375861-B21 HP 72GB Hot Plug 2.5 SAS 10,000 rpm Hard Drive [Add $329.00]
431933-B21 HP 72GB Hot Plug 2.5 SAS 15,000 rpm Hard Drive [Add $999.00]
For 3 times the cost!!!! 10K is a winner in my book.
Have you got less of a price difference?
ExOracle, Funny, that is EXACTLY what I did (check out the list price). Yes, the quoted price is much less (note I am in a competitive bidding scenario as I am also looking at the x3755, so it's possible that I am getting a good price). I can't share my price, but you can see in my post how many 15K RPM SFF drives I can fit in place of the 10K RPM drives + a shelf within my budget for the project.
Brett Thompson
always competitively bid
and try to wait towards the end of the FY quarter; sales reps behind on their quota need to make their number, or will get fired.
best to let them think their price is higher than the competitor
truth does not really matter (do you think everything they say is true?)
Joe, do you think it's a good move to use the 15K RPM drives vs. the 10K RPM drives, even though I will lose a few spindles? My feeling is I will have this thing for 3-4 years in production, and I have a chance to get the 15K RPM drives now; it looks like I will still have plenty of spindles, and I can add another shelf of 15K RPM drives in 6 months if I really find a need for more. Most importantly, I won't have 6 shelves of 10K RPM drives that will likely look really old in a year.
Brett Thompson
between a 146G 10K and a 73G 15K, using only 36G for active data,
the only real advantage of 15K is slightly lower latency for tx processing. why the MSA50?
someone just told me of serious problems with the DL585 G1, P600 and more than 1 MSA50 per adapter.
I hope he will post; then HP needs to explain the problems. use the MSA60 LFF model if space allows (2U for 12 drives vs. 1U for 10),
15K LFF drives have been around for a while, so this should be more problem-free

ExOracle, re: the $999 price for the 72GB 15K RPM SFF SAS drive: it appears that after January 1, 2007 the distribution channel must have been allowed to publish the new drives on their web sites, as the street prices are now available. JoeChang, I decided to go with the 15K RPM SFF drives. The main reason is that space/power/heat are major issues for us, and going w/ the 3.5" drives would push us to another cabinet in this server room this year. I have just enough room to comfortably fit the solution w/ 2.5" drives w/o having to pony up money for new racks, data cabling, rack UPS, electrical PDUs, etc. I'll run your performance SQL scripts and SQLIO on the solution when it comes in and share. Brett.
Brett Thompson