SQL Server Performance Forum – Threads Archive
Please Help: SQL Disk configuration on SAN
Hello All, I have a question where I need help. We are going to install a SAN in our environment. We have an ERP application which uses SQL Server, and this application must perform well. Now I have the assignment of configuring the best performance for our SQL Server. Unfortunately I don’t have any experience in a SAN environment. I have a budget for 24 disks, 10K RPM, 300GB each, and I want to have a RAID 10 set. What do I have to do? I also have a spare SCSI controller and 2 x 72GB 15K RPM SCSI disks. What do I do with those? Make 1 big partition where the log files and data files are located? Use the 2 disks for my log files? Put the SQL Server installation on local disks?
Enable read-back cache on the SAN? Which cluster size do I need? Etc. We have 3 production databases on this server: 2 small ones and 1 database which is 350GB, with a log file of 10GB. I also have 1 more question. Our main database (350GB) is split into 10 data files. This was done in the past when we had different disk sets. Now that I put this on a SAN, that has no advantage anymore. Do these 10 files perform worse than putting the data in 1 big database file? Thank you in advance for your help. Rival.
first, cancel the above order,
buy 96 10K 74GB drives if possible, or slightly fewer 10K 74GB drives if that will work,
second, a not-as-good choice but still better than 24×300 is 48 10K 146GB drives. for 96 drives, have 8 FC ports (ie, 4 dual-port fibre channel cards). Do not let the SAN vendor set this up without direction.
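a rough sketch of the port-count arithmetic, assuming ~10MB/sec sequential per 10K spindle and ~200MB/sec usable per 2Gbit FC port (the per-port figure is my own assumption, typical for that generation of hardware, not a number from this thread):

```python
import math

def fc_ports_needed(disks, mb_per_disk=10, mb_per_port=200):
    """Ports required so the FC fabric is not the sequential bottleneck.

    Assumes ~10MB/s sequential per 10K spindle and ~200MB/s usable per
    2Gbit FC port -- both era-typical estimates, not measured values.
    """
    return math.ceil(disks * mb_per_disk / mb_per_port)

# 96 disks x 10MB/s = 960MB/s aggregate -> 5 ports is the bare minimum
print(fc_ports_needed(96))  # -> 5
```

8 ports (4 dual-port cards) then leaves headroom above that minimum for burst reads.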
Mandate that the final setup support 100 random 8K IOPS per physical disk at queue depth 1 per disk, with latency under 10ms (should be 8-9ms), and with the data spread across the entire disk. use IOMeter or a SQL script to test this. Do not use the Microsoft SQLIOStress tool, because its dataset will fit in the SAN cache and you will get meaningless results
Mandate that the final setup support ~10MB/sec sequential throughput per physical disk, as tested with a SQL Server table scan or a backup to nul:. if you let the SAN vendor set this up without direction on expected performance, chances are you will end up with nearly the worst possible configuration for database performance (this seems to be the trend everywhere i go). If your database is currently in 10 files and you go with 96 disks, i might suggest that you stay this way: make 10 arrays of 7-9 disks each, plus 6-8 disks for logs (if you do log backups)
multiple files help performance provided they are spread across many disks in total.
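to make the two mandates above concrete, here is a minimal acceptance check you could run against IOMeter or table-scan measurements (the function name and example numbers are illustrative only, not a vendor tool):

```python
def meets_io_mandate(disks, rand_iops, rand_latency_ms, seq_mb_per_sec,
                     iops_per_disk=100, max_latency_ms=10, mb_per_disk=10):
    """Check measured results against both mandates stated above:
    100 random 8K IOPS per spindle at queue depth 1 with latency under
    10ms, and ~10MB/s sequential per spindle (table scan or backup to nul:).
    All thresholds are the targets from this thread, not absolutes.
    """
    return (rand_iops >= disks * iops_per_disk
            and rand_latency_ms < max_latency_ms
            and seq_mb_per_sec >= disks * mb_per_disk)

# a 96-disk setup should sustain >= 9600 random 8K IOPS and >= 960MB/s
print(meets_io_mandate(96, rand_iops=9800, rand_latency_ms=8.5,
                       seq_mb_per_sec=1000))  # -> True
```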
Hello, Thank you for this reply. This is really getting difficult for me. Our hardware vendor advised us these disks. We cannot have 96 disks. Our EVA 3000 can only hold 4 shelves and was bought with room to grow in the future, so I can have a max of 24 disks. The 300GB size is for the expected future growth. Can you help me in more detail with how I can manage this? Right now we have 18 disks in the SAN, configured as 1 big RAID 5 set where other servers' data is also stored. Ronald
your vendor screwed you,
database performance is about random IOPS and sequential bandwidth, which are achieved with spindles and IO channels,
big disks are the worst price/performance. considering that the value of equipment degrades approx 40% per year, leaving room for growth in an existing system is a fool’s strategy (ie, popular with upper management)
it is better to replace equipment every 2-3 years, or every 1 year at 40% y-t-y growth,
i don’t necessarily mean throw out equipment, but rotate the top stuff to the secondary function and buy new equipment for the primary. the fact that your vendor told you to make 1 big raid 5 set for everything also shows they have no clue about databases, pushing you into the worst possible config
i don’t want my main database (if it is performance sensitive, ie, booking transactions) on the same storage system as other apps. i don’t care what the vendor told you; they are taking care of their own interests.
next, it is absolutely essential that the database disks be split between the data and each busy log. the data must have 1 set of dedicated physical disks, which multiple dbs can share,
each busy database log must have its own set of physical disks,
there is no way around this, regardless of what the vendor told you
i would not feel bad about this, as it happens to almost everybody who takes advice from the SAN vendor. i believe the reason for this is as follows:
if you buy DA storage, figure a 10K 74GB disk costs $200 from the disk vendor or $300 from the system vendor, plus $200 per disk for the enclosure, working out to ~$500 per disk amortizing the entire storage solution.
when you buy a SAN, the bare disk cost is $2000 (list); amortizing the SAN "system", which is really just another computer system (depending on the vendor, this could be a dual Xeon box running Windows Storage Server),
works out to around $4000 per disk, list,
with discounts: ~30% if you do not get competitive quotes, 50%-plus if you solicit bids from multiple vendors,
so this works out to $2-3K per disk. to justify the huge differential between SAN and DA, the SAN vendor sales & marketing types start making up stuff, like: you can get higher utilization, the big cache helps, you can do raid 5 sharing between apps, manageability, etc. notice they generally do not provide database-oriented performance numbers to support such arguments.
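the per-disk arithmetic above can be sketched as follows (all dollar figures are the rough list/discount numbers quoted in this post, not current prices, and the helper names are mine):

```python
def amortized_da_cost(disk_price=300, enclosure_per_disk=200):
    """Direct-attach: disk from the system vendor plus the per-disk
    share of the enclosure, amortizing the whole storage solution."""
    return disk_price + enclosure_per_disk

def discounted_san_cost(list_per_disk=4000, discount=0.30):
    """SAN: ~$4000 list per disk amortizing the SAN 'system', less
    discount -- ~30% without competitive quotes, 50%+ with multiple bids."""
    return list_per_disk * (1 - discount)

print(amortized_da_cost())                 # -> 500   (~$500/disk DA)
print(discounted_san_cost())               # -> 2800.0 (single-vendor quote)
print(discounted_san_cost(discount=0.5))   # -> 2000.0 (competitive bids)
```

which is where the 4-6x price gap between DA and SAN storage per spindle comes from.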
most of the above are completely N/A or otherwise outright untrue for performance-critical databases. so pretty much anyone who follows the advice of the SAN vendor ends up with horrible database performance, which the SAN engineers blame on MS & SQL Server. i had one case where someone bought a DMX 800 with 60 disks and was getting 14MB/sec sequential disk access, which the SAN engineer tried to blame on SQL Server, because no one else complained.
Are there any whitepapers out there concerning optimal SAN configuration for SQL Server?
yes, but generally they are from SAN vendors (see a recent issue of SQL Server Mag) and are vague on performance, except when the data set fits in cache
I’ve seen a few online, but they are mostly "salesman fluff" nothing that explains the nuts and bolts of proper configuration.
Hello All, Unfortunately i must stay with the 24-disk configuration. Can you please help me with what i can best do given this? Which RAID set?
Which block/stripe size?
Put my log files on a local disk or also on the SAN?
etc.? How can i get the best-performing SQL Server? Thank you all for helping me. Rival
that’s right, you did ask this above.
if you are stuck on 18 disks,
without knowing more about your application and typical loading: for each very busy database (that is, one with very heavy log writes), have 2 physical disks in RAID 1 for the logs.
the other databases can have their logs on the same disks as these, or on the data group.
use local disks as appropriate; tempdb on local disks might be a good choice. make one or two raid groups out of the remaining disks (hopefully 12 or more) for all data files.
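a rough usable-capacity sketch for the 24-disk budget under the layout above (a hypothetical helper, using the standard rules that RAID 1/10 mirror half the spindles and RAID 5 gives one disk of the group to parity):

```python
def raid_usable_gb(disks, disk_gb, level):
    """Usable capacity for common RAID levels.
    RAID 1/10 mirror half the spindles; RAID 5 loses one disk to parity."""
    if level in ("1", "10"):
        return disks // 2 * disk_gb
    if level == "5":
        return (disks - 1) * disk_gb
    raise ValueError("unsupported RAID level: " + level)

# 24-disk budget: 2 x 300GB in RAID 1 for the busy log, 22 left for data
log_gb = raid_usable_gb(2, 300, "1")      # -> 300  (mirrored log pair)
data10 = raid_usable_gb(22, 300, "10")    # -> 3300 (RAID 10 data group)
data5  = raid_usable_gb(22, 300, "5")     # -> 6300 (RAID 5, read-heavy only)
print(log_gb, data10, data5)
```

both options hold the 350GB database with room to grow; the choice is the write-penalty trade-off, not capacity.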
the RAID level is your choice; 5 could be ok if you are heavy on reads, but i would have to see your disk loading. after you set up the disks, do a stress test. also, i just spoke with the HP CTO for StorageWorks (Mike Feinberg),
and i need all of you on HP storage to complain to your HP rep about the need for precise technical analysis on configuring storage systems for SQL Server. request detailed scripts and tools for verifying that the actual purchased configuration meets performance objectives.