SQL Server Performance

RAID, FileGroups, DataFiles and Random/Sequential

Discussion in 'Performance Tuning for Hardware Configurations' started by pmackay, Aug 31, 2005.

  1. pmackay New Member

Hi everybody,

Recently I reread Edward Whalen's SQL Server 2000 Performance Tuning Technical Reference, and now I have some doubts about data files, filegroups and RAID. The first time I read it, I felt like Alice in Wonderland.

    In our business environment, we have one main database (40GB) and some smaller databases (less than 50 MB each one). I've created a RAID 10 of 4 disks for the data files, a RAID 1 of 2 disks for the log files and another RAID for full and transaction log backups.

    My doubts concern the performance of the RAID 10. In one part of the book, the author explains that in some cases two RAID 1 arrays are better than one RAID 10, because you can put different filegroups (i.e., secondary data files) on each array and obtain better performance. He mentions concepts like random and sequential data.

    I'm not sure what sequential data means. The example he gives is the transaction log. Of course, the transaction log is a sequential file, but in the case of a table, what kind of table is sequential? Here we have a table with 38 million rows into which you can only insert rows; you can't update data after it is inserted. That's sequential data, isn't it? But of course the need to read old data brings me back to the question: what does sequential data mean?

    Thanks in advance.

    Patrick MacKAY
    ASP.NET MVP
  2. Luis Martin Moderator

    One post to check:
    http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=9684#43994

    About RAID 10 versus RAID 1 with different filegroups: if you can put indexes in one filegroup and data in another, you may get better performance.

    If each RAID 1 is on a different channel, even better.


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important
    Bertrand Russell


    All postings are provided “AS IS” with no warranties for accuracy.



  3. bertcord New Member

    I would advise against creating separate filegroups unless you have a specific reason to do so. For example, let's say your 38-million-row table was being scanned all the time by reporting-type queries. If you had a filegroup with only this table in it, you might want to put it on its own disk array. But again, remember: if this data is being both read and written, you no longer have sequential IO. Some accesses will be reads, some writes…

    I would only recommend creating a new filegroup if you are going to place just one object in it. Once you start placing multiple objects in a filegroup, the extents will be interleaved and you will not be able to do such large sequential reads.
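    Concretely, a single-object filegroup like that can be sketched as follows (the database name, file path, sizes and table are invented for illustration):

```sql
-- Add a filegroup that will hold exactly one large table,
-- backed by a file on its own disk array (path is hypothetical).
ALTER DATABASE SalesDB ADD FILEGROUP FG_BigTable

ALTER DATABASE SalesDB
ADD FILE (
    NAME = 'SalesDB_BigTable',
    FILENAME = 'E:\SQLData\SalesDB_BigTable.ndf',
    SIZE = 20GB,
    FILEGROWTH = 1GB
) TO FILEGROUP FG_BigTable

-- Create the table directly on the new filegroup,
-- keeping it the only object stored there.
CREATE TABLE dbo.TransactionHistory (
    TransactionID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    Amount        money NOT NULL,
    EnteredAt     datetime NOT NULL DEFAULT GETDATE()
) ON FG_BigTable
```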

    I would also recommend not splitting data and indexes into separate filegroups. This advice is given quite often but can actually hurt performance. It really depends on your data and indexes and on understanding how they are accessed.

    I am currently experimenting with several options on large tables. I have a database that has 4 very large tables: 300 million records, 50 GB per table. I have 4 ten-disk RAID 10 LUNs to work with, and I am testing two methods:

    1. A single filegroup with 4 files – all 4 objects striped across all 4 files
    2. 4 filegroups with 1 file per filegroup – each object placed on a separate LUN

    I'll let you know how it turns out.


    How many disks do you have for your full and TLOG backups?

    Bert


  4. pmackay New Member

    Bert,

    the backup array is a RAID 5 of 3 disks (72 GB each). I know the write penalty of RAID 5, but the array is used only for backups, and we needed more space without buying a new disk (as RAID 10 would require).

    If you say that you should create a new filegroup for only one object, then if I have, say, 10 important objects, I'll need 10 filegroups and at least 10 secondary files. Is that true?

    The 38-million-row table is only accessed with short reads. The biggest read involves 100 or 200 rows.

    Thanks.

    Patrick MacKAY
    ASP.NET MVP
  5. FrankKalis Moderator

    quote:
    ...
    ...but in the case of a table, what kind of table is sequential? Here we have a table with 38 million rows, in which you can only insert rows. You can't update data after it is inserted. That's sequential data, isn't it? But of course the need to read old data brings me back to the question: what does sequential data mean?
    I would guess that in this case he means a table with a PRIMARY KEY constraint on a monotonically increasing column that is also the clustered index key.
    Most commonly this is a column with the IDENTITY property defined on it. Every time you insert a new row, it is appended at the end of the data file to keep the clustered index in order. So the INSERTs are done sequentially.
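    For instance, such a table might look like this (table and column names are invented):

```sql
-- Monotonically increasing clustered key: every INSERT lands at the
-- logical end of the clustered index, so the writes are sequential.
CREATE TABLE dbo.EventLog (
    EventID   int IDENTITY(1,1) NOT NULL,
    EventData varchar(255) NOT NULL,
    LoggedAt  datetime NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventID)
)
```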

    --
    Frank Kalis
    Microsoft SQL Server MVP
    http://www.insidesql.de
    I support PASS Deutschland e.V. (http://www.sqlpass.de)
  6. bertcord New Member

    Yes, that would be true… however, I would not advise doing this just yet. I need to finish some tests, but I can give you two general rules:

    1. Only consider this for very large tables, say > 20 GB.
    2. Only consider it with lots of table or index scans. If your database does a bunch of random reads (SELECTs for single records), then this strategy will not help at all. If you only need to fetch one 8 KB page, who cares where it is, right?


    I reread your first post and you mentioned:

    “a RAID 1 of 2 disks for the log files”
    How many TLOGs do you have on this RAID 1?

    For your backups, do you back up to tape after backing up to disk?

    Bert
  7. pmackay New Member

    Bert,

    following your first general rule, in the case of this big table (38 million rows) it may be a good idea to put it in a separate file. But if I only need to read a few hundred rows each time, I feel like it doesn't matter whether the data is in the main file or in another file. Am I right?

    On the RAID for the logs, I have 1 log file for each database. Remember we have one main database and several smaller databases. The whole disk holds 14 LDF files. The biggest file is about 1 MB and the smallest, 1024 KB. I back up the transaction log of the main database every minute. We don't do transaction log backups for the other databases.

    No, we actually don't back up to tape. Please don't hit me ;) I know that we need to do that, but our budget is tight and we recently bought the storage and the disks. We spent a lot of money.

    Thanks.

    Patrick MacKAY
    ASP.NET MVP
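    The minute-by-minute log backup described above boils down to a statement like the following, scheduled as a SQL Server Agent job (the database name and backup path are placeholders):

```sql
-- Back up the transaction log of the main database to the backup array.
-- Under the full recovery model this also truncates the inactive log.
BACKUP LOG MainDB
TO DISK = 'F:\SQLBackup\MainDB_log.trn'
WITH NOINIT,                 -- append to the backup set, don't overwrite
     NAME = 'MainDB log backup'
```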
  8. bertcord New Member

    For a few hundred rows I would not worry about it. The database I am trying this on has many reporting-type queries that aggregate and roll up large volumes of data from the base tables.

    I am also just supporting this application, and the indexes aren't as optimized as I would like. Preferably I would change the indexes, but that is not happening today, so I am making the best of the hardware I have.

    As far as your TLOG drive… I don't want to break the bad news to you ;) but those drives are randomly accessed. To get the benefit of a sequential TLOG drive, only one TLOG can reside on it. You have 14 TLOGs that are each written sequentially, but the combined writes of all of them are in effect random.

    No backup tape… very bad ;) The reason I was asking is that if you did, you could go with RAID 0 on your backup drive. We have several databases of 500 GB plus that we back up as follows:
    1. Backup to a local RAID 0 array
    2. FTP the files to a network backup server
    3. Backup to tape from the backup server

    I have never tried this, but you could also look at RAID 3 for your backup drive. RAID 3 is like RAID 5 in that it uses parity, but in RAID 3 the parity is on only one drive. I have heard that if only one process writes to the array at a time, you can get much better performance than RAID 5. Again, I have never tried it, but it might be worth looking at.

    How many drives do you have in your machine? I count 9?

    Bert
  9. pmackay New Member

    Bert,

    Thanks for the clarification. I won't move the table to another file.

    I'm going to consider the RAID 3 option and how it works under different scenarios (if all the parity goes to one disk, I can see it has to do less work than RAID 5), but you've put me in an awkward situation. The truth is that we can't afford the ideal and have one RAID 1 for each database's TLOG. But if I decide to give the best performance to our main database, I'll have to put the other LDF files on other drives. It makes me wonder: where in the storage can I put the other TLOG files?

    We have 12 disks: the 9 you counted and another 3. One of those 3 is the spare for the RAID 10 used for data, and the other 2 disks hold the QA database, some other backup files (not SQL Server backups), and OS image files for the servers.

    To summarize:

    4 disks (72 GB each) in RAID 10 for data (14 MDF files) + 1 spare disk
    2 disks (72 GB each) in RAID 1 for logs (14 LDF files)
    3 disks (72 GB each) in RAID 5 for backups (full and log)
    2 disks (146 GB each) in RAID 1 for QA and other stuff (70 GB used)

    Our main database is about 40 GB, and the other MDF files add up to 5 GB. The other databases account for less than 1% of the activity, or even less than that.

    Another question: is there any performance gain if I create secondary files (NDF) in THE SAME filegroup and on THE SAME drive? I've seen that SQL Server distributes the data across the files (through the filegroup), but I'm not sure whether this really helps.

    Thanks again.


    Patrick MacKAY
    ASP.NET MVP
  10. bertcord New Member

    “The truth is that we can't afford the ideal and have one RAID 1 for each database's TLOG.”

    Yes, I agree… but you have to remember rules are made to be broken ;) Who says your TLOG has to be on its own disks? How much performance does your TLOG really need? When you look at Performance Monitor, how much TLOG activity do you see? Does your main DB even need a super-fast TLOG drive? I manage several very large databases that need special configurations, but we also have over 100 servers for which we use a standard config. For example, our newest platform is the Dell 2850. This server holds only 6 internal drives. We configure it as one RAID 10 array and slice drives out of that, and we run anywhere from 1 to 20 databases on it. Our application uses a lot of BULK INSERTs and we use the BULK_LOGGED recovery model, so the TLOG is not a major concern. When a database starts having performance problems, we move it to a dedicated box; once a database needs its own server, we create a dedicated TLOG drive for it.

    Don't get me wrong: TLOG performance is in some cases very critical, but you need to determine what your application requires. There is no sense in wasting 2 drives on the TLOG if the system would perform the same with the TLOG on the same drive as the data files.

    If your database has a lot of random data access, the more spindles the better. Instead of having drives dedicated to backup, use them for data and backup. You could possibly combine all your 72 GB drives into a single RAID 10.

    Also remember that RAID 10 only helps with writes; unless you have more than 15-20% writes, RAID 5 might be better as well.

    Check out the following post; it will help you determine how much activity each of your data files has:
    http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=9202

    “Another question. Is there any gain in performance if I create secondary files (NDF) in THE SAME filegroup and on THE SAME drive?”

    In SQL Server 7.0, having multiple files could increase performance, but this is no longer the case with SQL 2000. SQL 2000 will spawn multiple threads automatically. This is a common misconception and was actually stated incorrectly in SQL Server Books Online; it was fixed in last year's update to Books Online.

    Bert
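    On SQL Server 2000, the per-file activity discussed in that linked post can also be sampled directly with the built-in fn_virtualfilestats function; a quick sketch ('MainDB' and file id 1, the primary MDF, are placeholders):

```sql
-- Cumulative IO statistics per database file since SQL Server started.
SELECT  DbId,
        FileId,
        NumberReads,     -- physical reads issued against this file
        NumberWrites,    -- physical writes issued against this file
        BytesRead,
        BytesWritten,
        IoStallMS        -- total milliseconds spent waiting on IO
FROM    ::fn_virtualfilestats(DB_ID('MainDB'), 1)
```

    Sampling this periodically and diffing the counters shows which files (and therefore which arrays) are actually doing the work.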
  11. pmackay New Member

    Bert,

    Sometimes we need an explanation like yours. I really appreciate your time and the useful real-life example you gave. Now I have a better understanding of the whole picture. Of course I still have doubts, but now I can try to clarify them by myself.

    We have our configuration (different RAID arrays) because I read a guide provided by Microsoft (TechNet) that explains how to separate the files in a storage array. Now I realize the picture is more complex than putting files on separate disks and praying for good performance.

    I'm going to look at your post on analyzing IO and I'll analyze my system. Then I'd like to share the stats with you, along with the new array definition (if it comes to that, of course).

    Thanks again.


    Patrick MacKAY
    ASP.NET MVP
  12. bertcord New Member

    No problem, glad I could help.

    I had the same problem you did: I read all the literature that preached putting indexes in one filegroup and data in another. Well, I tried this, and I wish I had thought about it more. My indexes were accessed far more than the actual data. Our indexes were larger than our data, and the index RAID array was pegged while the data array was hardly utilized.

    Let me know your results and I will see if I can continue to help.

    Bert
  13. ronberry New Member

    We are embarking on a similar configuration that I thought I would run by you to see what you think.



    Basically we are attaching a 3 node SQL cluster to a SAN.

    Performance is very important.


    Currently we have 28 drives, 2 trays of 14 drives (and 2 hot spares)

    All drives are being placed in a raid 1+0 configuration.

    The drives are being placed in the following configuration:


    The LUNs are configured as follows:

    MDF/LDF
    6 drives (mirrored, 12 drives total) - RAID 1+0
    LUNs 1-4 carved across the 6 drives (1 for each SQL Server instance)

    TEMPDB/Logs
    1 drive (mirrored, 2 drives total) - RAID 1+0
    LUNs 5-8 carved across the 1 drive (1 for each SQL Server instance)

    BACKUP
    4 drives (mirrored, 8 drives total) - RAID 1+0
    LUNs 9-12 carved across the 4 drives (1 for each SQL Server instance)

    Quorum
    1 drive (mirrored, 2 drives total) - RAID 1+0
    LUN 13

    The rest of the drives are spares (not allocated) and hot spares.


    We are running SQL 2000, with 2 instances on each server (1 passive node)

    Each server has 2 GB per instance and 4 CPUs, and the passive node has 8 GB.


    The question I have is this: we are looking at placing 4 MDFs per database in each instance (each instance is basically 1 database/application). The databases are currently about 20-30 GB, growing about 100% per year. It is an OLTP system.


    1 - Do you think we will see a performance improvement by creating multiple MDFs so that we can do more parallel processing?

    2 - If we go with multiple MDFs, should we also go with multiple LDFs?


  14. bertcord New Member

    Some questions

    How many total instances across all 3 nodes?

    How many databases total? Two?

    Why 4 MDF's per DB?

    You say MDF/LDF, but you also mention a LUN for TEMPDB/Logs. LDF usually means logs?

    Are the applications currently in production and not on a SAN?

    What kind of SAN are you getting?

    Bert
  15. ronberry New Member

    Q: How many total instances for all 3 nodes?
    A: 4 - two on each of the active nodes


    Q: How many databases total? Two?
    A: Each instance is an application, generally with one database, however, one application has 3 databases.


    Q: Why 4 MDF's per DB?
    A: so that we can read the data in parallel on each processor; a large table read will cause a thread per MDF


    Q: You say MDF/LDF, but you also mention a LUN for TEMPDB/Logs. LDF usually means logs?
    A: those logs are "log shipping" to the disaster recovery site


    Q: Are the applications currently in production and not on a SAN?
    A: currently in production, the SAN is new



    Q: What kind of SAN are you getting?
    A: Twin EMC CX700's

  16. joechang New Member

    Each high-activity DB needs its own dedicated pair of drives for the logs.
    It is quite OK for one set of 12 drives to be shared by multiple DBs for data, but not for logs.
    The quorum needs its own partition, but it sees little activity, so consider making a 1 GB LUN for the quorum and using the remaining capacity for tempdb.

    If you do not plan on backing up each DB simultaneously, consider making a data LUN from half of the space on one set of disks and making the backup destination LUN for a different DB from the other half. Then, if you back up one DB at a time, data is read from one set of disks and written to a different set,
    while the next DB reads its data from the disks holding the previous backup and backs up to the set of disks holding the previous DB's data.
    Or something to that effect.

    If you are only buying 2 DAEs of 14 disks each, why bother with the CX700? Why not the CX500?
    Also, configure 2-4 FC ports for better sequential performance,
    and consider the new EMC CX line with SAS drives.
  17. FrankKalis Moderator

    ronberry,
    usually you have "better" chances of attracting people to your questions when you start a new thread with a meaningful subject instead of "hijacking" some other thread. That way more people will notice that you need help. :)

    --
    Frank Kalis
    Microsoft SQL Server MVP
    http://www.insidesql.de
    I support PASS Deutschland e.V. (http://www.sqlpass.de)
  18. bertcord New Member

    As Joe mentioned, if you want to isolate logs, put only one log file per RAID 1 array.

    I would advise against the 4 MDFs. You really don't have enough disks to make it worth it. I sometimes create multiple files per filegroup, but I do that to span a few large LUNs. For example, I have a DB with 6 ten-disk RAID 10 LUNs, and I stripe the entire DB across those LUNs.
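    Spanning one filegroup across several LUNs like that is just a matter of adding one file per LUN; a rough sketch (database name, drive letters, file names and sizes are all invented):

```sql
-- One filegroup, one file per RAID 10 LUN. SQL Server uses proportional
-- fill across the files, spreading the data over all the LUNs.
ALTER DATABASE BigDB ADD FILE
    (NAME = 'BigDB_Data2', FILENAME = 'F:\SQLData\BigDB_Data2.ndf', SIZE = 50GB),
    (NAME = 'BigDB_Data3', FILENAME = 'G:\SQLData\BigDB_Data3.ndf', SIZE = 50GB),
    (NAME = 'BigDB_Data4', FILENAME = 'H:\SQLData\BigDB_Data4.ndf', SIZE = 50GB)
```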

    I would not dedicate drives to tempDB in your situation.

    If your applications are already in production, you should analyze them to determine what kind of disk config to give them. For example, are you sure you need RAID 10? What is your read-to-write ratio?

    A CX700 with 14 drives?

    Bert
  19. ronberry New Member

    Joe

    To your questions/comments

    Actually, we are buying 2 CX700's for production (one at each facility); one fails over to the other in case of an outage.

    Currently we have CX500's for development.

    The CX700 was chosen for top-end growth and reliability reasons.


    The CX700's have 2 trays of 15 drives each dedicated to SQL Server requirements, with each tray mirrored to the other.


    When you say "Each high activity DB needs its own dedicated pair of drives for the Logs", are you saying that the LDF needs drives that are separate from the data drives, or are you referring to log-shipping drives?

    As to backing up databases simultaneously: each database will be scheduled at a different time, so no, we do not plan on more than one at a time.

  20. ronberry New Member

    Bert,

    Why would you not dedicate a drive to tempdb?

    Also, the 4 MDFs are more about gaining parallelism on queries.



    Actually, the portion of the CX700 that is for SQL is 2 trays of 15 drives each (2 for hot spares), so 14 usable per tray.
  21. bertcord New Member

    The TLOG is written to disk sequentially, and sequential drive access performs better than random access. As soon as you place two sequential TLOGs on one drive, you defeat the purpose: the drive is now randomly accessed. If you dedicate a TLOG drive, you should have only one TLOG on it.

    You do not need multiple files for query parallelism. What leads you to believe this?

    I would also recommend more memory. Are you running x64? On x64 you will have access to a full 4 GB of VAS (virtual address space) per instance. Obviously you would need to purchase more physical memory.

    Bert
  22. ronberry New Member

    Hi Bert,

    In the article at http://www.sql-server-performance.com/ac_filegroup_performance.asp it says:

    "If you have several physical disk arrays, try to create as many files as there are physical disk arrays so that you have one file per disk array.
    This will improve performance, because when a table is accessed sequentially, a separate thread is created for each file on each disk array in order to read the table's data in parallel."



  23. joechang New Member

    I am not certain this is true; it may depend on the SQL Server build.
    In any case, creating one file per array (one that appears as a single physical disk to the OS) is certainly better than using the OS disk-striping feature.
    I did not recall much difference between creating one giant array on the RAID controller and presenting 1 physical drive to the OS with one data file,
    compared with creating multiple arrays and having the OS see several physical drives with 1 data file each.
  24. bertcord New Member

    In SQL Server 7.0, having multiple files could increase performance, but this is no longer the case with SQL 2000. SQL 2000 will spawn multiple threads automatically. This is a common misconception and was actually stated incorrectly in SQL Server 2000 Books Online; it was fixed in last year's update to Books Online.

    Original Books Online text:
    "If the computer has multiple processors, Microsoft® SQL Server™ 2000 can perform parallel scans of the data. Multiple parallel scans can be executed for a single table if the filegroup of the table contains multiple files. Whenever a table is accessed sequentially, a separate thread is created to read each file in parallel. For example, a full scan of a table created on a filegroup comprising of four files will use four separate threads to read the data in parallel. Therefore, creating more files per filegroup can help increase performance because a separate thread is used to scan each file in parallel.
    Similarly, when a query joins tables on different filegroups, each table can be read in parallel, thereby improving query performance."

    Updated Books Online text:
    If the computer has multiple processors, Microsoft® SQL Server™ 2000 can perform parallel scans of the data. Multiple parallel scans can be executed for a single table regardless of the number of files that are in its filegroup.

    Let the SAN hardware do as much of the striping as possible. If you need more space or spindles than a single tray can handle, then create multiple files. Another option that EMC will propose is MetaLUNs; I would recommend against them.

    Bert
