SQL Server Performance

SQL Server Filegroups Best Practices?

Discussion in 'General DBA Questions' started by Shash703, Dec 6, 2006.

  1. Shash703 New Member

    We have a large and very busy database with some very large tables (60+ million rows). What we are seeing is a lot of I/O pressure, with an average disk queue length around 20. What are some of the best practices for using filegroups?

    The DB is connected to an EMC SAN.

    1. Should I move just the indexes of the large tables into another filegroup, or
    2. Should I move the large tables into their own files and filegroup?

    Which would give me a performance gain? Are there any better solutions to this problem?

  2. bradmcgehee New Member

    If you just create filegroups like you described, and don't move the files in the filegroups to different locations on your SAN, you aren't buying anything.

    Before I would do anything, I suggest you first determine if there is any way to tune the queries that are running in order to reduce disk I/O. There are a lot of different tips on this website on how to do this.

    If you have exhausted the above, and still have I/O issues, then you are ready to consider other options.

    One of the options you can consider is to move the current files you have onto different locations of your SAN. This assumes that you have this ability. If you are using a virtual SAN configuration, then this option is not really applicable.

    Some of the options you have include:

    1. Moving the transaction logs onto different arrays than the data files.
    2. Moving different databases to different arrays.
    3. Creating different files within filegroups to increase the number of threads available to access the data.
    4. Creating filegroups of related tables and indexes, and moving them to different arrays to reduce I/O conflicts.
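    As a minimal sketch of options 3 and 4, here is what creating a new filegroup with multiple files might look like. All names here (MyDB, FG_Indexes, the file paths and sizes) are placeholders you would replace with your own:

    ```sql
    -- Create a secondary filegroup (hypothetical names throughout).
    ALTER DATABASE MyDB ADD FILEGROUP FG_Indexes;

    -- Add two files to it. Ideally each FILENAME points at a different
    -- array/LUN; on the same logical drive the benefit is much smaller.
    ALTER DATABASE MyDB
    ADD FILE
        (NAME = MyDB_Idx1, FILENAME = 'F:\SQLData\MyDB_Idx1.ndf',
         SIZE = 4GB, FILEGROWTH = 512MB),
        (NAME = MyDB_Idx2, FILENAME = 'G:\SQLData\MyDB_Idx2.ndf',
         SIZE = 4GB, FILEGROWTH = 512MB)
    TO FILEGROUP FG_Indexes;
    ```

    Pre-sizing the files (rather than relying on autogrowth) also avoids file-growth stalls under a heavy insert load.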

    These are just some of the many options you have, but until you identify what is causing your problem, it is hard to make a good recommendation.

    Brad M. McGehee, SQL Server MVP
  3. Shash703 New Member

    Thanks Brad for the reply, but my problem is that this is a canned application, which is poorly designed. I have tried to contact their support for "recommendations" but got no real answers. The database is fairly busy, around 1000 tran/sec. (A lot has to do with the poor design.)

    The question I have is this: the DB is in simple recovery mode (it's mostly monitoring data), and I won't have the ability to move the data files somewhere else on the SAN. Does creating another filegroup and adding data files to it on the same logical drive even help? Or would adding additional data files to the same (PRIMARY) filegroup give me the same performance gain?

    The other question is that I know I have three tables (60 million rows each) that get hit really hard, with heavy inserts and heavy reads.

    So is it better to move the whole table and its indexes onto its own filegroup/data file, or

    to leave the data on the PRIMARY filegroup and move the non-clustered indexes to a secondary filegroup?
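    For reference, moving a non-clustered index to a secondary filegroup is a rebuild in place. A sketch, assuming hypothetical table, column, index, and filegroup names:

    ```sql
    -- Rebuild an existing non-clustered index onto another filegroup.
    -- DROP_EXISTING = ON drops and recreates it in a single operation.
    CREATE NONCLUSTERED INDEX IX_BigTable_Col1
    ON dbo.BigTable (Col1)
    WITH (DROP_EXISTING = ON)
    ON FG_Indexes;
    ```

    Rebuilding the table's clustered index the same way (with `ON FG_Data`) would move the table data itself, since the clustered index is the table.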


  4. chetanjain04 Member


    I believe adding data files to a new or existing filegroup on the same logical drive would not help much, as the problem is I/O related. Let's see what others say...

    However, I would like to know what kind of tables they are. What is the growth trend? Is it possible to create partitions? You may want to consider partitioning these tables so that each partition is small and the I/O gets distributed.
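    A minimal sketch of what table partitioning might look like here (SQL Server 2005+). Everything is a hypothetical assumption: that the tables have a datetime column such as LogDate suitable for range partitioning, and all object names are placeholders:

    ```sql
    -- Partition function: split rows into monthly ranges by date.
    CREATE PARTITION FUNCTION pf_MonthRange (datetime)
    AS RANGE RIGHT FOR VALUES ('2006-10-01', '2006-11-01', '2006-12-01');

    -- Partition scheme: map every partition to a filegroup.
    -- ALL TO ([PRIMARY]) keeps them on one filegroup; listing several
    -- filegroups instead would spread the I/O across them.
    CREATE PARTITION SCHEME ps_MonthRange
    AS PARTITION pf_MonthRange ALL TO ([PRIMARY]);

    -- Rebuilding the clustered index on the scheme partitions the table.
    CREATE CLUSTERED INDEX CIX_BigTable_LogDate
    ON dbo.BigTable (LogDate)
    WITH (DROP_EXISTING = ON)
    ON ps_MonthRange (LogDate);
    ```

    Even with all partitions on one filegroup, smaller partitions can help inserts and date-ranged reads touch fewer pages; mapping partitions to separate filegroups on separate arrays is where the I/O distribution comes from.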
