
SQL Server Performance Forum – Threads Archive

File groups for increased performance

We have a server with a single RAID 5 array for all data files and tlogs. We've been having a discussion about the merits of creating multiple filegroups in an effort to increase I/O performance. My stance has been that creating multiple filegroups will not provide a performance benefit if all of the underlying data files are on the same physical spindles, but I appear to be in the minority with that opinion. Am I missing something? There has also been discussion of adding another RAID array, and I've made the case that it should be used to move the tlogs away from the data (segregating sequential from random read/write) rather than for additional filegroups to separate tables from indexes or frequently joined tables.
On the same physical spindles you don't need to split into more filegroups; it's a waste of time.
On different spindles, though, you will gain performance by moving the tlogs, or heavy tables with their indexes, etc.
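If you do add the second array, here is a minimal sketch of relocating the log onto it (works on SQL Server 2005 and later; the database name AppDB, the logical file name, and the L:\ path are all hypothetical):

-- Minimal sketch: relocating a transaction log to a dedicated array.
-- AppDB, AppDB_log, and L:\TLogs\ are hypothetical names/paths.
USE master;
GO
ALTER DATABASE AppDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO
-- Update the catalog to point at the new location...
ALTER DATABASE AppDB
    MODIFY FILE (NAME = AppDB_log, FILENAME = N'L:\TLogs\AppDB_log.ldf');
GO
-- ...then physically copy AppDB_log.ldf to L:\TLogs\ before bringing it back online.
ALTER DATABASE AppDB SET ONLINE;
GO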
Luis Martin
Moderator
SQL-Server-Performance.com
"All in Love is Fair" – Stevie Wonder
All postings are provided “AS IS” with no warranties for accuracy.
On the matter of separating data and logs: yes, it is a good idea. Whether it actually helps depends on your expected log write volume. With a dedicated disk, log writes can reach 3,000-5,000 writes/sec at 0.3 ms latency and 40-60 MB/sec; shared with data on six disks, you might get 200-300 writes/sec at 5-10 ms latency.
I do not buy into the idea of separating data and indexes into separate filegroups, with all data in one filegroup and all indexes in another. That is a half-baked (or even unbaked) idea with no substantiating analysis to support it.
One use of filegroups that I do like is to place the data for a very large table in its own filegroup, with all other data and all indexes in the primary filegroup. Additional filegroups could potentially be used for the indexes of the large table or other large tables, but I do not like excessive complexity. The idea is that the very large table then does not get fragmented, and the only reason for doing this is that the table is so large that it is impractical to defragment it; see the sketch below. Even if multiple filegroups are used, I do not like the idea of each filegroup sitting on its own disks; rather, all (data, not log) filegroups should share a common set of physical disks. Potentially, each filegroup could get its own partition. All of this is irrelevant if you do not have a sufficient number of disks to power through heavy queries.
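For anyone wanting to try it, a minimal sketch of that dedicated-filegroup setup; the database name (AppDB), filegroup and file names, paths, sizes, and the table definition are all hypothetical:

-- Minimal sketch: give one very large table its own filegroup.
-- AppDB, BigTableFG, the file path, and dbo.OrderHistory are hypothetical.
ALTER DATABASE AppDB ADD FILEGROUP BigTableFG;
GO
ALTER DATABASE AppDB
    ADD FILE (NAME = BigTableData1,
              FILENAME = N'D:\Data\AppDB_BigTable1.ndf',
              SIZE = 20GB, FILEGROWTH = 1GB)
    TO FILEGROUP BigTableFG;
GO
-- Create (or rebuild) the large table on the new filegroup; everything else
-- stays in PRIMARY, and both filegroups can share the same physical disks.
CREATE TABLE dbo.OrderHistory (
    OrderID    bigint   NOT NULL PRIMARY KEY,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL,
    Amount     money    NOT NULL
) ON BigTableFG;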
<<One use of filegroups that I do like is to place the data for a very large table in its own filegroup, with all other data and all indexes in the primary filegroup.>> Just so I understand: you would do this so all of the extents for the large table are contiguous instead of mixed in with extents from other objects? <<but I do not like excessive complexity.>> This has been my argument. Why make the environment more complex (multiple filegroups and files) if we are not going to see a performance benefit?
Let me restate: avoid unnecessary complexity.
If you do not have one really big table that is too time-consuming to defrag, don't bother.
If you do, then the extra complexity has value and you should make the effort.
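A quick way to gauge whether the big table even needs this treatment is to measure its fragmentation first. A minimal sketch using the SQL Server 2005+ DMV sys.dm_db_index_physical_stats (database and table names are hypothetical):

-- Check fragmentation for a hypothetical large table dbo.OrderHistory in AppDB.
-- 'LIMITED' mode scans only the parent B-tree levels, so it is cheap even on huge tables.
SELECT  index_id,
        index_type_desc,
        avg_fragmentation_in_percent,
        page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(N'AppDB'),
            OBJECT_ID(N'AppDB.dbo.OrderHistory'),
            NULL, NULL, 'LIMITED');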
Thanks. We do have one table that accounts for over 50% of the database size, so I'll offer up this suggestion and see what feedback I get.