Disk fragmentation issues and techniques | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Reviewing our databases, we find 31% of our indexes have index fragmentation above 30% and should be rebuilt. Yesterday, I found fragmentation of the disk on our SQL Server 2005 system to be at 64%. There are apparently good reasons for our queries to be failing. Last night, I stopped the SQL Server and started the Windows defrag tool. Of 273GB, 24% was free and the defrag tool ran for three hours. When it completed, a message came up saying not all files could be defragmented and that the new fragmentation level was 55%. I brought the SQL Server back up. All but one web application worked properly, so I cycled the IIS web server and all was fine again. In the end, it didn’t seem that much was accomplished, but I’m thinking I need to defrag the physical disk before trying to rebuild the indexes. Questions:
1. Why could the tool not defrag more of the files? Were they still locked by SQL Server even though the service was stopped?
2. Should I keep running the same defrag tool repeatedly to accomplish more? (I heard once that this is how the Windows defrag tool works.)
3. I’ve heard of people using a program called CONTIG from Sysinternals to defrag large, individual database files. Has anyone used it, and would you recommend it?
4. Should defrag of the physical disk be a weekly or maybe monthly scheduled task to prevent this situation?
5. Should I normally cycle the IIS server whenever I need to cycle the database server?
Take a look at the following articles…
Be careful to separate internal fragmentation of data (within a SQL Server file) from external fragmentation of the files themselves at the operating system/physical storage level. Index fragmentation is internal, and is resolved by SQL Server commands such as DBCC DBREINDEX or DBCC INDEXDEFRAG. The articles MohammedU links to explain that well. Internal fragmentation does need attention, depending on how much the data in your tables changes, and most people have a job or series of jobs to take care of it.

Also note: fragmentation, internal or external, might not be your issue in the first place. Check Performance Monitor or similar tools to diagnose the problem first, then take action. Index fragmentation, for example, is only really relevant to large tables; small ones might report high fragmentation percentages, but it makes no real difference performance-wise.

A properly configured SQL-only box should show very little external file fragmentation, because SQL Server opens and modifies its files in place rather than creating new ones or changing their size. A bad practice like frequent grow/shrink on the database files can introduce fragmentation, but the system should not be set up that way in the first place. Basically, you allocate the space for each file up front and then let the machine run. Because of that, you should rarely, if ever, need an OS-level defrag.
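To make the internal-defrag side concrete: on SQL Server 2005 the older DBCC commands still work, but ALTER INDEX is the preferred form. A minimal sketch (the table and index names here are placeholders, not from the thread):

```sql
-- Reorganize: lightweight, online; the SQL 2005 successor to DBCC INDEXDEFRAG
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

-- Rebuild: heavier, takes locks unless ONLINE = ON is available on your edition;
-- the successor to DBCC DBREINDEX
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);

-- Or rebuild every index on the table at once
ALTER INDEX ALL ON dbo.Orders REBUILD;
```

A common rule of thumb is to reorganize at moderate fragmentation and rebuild only at high fragmentation, since a rebuild is far more expensive.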
Can anybody tell me how I can know when I need to rebuild or defrag the indexes? Is there any tool or command through which we can check the current condition of the indexes on my table?

You don’t need to wait until disaster happens; why not run such defragmentation as a scheduled job? For more information, http://www.sql-server-performance.com/articles/per/detect_fragmentation_sql2000_sql2005_p1.aspx will help you check fragmentation for specific tables that are updated frequently.
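To answer the "how do I know" question concretely: on SQL Server 2005 you can query sys.dm_db_index_physical_stats (on 2000 the equivalent was DBCC SHOWCONTIG). A sketch, assuming the commonly cited 5%/30% thresholds and a 1000-page cutoff to skip small tables; run it in the database you want to inspect:

```sql
SELECT  OBJECT_NAME(ips.object_id)    AS table_name,
        i.name                        AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count,
        CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
             WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
             ELSE 'OK' END            AS suggested_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
WHERE   ips.page_count > 1000   -- ignore small tables, per the advice above
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

The 'LIMITED' scan mode is the cheapest option and is usually enough for deciding what to rebuild; a scheduled job can drive ALTER INDEX REBUILD/REORGANIZE from results like these.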