SQL Server Performance

DBCC ShowContig output

Discussion in 'Performance Tuning for DBAs' started by rerichards, Feb 8, 2005.

  1. rerichards New Member

    I ran the following DBCC SHOWCONTIG on a table with a clustered index built with a 90% fill factor. The table contains 255,000 rows and, according to sp_mstablespace, has data space used = 4385976 and index space used = 5424.

    DBCC SHOWCONTIG scanning 'IFSMessages' table...
    Table: 'IFSMessages' (2009058193); index ID: 1, database ID: 7
    TABLE level scan performed.
    - Pages Scanned................................: 3003
    - Extents Scanned..............................: 379
    - Extent Switches..............................: 385
    - Avg. Pages per Extent........................: 7.9
    - Scan Density [Best Count:Actual Count].......: 97.41% [376:386]
    - Logical Scan Fragmentation ..................: 0.53%
    - Extent Scan Fragmentation ...................: 30.34%
    - Avg. Bytes Free per Page.....................: 855.1
    - Avg. Page Density (full).....................: 89.44%
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    I have read articles on this site relating to the results of DBCC SHOWCONTIG, but I am unsure whether my results are within the acceptable range. The last reindex was approximately 24 hours ago. I noticed that within the last 8 hours the extent scan fragmentation has risen from 24.79% to 30.34%.
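
    For reference, the command was essentially:

        DBCC SHOWCONTIG ('IFSMessages')

    To track the extent scan fragmentation over time, the same numbers can be captured as a result set instead of text (this is just the tabular form of the command above, not something I am doing yet):

        DBCC SHOWCONTIG ('IFSMessages') WITH TABLERESULTS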

    From this, can you see any red flags?
  2. satya Moderator

    I don't see any particular issue with the results of DBCC SHOWCONTIG.

    Are you getting any performance issues on this table during inserts or SELECT statements?
    If not, continue the database consistency checks and index rebuild schedule as usual.
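
    If you do want to rebuild on demand, something along these lines works (SQL Server 2000 syntax; the table name and the 90 fill factor are taken from your post, and '' means all indexes on the table):

        DBCC DBREINDEX ('IFSMessages', '', 90)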

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. thomas New Member

    A scan density of 97% is nothing to worry about. Like Satya said, worry about it only if you have performance problems. Also, 255,000 rows isn't many at all.

    Tom Pullen
    DBA, Oxfam GB
  4. Luis Martin Moderator

    There is a wonderful article and stored procedure on our forum, written by Tom.
    http://www.sql-server-performance.com/tp_automatic_reindexing.asp

    I suggest you use it.
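
    The stripped-down idea looks like this; Tom's procedure is smarter because it checks fragmentation first and only rebuilds what needs it, while this blunt sketch simply rebuilds every user table with the 90 fill factor mentioned above:

        -- Rebuild all indexes on every user table (blunt version of the idea)
        DECLARE @table sysname
        DECLARE tbls CURSOR FOR
            SELECT name FROM sysobjects WHERE type = 'U'
        OPEN tbls
        FETCH NEXT FROM tbls INTO @table
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX (@table, '', 90)
            FETCH NEXT FROM tbls INTO @table
        END
        CLOSE tbls
        DEALLOCATE tbls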


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important
    Bertrand Russell


    All postings are provided “AS IS” with no warranties for accuracy.



  5. rerichards New Member

    quote:Originally posted by thomas

    A scan density of 97% is nothing to worry about. Like Satya said, worry about it only if you have performance problems. Also, 255,000 rows isn't many at all.

    Tom Pullen
    DBA, Oxfam GB

    I have some very strange performance problems. While the scan density of 97% looked good, the increase in extent scan fragmentation made me wonder. The background on these problems is in the following quote, which comes from an earlier post of mine on this site.


    quote:Originally posted by rerichards

    I have a table with a clustered primary key index with a 90% fill factor and a second composite (two-column) non-clustered index. The table has fewer than 300,000 rows. We have a separate archive table to keep the row count under 300,000.

    The table receives far more inserts and updates than reads.

    Performance monitor counters for the following are all well within accepted allowances:
    PhysicalDisk - %Disk Time - _Total
    PhysicalDisk - Current Disk Queue Length - _Total
    System - % Total Processor Time
    System - Processor Queue Length
    SQLServer: Access Methods - Page Splits/Sec
    SQLServer: Memory Manager - Target Server Memory
    SQLServer: Memory Manager - Total Server Memory
    SQLServer: Locks - Average Wait Time ms - _Total (always returns 0, never a lock)

    DBCC TRACEON (3605, 1204, -1) - no indication of deadlocks

    Daily, we delete rows from the table into an archive table, perform a transaction log backup, and then run a maintenance plan to shrink the database and reorganize the data and index pages.

    If we do not perform this daily routine, the application services communicating with the database time out at 30 seconds. As long as we perform it, a duration trace in Profiler shows normal statement durations for these inserts and updates in the milliseconds, so the 30-second command timeout seems more than adequate.

    When we do not perform this daily routine we see the following in Profiler:
    1. Only the inserts are failing (timing out at 30 seconds), but not all of them. A service that fails on an insert can turn right around and perform a successful insert in milliseconds the next time around. The same service that timed out when inserting can turn right around and successfully perform an update in milliseconds.

    2. Only inserts are failing (timing out). Updates do not fail.

    3. There is no gradual buildup in duration. Every duration is in the milliseconds, and then all of a sudden a service times out at 30 seconds on an insert.

    4. The size of the rows being inserted rarely exceeds 1 MB, and the rows that time out are often only several hundred bytes, if even that.

    5. The data files are set to grow automatically by 10%. The data file is 6 GB and does not show any growth, nor is it near the threshold of growing, when the inserts are timing out (remember that some inserts continue to succeed in milliseconds and all updates succeed in milliseconds).

    6. The log files are set to grow automatically by 10%. The log file is 215 MB and a transaction log backup is performed every 2 hours. Occasionally it does grow, but most of the time, by the time the two-hour backup comes around, it is around 50-60% used, or 40-50% free (again, some inserts continue to succeed in milliseconds and all updates succeed in milliseconds).

    With that lengthy background, do you have any insight into what may be at issue? When the timeouts occur we stop the application services, archive (delete some records from the table receiving the inserts), perform a transaction log backup, and then optimize by shrinking the database files and reorganizing the data and index pages. Then we restart the application services and things proceed without any insert timeouts.
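
    For reference, the daily routine amounts to something like the following sketch (the archive table name, the MessageDate column, the database name, the backup path, and the 30-day cutoff are all placeholders, not our exact job):

        -- 1. Archive: move older rows out to keep the table under 300,000 rows
        INSERT INTO IFSMessagesArchive
            SELECT * FROM IFSMessages
            WHERE MessageDate < DATEADD(day, -30, GETDATE())
        DELETE FROM IFSMessages
            WHERE MessageDate < DATEADD(day, -30, GETDATE())

        -- 2. Back up the transaction log
        BACKUP LOG MyDatabase TO DISK = 'E:\Backups\MyDatabase_log.trn'

        -- 3. Shrink the database
        DBCC SHRINKDATABASE (MyDatabase)

        -- 4. Reorganize the data and index pages
        DBCC DBREINDEX ('IFSMessages', '', 90)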
  6. thomas New Member

    Don't shrink the database, is my first thought. Ever! It causes index fragmentation.


    I'll read the rest when I get a sec and try to make further suggestions.

    Tom Pullen
    DBA, Oxfam GB
  7. satya Moderator

    Timeouts can occur for various reasons: bad code, a bad plan, and sometimes index fragmentation.

    It is better to defragment the indexes and recompile the involved stored procedures, in order to purge the cache and refresh it with updated plans.

    For this reason there is no point in shrinking the database: shrinking and then reindexing for performance purposes will not produce anything positive.
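
    For example (SQL Server 2000 syntax; MyDatabase is a placeholder, index ID 1 is the clustered index from your SHOWCONTIG output, and passing the table name to sp_recompile marks every procedure and trigger that references it for recompilation):

        -- Defragment the leaf level of the clustered index without long-held locks
        DBCC INDEXDEFRAG (MyDatabase, 'IFSMessages', 1)

        -- Refresh plans for all procedures and triggers that touch the table
        EXEC sp_recompile 'IFSMessages'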

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  8. hoo-t New Member

    I'd be willing to bet that your timeouts are due to the database files expanding. You are shrinking your database every day, which means that it is expanding every day. You run an insert, the data file is full, so it expands BEFORE the insert can complete. How much are you expanding in one shot? Try expanding the database manually by that amount and see how long it takes. If you're shrinking your database every day, you are creating a HUGE performance drain! Stop doing that and let the database settle in at its "preferred" size, and I bet you'll quit timing out on inserts!
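
    One way to test that, roughly (the database and logical file names are placeholders; size the new value to match one 10% growth increment, which is about 600 MB on a 6 GB file):

        -- Time a manual expansion of the data file by one growth increment
        DECLARE @start datetime
        SET @start = GETDATE()
        ALTER DATABASE MyDatabase
            MODIFY FILE (NAME = MyDatabase_Data, SIZE = 6750MB)
        SELECT DATEDIFF(ms, @start, GETDATE()) AS GrowthMilliseconds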

    Steve
  9. thomas New Member

    Yep, I concur with Steve. Desist from that shrinking a.s.a.p.!!

    Tom Pullen
    DBA, Oxfam GB
  10. mmarovic Active Member

    quote:The data files are set to automatically grow at 10%. The data file is 6GB and does not show any growth or even near the threshold of growing when the inserts are timing out (remember that some inserts continue to succeed in milliseconds and all updates are succeeding in milliseconds).
    This sentence confused me, so I thought the file was not expanding. However, it is possible that the growth occurs when dirty pages are written to disk, not immediately after the insert. I don't have time to test it right now. Does anyone know the answer?
  11. thomas New Member

    I don't know precisely what provokes it, apart from the data file filling right up (not something I generally like to allow to happen!). The server will grind to a right halt while the data file grows, though; that's easily enough impact to generate timeouts.

    Tom Pullen
    DBA, Oxfam GB
  12. satya Moderator

    Profiler or server-side traces are the best tools for finding slow-running queries and checking when the timeouts occur.
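
    A minimal server-side trace sketch, assuming SQL Server 2000 (the file path and the 1-second duration threshold are placeholders; event 12 is SQL:BatchCompleted, and columns 1, 12, and 13 are TextData, SPID, and Duration):

        DECLARE @TraceID int, @on bit, @ms bigint, @maxfilesize bigint
        SET @on = 1
        SET @ms = 1000          -- capture only batches running 1 second or longer
        SET @maxfilesize = 50   -- MB per trace file

        -- Create the trace; output goes to a .trc file you can open in Profiler
        EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\slow_batches', @maxfilesize, NULL

        -- Capture TextData, SPID, and Duration for SQL:BatchCompleted
        EXEC sp_trace_setevent @TraceID, 12, 1, @on
        EXEC sp_trace_setevent @TraceID, 12, 12, @on
        EXEC sp_trace_setevent @TraceID, 12, 13, @on

        -- Keep only events with Duration >= @ms (operator 4 = greater than or equal)
        EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @ms

        -- Start the trace
        EXEC sp_trace_setstatus @TraceID, 1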

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  13. rerichards New Member

    quote:Originally posted by hoo-t

    I'd be willing to bet that your timeouts are due to the database files expanding.

    It appears you are right on! We just experienced a timeout episode this morning and it was during a data file growth. All the updates were succeeding in milliseconds, and the inserts were mostly timing out, though a few intermittent ones were succeeding in milliseconds.

    I have removed the database shrink step.

    During a slow period I am going to expand the data file, then monitor its space usage.
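
    For the monitoring, a query along these lines (run in the database itself) should show allocated versus used space per file, so I can see how close the data file is to its next growth; sysfiles.size is in 8 KB pages, hence the division by 128:

        SELECT  name,
                size / 128 AS AllocatedMB,
                FILEPROPERTY(name, 'SpaceUsed') / 128 AS UsedMB,
                (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS FreeMB
        FROM    sysfiles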
  14. satya Moderator

    If the timeouts are occurring during data file expansion, then set the growth value to 20% rather than the standard 10%, or set a fixed growth increment of 500K or so.
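
    For example (the database and logical file names are placeholders, and the fixed increment shown is only illustrative):

        -- Percentage growth
        ALTER DATABASE MyDatabase
            MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 20%)

        -- Or a fixed increment, which makes each growth predictable
        ALTER DATABASE MyDatabase
            MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 500MB)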

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  15. mmarovic Active Member

    You have followed the size of your db for some time, so you should know what the max size was. I would set the file size to the measured max + 10-20%, so your db files never have to expand.
