SQL Server Performance

Table Very Slow After Defrag

Discussion in 'ALL SQL SERVER QUESTIONS' started by sberringer, Apr 26, 2013.

  1. sberringer New Member

    SQL 2005 Standard. We have a table (100,000 rows) that was badly fragmented. We ran a maintenance plan defrag, which I believe issues the statement below.

    It completed successfully and the fragmentation level went from 99% to effectively zero. But a simple insert that used to take under a second now took 2 minutes. No triggers. Creating a new table, copying over all the rows, and renaming fixed the problem, but I'm really curious. Any ideas?
    I believe the statistics would have been rebuilt by the ALTER statement below.

    ALTER INDEX [PK_Address] ON [dbo].[Address] REBUILD PARTITION = ALL
  2. Luis Martin Moderator

    Did you rebuild all indexes?
  3. sberringer New Member

    Yes. It was a SQL maintenance plan.
  4. Luis Martin Moderator

    Some time ago, in a similar situation, I dropped all statistics and let SQL generate new ones. After that, the problem was gone.
    If you have auto-create statistics on, then you can try this.
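    Luis's suggestion can be sketched as follows. This is only an illustration: the table name comes from the thread, but the statistics name is hypothetical, and AUTO_CREATE_STATISTICS must be on for SQL Server to regenerate what you drop.

    ```sql
    -- Confirm that auto-create statistics is enabled for the current database
    SELECT is_auto_create_stats_on
    FROM sys.databases
    WHERE name = DB_NAME();

    -- List the auto-created statistics on the table
    -- (names starting with _WA_Sys_ are auto-created)
    SELECT s.name
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID('dbo.Address')
      AND s.auto_created = 1;

    -- Drop one of them (hypothetical name); SQL Server will
    -- recreate it on demand the next time a query needs it
    DROP STATISTICS dbo.Address._WA_Sys_00000002_0EA330E9;
    ```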
  5. Shehap MVP, MCTS, MCITP SQL Server

    Adding a little to what Luis said: updating statistics can be useful for troubleshooting cases like this. You can also do it with exec sp_updatestats.

    But irrespective of that, I believe you should check some other possibilities:

    · Rebuild all the other nonclustered indexes that exist on the table, then try again.

    · If the problem still persists, try a parameterized dynamic query via exec sp_executesql, which caches the query execution plan and often brings an outstanding performance improvement for many T-SQL queries.

    · It should work fine by then; otherwise, check whether any SELECT statements used within the INSERT statement might need additional indexes to speed them up.
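    As a sketch of the parameterized approach mentioned above (the column names and values are hypothetical, not from the thread):

    ```sql
    -- Parameterized insert via sp_executesql; the plan is compiled once,
    -- cached, and reused across calls with different parameter values
    DECLARE @sql nvarchar(max) = N'
        INSERT INTO dbo.Address (Street, City)
        VALUES (@Street, @City);';

    EXEC sp_executesql
        @sql,
        N'@Street nvarchar(100), @City nvarchar(50)',
        @Street = N'1 Main St',
        @City   = N'Springfield';
    ```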

    Kindly let me know if any further help is needed.
  6. sberringer New Member

    Thank you Luis and Shehap. I think you're right that it was the statistics, but it's still a mystery why that would make an insert slower. To my knowledge, statistics are information about value distribution, so SQL can know when it makes sense to use an index. The statistics would not be updated, or even referenced, by a simple insert statement.
  7. Shehap MVP, MCTS, MCITP SQL Server

    To avoid the delay related to updating statistics, you can enable the AUTO_UPDATE_STATISTICS_ASYNC option. It lets your queries run without waiting on the update-statistics process: the update runs in the background, and once it finishes, subsequent queries use the updated statistics. You can enable it with the T-SQL statement below, but you should monitor closely and weigh its benefit against all the other T-SQL queries running on the same DB.
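    The statement referenced above appears to have been lost from the post; the standard way to enable the option is (the database name is a placeholder):

    ```sql
    -- Enable asynchronous auto-update of statistics for a database
    -- (replace YourDatabase with the actual database name)
    ALTER DATABASE [YourDatabase]
    SET AUTO_UPDATE_STATISTICS_ASYNC ON;
    ```

    Note that AUTO_UPDATE_STATISTICS must also be ON for the asynchronous option to have any effect.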


    Or, better, schedule statistics updates using exec sp_updatestats on a daily basis, or at whatever frequency suits your volume of data and transactions.

    This way, statistics should no longer be a showstopper for the query optimizer, and your queries should get better execution times.
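    A minimal sketch of the scheduled approach (the database name is a placeholder); sp_updatestats only updates statistics that SQL Server considers out of date, so running it daily is usually cheap:

    ```sql
    USE [YourDatabase];
    GO
    -- Update all out-of-date statistics in the database;
    -- typically wrapped in a SQL Agent job scheduled daily
    EXEC sp_updatestats;
    ```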

    Please try these options and let me know your feedback
  8. sberringer New Member

    Thank you for the info. The thing that makes me wonder: after the defrag my SELECTs were OK; it's an INSERT that took a really long time. I don't think statistics would affect that. When a row gets inserted, since the table has an identity key (clustered), it would add the row to the end and update the index. Statistics would not come into play for that.
  9. Shehap MVP, MCTS, MCITP SQL Server

    Circumstances like these normally call for a full health check at the DB server level, to find all the waits and bottlenecks that can negatively impact even a simple insert statement and to pin down this weird behavior clearly. You can read more about this at my blog: http://www.sqlserver-performance-tuning.com/apps/blog/show/13109729-main-db-checks-for-dba. I also recommend partitioning this large table using the storage-aligned practice, meaning both indexes and table are aligned on the same partition scheme, to optimize IO and CPU parallel processing. That can significantly improve several aspects of performance, such as:

    · IO cost of SELECT and DML queries, like the insert statement in your case
    · Lock escalation reduced from table level to partition level, which lowers the probability of heavy locking
    · CPU cost of SELECT and DML queries

    Actually, no certain answer can easily be given for your case without a full health check, so please run one and let me know your feedback.
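    As a hedged sketch of the storage-aligned partitioning described above (the boundary values, filegroup mapping, table, and column names are hypothetical):

    ```sql
    -- Partition function: splits rows into ranges by an integer key
    CREATE PARTITION FUNCTION pfAddressId (int)
    AS RANGE LEFT FOR VALUES (25000, 50000, 75000);

    -- Partition scheme: maps every partition to the PRIMARY filegroup
    CREATE PARTITION SCHEME psAddressId
    AS PARTITION pfAddressId ALL TO ([PRIMARY]);

    -- Storage alignment: the clustered index (and hence the table)
    -- is created on the scheme; nonclustered indexes created on the
    -- same scheme are then aligned with the table
    CREATE TABLE dbo.AddressPartitioned (
        AddressId int IDENTITY(1,1) NOT NULL,
        Street    nvarchar(100) NOT NULL,
        CONSTRAINT PK_AddressPartitioned PRIMARY KEY CLUSTERED (AddressId)
            ON psAddressId (AddressId)
    );
    ```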
  10. Charandeep Nayyar New Member

    If an index is interleaved with other indexes on disk, running DBCC INDEXDEFRAG against that index does not make all leaf pages in the index contiguous. To improve the clustering of pages, rebuild the index.

    DBCC INDEXDEFRAG cannot be used to defragment the following indexes:

    A disabled index.

    An index with page locking set to OFF.

    A spatial index.

    DBCC INDEXDEFRAG is not supported for use on system tables.
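    For reference, one way to measure fragmentation and choose between a reorganize and a rebuild; the thresholds in the comments are the commonly cited guidance, not something stated in this thread:

    ```sql
    -- Per-index fragmentation for a single table
    SELECT i.name,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.Address'), NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

    -- Commonly cited guidance: reorganize between ~5% and ~30%,
    -- rebuild above ~30% fragmentation
    ALTER INDEX [PK_Address] ON dbo.Address REORGANIZE;  -- moderate fragmentation
    ALTER INDEX [PK_Address] ON dbo.Address REBUILD;     -- heavy fragmentation
    ```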
