SQL Server Performance

Cannot shrink a large database full of empty holes

Discussion in 'Performance Tuning for DBAs' started by CrackerJack, Jan 4, 2007.

  1. CrackerJack New Member

    Hello.

    I have an 85GB database made up of a bunch of tables. Only 3 of the tables have any real size to them, each with 20-35GB of allocated space. According to SHOWCONTIG, these tables are only using 2 percent of the space on each page. All the stats I can produce show that all the data in this database could fit in 11GB and that I have way too many pages in the 3 tables mentioned (see stats below).

    Some of these tables have clustered indexes and some have only nonclustered indexes. This database has been growing in leaps and bounds (8GB a month) since earlier last year to get to 85GB. The E: hard drive the data and log files are on (the log uses 400MB) only has 28GB of free space left, so I have that going for me too. To boot, whoever set up this server with SQL for the application using this database used SQL Server 2000 Personal Edition rather than Standard or Enterprise.

    I tried running SHRINKFILE to get the size down to 40GB, but it only took off a few GBs. I used the compress option to move pages to the front of the file and also set the maximum percentage of free space in the files to 15-20%. But again, there was no change to the overall size, and the SHOWCONTIG output looked the same.
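
    For reference, the shrink I ran was along these lines (the logical file name 'AHD_Data' is just a placeholder here; sp_helpfile shows the real one):

        USE AHD
        GO
        EXEC sp_helpfile                     -- confirm the logical data file name
        DBCC SHRINKFILE ('AHD_Data', 40960)  -- target size in MB (40GB)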

    I checked the fill factor on the indexes for these large tables, and they are set at 90%.
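
    (I checked that with something like this; OrigFillFactor in sysindexes is the fill factor each index was created with:)

        SELECT name, OrigFillFactor
        FROM sysindexes
        WHERE id = OBJECT_ID('act_log')
          AND indid BETWEEN 1 AND 250  -- real indexes only; skips the heap (0) and text pages (255)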

    I did SHOWCONTIG on the three indexes in one of these large tables and got the results below, where each index's numbers looked about the same:

    DBCC SHOWCONTIG scanning 'act_log' table...
    Table: 'act_log' (52195236); index ID: 3, database ID: 7
    LEAF level scan performed.
    - Pages Scanned................................: 1730
    - Extents Scanned..............................: 266
    - Extent Switches..............................: 271
    - Avg. Pages per Extent........................: 6.5
    - Scan Density [Best Count:Actual Count].......: 79.78% [217:272]
    - Logical Scan Fragmentation ..................: 97.86%
    - Extent Scan Fragmentation ...................: 97.74%
    - Avg. Bytes Free per Page.....................: 863.9
    - Avg. Page Density (full).....................: 89.33%

    I did SHOWCONTIG on the table itself and got these results:

    DBCC SHOWCONTIG scanning 'act_log' table...
    Table: 'act_log' (52195236); index ID: 0, database ID: 7
    TABLE level scan performed.
    - Pages Scanned................................: 281041
    - Extents Scanned..............................: 280925
    - Extent Switches..............................: 280924
    - Avg. Pages per Extent........................: 1.0
    - Scan Density [Best Count:Actual Count].......: 12.51% [35131:280925]
    - Extent Scan Fragmentation ...................: 99.59%
    - Avg. Bytes Free per Page.....................: 7914.3
    - Avg. Page Density (full).....................: 2.22%
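
    (For reference, these numbers came from plain SHOWCONTIG calls; the bare form scans the base table, and WITH ALL_INDEXES covers every index:)

        DBCC SHOWCONTIG ('act_log')                   -- base table (index ID 0 here, a heap)
        DBCC SHOWCONTIG ('act_log') WITH ALL_INDEXES  -- every index on the table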

    The values on the table look bad: 1 page per extent, scan density of 12%, avg. free space per page of 7914 bytes, and avg. page density of 2% all look bad and would account for a table using way too much space for the amount of data actually there.

    Running sp_spaceused on the database and on one of the large tables within it shows:

    Database Database_size Unallocated_space
    AHD 86,766.75 MB 431.20 MB

    Reserved Data Index_size Unused
    88,273,464 KB 11,440,992 KB 336,624 KB 76,495,848 KB

    Table Name Rows Reserved Data Index_size Unused
    act_log 450,120 36,084,184 KB 4,505,280 KB 68,344 KB 31,510,560 KB
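
    (Those figures came from:)

        EXEC sp_spaceused            -- database totals
        EXEC sp_spaceused 'act_log'  -- a single table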


    Am I fighting the fact that this is Personal Edition, that I do not have much room left on the drive for SQL to do its thing, or am I just not running the right commands?

    Thanks (Sorry for all the detail)
  2. mmarovic Active Member

    To solve the root of the problem (probably a bad clustered index choice) we need to see the structure of the fragmented tables, including the clustered index create script. Another possibility is that you have (n)text columns in these tables.
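
    A quick way to check for such columns (a sketch against the SQL Server 2000 system tables):

        SELECT o.name AS table_name, c.name AS column_name, t.name AS type_name
        FROM sysobjects o
        JOIN syscolumns c ON c.id = o.id
        JOIN systypes t ON t.xusertype = c.xusertype
        WHERE o.xtype = 'U'  -- user tables only
          AND t.name IN ('text', 'ntext', 'image')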
  3. MohammedU New Member

    Run DBCC DBREINDEX first so that you can remove the fragmentation, and then run the SHRINKFILE command. If any of the tables contains text columns, you may not be able to shrink the file...

    Reducing SQL Server Index Fragmentation
    http://www.mssqlcity.com/Articles/Adm/index_fragmentation.htm



    1. DBCC DBREINDEX()

    2. To shrink the data/log file, use the DBCC SHRINKFILE command (see the sketch below)...

    Check BOL for more details...
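
    For example (passing '' as the index name rebuilds every index on the table, and fill factor 0 keeps each index's existing setting; the logical file name is a placeholder):

        DBCC DBREINDEX ('act_log', '', 0)    -- rebuild all indexes on act_log
        DBCC SHRINKFILE ('AHD_Data', 40960)  -- then shrink toward 40GB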

    DBCC SHRINKFILE and SHRINKDATABASE commands may not work because of sparsely populated text, ntext, or image columns

    http://support.microsoft.com/kb/324432





    Mohammed U.
  4. Luis Martin Moderator

    Table: 'act_log' (52195236); index ID: 0, database ID: 7

    That is the table itself. Reindexing index ID 0 does nothing.
    In this case the only way is to recreate the table.


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All in Love is Fair
    Stevie Wonder


    All postings are provided “AS IS” with no warranties for accuracy.



  5. mmarovic Active Member

    quote:Originally posted by LuisMartin

    Table: 'act_log' (52195236); index ID: 0, database ID: 7

    That is the table itself. Reindexing index ID 0 does nothing.
    In this case the only way is to recreate the table.
    ... or to add a clustered index to the table.
  6. CrackerJack New Member

    Thanks, all. I will do some defragging/re-indexing of the indexes this weekend (a slow time for production). I will also create a temporary clustered index on an appropriate field(s) to get the data re-sequenced and hopefully packed tighter, then delete the temporary clustered index.
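
    Something along these lines for each table (the 'id' column and the index name are placeholders for whatever fits):

        CREATE CLUSTERED INDEX tmp_cix_act_log ON act_log (id) WITH FILLFACTOR = 95
        DROP INDEX act_log.tmp_cix_act_log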

    By the way, I created a test database on the same instance and copied a couple of these larger tables. One table was 5GB in the original production database, and the copied table in the test database only took 22MB!! This table did not have a clustered index. The other table I copied was 28GB with a clustered index, and the copied version only took up 1.2GB.

    I am hoping not to have to do such drastic moves (copying individual tables to a temp database and then copying them back to the original db) and that the DEFRAG/REINDEX and temporary clustered indexes will pack these tables tight. Then I will shrink the database.

    Stay tuned.

    Thanks
  7. mmarovic Active Member

    I am not quite sure, but copying the data may actually be faster and less log-consuming if you do it in batches (e.g. 1000 rows per iteration) than creating a clustered index.
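
    A sketch of what I mean (table and column names are placeholders; SET ROWCOUNT caps each INSERT so every batch, and its log usage, stays small):

        DECLARE @last_id int
        SET @last_id = 0
        SET ROWCOUNT 1000                           -- 1000 rows per batch
        WHILE 1 = 1
        BEGIN
            INSERT INTO act_log_copy (id, log_text)
            SELECT id, log_text
            FROM act_log
            WHERE id > @last_id
            ORDER BY id                             -- walk the table in key order

            IF @@ROWCOUNT = 0 BREAK                 -- nothing left to copy
            SELECT @last_id = MAX(id) FROM act_log_copy
        END
        SET ROWCOUNT 0                              -- back to normal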
  8. CrackerJack New Member

    All,

    Thanks for your input!

    Yesterday, when usage was low for this application, there were about 9 tables of any size (2-35GB), and all but one had no clustered index. I found that if I created a clustered index on the primary key field (all were an id field) with a 95% fill factor, I got anywhere from a 50% to 90% space gain (mmarovic had this solution pegged). Further defrags and reindexes did not seem to help beyond that. Then I deleted the newly created clustered index.

    After working through all 9 tables, the database was still sitting at 88GB, but it had about 30GB of free space. Then I did a SHRINKFILE with the compress option. After 2 hours, the database was at 53GB with an additional 20GB free. The shrinking and compressing seemed to turn up still more free space but had not yet released it.

    I planned on doing some more shrinking next weekend, but did not want to wait another two hours yesterday.

    I am thinking I should re-create those clustered indexes and leave them in place. Pretty much all the tables I was working with have an id field as the key, where rows are continually added and the id keeps bumping up. Little data is deleted once it is added at this point, so I do not have to worry much about later page splits from keys landing in the middle.

    There are definitely opportunities for further improvement, but getting this space knocked down takes the pressure off. :D
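
    From what I have read, since the compress pass already moved the pages toward the front of the file, next weekend's shrink may be able to just release the trailing free space with TRUNCATEONLY, which moves no pages (logical file name is a placeholder again):

        DBCC SHRINKFILE ('AHD_Data', TRUNCATEONLY)  -- release unused space at the end of the file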
  9. mmarovic Active Member

    Thank you for the feedback.
  10. MohammedU New Member

    Thanks for the detailed update...

    Mohammed U.
