SQL Server Performance

I need help shrinking my database

Discussion in 'General DBA Questions' started by airjrdn, Jan 14, 2003.

  1. airjrdn New Member

    For starters, I'm using Spotlight on Windows/SQL Server to determine some of this information.

    I've got a fairly large database, approximately 120GB. One main detail table has about 70M records in it. I recently dropped an unused char(210) column, and I've since run statements to shrink the database, but it still has a lot of unused space in it. Here's what I can tell you:

    Here's what Spotlight on SQL Server shows as far as space I should be able to free up:
    ---------------------------------------------------------------------------------------------------------------------
    Table........Rows........................MB..........................MB Free................% Free
    --------------------------------------------------------------------------------------------------
    Table1......64,505,796..............69,775.57..............33,366.67..............47.82
    Table2......92,704,068..............26,236.69..............14,854.30..............56.62
    Table3......95,950,203..............13,556.11..............3,423.95................25.26

    That's about 50G I "should" be able to free up.

    I've tried running some commands to both shrink the database and shrink the files individually. Here's some of the output from those commands:

    DBCC ShrinkFile ('dbName_Data', 10)
    /* 48 minutes
    DbId.....FileId.....CurrentSize.....MinimumSize.....UsedPages.....EstimatedPages
    --------------------------------------------------------------------------------------------------
    7...........1............8016784............2560000.............8012408...........8012408
    */

    DBCC ShrinkFile ('dbName_Data', 10, TruncateOnly)
    /* 2 minutes
    DbId.....FileId.....CurrentSize.....MinimumSize.....UsedPages.....EstimatedPages
    --------------------------------------------------------------------------------------------------
    7...........1...........8016784...........2560000...........8012408...........8012408
    */

    DBCC ShrinkFile ('dbName_Index', 10)
    /* 5 minutes
    DbId.....FileId.....CurrentSize.....MinimumSize.....UsedPages.....EstimatedPages
    --------------------------------------------------------------------------------------------------
    7...........3...........6644368...........1280000...........6642984...........6642984
    */

    DBCC ShrinkFile ('dbName_Index', 10, TruncateOnly)
    /* 1 minute
    DbId.....FileId.....CurrentSize.....MinimumSize.....UsedPages.....EstimatedPages
    --------------------------------------------------------------------------------------------------
    7...........3...........6644368...........1280000...........6642984...........6642984
    */

    Running.....dbcc showcontig (2069582411,0)
    Reports:
    DBCC SHOWCONTIG scanning 'tbCDRData' table...
    Table: 'tbCDRData' (2069582411); index ID: 0, database ID: 7
    TABLE level scan performed.
    - Pages Scanned................................: 4651963
    - Extents Scanned..............................: 584448
    - Extent Switches..............................: 584447
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.49% [581496:584448]
    - Extent Scan Fragmentation ...................: 8.18%
    - Avg. Bytes Free per Page.....................: 1281.6
    - Avg. Page Density (full).....................: 84.17%

    That shows a good Scan Density, does it not?

    Thanks in advance for any help you may be able to offer.
  2. bradmcgehee New Member

    One possible source of your empty space is the Average Page Density, which is showing 84.17%. This tells you that, on average, about 15 percent of each page is empty. In a 120GB database, that accounts for about 18GB of empty space.

    This may or may not be a problem. As you probably know, when you create or rebuild clustered indexes (and I am assuming that these tables have a clustered index), a fill factor setting is applied. This option determines how much free space each page is left with. If the database is an OLTP database, you will want to have some free space; generally speaking, I try to have between 5 and 10% free space. If the database is an OLAP database, then there should be no free space, as it is not needed and only needlessly increases I/O.

    In your case, if the database is OLTP, you might want to consider reducing the amount of free space from about 15% to about 5 to 10%. This would free up some space and, to a limited degree, reduce I/O, helping to boost overall performance.
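    For example, something like this (SQL 2000 syntax; I'm borrowing the table name from your SHOWCONTIG output, and the 90 is just an illustration) would rebuild every index on the table with the new fill factor:

    DBCC DBREINDEX ('tbCDRData', '', 90) -- '' = all indexes on the table; 90 = leave about 10% of each page free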

    Your scan density of 99.49% indicates that you have recently rebuilt your indexes and that they are in good shape.

    Also, consider running DBCC UPDATEUSAGE. SQL Server sometimes loses track of the amount of space actually used, and running this command corrects the stored figures. Only run this command during off hours.
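    On a 120GB database it can take a while, so off hours really are the time for it. A minimal example (substitute your own database name):

    DBCC UPDATEUSAGE ('dbName') -- corrects the space-used figures stored in sysindexes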

    Hope this helps a little bit.


    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  3. airjrdn New Member

    Thanks for the reply. All knowledge helps at least a little bit. ;)

    There isn't a clustered index on the main table anyway. I apologize if this is a stupid question, but what exactly is an OLTP database?
  4. bradmcgehee New Member

    You want a clustered index on all tables. There is lots of information in this forum and on the website on why you want them.

    An OLTP database is an Online Transaction Processing database. Essentially, this is a database that is subject to lots of data modifications. An OLAP database is an Online Analytical Processing database; it is used for data warehousing and is essentially read-only.


    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  5. airjrdn New Member

    Well, I would imagine this is probably an OLAP database then. It's a database with 3 months' worth of data at any given time. Each day about 750k records are added, and about 750k records are resummarized and removed (91-day-old data is deleted). There are multiple summary tables letting users start at a high level, then drill down to the detail level.

    I don't disagree regarding the clustered index portion of your post. This entire database and application were dropped in my lap after the company hired a consulting firm to come in, build a quick system, and hand it to my department. At this point, we are just trying to make it more efficient where possible.
  6. Chappy New Member

    When shrinking databases from Enterprise Manager, you are given the option to shuffle the pages so free pages are moved to the end of the file before the file is truncated. I don't know offhand what the equivalent option in T-SQL is, but I'm sure there is one; my best guess is in the sketch below.
    Also note that whilst shrinking the database might bring immediate benefits in terms of space, if it's a highly dynamic database, as yours appears to be, shrinking may not necessarily be the best strategy if the file is likely to need the freed space again quite soon.
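    The page shuffling seems to map onto the DBCC SHRINKFILE options (reusing the data file name from the first post; treat this as a sketch, not gospel):

    DBCC SHRINKFILE ('dbName_Data', 10)           -- default: relocate used pages to the front, then release the freed space
    DBCC SHRINKFILE ('dbName_Data', NOTRUNCATE)   -- relocate pages to the front but keep the space allocated to the file
    DBCC SHRINKFILE ('dbName_Data', TRUNCATEONLY) -- release trailing free space without moving any pages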
  7. airjrdn New Member

    Good thought. I figure keeping about 10% free space in the main table would be good.

    There isn't a way to manually shrink a database from EM using SQL 2000, is there?
  8. bradmcgehee New Member

    Yes you can. Right-click on the database, select "All Tasks," and then select "Shrink Database."

    Also, since you often add and remove data from your database, I would recommend you run DBCC UPDATEUSAGE often, at least weekly, to keep SQL Server from carrying incorrect size statistics. And once you get clustered indexes on your tables, you will want to rebuild them weekly, especially right after you purge old data.
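    For instance, the weekly routine after the purge might look something like this (database and table names are carried over from earlier in the thread, so adjust them to your setup):

    DBCC UPDATEUSAGE ('dbName')          -- correct any stale space statistics
    DBCC DBREINDEX ('tbCDRData', '', 95) -- rebuild all indexes on the big table, restoring the fill factor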


    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  9. sqljunkie New Member

    To me it sounds like the problem is first the fill factor and then the database shrinking. I would rebuild or create a clustered index with a fill factor between 90 and 95%. From your info it looks like you're growing the tables by approximately 1% every day. I would then create a maintenance plan to rebuild the indexes on the weekend, or whenever an offline period occurs, if possible.

    Once the fill factor is adjusted, your Avg. Page Density will come in line and you'll be able to shrink the files further.

    Another note about fill factors. First, the fill factor percentage determines how much space is left available on each database page. You set it low enough to reduce the number of page splits, while keeping it high enough that you don't hurt read performance. Your tables have a lot of pages, so there is going to be a lot of free space overall, depending on what fill factor you specify. Second, if you are inserting data based on an auto-increment field and that field is the clustered index, you'll always be inserting at the end of the table, so mid-table page splits won't occur. If this is the case, set the fill factor to 100 so you'll have the best read performance.
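    For example (SQL 2000 syntax; the identity column name here is hypothetical):

    CREATE CLUSTERED INDEX IX_tbCDRData_RecordID
    ON tbCDRData (RecordID)
    WITH FILLFACTOR = 100 -- pack pages full; reasonable when new rows always land at the end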
  10. airjrdn New Member

    The data is heavily keyed to a particular date. Probably 95% of the data inserted nightly would go at the end of the table if a clustered index were built on that date. That still means there would be quite a few records that would need to be inserted somewhere other than at the end of the table.

    Only testing will show how a clustered index on that column would impact performance. I may just have to bite the bullet and put one on there to see. I'm just worried about how long it will take, and what performance will be like while the index is being built.

    BTW, what RAID configuration would you guys recommend for this database, given what you currently know about it?
  11. sqljunkie New Member

    In your case I would select a high fill factor...

    You need to put clustered indexes on the tables, no matter how long it takes. Search this site for guidance and best practices for creating them.

    I almost always recommend RAID 10. Lots of people use RAID 5 because it is cheaper, but with the amount of inserting you're doing, I would strongly recommend RAID 10 for the better write performance you'll get.
  12. airjrdn New Member

    Once again, thanks for the helpful info.

    I'm currently rebuilding all indexes on a table with few columns but about 97M rows. It's been running for a little over half an hour and hammering the machine, but I'm hoping the space savings will show up in the unused space reported for that table and its indexes. I'll let you know once it's complete.

    Currently, that machine has a single 1.2GHz processor with 2GB of memory. We've got a 2nd processor but haven't had an opportunity to add it yet. The OS is NT4 SP6, and we plan on upgrading to Win2k when adding the 2nd CPU. The 2nd CPU should help out quite a bit, but disk I/O is killing us during our nightly inserts. I believe it's running RAID 5, but I need to have the network guys find out for sure.
  13. bradmcgehee New Member

    Let us know what happens. If you are running RAID 5, that can partially explain why your INSERTs are slow, as RAID 5 write performance is generally horrible (relatively speaking) when lots of INSERTs are made.

    Also, a clustered index should often be based on the primary key, although this is not always true. See the section on this website about clustered indexes for lots of tips on how to select one.


    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  14. airjrdn New Member

    Will do.
  15. scardoso New Member

    Try using this code:

    SET NOCOUNT ON

    DECLARE @LogicalFileName sysname
    DECLARE @MaxMinutes INT
    DECLARE @NewSize INT

    -- *** MAKE SURE TO CHANGE THE NEXT 4 LINES WITH YOUR CRITERIA. ***
    USE database -- The name of the database whose log will be shrunk.
    SET @LogicalFileName = 'database_log' -- Use sp_helpfile to identify the logical file name you want to shrink.
    SET @MaxMinutes = 10 -- Limit on the time allowed to wrap the log.
    SET @NewSize = 5 -- Target size, in MB.

    -- Setup / initialize.
    DECLARE @OriginalSize INT
    SELECT @OriginalSize = size -- Size in 8K pages.
    FROM sysfiles
    WHERE name = @LogicalFileName

    SELECT 'Original size of ' + db_name() + ' LOG is ' + CONVERT(VARCHAR(30), @OriginalSize) + ' 8K pages or ' + CONVERT(VARCHAR(30), (@OriginalSize * 8 / 1024)) + 'MB'
    FROM sysfiles
    WHERE name = @LogicalFileName

    CREATE TABLE DummyTrans (DummyColumn char(8000) NOT NULL)

    -- Wrap the log and truncate it.
    DECLARE @Counter INT
    DECLARE @StartTime DATETIME
    DECLARE @TruncLog VARCHAR(255)

    SET @StartTime = GETDATE()
    SET @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY'

    -- Try an initial shrink.
    DBCC SHRINKFILE (@LogicalFileName, @NewSize)

    EXEC (@TruncLog)

    -- Wrap the log if necessary.
    WHILE @MaxMinutes > DATEDIFF(mi, @StartTime, GETDATE()) -- Time has not expired.
        AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName) -- The log has not shrunk.
        AND (@OriginalSize * 8 / 1024) > @NewSize -- The requested size is smaller than the current size.
    BEGIN -- Outer loop.
        SET @Counter = 0
        WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
        BEGIN -- Generate dummy transactions to force the active portion of the log to wrap.
            INSERT DummyTrans VALUES ('Fill Log') -- Because it is a char(8000) column, each insert logs roughly a full page.
            DELETE DummyTrans
            SET @Counter = @Counter + 1
        END
        EXEC (@TruncLog) -- See if a truncation of the log shrinks it.
    END -- Outer loop.

    SELECT 'Final size of ' + db_name() + ' LOG is ' + CONVERT(VARCHAR(30), size) + ' 8K pages or ' + CONVERT(VARCHAR(30), (size * 8 / 1024)) + 'MB'
    FROM sysfiles
    WHERE name = @LogicalFileName

    DROP TABLE DummyTrans
    PRINT '*** Perform a full database backup ***'
    SET NOCOUNT OFF

    Sérgio Cardoso
  16. airjrdn New Member

    Thanks for the code, Sérgio!

    It just so happened I had another (much smaller) database in need of some shrinking. I'm running your code right now. :)
  17. bradmcgehee New Member

    Thanks for your contribution.

    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  18. airjrdn New Member

    Turns out, the pasted code worked like a charm on my log files. One shrank from 23GB to the new 5MB size. Great work!

    I've still got a lot of free space in some of the tables, though, as this code only addresses log file size.

    Any new ideas on this front?
  19. bradmcgehee New Member

    Have you rebuilt the indexes using an appropriate fill factor yet? The fill factor will, of course, leave some room in your tables. Is this what you mean by free space in some of the tables, or am I missing your point?

    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  20. airjrdn New Member

    I rebuilt one of them using a fillfactor of 95. It reduced the free space a little, but not by much. Should I use a different fillfactor?
  21. bradmcgehee New Member

    You will have to experiment with the fillfactor to see what is best for your particular situation. Databases with lots of INSERTs need more free space per page (a lower fillfactor) than those without. But overall, 95 is a good starting number, and the one I generally use. I also rebuild my indexes each week to ensure that the 95 fillfactor is maintained.
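    One way to check what a rebuild actually achieved (reusing the table name from your earlier SHOWCONTIG output) is to rerun it and compare the Avg. Page Density against the target fillfactor:

    DBCC SHOWCONTIG ('tbCDRData') -- after a rebuild at fillfactor 95, Avg. Page Density should sit near 95%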

    How much free space do you now have?



    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  22. airjrdn New Member

    Well, I haven't had time to run this on the larger tables yet, so I really can't say. This is a database that needs to be up 24/7, and with 112GB of data, it's going to take a while to run. This weekend was packed with other server maintenance, so hopefully I'll be able to run it soon.
