SQL Server Performance

Getting a fat, neglected, out of shape database into shape. Software Vendor has me confused.... (Long)

Discussion in 'Performance Tuning for DBAs' started by Ziptar, Sep 26, 2007.

  1. Ziptar New Member

    Long time Lurker, First time Poster.
    I have a database attached to a web app that, in the two years of its existence, hasn't been well administered at all.
    This is what I have:
    Millions of orphaned records due to a lack of foreign keys in the vendor-provided schema; I have been deleting them (for instance, one table has 27GB of orphaned rows in it).
    A database with a horrible average disk queue length (depending on user load; the average so far today, since 9AM, is 4.6377).
    Running DBCC INDEXDEFRAG nightly to avoid locks against indexes that fragment quickly and cause 100% DB CPU usage (the indexes and the nightly defrag job are also vendor-provided).
    A 120GB database on 4 filegroups: one 119GB partition and three 25GB partitions on an EMC CX3-40, none of which have been set to a 64K allocation unit size and 64K alignment value (per EMC's recommendations). The transaction log is on its own 125GB partition.
    All tables that had previously been in separate filegroups were recreated on the default filegroup thanks to a vendor-provided software upgrade.
    Auto shrink was enabled until I turned it off last month; auto grow is still enabled.
    The file system is horribly fragmented.
    Two Dell 6850s, each with quad 3.0GHz dual-core Xeons and 32GB RAM, clustered, running SQL Server 2000 32-bit.
    EMC CLARiiON CX3-40 SAN with 659GB available in RAID-10; 319GB currently in use by the DB.
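    For the orphaned-row cleanup, deleting millions of rows in one statement will bloat the transaction log and hold long locks. A common SQL Server 2000 pattern is to delete in batches with SET ROWCOUNT. This is a minimal sketch with hypothetical table and column names (OrderDetail as the child table, OrderHeader as the missing parent):

    ```sql
    -- Delete orphaned child rows in batches of 10,000 so each transaction
    -- stays small and the log can be backed up/truncated between batches.
    -- Table and column names are placeholders for this schema.
    SET ROWCOUNT 10000

    WHILE 1 = 1
    BEGIN
        DELETE od
        FROM OrderDetail od
        LEFT JOIN OrderHeader oh ON oh.OrderID = od.OrderID
        WHERE oh.OrderID IS NULL   -- no matching parent => orphan

        IF @@ROWCOUNT = 0 BREAK    -- nothing left to delete
    END

    SET ROWCOUNT 0                 -- always reset ROWCOUNT afterward
    ```

    Running frequent log backups between batches keeps the log from growing during a long cleanup.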

    I have 310GB of free SAN space available and have been given a two-day database maintenance window next week to try to make as many improvements as I can. My plan is basically this (assuming it all fits into 2 days):

    Get Backups.
    Defrag local server drives (OS, paging, etc.); set up a local drive defragmentation schedule.
    Create four 50GB SAN volumes (set to a 64K allocation unit size and 64K alignment value) from the available SAN space to use as new table filegroups.
    Shrink DB (not sure this can be done in 2 days)
    Move the current DB file(s), including the transaction log, to aligned (and unfragmented) containers via detach/reattach.
    Re-align the previously used containers as above.
    Install Windows Server 2003 SP2 on each DB server in the cluster.
    Install the SQL Server 2000 cumulative hotfix (we need this because we are missing the patch that allows AWE to use the full 16GB of RAM; right now it sees half that).
    Enable /PAE only in boot.ini (each node)
    Reconfigure DB min and max memory sizes (16GB now available to SQL Server thanks to the hotfix).
    Set DB and T-log min and max sizes (yet to be calculated), hard-sizing them without auto grow or auto shrink to reduce file system fragmentation.
    Move TempDB to a separate container.
    Move tables and indexes back into separate table and index filegroups (the indexes will go on the three 25GB partitions that are now used as table filegroups), mostly by recreating existing clustered indexes on the new filegroups.
    Run DBCC DBREINDEX (shouldn't be needed, as the rebuild happens when the indexes are moved, but we may not get to move them all).
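    The memory-reconfiguration steps above boil down to enabling AWE and pinning SQL Server's memory after /PAE and the hotfix are in place. A sketch, with the memory values as assumptions (roughly 14GB for SQL Server, leaving headroom for the OS on a 16GB-visible box):

    ```sql
    -- Enable AWE and fix SQL Server's memory allocation.
    -- The MB values below are placeholders; size for your node.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'awe enabled', 1
    EXEC sp_configure 'min server memory (MB)', 14336
    EXEC sp_configure 'max server memory (MB)', 14336
    RECONFIGURE
    -- The 'awe enabled' change only takes effect after the
    -- SQL Server service is restarted.
    ```

    With AWE, SQL Server 2000 allocates the memory up front and does not release it dynamically, which is why min and max are often set to the same value.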

    Makes sense, yes? This is just to get us by until I can upgrade to SQL Server 2005 64-bit in December.
    I was talking to the vendor yesterday; a couple of their SQL Server gurus were on the call as well. They left me thoroughly confused. Basically they are telling me that:
    1) If I am going to shrink the DB files, I need to have 160GB of space available outside of the transaction logs in order for SQL Server to do the shrink, and they gave me the impression that the shrink was going to somehow move tables out to new filegroups.
    That was a new one on me, as I have never needed that for a shrink before. They pointed me to KB article 324432. I need to read it 2 or 3 more times and do the math outlined in it, but it doesn't seem to make sense in our case. The DB will be offline for the shrink.
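    For what it's worth, a plain shrink moves pages within the same file, not to other filegroups; it's the EMPTYFILE option that migrates data to the other files in the filegroup, which may be where the vendor's "moving tables" impression comes from. A minimal sketch, with the logical file name and database name as placeholders:

    ```sql
    -- Shrink one data file at a time rather than DBCC SHRINKDATABASE;
    -- the target size (in MB) is a placeholder - size it to the actual
    -- data volume plus working headroom, not to zero.
    USE MyAppDB   -- hypothetical database name
    DBCC SHRINKFILE (MyAppDB_Data1, 51200)   -- shrink file to ~50GB

    -- By contrast, EMPTYFILE migrates this file's data into the OTHER
    -- files of the same filegroup (needed before removing a file):
    -- DBCC SHRINKFILE (MyAppDB_Data1, EMPTYFILE)
    ```

    Shrinking also tends to leave indexes badly fragmented, so plan on rebuilding indexes afterward in any case.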
    2) They recommend that we not move tables to new filegroups by creating new clustered indexes or rebuilding existing ones, as there is "risk" of data loss.
    That's a new one on me too; I thought that was the preferred method for moving tables to new filegroups.
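    For reference, the standard SQL Server 2000 technique is to rebuild the clustered index onto the target filegroup, which relocates the table's data pages along with it. A sketch with hypothetical object names:

    ```sql
    -- Rebuild the existing clustered index on a new filegroup.
    -- Because the clustered index IS the table's data, this moves the
    -- whole table. Names (index, table, column, filegroup) are placeholders.
    CREATE UNIQUE CLUSTERED INDEX IX_Orders_OrderID
    ON dbo.Orders (OrderID)
    WITH DROP_EXISTING
    ON [FG_Tables1]
    ```

    One caveat: if the clustered index enforces a PRIMARY KEY or UNIQUE constraint, SQL Server 2000 may require dropping and recreating the constraint (with the new ON filegroup clause) instead of DROP_EXISTING, and nonclustered indexes get rebuilt as part of the move.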
    I had planned to set up the new filegroup partitions on the SAN tomorrow, but the vendor wanting 160GB available just to shrink blows my whole plan.

    Any input on the vendor's input, or a better method to skin this cat, is appreciated.

  2. satya Moderator

    The steps look good, but I would ask why you need to shrink the data file when you have enough space on the upgraded server. Also, for better performance you can move the indexes to a separate filegroup, and there will not be any data loss. If you can get downtime on this database, the whole exercise can be completed within 1 day.
  3. Ziptar New Member

    Thanks for the reply. The shrink is a one-time thing, just so we can get a handle on the database's actual size, make it easier to move around should we want to, and also make the file system defrag a little better/quicker.
  4. satya Moderator

    Hope that works, but in case this causes issues again, you should set optimum values for the data & log file sizes in order to avoid repeating the exercise.
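    Setting those fixed sizes after the shrink might look like this (file names and sizes are placeholders; MODIFY FILE can only grow a file, so set the size after shrinking, not before):

    ```sql
    -- Pre-size the data and log files and give them a uniform growth
    -- increment so any autogrow events are rare and predictable.
    -- Logical file names and sizes are placeholders for this database.
    ALTER DATABASE MyAppDB
    MODIFY FILE (NAME = MyAppDB_Data1, SIZE = 60000MB, FILEGROWTH = 1024MB)

    ALTER DATABASE MyAppDB
    MODIFY FILE (NAME = MyAppDB_Log, SIZE = 20480MB, FILEGROWTH = 1024MB)

    -- And make sure auto shrink stays off (SQL Server 2000 syntax):
    EXEC sp_dboption 'MyAppDB', 'autoshrink', 'false'
    ```

    Pre-sizing with a fixed growth increment is also what keeps the file system fragmentation from coming back.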
