SQL Server Performance

Performance of SQL Server

Discussion in 'Performance Tuning for DBAs' started by Eugene_I_Kushnir, Oct 16, 2007.

  1. Eugene_I_Kushnir New Member

    Hi DBAs. I have a problem with one server that has two physical disk arrays:
    Logical disk D: (RAID 10 of 10 physical disks).
    Logical disks H: and G: (RAID 10 of 8 physical disks).
    There are 5 databases on D: (two of them are larger than 200 GB), with TempDB and 2 big databases on disk G:, and the transaction logs on disk H:.
    Here are some performance counters:
    % Processor Time - average above 75-80%
    Avg. Disk Queue Length H:, G: - 3
    Avg. Disk Queue Length D: - 15
    CPU usage is too high. How can I resolve this problem?
  2. techbabu303 New Member

    I would work through a checklist based on your performance counters, which are also well discussed in Brad McGehee's articles.
    Here are some pointers to find the cause:
    1> Identify the peak times when the average is above 75% and run the following query as a job at those times to capture the long-running queries.
    /* Check for the top 10 sessions with the highest CPU and memory usage */
    SELECT TOP 10 pro.program_name, pro.cpu, pro.spid,
        pro.memusage, pro.physical_io, db.name, pro.status,
        pro.last_batch
    FROM master..sysprocesses pro
    JOIN master..sysdatabases db ON pro.dbid = db.dbid
    WHERE pro.spid > 50 -- user sessions only
    AND pro.cpu > 50
    AND pro.memusage > 100
    AND pro.status != 'sleeping'
    ORDER BY pro.cpu DESC -- TOP 10 is arbitrary without an ORDER BY
    /* To find the query text for a SPID identified above, run the following */
    DECLARE @Handle binary(20)
    SELECT @Handle = sql_handle FROM master..sysprocesses WHERE spid = <spid id>
    SELECT * FROM ::fn_get_sql(@Handle)
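    If you are on SQL Server 2005 or later (my assumption; the scripts above target the older system tables), a DMV-based sketch can combine both steps and show the top CPU consumers together with the statement each one is running:
    /* Sketch for SQL Server 2005+: top CPU consumers with their current statement */
    SELECT TOP 10
        r.session_id,
        r.cpu_time, -- CPU used by the request, in ms
        r.logical_reads,
        r.status,
        DB_NAME(r.database_id) AS database_name,
        t.text AS query_text
    FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
    WHERE r.session_id > 50 -- skip system sessions
    ORDER BY r.cpu_time DESC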
    2> Check for external fragmentation in each database to isolate whether this is caused by fragmentation; there are good articles and SQL scripts on it by the gurus in this forum. Here is one I modified based on a guru's script.
    /*Perform a 'USE <database name>' to select the database in which to run the script.*/
    -- Declare variables
    SET NOCOUNT ON
    DECLARE @tablename VARCHAR (128)
    DECLARE @execstr VARCHAR (255)
    DECLARE @objectid INT
    DECLARE @indexid INT
    DECLARE @frag DECIMAL
    DECLARE @maxfrag DECIMAL
    -- Decide on the maximum fragmentation to allow
    SELECT @maxfrag = 20.0
    -- Declare cursor
    DECLARE tables CURSOR FOR
    SELECT TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
    -- Create the temporary table that will hold the DBCC SHOWCONTIG output
    CREATE TABLE #fraglist (
    ObjectName CHAR (255),
    ObjectId INT,
    IndexName CHAR (255),
    IndexId INT,
    Lvl INT,
    CountPages INT,
    CountRows INT,
    MinRecSize INT,
    MaxRecSize INT,
    AvgRecSize INT,
    ForRecCount INT,
    Extents INT,
    ExtentSwitches INT,
    AvgFreeBytes INT,
    AvgPageDensity INT,
    ScanDensity DECIMAL,
    BestCount INT,
    ActualCount INT,
    LogicalFrag DECIMAL,
    ExtentFrag DECIMAL)
    -- Open the cursor
    OPEN tables
    -- Loop through all the tables in the database
    FETCH NEXT
    FROM tables
    INTO @tablename
    WHILE @@FETCH_STATUS = 0
    BEGIN
    -- Run DBCC SHOWCONTIG against all indexes of the table
    INSERT INTO #fraglist
    EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
    WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
    FETCH NEXT
    FROM tables
    INTO @tablename
    END
    -- Close and deallocate the cursor
    CLOSE tables
    DEALLOCATE tables

    -- Select the indexes whose logical fragmentation is at or above @maxfrag (20%)
    SELECT ObjectName, ObjectId, IndexId,IndexName ,LogicalFrag
    FROM #fraglist
    WHERE LogicalFrag >= @maxfrag
    AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
    DROP TABLE #fraglist
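    On SQL Server 2005 and later (again my assumption), the whole cursor can be replaced with a sketch over sys.dm_db_index_physical_stats, keeping the same 20% threshold:
    /* Sketch for SQL Server 2005+: fragmentation check without DBCC SHOWCONTIG */
    SELECT OBJECT_NAME(ps.object_id) AS ObjectName,
        ps.object_id, ps.index_id, i.name AS IndexName,
        ps.avg_fragmentation_in_percent AS LogicalFrag
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
    JOIN sys.indexes i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE ps.avg_fragmentation_in_percent >= 20.0
    AND ps.index_id > 0 -- skip heaps
    ORDER BY ps.avg_fragmentation_in_percent DESC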
    3> Make sure your maintenance plans are not causing this at regular intervals, e.g. CHECKDB, full backup, or transaction log backup jobs. There are interesting articles on this in the forum as well; please revisit them to rule out this possibility.
    4> Of course you can run a SQL Profiler trace when you see peak usage to identify the exact cause, but do not use it on servers handling more than 1000 batch requests/sec. This is expert advice from one of the articles here.
    Regards
    Sat
  3. satya Moderator

    If a particular query is performing poorly, the first thing to check is the relevant indexes, even though having indexes on those tables does not guarantee better performance. The next thing you may observe is a high CPU reading in Task Manager (TM); TM is useful up to a point, but never conclude from it alone that a particular process is the root cause of the high CPU usage.
    Using SYSMON (PERFMON) will get you more information, and in real time; stored procedure recompilation is a major issue and sometimes the sole root cause of such CPU spikes.
    One of the TechNet papers suggests: capturing this event has a significant performance overhead, as it is captured for each compilation or recompilation. If you see a high value for the SQL Compilations/sec counter in System Monitor, you should monitor this event. With this information, you can see which statements are frequently recompiled, and you can use it to change the parameters of those statements, which should reduce the number of recompiles.
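    As a rough sketch of the same idea on SQL Server 2005 and later (my addition, assuming DMV access), sys.dm_exec_query_stats exposes plan_generation_num, which climbs every time a cached statement is recompiled:
    /* Sketch for SQL Server 2005+: statements whose plans recompile most often */
    SELECT TOP 10
        qs.plan_generation_num, -- increments on every recompilation
        qs.execution_count,
        qs.total_worker_time, -- total CPU time, in microseconds
        SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
            (CASE qs.statement_end_offset
                WHEN -1 THEN DATALENGTH(st.text)
                ELSE qs.statement_end_offset
            END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    WHERE qs.plan_generation_num > 1
    ORDER BY qs.plan_generation_num DESC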
    http://sqlserver-qa.net/blogs/perftune/archive/2007/09/20/2152.aspx fyi.
