SQL Server Performance

Inconsistent Query Response Time

Discussion in 'SQL Server 2008 T-SQL Performance Tuning' started by anonymous2009, Sep 21, 2011.

  1. anonymous2009 New Member

    Hello,

    Question 1: I have a read-only database against which I'm running a couple of queries to record their response times.
    In spite of the data being static and the execution plan being the same, the response time varies a lot.

    When I run the query immediately after restarting SQL Server, I get 3453 ms. Then I shut down the SQL Server service, restarted it after some time, and reran the query. The second time, the response time was 8399 ms.

    Even without restarting the SQL Server service, the response time of the same query differs from one execution to the next.

    How can I baseline the response time?
    What is going on?
    Is there a reason for this?
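    For reference, one way to capture such timings per run is SET STATISTICS TIME/IO; a minimal sketch, where dbo.Sales and its column stand in for a real table from the database:

        -- Report CPU time and elapsed time (in ms), plus logical and
        -- physical reads, for each statement in the batch.
        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;

        SELECT COUNT(*)
        FROM dbo.Sales
        WHERE SaleDate >= '20110101';

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;

    A high physical-read count on a slow run is the usual sign that the buffer cache was cold for that execution.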

    Question 2:
    Also, based on the first response time of 3453 ms, I tuned the queries as follows (see the sketch after this list):
    - Added additional indexes
    - Replaced some queries with indexed views
    - Set the index fill factor to 100%, since the table is read-only and the data is refreshed only once a month
    - Made the database itself read-only
    - Gathered statistics
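    For reference, a minimal sketch of the kind of commands these steps involve; dbo.Sales and the database name SalesDB are hypothetical stand-ins:

        -- Rebuild all indexes on the table with a 100% fill factor;
        -- appropriate here because the data changes only monthly.
        ALTER INDEX ALL ON dbo.Sales REBUILD WITH (FILLFACTOR = 100);

        -- Refresh statistics with a full scan of the data.
        UPDATE STATISTICS dbo.Sales WITH FULLSCAN;

        -- Mark the whole database read-only.
        ALTER DATABASE SalesDB SET READ_ONLY;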

    After doing all this, the query response time went down to around 2075 ms.
    But when I tried the same set of queries two days later, the response time went up to 5451 ms.
    Nothing changed in between.

    What is the reason behind this?
    How can I establish a baseline if the results are so inconsistent?

    Thanks.
  2. anonymous2009 New Member

    Forgot to mention that I'm running SQL Server 2008 Express Edition on Windows 7.
  3. Luis Martin Moderator

    Welcome to the forums.
    When you shut down SQL Server, it starts up again with its memory caches empty.
    After some time in use, it acquires as much memory as it can, so any query will run faster than it did right after the restart.
    Moreover, if you run a query again some days later, the cache will not necessarily still hold the execution plan (or the data pages) for that query.
    So this sounds normal to me.
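    For illustration, a cold run can be simulated on a test server without restarting the service (never on production, since it empties the caches for everyone):

        CHECKPOINT;              -- write dirty pages to disk first
        DBCC DROPCLEANBUFFERS;   -- empty the buffer pool (data cache)
        DBCC FREEPROCCACHE;      -- empty the plan cache (forces recompiles)
        -- Run the query now: pages come from disk and the plan is rebuilt.
        -- Run it again immediately: pages come from memory, so it is faster.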
    HTH
  4. preethi Member

    Adding on to "How to get the baseline"...
    Repeat the query multiple times and take the average.
    You can also check the max and min, eliminate the noise (values deviating a lot from the rest), and then take the average as the baseline.
    I generally run a query with multiple different parameter sets (around 100 executions) and then set the baseline. It covers CPU, reads, writes, and duration. Generally, reads and writes remain the same. CPU may vary (I have seen CPU take a set of values like 0, 15, 16, 31, 32, ...); that is not alarming.
    Duration is the one that varies.
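    A minimal sketch of reading such averages back from the plan cache; these DMVs and columns exist in SQL Server 2008, and times are reported in microseconds:

        SELECT TOP (20)
               qs.execution_count,
               qs.total_elapsed_time   / qs.execution_count AS avg_elapsed_us,
               qs.total_worker_time    / qs.execution_count AS avg_cpu_us,
               qs.total_logical_reads  / qs.execution_count AS avg_reads,
               qs.total_logical_writes / qs.execution_count AS avg_writes,
               st.text AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.execution_count DESC;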
  5. Shehap MVP, MCTS, MCITP SQL Server

    Honestly, performance consistency is a big topic, but in any case make sure of the following for that query. I do believe it can consistently return in a few milliseconds, even with joins between tables holding millions of records. How?

    1- First, make sure all of the coding enhancements are in place, for example:

    · Try not to use table-valued functions (TVFs) in any join; replace them with another appropriate approach, such as a temp table (with a clustered index on it) or a CTE, etc.

    · Try not to use scalar functions in the select list where possible.

    · Try to avoid using views where possible; move their SELECT statements inside the stored procedure to significantly reduce SQL Compilations/sec.

    · Try to avoid joining huge tables on string columns; get used to joining on numeric columns instead.

    · Try not to use NOT IN / NOT LIKE in selects from views.

    · Get used to parameterized SQL statements (EXEC sp_executesql @sql) to cache query execution plans and save the cost of compiling them on every execution (see the sketch after this list).

    · Make sure WITH (NOLOCK) hints are on pure SELECT statements (if the business accepts dirty reads) to save the time of acquiring locks on each execution and to avoid latency caused by heavy blocking.

    · Get used to avoiding table hints such as WITH (INDEX) and WITH (INDEX, FORCESEEK), etc., since they usually result in performance degradation.

    · Get used to avoiding query hints such as FORCE ORDER, MAXDOP, HASH JOIN, and MERGE JOIN, etc.
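    A minimal sketch of the parameterized pattern from the sp_executesql point above; dbo.Orders and its columns are hypothetical names:

        DECLARE @sql nvarchar(max) =
            N'SELECT OrderId, OrderDate
              FROM dbo.Orders WITH (NOLOCK)  -- only if dirty reads are acceptable
              WHERE CustomerId = @CustomerId;';

        -- One cached plan is reused for every @CustomerId value.
        EXEC sp_executesql @sql,
                           N'@CustomerId int',
                           @CustomerId = 42;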

    2- Second, make sure of the index optimizations by:

    · Reaching index seeks wherever possible, with the lowest I/O + CPU cost (below 1, and preferably below 0.1 where possible).

    · Avoiding any table scans, and index scans as far as possible.

    · Using compound covering indexes for the important queries.

    · Using index compression, particularly page compression.

    · Using filtered indexes keyed to the most common business parameters.

    · Scheduling online index rebuilds regularly, and likewise updates of statistics (see the combined sketch after this list).
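    A combined sketch of these indexing techniques; dbo.Orders and its columns are hypothetical, and note that page compression and ONLINE rebuilds require Enterprise Edition, so they are not available on the Express Edition mentioned earlier in this thread:

        -- Compound covering index with page compression:
        CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
            ON dbo.Orders (CustomerId, OrderDate)
            INCLUDE (TotalAmount)
            WITH (DATA_COMPRESSION = PAGE);

        -- Filtered index for a common business parameter:
        CREATE NONCLUSTERED INDEX IX_Orders_Open
            ON dbo.Orders (OrderDate)
            WHERE Status = 'Open';

        -- Scheduled maintenance:
        ALTER INDEX IX_Orders_Customer_Date ON dbo.Orders
            REBUILD WITH (ONLINE = ON);
        UPDATE STATISTICS dbo.Orders;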

    3- Third, make sure of the CPU resources by:

    · Ensuring CPU utilization does not exceed 70% at peak, and preferably stays below 25% on average.

    · Assigning CPU parallelism per user with the Resource Governor technology introduced in SQL Server 2008 (see the sketch below).
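    A minimal Resource Governor sketch; it is an Enterprise Edition feature, so it does not apply to the Express Edition in this thread, and all pool, group, login, and function names are hypothetical:

        CREATE RESOURCE POOL ReportPool
            WITH (MAX_CPU_PERCENT = 25);

        CREATE WORKLOAD GROUP ReportGroup
            WITH (MAX_DOP = 2)
            USING ReportPool;
        GO
        -- Classifier function (created in master) that routes one login:
        CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
        WITH SCHEMABINDING
        AS
        BEGIN
            IF SUSER_SNAME() = N'report_user'
                RETURN N'ReportGroup';
            RETURN N'default';
        END;
        GO
        ALTER RESOURCE GOVERNOR
            WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
        ALTER RESOURCE GOVERNOR RECONFIGURE;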

    Let me know how you get on with these, please.
