SQL Server Performance

HELP: Query needing tuning

Discussion in 'ALL SQL SERVER QUESTIONS' started by ravendarksky, Apr 25, 2012.

  1. ravendarksky New Member

    Hi there,

    I've recently been tasked with getting some legacy SQL reports running at a decent speed on our new SQL Server 2008 box. These reports used to take 18+ hours to run, and for the most part I've been successful in greatly improving their performance! However, I'm having particular trouble with a small and relatively simple-LOOKING query. I should also note that I don't fully understand what the writer was trying to achieve with this query, but it is used several times throughout several reports (each time taking about 2 minutes to run, which adds up to a very significant slowdown).

    Code:
    declare @xdate as Datetime
    set @xdate = CONVERT(DATETIME, '31 Dec 3000')  -- sentinel "far future" date
    
    SELECT
        A.IDField,
        -- earliest EndDate of any matching row falling within 3 months after A's EndDate
        NEXT_RETURN_DATE =
            MIN(CASE
                    WHEN A2.EndDate BETWEEN DATEADD(S, 1, A.EndDate) AND DATEADD(M, 3, A.EndDate) THEN A2.EndDate
                    ELSE @xdate
                END),
        NEXT_START_DATE =
            MIN(CASE
                    WHEN A2.EndDate BETWEEN DATEADD(S, 1, A.EndDate) AND DATEADD(M, 3, A.EndDate) THEN A2.StartDate
                    ELSE @xdate
                END)
    FROM Table1 A
    LEFT JOIN Table1 A2 ON A.NonIDField1 = A2.NonIDField1 AND A.NonIDField2 = A2.NonIDField2
    WHERE  A.EndDate BETWEEN '01 Jan 2010' AND '01 Feb 2011 23:59'
    GROUP  BY A.IDField
    I'd love to hear some suggestions on how I could improve this or why it's taking so long to execute.
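
    I did wonder whether pushing the three-month window into the join itself would behave differently. Here's a sketch of what I mean using OUTER APPLY (same columns as above; I haven't verified that it's faster, or exactly equivalent on real data):

    Code:
    declare @xdate as Datetime
    set @xdate = CONVERT(DATETIME, '31 Dec 3000')
    
    SELECT
        A.IDField,
        -- fall back to the sentinel date when no row qualifies
        NEXT_RETURN_DATE = ISNULL(MIN(nxt.EndDate), @xdate),
        NEXT_START_DATE  = ISNULL(MIN(nxt.StartDate), @xdate)
    FROM Table1 A
    OUTER APPLY (
        SELECT A2.EndDate, A2.StartDate
        FROM Table1 A2
        WHERE A2.NonIDField1 = A.NonIDField1
          AND A2.NonIDField2 = A.NonIDField2
          AND A2.EndDate BETWEEN DATEADD(S, 1, A.EndDate) AND DATEADD(M, 3, A.EndDate)
    ) nxt
    WHERE A.EndDate BETWEEN '01 Jan 2010' AND '01 Feb 2011 23:59'
    GROUP BY A.IDField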

    Looking at the execution plan doesn't seem to help me any in this particular case:

    Select 0% <- Hash Match (Aggregate) 5% <- Hash Match (Left Outer Join) 15% <- A & B

    A <- Compute Scalar 2% <- Table scan Table 1 39%
    B <- Table Scan Table2 39%

    The table only has about 90k rows being used for this query! Hardly anything....

    Thanks, and sorry if this is a stupid question! I'm new here but looking forward to learning and hopefully eventually helping some people out myself!

    - Angus
  2. FrankKalis Moderator

    Welcome to the forums!
    What are the indexes on the tables? How many rows are supposed to be returned? Is this the full statement, or is what you've posted only part of some bigger statement? Somehow this looks like it could be contained in a function.
  3. Shehap MVP, MCTS, MCITP SQL Server

    It sounds like there are some heap tables here (tables without clustered indexes), since table scans appear in the plan; that can also result in many RID lookups, so please take care of it.
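
    To check which tables are heaps, here is a quick sketch against the catalog views (a heap shows up as index_id = 0 in sys.indexes):

    SELECT OBJECT_NAME(object_id) AS table_name
    FROM sys.indexes
    WHERE index_id = 0
      AND OBJECTPROPERTY(object_id, 'IsUserTable') = 1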

    Beyond that, please work out the indexes below, implemented with SQL Server 2008 techniques such as filtered indexes and data compression (page compression is most likely appropriate for your case):

    CREATE NONCLUSTERED INDEX Table1_index1 ON Table1 (NonIDField1, NonIDField2, EndDate)
    INCLUDE (IDField)
    WHERE (EndDate >= '01 Jan 2010' AND EndDate <= '01 Feb 2011 23:59')
    WITH (DATA_COMPRESSION = PAGE)

    CREATE NONCLUSTERED INDEX Table2_index1 ON Table2 (NonIDField1, NonIDField2, EndDate)
    WITH (DATA_COMPRESSION = PAGE)

    Also, replace the BETWEEN operator with >= and <= operators so the predicate matches the index filter above, eventually becoming:

    A.EndDate >= '01 Jan 2010' AND A.EndDate <= '01 Feb 2011 23:59'

    Also, you might check whether any heavy locks exist that impact this transaction or other transactions; since it takes 2 minutes, there is enough room for heavy locking.
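
    To spot blocking while the query runs, here is a quick sketch using the DMVs (any session whose blocking_session_id is non-zero is being blocked):

    SELECT session_id, blocking_session_id, wait_type, wait_time
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0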

    Thereby, you have 2 main choices:

    · Either use the WITH (NOLOCK) hint (if the business accepts it, since it might read dirty (uncommitted) data)

    · Or use the READ_COMMITTED_SNAPSHOT isolation level, which uses row versioning, by running the command below, while considering its impacts (mainly on TempDB):

    ALTER DATABASE DBName SET READ_COMMITTED_SNAPSHOT ON
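
    Note that the ALTER DATABASE statement will wait unless yours is the only connection to the database. To confirm the setting took effect afterwards (DBName here is a placeholder for your database name):

    SELECT name, is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = 'DBName'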
