SQL Server Performance

Long covering index or slow stored procedure?

Discussion in 'Performance Tuning for DBAs' started by sql_er, Aug 14, 2008.

  1. sql_er New Member

    Hello,

    I have a stored procedure which is running slowly. Looking at the execution plan in Query Analyzer, I saw that the subtree cost is 3.5 and that a bookmark lookup accounts for 65% of the cost.

    I added a covering index, and sure enough the subtree cost dropped to 1.75 (although logical reads increased, which is another puzzle).

    The problem is that the covering index I added includes many columns [4 varchar and a few ints] (I had no choice, since the query uses all of them and I need the index to cover them all), and they add up to ~1200 bytes(?). Although the index was allowed to be created, a warning message came up saying that if the length of the index key exceeds 900 bytes, inserts might fail. I believe they will fail if the data inserted into the key columns of a row exceeds 900 bytes (as an index row cannot be split between 2 pages, I believe).
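
    For illustration, a minimal sketch of the kind of index that triggers this warning (the table and column names here are hypothetical, not my actual schema):

        -- Hypothetical table: four varchar columns plus ints, whose maximum
        -- combined key size (~1200 bytes) exceeds the 900-byte index key limit.
        CREATE TABLE dbo.Orders (
            OrderId  int IDENTITY PRIMARY KEY,
            Code1    varchar(300),
            Code2    varchar(300),
            Code3    varchar(300),
            Code4    varchar(300),
            Qty      int,
            StatusId int
        );

        -- The index is created, but SQL Server warns that an INSERT or UPDATE
        -- will fail for any row whose combined key values exceed 900 bytes.
        CREATE INDEX IX_Orders_Covering
            ON dbo.Orders (Code1, Code2, Code3, Code4, Qty, StatusId);

    Rows whose actual key values stay at or under 900 bytes insert fine; only a row whose combined key values exceed the limit is rejected.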

    So, although I don't expect any entered data to exceed 900 bytes ... who knows - it seems unsafe to have this index.

    What do people do in this case? Just skip the index and live with the higher query cost?

    Please advise
    Thanks in advance!
  2. Luis Martin Moderator

    Check whether the ITW (or better, the DTA) confirms your index or suggests a different one.
  3. satya Moderator

    How many times has it been recompiled since it was created?
    Also, have you reindexed all the covering indexes and recompiled all the tables that are involved?
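
    In SQL Server 2000, those maintenance steps might look like this (object names hypothetical):

        -- Rebuild one index on a table (omit the index name to rebuild all of them).
        DBCC DBREINDEX ('dbo.Orders', IX_Orders_Covering);

        -- Mark every stored procedure that references the table for
        -- recompilation the next time it runs.
        EXEC sp_recompile N'dbo.Orders';
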
  4. sql_er New Member

    The stored procedure is not defined WITH RECOMPILE, but I do recompile it every time before testing its speed improvement, using the sp_recompile stored procedure.
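
    For reference, that call is simply (the procedure name here is hypothetical):

        -- Mark the procedure for recompilation on its next execution.
        EXEC sp_recompile N'dbo.usp_MySlowProc';
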
    As for the covering index, I just created it to test it out, so you could say it was just rebuilt.
    As for the recompilation of the tables - I am not sure what that means ...
    Thank you!

  5. preethi Member

    I believe you are using SQL Server 2005.
    You can keep the int columns in the index key and move the varchar columns to the INCLUDE part, if your query supports it. That will help you overcome the 900-byte limit, since only the key columns count toward it. (Mainly, move the columns used in the WHERE clause into the index key and the rest into INCLUDE.) A sketch follows below.
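
    A minimal sketch of that approach, in SQL Server 2005 syntax (table and column names hypothetical, matching the example above):

        -- Only the key columns count toward the 900-byte limit; the INCLUDEd
        -- varchar columns are stored at the leaf level, so the index still
        -- covers the query.
        CREATE NONCLUSTERED INDEX IX_Orders_Covering
            ON dbo.Orders (StatusId, Qty)
            INCLUDE (Code1, Code2, Code3, Code4);
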
    Additionally, you can think of having multiple narrower indexes; SQL Server can take advantage of more than one index on the same table in a single query.
    Please test the solution before rolling it out.
  6. sql_er New Member

    Preethiviraj,
    I am actually using SQL Server 2000, not 2005, so I don't think I can take advantage of the INCLUDE you mention.
    As for the multiple indexes, I will try that. I was, however, under the impression that SQL Server can only use one index per query. Was my understanding wrong?
    Thank you!


  7. preethi Member

    Your understanding was wrong: SQL Server can use more than one index on the same table within a single query. This is called Index Intersection, and it has been available since SQL Server 7.0, so it applies to SQL Server 2000 as well.

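    For illustration, a sketch of index intersection (hypothetical table and indexes; whether the optimizer intersects is a cost-based decision):

        -- Two narrow single-column indexes on the same table.
        CREATE INDEX IX_Orders_Code1 ON dbo.Orders (Code1);
        CREATE INDEX IX_Orders_StatusId ON dbo.Orders (StatusId);

        -- For a query filtering on both columns, the optimizer may seek both
        -- indexes and intersect (join) the matching row locators before
        -- fetching the rows, instead of using only one of the indexes.
        SELECT OrderId, Qty
        FROM dbo.Orders
        WHERE Code1 = 'ABC' AND StatusId = 3;
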
  8. sql_er New Member

    Hello,
    Thank you for the information about Index Intersection.
    I guess I had been misinformed. My impression was based on Tip 1 here: http://www.synametrics.com/SynametricsWebApp/WPTop10Tips.jsp, where it says that only 1 index can be used per table.

    Thanks!
