SQL Server Performance

Converting from table variable to temp table

Discussion in 'SQL Server 2005 T-SQL Performance Tuning' started by atulgoswami, Nov 20, 2009.

  1. atulgoswami New Member


    I came across a strange scenario, which is as follows:
    One of our stored procedures was using a table variable that was referenced almost 20 times on a combination of three columns. It processes a large volume of data, and performance was poor.

    Hoping to improve performance, I converted the table variable to a temp table and created a non-clustered index on it.
    When I then checked the execution plan, I was surprised:
    - some of the clustered index seek operations turned into clustered index scans
    - the number of Hash Match operations increased from 5 to 10
    - a nested loop (left outer join) turned into a nested loop (right outer join)
    - the number of nested loop operations was reduced, as some of them turned into Hash Matches
    - table scans, however, were reduced.
    I am struggling to understand why and how all this happened just by changing from a table variable to a temp table.
    Thanks in advance
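    For reference, the kind of change being described might look like this (table, column, and index names here are hypothetical):

    ```sql
    -- Before: a table variable joined on three columns
    DECLARE @Work TABLE (ColA INT, ColB INT, ColC INT);

    -- After: a temp table with a non-clustered index on the same three columns
    CREATE TABLE #Work (ColA INT, ColB INT, ColC INT);
    CREATE NONCLUSTERED INDEX IX_Work_ABC ON #Work (ColA, ColB, ColC);
    ```

    One likely explanation for the plan changes: a temp table carries column statistics, while a table variable does not (the optimizer typically assumes it holds very few rows). Once the optimizer can see realistic row counts, it may switch join strategies, e.g. from nested loops to hash matches.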
  2. Adriaan New Member

    A table variable can only be indexed through its PRIMARY KEY (or UNIQUE) constraint, so any lookup on fields not covered by that will result in table scans.
    If your table scans are replaced with index scans, that is probably an improvement. And with the index scans, some of the index seeks would be additional operations, so it makes sense for them to be dropped from the plan.
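    On SQL Server 2005 you cannot run CREATE INDEX against a table variable; the only indexes it can have are those created implicitly by inline PRIMARY KEY or UNIQUE constraints. A sketch with hypothetical names:

    ```sql
    DECLARE @Work TABLE
    (
        ColA INT NOT NULL,
        ColB INT NOT NULL,
        ColC INT NOT NULL,
        PRIMARY KEY (ColA, ColB, ColC)  -- creates the table variable's only index
    );
    ```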
  3. atulgoswami New Member

    Sorry for the late reply, and thanks.
    To your point: the table variable did not previously have a PK on it, and I am sure that to add one I would have to define it explicitly.

    My confusion is that a couple of the JOINs against the table variable were performed as nested loops, but after switching to the temp table they turned into Hash Matches. I have defined a non-clustered index on the temp table.

    Another strange thing: when I checked the execution plan, the number of actual rows had also increased....

    I thought I might be doing something wrong and checked quite a few times, but the observation stayed the same, and strange.

    I think I'll have to spend some more time to see what is happening there.

    Anyway, it would be of great help if someone could point me to some information, or to any third-party tool, for analyzing execution plans correctly.

    Thanks in advance
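    As a starting point (not a third-party tool), SQL Server 2005 itself can emit the actual plan with per-operator actual row counts; a minimal sketch:

    ```sql
    SET STATISTICS PROFILE ON;   -- text-form actual plan, actual vs. estimated rows
    -- run the statement(s) under investigation here
    SET STATISTICS PROFILE OFF;

    -- or the XML form, which Management Studio can render graphically:
    SET STATISTICS XML ON;
    ```

    Comparing the estimated and actual row counts per operator is usually the quickest way to spot where the optimizer's assumptions diverge from reality.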
  4. Adriaan New Member

    Is performance with the temp table better, and is it acceptable, or are you looking for further improvement?
  5. atulgoswami New Member

    Execution was previously ending with a timeout in production; as of now I don't have an update on how long it takes after these changes.... On sandbox and staging there is some improvement, but I will have to confirm from production.
    I think I'll have to see for myself what is happening, and I should be able to post an update next week.
  6. Adriaan New Member

    Your original posting mentioned twenty "references" to the table variable, on three columns. By "references", do you mean expressions or joins?
    If you're joining on that table variable twenty times in a single query statement, it looks like you may have a case of over-normalization.
    If you're using those columns in expressions, would those expressions be UDFs? If those UDFs are situated anywhere past the FROM clause, then the UDFs will be killing your chances of benefiting from any indexes.
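    To illustrate the UDF point with hypothetical names: wrapping an indexed column in a scalar UDF makes the predicate non-sargable, so an index on that column cannot be seeked:

    ```sql
    -- Non-sargable: the UDF is applied to the column, forcing a scan
    SELECT OrderID FROM dbo.Orders WHERE dbo.fn_Year(OrderDate) = 2009;

    -- Sargable rewrite: compare the bare column against a range, allowing an index seek
    SELECT OrderID FROM dbo.Orders
    WHERE OrderDate >= '20090101' AND OrderDate < '20100101';
    ```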
  7. atulgoswami New Member

    "20 references on three columns" means the table variable is used in join operations in 20 different SQL statements, against a set of more than 10 tables.

    I think my original post was not clear. Let me describe the logic.

    1. Instead of joining the same 10 tables in every SQL select statement, we created a table variable and inserted the required data into it.
    2. This table variable is then used to retrieve various data from other tables in 20-odd SQL statements. The whole idea behind this is to avoid repeating the joins against those 10 tables in all 20 calls, and to make the code more readable/maintainable.

    Am I making sense in my approach?
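    A minimal sketch of the pattern described (all object names hypothetical):

    ```sql
    -- Step 1: populate the working table once, joining the ~10 source tables
    CREATE TABLE #Base (ColA INT, ColB INT, ColC INT);

    INSERT INTO #Base (ColA, ColB, ColC)
    SELECT t1.ColA, t2.ColB, t3.ColC
    FROM dbo.Source1 t1
    JOIN dbo.Source2 t2 ON t2.ColA = t1.ColA
    JOIN dbo.Source3 t3 ON t3.ColB = t2.ColB;

    -- Step 2: each of the ~20 later statements joins only on #Base
    SELECT o.SomeValue, b.ColC
    FROM dbo.OtherTable o
    JOIN #Base b ON b.ColA = o.ColA AND b.ColB = o.ColB;
    ```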
  8. Adriaan New Member

    With the table variable you're introducing overhead that probably costs more in execution time than the "ease of maintenance" is worth.
    I'd use a working table only when reading data from various tables and then reusing the same data in multiple queries in the same procedure. (And perhaps when retrieving data from a remote server.)
    From your description it's not very clear whether the 20-odd SQL statements all read the same rows and/or the same columns from the table variable. If they're all reading different rows and/or different columns, then there is probably no reason to build a working table.
    You may also be bypassing SQL Server's data buffering capabilities.
    SQL can be a little counter-intuitive for programmers ...
    Write those queries joining on the original tables, even if it would be simpler to write them with a single join on the table variable. Then look at performance.
