SQL Server Performance

Temp tables (and indexes) Statistics

Discussion in 'Performance Tuning for DBAs' started by hominamad, Dec 11, 2003.

  1. hominamad New Member

    Hi Everyone,

    I'm running some of our slower-running queries in query analyzer with the execution plan on in order to try and resolve some performance issues. I notice that in most, if not all queries where I am using temp tables, I get a warning message in the execution plan saying "Warning: Statistics could not be created". Is this a normal thing to see when using temp tables, or is something wrong here?

Also, my second question: I notice that in all of these queries, SQL Server is doing a full table scan on the temp tables. Is this normal behavior as well? Is there a way to avoid it? Are table variables better?

    Thanks,

    ~W
  2. satya Moderator

In my opinion, using #temp tables is better than using table variables.
Have you tried updating statistics on the tables used by the queries?

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  3. Luis Martin Moderator

If the application is using auxiliary tables like:
Create table #auxtable.....
insert #auxtable......select .....

Try looking at the execution plan for the select on its own.
Profiler and the Index Tuning Wizard can't optimize #auxtable itself, but they can optimize the select.


    Luis Martin

    ...Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
    Bertrand Russell
  4. hominamad New Member

quote:Originally posted by satya

In my opinion, using #temp tables is better than using table variables.
Have you tried updating statistics on the tables used by the queries?


Yeah, the problem is actually one I've been posting about here for a few weeks! We are experiencing fatal system slowdowns every few days in which the only way to recover is to immediately run sp_updatestats. As far as I know, this utility shouldn't have to be run so frequently, and actually if Auto Update Stats is on (which it is), shouldn't SQL Server do this automatically at the appropriate time?

Anyway, we're trying to investigate why we need to run sp_updatestats so often in order to save our system. We are using temp tables in almost every stored procedure, and when I saw this message in the execution plan, I was wondering if it could be related to this problem. So, is it normal to see this statistics warning for temp tables?
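For what it's worth, statistics can also be created and refreshed on a #temp table by hand, which is one way to check whether the warning matters. A minimal sketch, assuming a permanent Orders table; all names here are made up:

```sql
-- Hypothetical example: manually creating and refreshing statistics
-- on a temp table so the optimizer has something to work with.
CREATE TABLE #orders (OrderID int NOT NULL, Amount money NOT NULL)

INSERT #orders (OrderID, Amount)
SELECT OrderID, Amount FROM Orders      -- assumes a permanent Orders table

-- Column statistics can be created explicitly on a temp table:
CREATE STATISTICS st_Amount ON #orders (Amount)

-- ...and refreshed later, which is what sp_updatestats does database-wide:
UPDATE STATISTICS #orders
```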

    ~W
  5. bradmcgehee New Member

A table scan is normal on temp tables, unless you add your own index. For very large temp tables, it is often a good idea to create the table, insert the data, then add appropriate indexes. While there is some overhead in building the indexes, this is often less than the cost of the table scans the queries would otherwise have to perform against the table. The only way to really know whether this will help you is to test.
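A sketch of the create-then-load-then-index pattern described above (all table and column names are illustrative):

```sql
-- 1. Create the temp table without indexes so the bulk insert stays fast.
CREATE TABLE #totals (CustID int NOT NULL, Total money NOT NULL)

-- 2. Load the data.
INSERT #totals (CustID, Total)
SELECT CustID, SUM(Amount)
FROM Sales                              -- assumes a permanent Sales table
GROUP BY CustID

-- 3. Only now build the index the later queries need.
CREATE NONCLUSTERED INDEX ix_totals_cust ON #totals (CustID)

-- 4. Subsequent lookups can seek on the index instead of scanning.
SELECT Total FROM #totals WHERE CustID = 42
```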

    If a table does not have an index, like with most temp tables, there are no index statistics to update. The Query Optimizer can sometimes create statistics on the fly for normal tables, but I don't think this is done for temp tables (but I may be wrong).

I was told by a high-level Microsoft SQL Server tuning expert that they recommend UPDATE STATISTICS be run every night, assuming you have time in your maintenance window. Auto stats has some limitations, and the only way around them is to update the statistics manually.

    -----------------------------
    Brad M. McGehee, MVP
    Webmaster
    SQL-Server-Performance.Com
  6. richmondata New Member

The optimizer tends not to use indexes added to a temp table after the table has been created. The way around that is to define the indexes in the CREATE TABLE statement. If one or more of the desired indexes is not unique and the table doesn't have another unique column, add an identity column to the CREATE TABLE definition, and then add that identity column as the last column in a table-level UNIQUE constraint, with the index column(s) you want as the leading columns. If the temp table already contains a column with unique values, you can use it instead of adding an identity column, e.g.

    CREATE TABLE #temp
    (rowID int not null identity
    ,lname varchar(50) not null
    ,fname varchar(50) not null
    ,UNIQUE (lname, fname, rowID) )

    Since the index supporting the UNIQUE constraint is built at the same time as the table, the optimizer will know it's there and will be able to use it in subsequent queries.

    The other route is to create the indexes on the temp table in separate statements after the table is created, but to get the optimizer to use those indexes, you will have to use the EXEC () option with your query inside the parentheses, e.g.
    EXEC ('select lname, fname from #temp where lname = ''smajd''')
  7. frankwxn New Member

After I created the index on the #temp table, it is even slower than before, and I don't think the #temp table is very big.
  8. Luis Martin Moderator

Can you post some of the queries accessing the #temp table?


    Luis Martin

    ...Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
    Bertrand Russell
  9. frankwxn New Member



    CREATE TABLE #Transaction (
    [TranactionID] [int] ,
    [UserID] [int] ,

    .
    .
    .

    [RefundDate] [datetime]
    )




    CREATE CLUSTERED INDEX IX_Transaction_Temp ON #Transaction
    (
    TranactionID
    ) ON [PRIMARY]
    .
    .
    .
    CREATE NONCLUSTERED INDEX IX_Transaction_Temp_10 ON #Transaction
    (
    RefundDate
    ) ON [PRIMARY]

    Set @dayTotal = (Select SUM(Amount_Of_Transaction) as daytotal from #Transaction WHERE Transaction_number != 'FAILED' and RefundFlag = '1' and UserID != '0' and Date_of_Transaction >= @Shortdate and Date_of_Transaction < dateadd(d, 1, @Shortdate))
    ... and several other similar queries



  10. Luis Martin Moderator

If I were you:

1) Create a real table the same as the #temp table.
2) Fill it with data.
3) Run the SQL that accesses the table.
4) Run the Index Tuning Wizard (ITW) to find good indexes.

I don't know if a clustered index is necessary on this table.
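A rough sketch of the first two steps, assuming the #Transaction table from the earlier post is still in scope (the permanent table name is made up):

```sql
-- Copy the temp table's structure and data into a real table that
-- Profiler and the Index Tuning Wizard can see.
SELECT * INTO Transaction_test FROM #Transaction

-- Then run the usual queries against Transaction_test while capturing a
-- Profiler trace, and feed that trace to the Index Tuning Wizard.
SELECT SUM(Amount_Of_Transaction)
FROM Transaction_test
WHERE RefundFlag = '1'
```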


    Luis Martin

    ...Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
    Bertrand Russell
  11. Twan New Member

If the table is not very big, then use table variables (and therefore do without indexes...). Table variables are faster than temporary tables, since SQL Server won't recompile the procedure as the table is created, populated and manipulated (which it will for a #temp table).

Also, there is no locking on @table variables, which makes them slightly faster. BUT you can't index them, so they are no good for large temp tables (say more than a few hundred rows, or very wide rows).
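A minimal table-variable sketch of the same shape as a #temp table (all names are illustrative). One hedge on the point above: while separate CREATE INDEX statements aren't allowed on a table variable, a PRIMARY KEY or UNIQUE constraint in the declaration does give it an index:

```sql
DECLARE @totals TABLE
(
    CustID int NOT NULL PRIMARY KEY,   -- the only indexing a table variable
                                       -- allows is a constraint declared here
    Total  money NOT NULL
)

INSERT @totals (CustID, Total)
SELECT CustID, SUM(Amount)
FROM Sales                             -- assumes a permanent Sales table
GROUP BY CustID

SELECT Total FROM @totals WHERE CustID = 42
```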

    Cheers
    Twan

PS I know that there are lots of people with update-stats problems, but I can't quite understand it... I have been using SQL Server for a long, long time, and other than cases where a table has grown suddenly (due to bulk loading), I've not come across statistics problems, and I don't run update-stats/reindex jobs on my production databases... (and yes, they are a mix of OLTP and DSS)

You do, however, always need to take care when writing procedures so that they perform well regardless of the parameters passed to them (even if it means having multiple procedures with a wrapper around them, e.g. for optional parameters).
