SQL Server Performance

Temporary Tables

Discussion in 'T-SQL Performance Tuning for Developers' started by sebasnn, Jun 4, 2010.

  1. sebasnn New Member

    Hi, I'm developing an app that has to work in real time, so any delay noticed by the users is immediately reported to us. The problem is that I'm using an IN clause in my query which covers lots of ids in my table (I need to get specifically that user's data), so the query goes like this:
    select field1, field2, ..., field12
    from table
    where date_field > (current_timestamp - '00:00:30')
      and table_id in (id1, id2, ..., idN)
    I have a clustered index on table_id with an 80% fill factor.
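    For reference, the index was created roughly like this (the index name is just illustrative):

    create clustered index IX_table_id on table (table_id) with (fillfactor = 80);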
    I'm getting the data back in about 0.5 seconds for 900 records (which is the average number of ids to fetch), and sometimes more, which is unacceptable. So, testing a couple of things, I figured I could try a join instead of using where table_id in ...
    So I've created, for each open session, a temp table called #ids, and with the join the query looks like this:
    select field1, field2, ..., field12
    from table
    inner join #ids on ID = table.table_id
    where date_field > (current_timestamp - '00:00:30')
    The response time seems to be better (< 0.3 seconds), but I'm a little concerned about having 30+ temp tables, since the server is writing data all the time (and every time somebody restarts the app, the temp table has to be created again). Honestly, this is my first time working with #tables (I've read the tips section about #tables). So, should I be concerned about the overall performance of our server? Is there a way to improve my queries that I'm not noticing? By the way, OPTION (FAST n) won't work, since I need the full result set to process the data (and I'm working in VB6).
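    For reference, the per-session setup is roughly this (simplified sketch; the real column type may differ):

    create table #ids (ID int primary key);
    insert into #ids (ID) values (id1);
    insert into #ids (ID) values (id2);
    -- ... one insert per id the session needs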
    Thanks in advance, and sorry for my lousy English =P
  2. FrankKalis Moderator

    Is this a new question? I thought I had already replied to it yesterday...
    Anyway, for the sake of completeness, and who knows whom it might help:
    Rather than CURRENT_TIMESTAMP - '00:00:30', use DATEADD to avoid an implicit data type conversion that may hurt performance.
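    For example, something along these lines (reusing the names from your query):

    select field1, field2, ..., field12
    from table
    where date_field > dateadd(second, -30, current_timestamp)
      and table_id in (id1, id2, ..., idN)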
    Investigate whether a table variable improves things. With only a few hundred rows, a table variable may turn out to be a better choice than a temp table.
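    A rough sketch of the table variable variant (again, placeholder names):

    declare @ids table (ID int primary key);
    insert into @ids (ID) values (id1);
    -- ... fill with the session's ids, then join as before:
    select field1, field2, ..., field12
    from table
    inner join @ids i on i.ID = table.table_id
    where date_field > dateadd(second, -30, current_timestamp)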
    Query runtime WILL vary due to factors such as server load, network traffic, etc. Guaranteeing a query will always run in ~0.5 seconds is ambitious; I wouldn't commit myself to such a guarantee.
