SQL Server Performance

Any way to optimize query of datetime column

Discussion in 'T-SQL Performance Tuning for Developers' started by vladimir_sim, Aug 3, 2005.

  1. vladimir_sim New Member


    Hi,

    Is there any known way to optimize queries against a
    DATETIME column in my database table?

    My SQL query uses a condition like

    ".. WHERE convert(char(10), exp_dt,10)<> '9999-12-31') "


    I am thinking that if only I could add an extra column that
    stores the date as a numeric value, it should speed up my query
    if I also convert my search date into a numeric value. Querying
    a numeric column is best, and I can index on it too.

    I read some articles advising us to restructure our table
    design.

    Anyone got any good advice or comments on this area? Please
    advise. Thank you
  2. Madhivanan Moderator

    What difficulty did you face?
    You can use this in SQL Server as well

    ..Where exp_dt<>'99991231'

    or

    ..Where convert(varchar,exp_dt,112) <>'99991231'



    Madhivanan

    Failing to plan is Planning to fail
  3. joechang New Member

    the conversion of a datetime to string is a moderately expensive operation, so you want to avoid it in high row count operations.
    Since the date portion is stored in the first 4 bytes of a datetime field, try the following:

    DECLARE @Ref binary(4)
    SELECT @Ref = SUBSTRING(CONVERT(binary(8),CONVERT(datetime,'9999-12-31')),1,4)

    SELECT xxx
    WHERE SUBSTRING(CONVERT(binary(8),getdate()),1,4) <> @Ref
  4. FrankKalis Moderator

    Another way would be

    ...WHERE DATEADD(d,0,DATEDIFF(d,0,<your_column>)) <> '99991231'

    --
    Frank Kalis
    Microsoft SQL Server MVP
    http://www.insidesql.de
    I support PASS Deutschland e.V. http://www.sqlpass.de
  5. Madhivanan Moderator

    >>the conversion of a datetime to string is a moderately expensive operation
    I thought that was true too, but the execution plan says the conversion takes the same time.
    In the code below, the query cost of the insert is 0.02% and the two select statements cost the same, 0.04% each.

    Frank, what do you think of this?



    Declare @t table(mydate datetime)
    Declare @i int
    set @i=1
    while @i<=5000
    Begin
    insert into @t values(getdate())
    set @i=@i+1
    End
    Select mydate from @t
    Select convert(varchar,mydate,112) from @t


    Madhivanan

    Failing to plan is Planning to fail
  6. FrankKalis Moderator

    Deleted my previous reply as it had two mistakes in there...
    Consider this:

    drop table t1
    create table t1(mydate datetime primary key)
    Declare @i int
    set @i=1
    while @i<=5000
    Begin
        insert into t1 values(getdate()+@i)
        set @i=@i+1
    End

    DECLARE @Ref binary(4)
    SELECT @Ref = SUBSTRING(CONVERT(binary(8),CONVERT(datetime,'20051231')),1,4)

    Select * FROM t1 WHERE convert(varchar,mydate,112) < '20051231'
    SELECT * FROM t1 WHERE SUBSTRING(CONVERT(binary(8),mydate),1,4) < @Ref
    SELECT * FROM t1 WHERE DATEADD(d,0,DATEDIFF(d,0,mydate)) < '20051231'

    The results from SET STATISTICS TIME ON are:

    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 3 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.

    (148 row(s) affected)
    Table 't1'. Scan count 1, logical reads 12, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 17 ms, elapsed time = 17 ms.
    SQL Server Execution Times:
       CPU time = 20 ms, elapsed time = 17 ms.
    SQL Server Execution Times:
       CPU time = 20 ms, elapsed time = 17 ms.

    (148 row(s) affected)
    Table 't1'. Scan count 1, logical reads 12, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 3 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 3 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 3 ms.

    (148 row(s) affected)
    Table 't1'. Scan count 1, logical reads 12, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 6 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 6 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 6 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.

    which shows that CONVERT is the slowest of the above approaches. Joe's approach takes the internal storage format of a DATETIME into account and seems to produce the fastest result. However, the method is not always intuitive. I for one prefer the DATEADD...DATEDIFF approach. It is easier for me to understand and still offers very good performance. I think I also remember that it outperforms the SUBSTRING...BINARY method in certain more complex queries and in situations where you JOIN on DATETIME columns, but I can't find anything about that right now. It will all be easier when the next version finally introduces the DATE data type.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    http://www.insidesql.de
    I support PASS Deutschland e.V. http://www.sqlpass.de
  7. joechang New Member

    the sql optimizer only tries to estimate the cost of table operations, not of logic.
    For example,

    WHERE a = 1

    has the same cost as

    WHERE a = 1 AND b > 0

    so long as the estimated row count does not change, but in fact there is a cost to evaluating the extra logic.
    Regarding convert: try creating a table with 10 datetime columns, then compare a query that does 1 datetime convert to one that does 10 datetime converts. Each convert clearly has a cost, and yet the optimizer ignores it. This is actually reasonable, because the execution plan would not change regardless of whether the extra logic is evaluated.

    However, it does point to certain non-obvious optimizations: moving logic out of high row count operations into a lower row count operation with the use of nested queries. See the following example query for aggregating by date, where the table's rDate column is a datetime:

    -- group by date-only portion of the datetime, converting each row to varchar
    SELECT CONVERT(datetime,CONVERT(varchar,rDate,101)) AS DT, COUNT(*) AS Cnt
    FROM M2C_01 c
    GROUP BY CONVERT(datetime,CONVERT(varchar,rDate,101))
    ORDER BY CONVERT(datetime,CONVERT(varchar,rDate,101))

    -- converts the datetime to binary, substrings out the date portion, groups natively,
    -- then converts back to datetime and finally to string
    SELECT CONVERT(varchar,CONVERT(datetime,DT+0x00000000),101) AS DT1, COUNT(*) AS Cnt
    FROM (
        SELECT SUBSTRING(CONVERT(binary(8),rDate),1,4) AS DT
        FROM M2C_01 c
    ) t
    GROUP BY DT
    ORDER BY DT
  8. For date-only comparisons, I convert datetimes to ints. With @dt = '1900-1-1', convert(int, @dt) = 0. Adding this to the code above, it seems to perform well.
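    One caveat with the int conversion (the values below are illustrative): converting a DATETIME to int rounds to the nearest day rather than truncating, so any time from noon onward lands on the next day's number:

    DECLARE @dt datetime
    SET @dt = '1900-01-01 06:00'
    SELECT CONVERT(int, @dt)   -- 0
    SET @dt = '1900-01-01 18:00'
    SELECT CONVERT(int, @dt)   -- 1, rounded up to the next day

    So for date-only comparisons it is safer to strip the time portion first (e.g. with the DATEADD...DATEDIFF trick above) before converting.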
  9. vladimir_sim New Member

    Hi,

    On this datetime issue: if using a function in our
    query to convert the datetime column is an expensive
    operation,

    then is there any way, or any function, in SQL Server 2k
    to convert the datetime column to a numeric value
    and vice versa?

    My intention is to convert the datetime
    values to numeric for use in my queries.
    Numeric is good for index use.

    Anyone got any good suggestion
    or idea? Thank you
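    For illustration, one way to get an indexable date-only value in SQL Server 2000 without maintaining a separate column by hand is a computed column that strips the time portion, which can itself be indexed (the table, column, and index names here are made up, and indexing a computed column requires the usual SET options and a deterministic expression):

    ALTER TABLE mytable ADD exp_day AS DATEADD(d, 0, DATEDIFF(d, 0, exp_dt))
    CREATE INDEX ix_mytable_exp_day ON mytable (exp_day)

    -- queries can then filter on the computed column:
    SELECT * FROM mytable WHERE exp_day < '99991231'

    Note that this only helps predicates an index can seek; a <> condition will still scan whatever the data type.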
  10. mmarovic Active Member

    What percentage of the rows in the table satisfies the condition mentioned?
  11. druer New Member

    I'm not sure I understand why you are trying to do the convert at all. I'm assuming that the 9999-12-31 value is something you default the field to, to indicate that there is no end to the value. I'm also assuming that the time portion defaults to 00:00:00.000, so I'm not sure why datefield <> '12/31/9999' won't work for you and why you need to do any type of convert at all.

    Secondly, I'm not sure that you need a <> at all, considering you can't go above that date, so I think you'll speed things up slightly by simply doing a "<" only, i.e. datefield < '12/31/9999'
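
    For illustration, with no function wrapped around the column the predicate is sargable, so an index on the column can be used for a seek (the table name is made up):

    SELECT *
    FROM mytable
    WHERE exp_dt < '99991231'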

  12. vladimir_sim New Member

    Hi,

    First, some of my table columns do not always default to 9999/12/31 12:00:00;
    some of the columns might contain the time of day, some might not.

    Next, the datetime columns are among the search conditions in some of my queries.
    Query Analyzer shows that the datetime columns are causing scans of my table
    instead of index seeks.

    Therefore, I am hoping for another way: convert the datetime column to
    some numeric value, index that column, and search using that column; it should
    help my query.

  13. mmarovic Active Member

    You have an index scan because of the conversion you use in your queries (but you don't have to) and because of the <> condition. I asked you for the percentage of rows that satisfy the condition to see whether it is possible to split the query into a union of '<' and '>' queries. If so, you could use the index effectively. If not, conversion to any data type won't help; you'll still have an index (or table) scan.
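
    For illustration, the split would look like this (table and column names made up); each branch is a range predicate an index on exp_dt can seek:

    SELECT * FROM mytable WHERE exp_dt < '99991231'
    UNION ALL
    SELECT * FROM mytable WHERE exp_dt > '99991231'

    UNION ALL is safe here because the two ranges cannot overlap, and it avoids the duplicate-elimination cost of a plain UNION.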
  14. druer New Member

    So you are actually recording a time value for some rows on 9999-12-31. I've never seen a system that actually planned things out that far, as that is nearly 8,000 years from now. I've only ever seen the date 9999-12-31 used for temporality, where it indicates that something is good from now until forever. I'm not sure what time values could actually mean on a date that far in the future; I can understand times on legitimate dates that are historical or soon to be reached. I just mention that to see if cleaning up your data might make your query easier.
