Hi, I have a table, call it T, whose primary key columns are:

    id int, date_ smalldatetime, f1 decimal(5,2), f2 smallint

T is clustered on: date_, id, f1, f2
T has a non-clustered index on: id, date_, f1, f2

The following query (in a stored procedure) pulls a time series of data. It uses the non-clustered index inconsistently: sometimes it performs a clustered index seek (incorrect), but if I update statistics on T, SOMETIMES I can get it to do an index seek on the non-clustered index (correct).

    CREATE PROCEDURE [dbo].[getValueFromId]
        @id int,
        @f1 decimal(5,2),
        @f2 smallint,
        @dtStart smalldatetime = NULL,
        @dtEnd smalldatetime = NULL
    AS
    BEGIN
        if @dtStart is null set @dtStart = '19000101'
        if @dtEnd is null set @dtEnd = '20781231'

        select date_, value
        from T --with (INDEX(IX_nonclustered))
        where id = @id
          and date_ between @dtStart and @dtEnd
          and f1 = @f1
          and f2 = @f2
        order by date_ asc
    END

If I include the index hint, it (correctly) uses the non-clustered index. (Replacing "between" with >= and <= does not fix the problem.)

This appears to be a bug in the optimizer. It's a simple query: no joins, perfectly SARGable, and it uses every field in the non-clustered index key. Aside from using an index hint, does anyone have suggestions or experience with this kind of problem?
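For completeness, the >=/<= rewrite of the range predicate that I mentioned trying (which did not change the plan) was simply:

    select date_, value
    from T
    where id = @id
      and date_ >= @dtStart and date_ <= @dtEnd   -- equivalent to BETWEEN @dtStart AND @dtEnd
      and f1 = @f1
      and f2 = @f2
    order by date_ asc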