What affects the seek vs scan | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

What affects the seek vs scan

Hi, I’ve searched the web but can’t find an answer to my problem.
I’ve taken a copy of my production database and restored it onto my test environment, and a specific select query takes about 8 seconds on the production server and about 8 minutes in test. I’ve reviewed the query plans and they both use the same index; however, on production it’s doing a clustered index seek and on test it’s doing a clustered index scan. I’ve updated the statistics, reindexed the table and even completely dropped the index and recreated it, with little change.
My question is: what else affects the optimiser’s choice? Could it be because my test kit is lower spec? If so, what’s going to be the most likely cause: memory, the disk subsystem, something else?
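For reference, the maintenance described above amounts to something like this; the table name dbo.Orders is a placeholder, and the syntax is SQL 2000/2005-era:

    -- Rebuild every index on the table (works for constraint-backed indexes too).
    DBCC DBREINDEX ('dbo.Orders');
    GO
    -- Refresh statistics with a full scan rather than the default sample.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
    GO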

Did you drop/recreate the proc after you did the reindex? Perhaps SQL Server is still using the older query plan from cache. Dropping the proc clears any query plan from cache (see the sketch after this post).
***********************
Dinakar Nethi
SQL Server MVP
***********************
http://weblogs.sqlteam.com/dinakar/
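A minimal sketch of the drop/recreate approach, assuming the query lives in a procedure named dbo.usp_GetOrders against a table dbo.Orders (both placeholders):

    -- Drop and recreate the procedure; its cached plan goes with it.
    DROP PROCEDURE dbo.usp_GetOrders;
    GO
    CREATE PROCEDURE dbo.usp_GetOrders @CustomerID int
    AS
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;
    GO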
I’m actually just comparing the select statement. Although it began life in a procedure, to minimise any other potential causes I generated the query plan (and ran the query) in Query Analyzer against the select alone.
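For anyone reproducing that comparison, this is roughly how to capture the plan for the bare select in Query Analyzer (the select itself is a placeholder):

    -- Display the estimated plan without executing the statement.
    SET SHOWPLAN_TEXT ON;
    GO
    SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 42;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO
    -- Alternatively, execute it and capture the actual operators and I/O,
    -- then re-run the select in the same session.
    SET STATISTICS PROFILE ON;
    SET STATISTICS IO ON;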
http://blog.sqlauthority.com/2007/03/30/sql-server-index-seek-vs-index-scan-table-scan/
http://blogs.msdn.com/craigfr/archive/2006/06/26/647852.aspx
Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing. Knowledge is of two kinds. We know a subject ourselves or we know where we can find information on it.
Thanks Satya, I’ve already read those. I know the difference between a scan and a seek, and I know why the scan, in my case, is going to be less efficient. What I’m trying to figure out is why the same code and data, with equally up-to-date stats and indexes and no compilation issues, would pick the same index but choose to scan rather than seek. The test box is lower spec and SQL Server may have been installed differently initially, which is why I’m trying to generalise the question:
What affects the optimiser’s decision to scan rather than seek, if you exclude compilation, data and fragmentation of the index?
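If the two servers genuinely disagree on the plan for identical data, it’s worth comparing what the optimizer actually sees on each box; a sketch, with dbo.Orders and PK_Orders as placeholder names:

    -- Run on both servers and diff the output: the row count, density and
    -- histogram drive the optimizer's row estimates for the filter.
    DBCC SHOW_STATISTICS ('dbo.Orders', 'PK_Orders');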
By default, where there is a data correlation between the order of the non-clustered index entries and the rows in the base table, the optimizer may still choose a plan that performs a table scan or clustered index scan rather than using a qualifying non-clustered index to filter the data. The SQL Server optimizer is built on the assumption that there is no data correlation between indexes, so it costs the physical IO of using a non-clustered index at one physical IO operation for every qualifying result row. For a table scan, the optimizer computes the physical IO operations as the number of pages in the table, regardless of the number of estimated rows that meet the filter conditions. Once the estimated qualifying rows exceed the table’s page count, the scan looks cheaper. To change this behaviour you can use index hints (a sketch follows this post).
http://msdn.microsoft.com/archive/en-us/dnarsqlsg/html/msdn_showplan.asp?frame=true fyi.
Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing. Knowledge is of two kinds. We know a subject ourselves or we know where we can find information on it.
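A minimal sketch of the index-hint workaround Satya mentions, assuming a non-clustered index named IX_Orders_CustomerID on a placeholder table dbo.Orders:

    -- Force the named non-clustered index instead of letting the optimizer
    -- fall back to a scan (SQL 2000-compatible hint syntax).
    SELECT OrderID, OrderDate
    FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerID))
    WHERE CustomerID = 42;

Hints pin the plan, so they need revisiting whenever the data distribution changes.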
In both cases it’s picking the same clustered index; the only difference is the scan vs the seek.
I have a feeling you are not comparing apples to apples. In almost all the cases I have seen, query plans behave differently in different environments (with different hardware). I would recommend putting the T-SQL inside a proc so you can drop/recreate the proc for each modification; that way you remove any previous query plans. Did you update the stats AFTER the reindex or before? In your original post you said "I’ve updated the statistics, reindexed the table and even completely dropped the index..". Normally you drop the index, recreate the index, then sometimes update the stats. Then you exec the proc a few times with different parameters each time and let SQL Server identify the best query plan. Then you can see if it’s still doing a scan or a seek (a sketch of that sequence follows this post).
***********************
Dinakar Nethi
SQL Server MVP
***********************
http://weblogs.sqlteam.com/dinakar/
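A sketch of that last step, using the same placeholder procedure name as above:

    -- Invalidate the cached plan, then exercise the proc across a spread of
    -- parameter values so the plan is compiled against representative inputs.
    EXEC sp_recompile 'dbo.usp_GetOrders';
    EXEC dbo.usp_GetOrders @CustomerID = 1;
    EXEC dbo.usp_GetOrders @CustomerID = 5000;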
I agree, I’m not comparing apples to apples and the environments are very different; my question was what affects the plan most with regard to environment.
You see, my problem is that I’m never going to get sign-off for a test environment that is up to the same spec as my production environment, and as it stands the code works correctly in production but will never pass a unit test in the test environment. I also don’t want to re-write code which works in production just so that it works in test (i.e. putting in optimiser hints, which I don’t think would work anyway, as the correct index is being chosen, just in the wrong way). Putting the T-SQL into a proc doesn’t make any difference either; it still scans in test and seeks in production, no matter how often it’s run. I believe stats are automatically updated when you reindex rather than defrag, but either way I’ve tried every combination, which is why I think it’s environmental (i.e. hardware, or possibly the fact that production uses the Enterprise edition and test uses the Developer edition of SQL Server).
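Two checks worth running on both boxes; the table name is a placeholder and the sysindexes query is SQL 2000/2005-era syntax:

    -- Confirm edition and build on each server.
    SELECT SERVERPROPERTY('Edition') AS edition,
           SERVERPROPERTY('ProductVersion') AS build;
    -- Confirm the reindex really did refresh the statistics.
    SELECT name, STATS_DATE(id, indid) AS stats_last_updated
    FROM sysindexes
    WHERE id = OBJECT_ID('dbo.Orders') AND indid > 0;

For what it’s worth, the Developer edition runs the same engine as Enterprise, so the edition difference by itself is an unlikely culprit; differences in the memory and CPU visible to the server are a more plausible input to plan costing.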
Do you have a similar set of hardware from production to test, in terms of memory, hard disk and other configuration? (A quick way to compare is sketched after this post.)
Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
http://www.askasqlguru.com/
This posting is provided AS IS with no rights for the sake of knowledge sharing. Knowledge is of two kinds. We know a subject ourselves or we know where we can find information on it.
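One way to compare the basics Satya is asking about, run on both servers (advanced options must be exposed first):

    -- Expose advanced options, then compare memory configuration.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    GO
    EXEC sp_configure 'max server memory (MB)';
    EXEC sp_configure 'min server memory (MB)';
    -- xp_msver reports the processor count and physical memory on the box.
    EXEC master..xp_msver;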