SQL Server Performance Forum – Threads Archive
Topic: indexing and preventing full table scans

Some tables have 50-60 columns (linked by uniqueid).
Each column has between 5 and 200 unique values. The queries look like: SELECT a, b, c WHERE x AND y OR z AND (k OR m) AND p AND q.
Now the Profiler shows full table scans each time. Note: there are indexes on each of the columns in the WHERE clause. Is indexing more useful when a column has fewer unique values?
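For reference, here is the query shape described above written out with explicit parentheses. The table and parameter names are hypothetical; the point is that in T-SQL, AND binds tighter than OR, so the predicate as written groups in a way that may not match the intent:

```sql
-- Hypothetical table name; ~100 million rows, 50-60 columns.
SELECT a, b, c
FROM   dbo.BigTable
WHERE  (x = @x AND y = @y)
    OR (z = @z AND (k = @k OR m = @m) AND p = @p AND q = @q);
-- Without parentheses, "x AND y OR z AND ..." groups as shown above,
-- because AND has higher precedence than OR. An OR at the top level
-- also makes it hard for the optimizer to use a single index seek.
```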
Try looking at the execution plan. Maybe scanning the table is a better plan than using an index. Luis Martin …Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
I want to see if there is a way to avoid full table scans, since some queries take 120 minutes (the table is close to 100 million rows).
Even if you have indexes, SQL Server may still select a table scan, i.e. if it thinks the scan will be faster than seeking into multiple indexes. Based on the statistics, if many records are expected to match (and the fields in the SELECT clause are not covered by the indexes), SQL Server may choose the scan.
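One way to make an index cover a query is the INCLUDE clause (available in SQL Server 2005 and later). A minimal sketch, with hypothetical table and column names taken from the query shape in the original post:

```sql
-- Key columns (x, y) support the seek on the WHERE clause;
-- the INCLUDE columns (a, b, c) satisfy the SELECT list at the
-- leaf level, so no lookup back to the base table is needed.
CREATE NONCLUSTERED INDEX IX_BigTable_x_y
ON dbo.BigTable (x, y)
INCLUDE (a, b, c);
```

INCLUDE columns add no key-ordering cost, so this is usually cheaper than putting all selected columns into the index key.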
I suggest two methods: (1) check whether all the fields in the SELECT and WHERE clauses are covered by the indexes; (2) if you are doing any SUM-type calculation, keep this summarized information in another table and update that new table during every insert, update and delete. G.R. Preethiviraj Kulasingham
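The summary-table idea above could be sketched with triggers; all table, column, and trigger names here are hypothetical, and this assumes the aggregate of interest is SUM(p) grouped by x:

```sql
CREATE TABLE dbo.BigTableSummary (
    x      int    NOT NULL PRIMARY KEY,
    SumOfP bigint NOT NULL,
    RowCnt int    NOT NULL
);
GO
-- INSERT trigger shown; UPDATE and DELETE triggers would adjust
-- the running totals in the same fashion.
CREATE TRIGGER trg_BigTable_Insert ON dbo.BigTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Adjust groups that already exist in the summary table
    UPDATE s
    SET    s.SumOfP = s.SumOfP + i.SumOfP,
           s.RowCnt = s.RowCnt + i.Cnt
    FROM   dbo.BigTableSummary AS s
    JOIN  (SELECT x, SUM(p) AS SumOfP, COUNT(*) AS Cnt
           FROM inserted GROUP BY x) AS i
      ON   s.x = i.x;
    -- Add groups not seen before
    INSERT dbo.BigTableSummary (x, SumOfP, RowCnt)
    SELECT n.x, SUM(n.p), COUNT(*)
    FROM   inserted AS n
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.BigTableSummary AS s WHERE s.x = n.x)
    GROUP BY n.x;
END;
```

The trade-off is extra work on every write in exchange for reading a tiny table instead of aggregating 100 million rows.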
Hi ya, try creating a covering index on the entire set of columns, starting with the most selective column in the WHERE clause down to the least selective, then followed by any remaining columns in the SELECT clause, ORDER BY, GROUP BY, etc. If you do any aggregation, then consider using an indexed view, i.e. create the view and then create a unique clustered index on the view. Cheers
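The indexed-view suggestion above might look like this sketch (hypothetical names again, and assuming column p is declared NOT NULL, since indexed views restrict aggregates over nullable columns). SQL Server requires SCHEMABINDING and a COUNT_BIG(*) column in a view that will carry a unique clustered index:

```sql
CREATE VIEW dbo.vBigTableAgg
WITH SCHEMABINDING
AS
SELECT x, y,
       SUM(p)       AS SumOfP,
       COUNT_BIG(*) AS RowCnt   -- required for an indexed view with GROUP BY
FROM   dbo.BigTable
GROUP BY x, y;
GO
-- Materializes the aggregation; the optimizer can answer matching
-- aggregate queries from this index instead of scanning the base table.
CREATE UNIQUE CLUSTERED INDEX IX_vBigTableAgg
ON dbo.vBigTableAgg (x, y);
```

On non-Enterprise editions the query may need the NOEXPAND hint to use the view's index.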