How to Select Indexes for Your SQL Server Tables
Step Two: What to Do Once You Have Gathered the Necessary Information
Actions for Key Tables
For static tables (tables that rarely, if ever, change), you can be liberal with the number of indexes you add. As mentioned earlier, too many indexes can degrade the performance of highly transactional tables, but this does not apply to tables whose data will not change. The only real cost is disk space. Set the fill factor of every index on a static table to 100 to pack pages fully and minimize disk I/O for even better performance.
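As a sketch, a fill factor of 100 can be set when the index is created (the table and column names here are hypothetical, standing in for any static lookup table):

```sql
-- Hypothetical static lookup table; dbo.Country and CountryCode are assumed names.
-- FILLFACTOR = 100 packs each leaf page completely, since no inserts or
-- updates will ever cause page splits on a static table.
CREATE NONCLUSTERED INDEX IX_Country_CountryCode
    ON dbo.Country (CountryCode)
    WITH FILLFACTOR = 100
```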
For highly transactional tables, try to limit the number of indexes. Always keep in mind that every non-clustered index contains the clustered index key. Because of this, limit the number of columns in your clustered index to keep the non-clustered indexes small. Any index on a busy transactional table has to be highly justifiable. Choose the fill factor for these indexes with caution (usually 80 to 90%) in order to avoid potential page splitting.
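A lower fill factor is specified the same way; this sketch assumes a hypothetical busy OLTP table:

```sql
-- Hypothetical transactional table; dbo.OrderDetail and OrderId are assumed names.
-- FILLFACTOR = 85 leaves roughly 15% free space on each leaf page to absorb
-- new rows before a page split becomes necessary.
CREATE NONCLUSTERED INDEX IX_OrderDetail_OrderId
    ON dbo.OrderDetail (OrderId)
    WITH FILLFACTOR = 85
```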
Tables used heavily in stored procedures or embedded SQL play an important role over the application's lifetime, as they are called most often, so they require special attention. The key is to look at how these tables are being accessed in queries, and to eliminate scans by converting them into seeks. Watch the logical I/O reported by SET STATISTICS IO ON to determine which queries access the most data; less logical I/O is better than more. Choose the clustered index with caution, and choose a higher fill factor depending on how transactional the table is.
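For example, wrapping a query in SET STATISTICS IO produces per-table read counts on the Messages tab (the Orders query here is the one used later in this article):

```sql
SET STATISTICS IO ON
SELECT ShippedDate, ShipCity FROM dbo.Orders WHERE ShippedDate > '8/6/1996'
SET STATISTICS IO OFF
-- The Messages tab then reports, per table, something like:
--   Table 'Orders'. Scan count 1, logical reads ..., physical reads ...
-- Compare logical reads before and after an index change to measure the gain.
```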
For tables whose index size is greater than their data size, the table probably carries a lot of indexes, so review them and make sure each one's existence is useful and justified.
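One way to compare the two figures is sp_spaceused, run here against a table from the article's own examples:

```sql
-- Reports reserved, data, index_size, and unused space for one table.
EXEC sp_spaceused 'dbo.Orders'
-- If index_size greatly exceeds data, review whether every index on the
-- table earns its keep.
```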
For the top 10 or 15 largest tables, keep their size in mind when creating indexes, as the indexes themselves will also be large. Also check whether each table is static or non-static, which is helpful information when deciding which columns need to be indexed.
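A quick, hedged way to produce the size list is to run sp_spaceused for every table via sp_MSforeachtable (an undocumented but widely available system procedure), then sort the output by reserved size:

```sql
-- Refresh the space-usage counters for the current database first.
DBCC UPDATEUSAGE (0)
-- '?' is replaced with each table name in turn.
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''
```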
For the most frequently called stored procedures and embedded SQL, review their query plans and the logical page I/O they use.
SQL Profiler is a very good tool. It traces the calls being executed in SQL Server at any given point in time, along with their execution time, I/O reads, user logins, the executing SQL statement, and so on. It can also be used as a debugging tool. Analyzing a Profiler trace is important for identifying slow-running queries: set a filter of duration > 100 ms to see only the queries that take more than 100 milliseconds to execute.
Using a Covering Index: A Non-clustered Index Uses the Clustered Index Key as Its Row Locator
You can leverage the fact that non-clustered indexes store clustered index keys as their row locators: a non-clustered index can behave like a clustered index if it contains all of the columns referenced in the SELECT list, WHERE clause, and JOIN conditions of a query.
In the Orders table of the Northwind database, there is currently a non-clustered index on the ShippedDate column.
Try running the following:
SELECT ShippedDate, ShipCity FROM Orders WHERE ShippedDate > '8/6/1996'
The query plan for this statement shows a Clustered Index Scan.
Now add the column shipcity to the non-clustered index on ShippedDate.
CREATE INDEX [ShippedDate] ON [dbo].[Orders] ([ShippedDate], [ShipCity]) WITH DROP_EXISTING
Now run the query again. This time, the query plan shows an Index Seek.
This magic happens because all of the columns referenced in the SELECT and WHERE clauses (ShippedDate and ShipCity) are part of the index.
In the Titles table of the Pubs database, check out the following execution plan for this query:
SELECT title_id, title FROM titles WHERE title LIKE 't%'
Notice that the execution plan shows an Index Seek, not a Bookmark Lookup (which is what you usually find with a non-clustered index). This is because the non-clustered index on the title column contains the clustered index key title_id, and this query references only title_id and title in its SELECT and WHERE clauses.
Now the Hard Part Starts
By following the simple steps outlined here, you can quickly gather useful information to help you improve the performance of your database. Once you have the data in front of you, put it to work. These minor inputs will yield the majority of your results.