Question

We have started experiencing a performance problem with a table that has now grown to over 600,000 rows. Performance has become unsatisfactory when accessing this table. I haven’t ever run DBCC INDEXDEFRAG on this table. If I run it now, will performance improve? Is the performance problem I am having related in any way to the SQL Server log files? Do I need to clear the SQL Server log files?

Answer

The slowness you are experiencing with this table can have a number of different causes. Investigate each of the following possibilities to narrow down your problem.

  1. Indexes do need to be rebuilt on a regular basis to keep them running as efficiently as they can. If you have never rebuilt the indexes on this table, that should be your first step toward resolving your performance problem. I rebuild the indexes on every table in each of my databases on all my servers on a weekly basis. A rebuild can take a long time and use up valuable server resources, so perform your index rebuilds at a time when your server is not very busy. Generally, but not always, performance will improve after your indexes have been rebuilt; a sketch of the commands involved follows this list. For more information on how to do this, see this webpage.
  2. Perhaps the data has changed enough that the current indexing scheme is no longer the best one. Capture a Profiler trace of the queries run against the database and run the Index Tuning Wizard (SQL Server 2000) or Database Engine Tuning Advisor (SQL Server 2005 and later) against the trace to see if it recommends any changes in indexing.
  3. What is the Buffer Cache Hit Ratio running at? Perhaps the table has grown so large that you no longer have enough RAM to keep the frequently accessed data in memory. Ideally, the ratio should stay close to 100%. If it is consistently running below 90%, you likely need to add more RAM to your server; a query for checking the ratio follows this list.
  4. Perhaps your disk I/O subsystem isn’t big enough to handle the extra data. How busy are your disk arrays? Check the read and write latencies on your database files (see the I/O query after this list); you may need to boost the performance of your disk array.
  5. Perhaps your database’s design is not efficient. When there isn’t much data in a database, many design problems don’t show up; but once lots of data is added, poor database design can rear its ugly head and cause performance problems. Review your database design to see if it is properly normalized.
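
As a sketch of what a rebuild can look like, assuming a placeholder table named dbo.YourLargeTable, a placeholder index IX_YourIndex, and a database named YourDatabase (substitute your own object names):

```sql
-- SQL Server 2005 and later: rebuild every index on the table.
ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;

-- SQL Server 2000 equivalents:
DBCC DBREINDEX ('dbo.YourLargeTable');  -- full offline rebuild of all indexes
DBCC INDEXDEFRAG (YourDatabase, 'dbo.YourLargeTable', IX_YourIndex);  -- online defrag of one index
```

Note that DBCC INDEXDEFRAG (the command you asked about) runs online and only compacts and reorders the existing pages; a full rebuild generally does a more thorough job, at the cost of heavier locking, which is why it is best run off-hours.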
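If you are on SQL Server 2005 or later, one way to read the Buffer Cache Hit Ratio from inside the server (it is also visible in System Monitor under the SQL Server:Buffer Manager object) is the sys.dm_os_performance_counters view. This is a minimal sketch; note that the raw counter has to be divided by its companion “base” counter to get a percentage:

```sql
SELECT 100.0 * hit.cntr_value / base.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS hit
JOIN sys.dm_os_performance_counters AS base
    ON base.object_name = hit.object_name
WHERE hit.counter_name = 'Buffer cache hit ratio'
  AND base.counter_name = 'Buffer cache hit ratio base'
  AND hit.object_name LIKE '%Buffer Manager%';
```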
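Likewise, on SQL Server 2005 and later you can get a quick read on disk latency per database file from sys.dm_io_virtual_file_stats (on SQL Server 2000 the rough equivalent is the fn_virtualfilestats function). A sketch:

```sql
-- Average I/O stall in milliseconds per read and per write for each
-- database file since the instance last started. Sustained high values
-- point to a disk subsystem that cannot keep up.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_stall_ms DESC;
```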

The SQL Server log files are truncated automatically each time you back them up. I am assuming that you are backing them up regularly (say, once an hour or so). If not, then you need to start doing so.
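
A minimal sketch of such a log backup, assuming a database named YourDatabase in the FULL recovery model and a hypothetical backup path (in practice you would schedule a statement like this in a SQL Server Agent job):

```sql
-- Back up the transaction log. Once the backup completes, SQL Server can
-- truncate (mark as reusable) the inactive portion of the log.
-- YourDatabase and the disk path are placeholders.
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_log.trn';
```

Keep in mind that truncation frees space inside the log for reuse; it does not shrink the physical log file on disk.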

Generally speaking, whether or not the SQL Server log files are truncated should not affect the performance of a particular table as you have described in your question.
