SQL Server Performance Forum – Threads Archive

Need help for large display table

The current software package has over-normalized data, causing many queries to perform more than 3 joins (being modest). Therefore, we created larger denormalized tables to target the longer-running queries. This works fine and has improved performance, but I am a bit skeptical. My skepticism is that there are 4-5 columns that are part of the clustered index, and the columns involved have a legitimate reason for being part of the index. There are also about 4-5 nonclustered indexes as well. Is there a better way to index? We are using triggers to populate the table; could this degrade performance? The number of data pages is in the tens of thousands, so should I recreate the tables and indexes? Need a new set of eyes; my head is spinning.
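(For context, a minimal sketch of the kind of trigger-maintained denormalized table being described; every table, column, and trigger name below is invented purely for illustration.)

CREATE TRIGGER trg_Orders_Insert ON Orders
AFTER INSERT
AS
BEGIN
    -- Copy the newly inserted rows into the wide reporting table,
    -- resolving the lookup join once here so report queries don't have to.
    INSERT INTO OrderSummary (OrderID, CustomerID, CustomerName, OrderDate, TotalAmount)
    SELECT i.OrderID, i.CustomerID, c.CustomerName, i.OrderDate, i.TotalAmount
    FROM inserted i
    INNER JOIN Customers c ON c.CustomerID = i.CustomerID
END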
If you are using SQL Server 2000 Enterprise Edition, you can use Indexed Views to accomplish the same goal with a lot less trouble. You can also consider using SQL Server's Analysis Services to build a cube of this data for reporting needs. This can be done with any version of SQL Server 7.0 or 2000. If these are not options, consider the following:
–Triggers will affect performance. How much they affect performance is hard for me to predict, but there is definitely a performance hit. This hit may or may not be acceptable to you.
–DTS can be used to move rows to a new table on a scheduled basis instead of using triggers.
–Based on what you have described about indexes, I can't really make any recommendations, but have you run a Profiler trace and run the Index Tuning Wizard to see what it recommends for your denormalized table?
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
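(A minimal sketch of the indexed-view approach mentioned above; the view, table, and column names are placeholders. The view must be schema-bound, reference its base tables with two-part names, and be created with the required SET options on before the clustered index can be built.)

CREATE VIEW dbo.vOrderSummary
WITH SCHEMABINDING
AS
SELECT o.OrderID, o.CustomerID, c.CustomerName, o.OrderDate, o.TotalAmount
FROM dbo.Orders o
INNER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
GO
-- The unique clustered index is what materializes (and automatically maintains) the view's rows.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderSummary ON dbo.vOrderSummary (OrderID)
GO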
Thanks for the response. I personally believe that because the clustered index has many columns forming a composite, it makes the insert in the trigger slow. Am I wrong in this assumption? And if so, can I simply reduce the number of columns in the clustered index to 1 and then create nonclustered indexes, or covering indexes for that matter?
Yes, this is valid reasoning. Also note that whilst your clustered index might not be a huge problem (I'd err towards a single autoincrement column, but I don't know all the details), if you were to keep it you'd certainly want to make sure that it is sensible in terms of how it impacts the inserts. If the clustered index is causing new records to be inserted anywhere other than the end of the table, this could be dragging performance down.
Appreciate it, chappy. I have another question. Currently in our test system, the offending index which I am concerned about has 14357 dpages with a rowcnt of 33710. Maybe I am missing something, but this ratio does not seem logical. Should the fill factor be lowered to provide more room on each page? Right now the fill factor is set to the default of 0. Please advise. FYI, there are 5 nonclustered indexes.
The clustered index is still a composite but on 2 int columns. -thesauce
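(For reference, the dpages and rowcnt figures quoted above come from sysindexes; a quick way to pull them for a given table, with the table name as a placeholder:)

-- dpages and rowcnt for the table's heap or clustered index (indid 0 or 1).
SELECT name, indid, dpages, rowcnt
FROM sysindexes
WHERE id = OBJECT_ID('OrderSummary')   -- placeholder table name
  AND indid IN (0, 1)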
Sorry, my previous post is a bit misleading. I meant to say that yes, you could reduce the clustered index columns to 1 and create nonclustered indexes, and it would likely improve performance. What I didn't mean to imply is that a wider clustered index will *necessarily* make the triggers slow, unless the index itself was causing non-appending insertions.
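(A minimal sketch of that change; the index, table, and column names are invented, and the key of the covering index would depend on the actual report queries.)

-- Replace the wide composite clustered index with a single narrow column...
DROP INDEX OrderSummary.IX_OrderSummary_Composite
CREATE UNIQUE CLUSTERED INDEX IX_OrderSummary_ID ON OrderSummary (SummaryID)

-- ...and cover the heavy report query with a composite nonclustered index
-- whose key contains the columns that query filters on and selects.
CREATE NONCLUSTERED INDEX IX_OrderSummary_Cust_Date
ON OrderSummary (CustomerID, OrderDate, TotalAmount)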
I agree with Chappy that a wide clustered index will be slower than a narrower clustered index. But how you index will be based on the types of queries you run. The more indexes you add, the slower any trigger will be. If the clustered index is on an incrementing primary key, then a fillfactor of 0 may be OK, but if your clustered index is not built on an incrementing primary key, then you need a fillfactor in the area of 80 or higher. The size of the fillfactor depends on how many inserts there are, plus how often you rebuild the indexes. The more often you rebuild them (such as once a night), the smaller the fillfactor can be. ——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
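(A minimal sketch of that kind of nightly rebuild with an explicit fillfactor; the table name is a placeholder.)

-- Rebuild every index on the table ('' = all indexes) with an 80% fillfactor,
-- leaving roughly 20% free space per page to absorb inserts between rebuilds.
DBCC DBREINDEX ('OrderSummary', '', 80)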
Found part of the issue. After running a few handy DBCC commands, I discovered that the majority of the indexes have never been defragmented. All of the current indexes, except the ones that have been dropped and recreated, have a scan density of less than 30%. So you can imagine what the rest of the numbers look like. And this database has been running for about a year now. Which brings me to my question: why do large corporations refuse to hire DBAs? Plus, I work for an IT consulting firm and have had this client for 6 months. As a consultant my advice is not cheap, so when I say 'You desperately need a DBA', it might be a good idea to at least contract one out for a month and put the world back into order. Sorry folks, I will get off my soap box. I'm just frustrated. But I'm not one to complain.
-theSauce
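(For anyone hitting the same thing, roughly how those numbers are checked and addressed on this version of the product; the database, table, and index names below are placeholders.)

-- Scan density and logical fragmentation for every index on the table.
DBCC SHOWCONTIG ('OrderSummary') WITH ALL_INDEXES

-- Online defragmentation of a single index (database, table, index), which
-- can run while the system is in use; a full DBCC DBREINDEX does a more
-- thorough job and resets the fillfactor, but takes locks while it runs.
DBCC INDEXDEFRAG (MyDatabase, OrderSummary, IX_OrderSummary_Cust_Date)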
It's an interesting point. I think many people do not always comprehend that the performance of a SQL Server can be improved so much by tweaks and changes. Many developers seem to presume that if the server is up and running and doing its job, then most of the performance improvements must come from the developers' code. I'm primarily from a software development background, and I used to have this mindset too. It's only after studying SQL Server out of interest for the past couple of years that I've realised how wrong people can be!