Design issue | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Design issue

I have a table containing hashes. I have one stored procedure that inserts new hashes, and one stored procedure that detects whether a given hash is already in the table. When the table gets big (100,000,000+ rows) I find that insert performance is terrible with a clustered index on the hashes, and detection performance is terrible without one. Unfortunately, I need both good insert and good detection performance at the same time. What are my options? Thanks, Noam.
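For reference, a minimal sketch of the setup being described; the table, column, and procedure names (and the 20-byte hash length) are assumptions, not the actual schema:

-- A table of fixed-length hashes with a clustered index on the hash value.
CREATE TABLE dbo.Hashes (
    HashValue BINARY(20) NOT NULL   -- e.g. a SHA-1 hash; the length is an assumption
);
GO
CREATE UNIQUE CLUSTERED INDEX IX_Hashes_HashValue ON dbo.Hashes (HashValue);
GO

-- Insert one new hash.
CREATE PROCEDURE dbo.InsertHash
    @Hash BINARY(20)
AS
    INSERT INTO dbo.Hashes (HashValue) VALUES (@Hash);
GO

-- Return 1 if the hash is already present, 0 otherwise.
CREATE PROCEDURE dbo.DetectHash
    @Hash BINARY(20)
AS
    SELECT CASE WHEN EXISTS (SELECT 1 FROM dbo.Hashes WHERE HashValue = @Hash)
                THEN 1 ELSE 0 END AS IsPresent;
GO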
How about dropping the clustered index and recreating it after the insert?
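A rough sketch of that idea, reusing the assumed names above; note the table has no useful index while the load and rebuild run, so detection would have to pause:

-- Drop the clustered index before the bulk insert...
DROP INDEX dbo.Hashes.IX_Hashes_HashValue;
-- ...run the inserts here...
-- ...then rebuild the clustered index afterwards.
CREATE UNIQUE CLUSTERED INDEX IX_Hashes_HashValue ON dbo.Hashes (HashValue);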
Luis Martin
Moderator
SQL-Server-Performance.com
"One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important."
Bertrand Russell
All postings are provided “AS IS” with no warranties for accuracy.
That's not an option. I cannot stop detection. What I'm trying right now is the following: each time the table reaches 5,000,000 rows I create a new table and start inserting into that one. I use a view that UNIONs the tables and work against the view instead of the original table. I hope this will help. I'm going to stress test it soon.
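Roughly what I mean, with assumed table names; each bucket table holds about 5,000,000 rows and keeps its own clustered index on the hash:

CREATE VIEW dbo.AllHashes
AS
    SELECT HashValue FROM dbo.Hashes_001
    UNION ALL
    SELECT HashValue FROM dbo.Hashes_002
    UNION ALL
    SELECT HashValue FROM dbo.Hashes_003;   -- the current, still-small insert target
GO
-- Detection queries the view; inserts go only to the newest table.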
You could use a partitioned view. Kimberly Tripp has some great information on this if you would like to research it further.
MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
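A minimal sketch of a local partitioned view, assuming the hash value itself is used as the partitioning column (splitting on the high bit of the first byte here just for brevity; a real deployment would use more member tables):

-- Each member table covers a distinct hash range, enforced by a CHECK constraint.
CREATE TABLE dbo.Hashes_Low (
    HashValue BINARY(20) NOT NULL
        CONSTRAINT CK_Hashes_Low CHECK (HashValue < 0x8000000000000000000000000000000000000000),
    CONSTRAINT PK_Hashes_Low PRIMARY KEY CLUSTERED (HashValue)
);
CREATE TABLE dbo.Hashes_High (
    HashValue BINARY(20) NOT NULL
        CONSTRAINT CK_Hashes_High CHECK (HashValue >= 0x8000000000000000000000000000000000000000),
    CONSTRAINT PK_Hashes_High PRIMARY KEY CLUSTERED (HashValue)
);
GO

CREATE VIEW dbo.HashesPartitioned
AS
    SELECT HashValue FROM dbo.Hashes_Low
    UNION ALL
    SELECT HashValue FROM dbo.Hashes_High;
GO
-- With the CHECK constraints in place, a point lookup such as
--   SELECT 1 FROM dbo.HashesPartitioned WHERE HashValue = @Hash
-- only has to touch the one member table that can contain that value.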
I suspect the poor performance has nothing to do with the clustered index and has to do with additional space being needed for the database. In particular, if you have specified for it to grow by 10% (SQL’s default), it will take a long time to grow when the db needs more space.
I suggest changing the growth increment to a fixed size, say 50 MB or 100 MB, which should speed up inserts considerably.
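For example (the database and logical file names are placeholders):

-- Switch from percentage growth to a fixed increment so growth events stay predictable.
ALTER DATABASE HashDB
    MODIFY FILE (NAME = HashDB_Data, FILEGROWTH = 100MB);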
Partitioning is a good solution IMO. A partitioned view lowers index b-tree depth, which helps performance; it allows maintenance tasks (including index defragmentation) to be completed within the maintenance window; and it allows simpler archiving: redefine the view so it no longer includes the oldest partition, back up the db with the oldest partition, then truncate the oldest partition so it can be reused.

Korzon, can you provide feedback on the effect of the partitioning you are testing now?

I don't think it is a file growth issue, and I also think Scott's solution may make the situation worse. It will lead to more frequent file expansion and may cause fragmentation. It is better to anticipate the file size in advance and have an archiving process that keeps the db size under control.
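A rough sketch of that archiving step, assuming the view members are named by load period and that each member table sits on its own filegroup (all names are placeholders):

-- 1. Redefine the view so it no longer includes the oldest member table.
ALTER VIEW dbo.AllHashes
AS
    SELECT HashValue FROM dbo.Hashes_2005_02
    UNION ALL
    SELECT HashValue FROM dbo.Hashes_2005_03;
GO

-- 2. Back up the data that was just retired, e.g.
--    BACKUP DATABASE HashDB FILEGROUP = 'FG_2005_01' TO DISK = '...'

-- 3. Reclaim the oldest table so it can be reused for new data.
TRUNCATE TABLE dbo.Hashes_2005_01;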
