SQL Server Performance Forum – Threads Archive
Improving Updates

I have a table with a little over 1 million records in it. This table has 42 indexes, though, and unfortunately it seems that nearly all are necessary because the table is very wide (174 columns) and used by many users for different things. A simple update statement like the following often takes around 30 seconds to execute:

UPDATE docudetail SET d_reactid = '' WHERE d_reactid = 'FO0205'

The d_reactid column has an index, which I can see is used in the execution plan of this query. However, 86% of the cost is taken up by the Clustered Index Update. Index fragmentation is currently less than 5%. Is there any way to increase the performance of these kinds of queries without dropping some of the indexes or changing the table layout?
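For reference, the index count and fragmentation figures mentioned above can be checked with the standard catalog views and DMVs. This is a sketch assuming the table lives in dbo and a SQL Server 2005+ instance (sys.dm_db_index_physical_stats does not exist on SQL Server 2000):

```sql
-- Count the indexes on the table (the poster reports 42).
SELECT COUNT(*) AS index_count
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.docudetail');

-- Average fragmentation per index (the poster reports under 5%).
SELECT i.name AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.docudetail'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id;
```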
I would guess that the wrong index was chosen to be clustered. This index is on d_reactid, right? Does it cover multiple columns, or just d_reactid? Does the table have an identity field? Is that also the PK, or does it at least have a unique constraint? My suggestion would be to change the current clustered index to a nonclustered one, to add a unique identity column if missing, and to make this identity column the clustered index. You’ll have to rebuild the other indexes.
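A hedged sketch of that suggestion follows. The index and column names here are assumptions, and note that rebuilding the clustered index on a 42-index table rebuilds every nonclustered index as well, so this belongs in a maintenance window:

```sql
-- 1. Drop the existing clustered index (name assumed for illustration).
DROP INDEX docudetail.IX_docudetail_reactid;

-- 2. Recreate it as a nonclustered index on d_reactid.
CREATE NONCLUSTERED INDEX IX_docudetail_reactid
    ON dbo.docudetail (d_reactid);

-- 3. Add a narrow identity column if the table lacks one
--    (column name d_rowid is hypothetical).
ALTER TABLE dbo.docudetail
    ADD d_rowid INT IDENTITY(1, 1) NOT NULL;

-- 4. Make the identity column the new clustered index.
CREATE UNIQUE CLUSTERED INDEX IX_docudetail_rowid
    ON dbo.docudetail (d_rowid);
```

The idea behind this design is that a narrow, ever-increasing clustering key keeps the nonclustered indexes small (they all carry the clustering key as their row locator) and avoids page splits on insert.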
Run SET STATISTICS TIME ON before doing the update.
I think you will see that very little CPU is used during that 30-second interval. There is something else going on in your system; the relative cost in the plan is not of relevance in this matter.
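That check might look like the following; the gap between CPU time and elapsed time is what points to waiting rather than actual work (the timing numbers shown in the comment are illustrative, not measured):

```sql
SET STATISTICS TIME ON;

UPDATE docudetail SET d_reactid = '' WHERE d_reactid = 'FO0205';

-- The Messages output then reports something like:
--   SQL Server Execution Times:
--     CPU time = 16 ms, elapsed time = 30012 ms.
-- A large gap between CPU time and elapsed time suggests the session
-- spent the interval waiting (e.g. on locks), not executing.
```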
d_reactid has a nonclustered index on just that column. There is another field used as the primary key, called d_docuid. It is not an identity column, though it does have a unique constraint.
quote: "there is something else going on in your system"
I suspect there may be a blocking issue causing this. Profiler often shows unoptimized queries around the time of the slowdowns. I will try running that command, though, to see what happens.
What is probably happening is that some query used a clustered or nonclustered index scan, or some other high-row-count operation, which escalated to a table lock, preventing your update from getting an exclusive lock on the row being updated. In this case, the relative cost in the plan is a non-issue.
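One way to confirm this kind of blocking while it is happening is to query the DMVs. This is a sketch assuming SQL Server 2005 or later; on SQL Server 2000, sp_who2 and the BlkBy column serve the same purpose:

```sql
-- Sessions currently blocked, what they are waiting on,
-- and which session is blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```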
Well, I have some index builds scheduled for tonight for some of those select queries that are possibly causing blocking issues. Now that I think about it, I'll run a Profiler trace tomorrow filtered on TextData LIKE 'update docudetail%' and see if there are a whole bunch of updates occurring very fast that aren't even showing up in my long-process trace. That should help me verify the blocking possibility much more easily. Thanks for the help. It's good to have some reassurance with these things.
//Edit: Yep, this was definitely a blocking issue. All of the updates seem to be running well now.