SQL Server Performance

Performance Problem with Primary Key Index

Discussion in 'Performance Tuning for DBAs' started by gryphonsclaw, May 13, 2006.

  1. gryphonsclaw New Member

    I have a clustered primary key index on a table (4-5 million rows) which is one of the bigger tables in our DB. The key field is incremented then inserted and never changes. Randomly, the queries that use this index will start taking minutes to run instead of seconds, causing the application to deadlock and roll back.

    This has occurred twice now in the last 3 weeks. To fix, I have to drop and recreate the index (Alter Table Drop then Alter Table add). I run the Database Maintenance scripts every 3 days which should be doing a REINDEX on the index.

    I don't think it is a disk issue: if I back up the database and restore it to a test server, the problem still exists. I have also taken everyone off the system and tested with just one user, and it still occurs. The table is in its own filegroup, which has plenty of space.

    I guess my questions are: is the maintenance plan really working? Running DBCC SHOWCONTIG makes me think it is, since fragmentation is low.

    Should I just run a REINDEX script and forget the Maintenance plan?

    Or do I have another problem? It's always the same index.

    Thank you for any help.

    Thanx
    Adam
  2. mmarovic Active Member

    quote:I have a primary key cluster index on a table (4-5 million rows) which is one of the bigger tables in our DB. Key field is incremented then inserted and never changes.
    Is it an identity column?

    quote:
    Randomly the queries that use this index will start taking minutes to run instead of seconds, causing the application to deadlock and rollback.
    A few million rows is not too many, so (if an identity column is the PK) there is no reason for that. Are you selecting max(key column) and then using the next value for the insert?
    A deadlock means at least two resources are accessed by more than one transaction: each transaction locks a resource the other needs, and neither will release its lock until it gets the resource held by the other.
    That can also happen with only one resource (e.g. a table) involved, in the case of a parallel execution plan. That is a bug in MS SQL Server dating back to MSSQL 7.0, and it keeps recurring with each release and service pack. I don't know if it is still the case with the latest service packs for SQL Server 2000 and 2005.


    quote:This has occurred twice now in the last 3 weeks. To fix, I have to drop and recreate the index (Alter Table Drop then Alter Table add). I run the Database Maintenance scripts every 3 days which should be doing a REINDEX on the index.

    Have you checked the fragmentation of the index? Typically, an index on a column whose value grows with each insert, with no updates, should not become fragmented.
  3. gryphonsclaw New Member

    It is not an identity column; the accounting app is on SQL 2000 but runs in 6.5 compatibility mode. It has a stored proc that keeps track of keys in a key lookup table. It selects the row in the lookup table that corresponds to the table needing a new key, adds one to it, updates the row in the lookup table, and returns the new value to the stored proc that called it. This new value is then used as the key for the insert.
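    A minimal sketch of that pattern (the table, column, and proc names here are my guesses, not the vendor's actual schema):

    ```sql
    -- Hypothetical key lookup table: one row per user table that needs keys.
    CREATE TABLE KeyLookup (
        TableName varchar(128) NOT NULL PRIMARY KEY,
        NextKey   int          NOT NULL
    )
    GO

    CREATE PROCEDURE GetNextKey
        @TableName varchar(128),
        @NextKey   int OUTPUT
    AS
        -- Increment and read back in one atomic statement, so two concurrent
        -- callers can never be handed the same key value.
        UPDATE KeyLookup
        SET @NextKey = NextKey = NextKey + 1
        WHERE TableName = @TableName
    GO
    ```

    Every insert on every table funnels through one row of this lookup table, so under load the lock on that row becomes a serialization point.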

    I could turn off parallel execution; I'm not sure what kind of a performance hit I would take. I do get some blocking and deadlocking on the server across the board during busy times, some due to this key lookup table, some due to other things. If turning off parallel execution will fix these things then it would be a good trade-off.

    I should have checked the fragmentation while the deadlocks were going on but I put on my emergency firefighter hat and didn't think to do that. I will if it occurs again.

    I narrowed it down to one query that is just horribly written. It is an update statement with a sub-select, inside a trigger that fires when inserting a new row. I wrote the manufacturer and they told me it was my hardware, which is pretty beefy in disks, RAM, and CPU for what we do; they are about worthless. One thing about the deadlocks and the slow query: they go away when I drop and recreate the index. Even if I restart the service or reboot the machine, they do not go away. It's almost like the execution plan gets jacked up.


    Thanx
    Adam
  4. joechang New Member

    try just running update statistics on the key index only, not the entire table, not reindexing

    this is related to a problem MS is aware of, where the statistics only contain a range up to the max value at the time the statistics were updated.
    if rows are added, and the sproc compiles with a parameter value higher than the max at the time stats were last updated,
    you could get a messed up execution plan if parameter sniffing is involved

    find out if this proc compiles or recompiles; if so, consider declaring a variable inside the proc, set the variable equal to the input parameter, then use the variable in the SARG instead of the parameter
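    In SQL 2000 syntax, those two suggestions look roughly like this (the index name `PK_tsoShipLine` and the proc are placeholders I made up for illustration):

    ```sql
    -- Refresh statistics on just the primary key index, not the whole table.
    UPDATE STATISTICS tsoShipLine PK_tsoShipLine
    GO

    -- Parameter-sniffing workaround: copy the parameter into a local variable
    -- and use the variable in the search argument. The optimizer can no longer
    -- sniff the (stale, out-of-histogram) parameter value, so it builds a
    -- generic plan from density information instead.
    CREATE PROCEDURE GetShipLine
        @ShipLineKey int
    AS
        DECLARE @Key int
        SET @Key = @ShipLineKey

        SELECT ShipLineKey, ShipKey
        FROM tsoShipLine
        WHERE ShipLineKey = @Key
    GO
    ```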
  5. Twan New Member

    quote:find out if this proc compiles or recompiles, if so, consider declaring a variable inside the proc, set the variable equal to the input parameter, then use the variable in the SARG instead of the parameter

    oh man Joe, now that's a blast from the past... I'd forgotten all about that trick! ;-)

    Cheers
    Twan
  6. gryphonsclaw New Member

    Thanx, I will try that out.


    Thanx
    Adam
  7. mmarovic Active Member

    quote:Originally posted by gryphonsclaw

    It is not an identity column; the accounting app is on SQL 2000 but runs in 6.5 compatibility mode. It has a stored proc that keeps track of keys in a key lookup table. It selects the row in the lookup table that corresponds to the table needing a new key, adds one to it, updates the row in the lookup table, and returns the new value to the stored proc that called it. This new value is then used as the key for the insert.
    I guess that's their way of migrating the application from Oracle: they created an Oracle sequence manually. That was not the smartest way to do it. It makes the sequence table a bottleneck and forces row-by-row inserts on all tables. They were not even able to keep the original insert syntax, so I don't really understand why they did it. I can't believe they didn't learn about the identity property before coding the app for SQL Server.

    quote:I could turn off parallel execution. Not sure what kind of a performance hit I would take. I do get some blocking and deadlocking on the server across the board during busy times. Some due to this key lookup table, some due to other things. If turning off parallel execution will fix these things then it would be a good trade off.
    Unless you have found that parallelism is responsible for the behaviour described, I wouldn't do it. I think Joe identified the real problem, and his suggestion should solve it.

    quote:I narrowed it down to one query that is just horribly written. It is an update statement with a sub select that is in a trigger when inserting a new row. I wrote the manufacturer and they told me it was my hardware which I am pretty beefed with disks RAM and CPU for what we do, they are about worthless.
    Oh, man. I feel your pain. :(
  8. gryphonsclaw New Member

    Here's a weird question. On the primary key index there don't appear to be any statistics. Should I have statistics on it, or, since it is a key field, does SQL Server know it's unique and not need statistics?
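    One way to check (the index name is a guess; run these in Query Analyzer):

    ```sql
    -- List every statistics object on the table, including index statistics.
    EXEC sp_helpstats 'tsoShipLine', 'ALL'

    -- If statistics exist for the index, this shows the histogram and the
    -- date they were last updated.
    DBCC SHOW_STATISTICS ('tsoShipLine', PK_tsoShipLine)
    ```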

    Thanx
    Adam
  9. satya Moderator

    When DBCC DBREINDEX is executed the stats are updated anyway; for further confirmation you can run UPDATE STATISTICS on the table and then look at the execution plan.

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  10. gryphonsclaw New Member

    I had the issue again today and was able to delve in more. There is a query in the stored procedure that is switching execution plans to use the primary key on the table when there is a better index to use. The only variable passed in is a character variable which is always an 'i'. Funny thing is, this variable is not used anywhere in the stored proc. I am not sure if Joe's trick will work now.

    I updated statistics on the key field, but that did not change the execution plan back. I actually had to create them for the key field. I did this in Query Analyzer by right-clicking the table and choosing Manage Statistics; maybe I am missing something by doing it this way. I also update stats each night, and this problem seems to come up about a week apart or so. I fixed the problem by rebuilding indexes again.

    What can I do to proactively prevent this problem? Can I throw index hints into the query to force it to use the right indexes? Should I rebuild statistics and indexes more often?

    Also, I am running SP3a and not SP4 due to the accounting system specs. Is this the issue?

    I am going to paste the code below in case it helps to identify anything.

    update SLD1
    set FreightAmt = coalesce(
            (
                select sum(twlPackageDetl.FreightAmt)
                from tsoShipLineDist SLD2 (NOLOCK)
                inner join tsoShipLine (NOLOCK) on tsoShipLine.ShipLineKey = SLD2.ShipLineKey
                inner join twlPackageDetl (NOLOCK) on twlPackageDetl.ShipLineKey = tsoShipLine.ShipLineKey
                where SLD2.ShipLineDistKey = SLD1.ShipLineDistKey
            ), FreightAmt) --2006-03-22 - was 0
    from tsoShipLineDist SLD1
    inner join tsoShipLine with (NOLOCK) on tsoShipLine.ShipLineKey = SLD1.ShipLineKey
    where tsoShipLine.ShipKey in (select ShipKey from #twlShipmCharges)
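    If an index hint turns out to be the only option, it attaches to the table reference in the query. A sketch (the index name is made up, and a hint will also block better plans in the future):

    ```sql
    -- Force a specific index on the outer table reference.
    update SLD1
    set FreightAmt = FreightAmt  -- (same SET clause as in the query above)
    from tsoShipLineDist SLD1 with (index(IX_ShipLineDist_ShipLineKey))
    inner join tsoShipLine with (NOLOCK) on tsoShipLine.ShipLineKey = SLD1.ShipLineKey
    where tsoShipLine.ShipKey in (select ShipKey from #twlShipmCharges)
    ```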

    Thanx
    Adam
  11. cmdr_skywalker New Member

    When rebuilding the index, what fillfactor are you using? If you're performing a lot of inserts, try setting the fillfactor to a lower value (10-50).

    May the Almighty God bless us all!
    www.empoweredinformation.com
  12. mmarovic Active Member

    quote:Originally posted by cmdr_skywalker

    When rebuilding the index, what fillfactor are you using? If you're performing a lot of inserts, try setting the fillfactor to a lower value (10-50).

    May the Almighty God bless us all!
    www.empoweredinformation.com
    It is not a good idea for ever-incrementing primary keys.
  13. gryphonsclaw New Member

    I'm using 100%, since the key is sequential: each new row gets the next value.

    Thanx
    Adam
  14. cmdr_skywalker New Member

    It works for me on huge table where I have to perform massive insert.
    BOL:
    "FillFactor - Specifies a percentage that indicates how full SQL Server should make the leaf level of each index page during index creation. When an index page fills up, SQL Server must take time to split the index page to make room for new rows, which is quite expensive...
    Note An explicit FILLFACTOR setting applies only when the index is first created. SQL Server does not dynamically keep the specified percentage of empty space in the pages...
    ...An INSERT or UPDATE made after the creation of an index with a 100 percent FILLFACTOR causes page splits for each INSERT and possibly each UPDATE...
    Important Creating a clustered index with a FILLFACTOR affects the amount of storage space the data occupies because SQL Server redistributes the data when it creates the clustered index..."
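    For reference, a fillfactor is applied only when the index is built or rebuilt (SQL 2000 syntax; the table and index names are placeholders):

    ```sql
    -- Rebuild one index, leaving 30% free space on each leaf page
    -- (fillfactor 70 means pages are filled 70% full).
    DBCC DBREINDEX ('tsoShipLineDist', 'IX_ShipLineKey', 70)

    -- Or specify it when the index is created.
    CREATE INDEX IX_ShipLineKey
    ON tsoShipLineDist (ShipLineKey)
    WITH FILLFACTOR = 70
    ```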



    May the Almighty God bless us all!
    www.empoweredinformation.com
  15. gryphonsclaw New Member

    Aren't rows inserted with an incremental key automatically put at the end of the current index? Isn't fillfactor only for inserts into the middle? I.e., if you have 1, 2, 5 as keys and want to insert 3, you need a fillfactor or you cause a page split; but if you have 1, 2, 3, 4 and insert the next value, 5, it just goes in at the end of the index.

    Would this change the execution plan?

    Thanx
    Adam
  16. mmarovic Active Member

    quote:Originally posted by gryphonsclaw

    Aren't rows inserted with an incremental key automatically put at the end of the current index? Isn't fillfactor only for inserts into the middle? I.e., if you have 1, 2, 5 as keys and want to insert 3, you need a fillfactor or you cause a page split; but if you have 1, 2, 3, 4 and insert the next value, 5, it just goes in at the end of the index.
    All you said is true, so there is no need for adjusting the fill-factor for your index.


    quote:
    Would this change the execution plan?
    Fill factor and fragmentation are not taken into consideration when query costs are calculated, so they don't affect the query optimizer's choice of execution plan.
  17. mmarovic Active Member

    quote:Originally posted by gryphonsclaw

    I'm using 100% for the key is a sequential key. So each new row gets the next value.

    Thanx
    Adam
    100% is the best for sure if an index on a sequential key is non-clustered, or if it is clustered but rows are not updated after the initial insert. However, if there are a lot of updates, and potentially large varchar columns are updated so that column value sizes change significantly, that can force page splits too. I usually use the default fill factor of 0 for sequential clustered keys.
  18. cmdr_skywalker New Member

    I am not sure you should fill up an index completely when you could allocate enough space for later use (that is, a lower fillfactor fills only x percent of each page). Consider that SQL Server sits on top of the OS: a SQL Server disk write does not specify contiguous allocation of the actual disk sectors unless it is one continuous, live transaction. What I am saying is that if SQL requests 100 sectors, it will most likely get 100 contiguous sectors for the current thread (or transaction). However, between inserts, the OS cannot guarantee contiguous sectors unless they are already allocated (when we were designing a DBMS, not SQL Server though :), for instance, READ 100K FROM SQLFILE STARTING OFFSET 0AH CHUNK 100K, where SQLFile already has x% free space) or no other process (or thread) requests such I/O. At any rate, it was better to leave some space for future inserts than to use it all at once. But then again, you may have a different workload; it's just something to consider though :).

    May the Almighty God bless us all!
    www.empoweredinformation.com
