UPDATE problem/Disk Queue Length | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

UPDATE problem/Disk Queue Length

Hi All! I have a web site with seasonal traffic, currently in the 'off-season', so my traffic has died down significantly, almost to nothing. The system is a 2.0GHz Pentium 4, 1GB RAM, Win 2K Server also running MS SQL 2000 with all of the latest service packs installed. In the last few days a problem has sprung up where any UPDATE command times out or takes tens of seconds to run, which on occasion causes my ASP scripts to time out. This is a big problem because my login script updates a "lastlogindate" field on the user's login account. Not good to have a login time out! The only thing I've really been able to notice when watching perfmon is that the "Avg. Disk Queue Length" shoots up for the duration of the command and drops when it completes or times out. Restarting SQL Server seems to make the problem go away, but it eventually comes back. This happens fairly frequently throughout the day but at seemingly random times. Then it goes away on its own as well. I've also seen it after a server reboot, with no traffic, so I know that resources should be available. It seems to affect any UPDATE command, while SELECTs seem to be unaffected. Any ideas? Thanks in advance!
Larry
I would check for any fragmentation on the disk. Also run hardware diagnostics for the same. Gaurav
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
I’ll do that ASAP! That was a suspicion of mine, but I figured if it was a fragmentation problem, then the UPDATE problem would not be intermittent.
Did you update statistics as part of your normal maintenance plan? Luis Martin
Moderator
I do a Reorganize index and data as part of the maintenance plan. With this option selected, the update statistics option is greyed out. I assume this means it's not needed with the option I've chosen. I did a disk defrag and an index defrag. So far, I haven't seen the problem again. I'll have to give it a few more days before I'm confident that the problem is solved. If anyone else has any ideas about this, please let me know. Thanks!
Larry
Happened AGAIN! Both the Avg. Disk Queue Length and the Avg. Disk Write Queue Length shot up for the duration of the UPDATE command attempt! Any ideas?!?
The number of page reads and writes that SQL Server performs can be monitored using the SQL Server: Buffer Manager Page Reads/sec and Page Writes/sec counters. If these values start to approach the capacity of the hardware I/O subsystem, try to reduce the values by tuning your application or database to reduce I/O operations (such as index coverage, better indexes, or normalization), increasing the I/O capacity of the hardware, or by adding memory. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
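The counters Satya mentions can also be read from inside SQL Server 2000 itself, without perfmon, via the `master.dbo.sysperfinfo` system table. A small sketch (note this is illustrative; the "/sec" counters are stored as cumulative totals since startup, so you must sample twice and divide the difference by the elapsed seconds to get a rate):

```sql
-- Read the Buffer Manager page I/O counters from SQL Server 2000.
-- cntr_value for "/sec" counters is a running total, not a rate:
-- sample twice and compute (value2 - value1) / elapsed_seconds.
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page reads/sec', 'Page writes/sec')
```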
What’s the best way to really know if the page reads/sec and page writes/sec are reaching the I/O limits? I’ve added them to my current monitor. I made an interesting observation that may be a clue. My transaction log has grown to 17GB while my data is only 1GB. Could this have something to do with it, or be a clue?!? Thanks again,
Larry
Is your recovery model FULL? If yes, that is the reason for the log growth.
By reorganize index, I mean DBCC DBREINDEX. Did you run that? Luis Martin
Moderator
Capture the counters specified above along with the other Physical Disk counters to assess the situation (post them here). On the transaction log size: how often do you perform a transaction log backup, and, as Luis asked, what is the recovery model on the database? You can use DBCC SHRINKFILE to shrink the Tlog file to an optimum size, and ensure you maintain regular Tlog backups. Satya SKJ
Moderator
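The checks and the SHRINKFILE step above could look roughly like this in T-SQL. The database name `GameDB` and the logical log-file name `GameDB_Log` are placeholders, not from the thread; run `EXEC sp_helpfile` in your database to find your actual logical file names.

```sql
-- 1. Confirm the recovery model (returns FULL, SIMPLE, or BULK_LOGGED):
SELECT DATABASEPROPERTYEX('GameDB', 'Recovery')

-- 2. Back up the transaction log so the inactive portion can be reused
--    (path and names are placeholders):
BACKUP LOG GameDB TO DISK = 'D:\Backups\GameDB_Log.bak'

-- 3. Shrink the physical log file back to a sensible size (target in MB).
--    DBCC SHRINKFILE must run in the context of the target database:
USE GameDB
DBCC SHRINKFILE (GameDB_Log, 500)
```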
The recovery model was FULL. I’m not doing banking transactions or anything like that. The data is basically data for online games (not gambling games), so the data is not so critical, although I’m sure users would be upset to lose it. I think doing bi-weekly database backups should be sufficient to avoid relying on the transaction log much. As part of the maintenance plan, I do a reorganize data/indexes. Doesn’t this rebuild the indexes? This is done once per week. I wasn’t performing a transaction log backup at all, since I didn’t really think I’d be using the transaction log much with my application. What does SHRINKFILE actually do? Does it throw away the transaction info? I really don’t want to keep the 17GB of info in the transaction log right now because, as I mentioned earlier, this is my dead time of year, so there isn’t much going on on the web site. I’m waiting for the problem to occur again to capture more data. Thanks,
Larry
1) If a backup once a day is right for you and you don’t want to back up the transaction log every 3 or 4 hours, then change the recovery model from Full to Simple and you will not have problems with the log.
2) Reorganize is not a rebuild; you should run DBCC DBREINDEX for each table during non-working hours, or look at the stored procedure by Thomas in SQL Scripts in this forum.
3) SHRINKFILE reduces the database files to the space actually used. You can check what is used and what is reserved in Enterprise Manager (View > Taskpad). For the maintenance plan, I suggest running CHECKDB twice a week, rebuilding indexes once a week, and updating statistics every day on the most-used tables and once a week on the whole database. Luis Martin
Moderator
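The routine Luis describes, sketched as plain T-SQL against SQL Server 2000. The database name `GameDB` and table name `Users` are placeholders for illustration; schedule the heavy steps for off-hours, since DBREINDEX locks the table while it rebuilds.

```sql
-- 1) Switch to simple recovery (only if daily full backups are enough
--    for you; you lose point-in-time restore):
ALTER DATABASE GameDB SET RECOVERY SIMPLE

-- Twice a week: integrity check
DBCC CHECKDB ('GameDB')

-- Once a week, per table: full index rebuild
-- ('' = all indexes on the table, 0 = keep the original fill factor)
DBCC DBREINDEX ('Users', '', 0)

-- Daily on heavily used tables, weekly database-wide: refresh statistics
UPDATE STATISTICS Users WITH FULLSCAN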
Well, it looks like changing from the FULL to the SIMPLE recovery model has solved my problem. The transaction log is 2MB and I haven’t seen the slow UPDATE issue happen again (so far!). I’m not sure when the transaction log got out of control. My busy season ended a few weeks ago without a hitch, then I started reworking some stored procedures, getting rid of cursor usage. The stored procedures operate on lots of rows, so maybe my testing of these stored procedures led to the problem. I think for my application, bi-weekly backups are good enough and the SIMPLE recovery model should be fine. Luis, to summarize, you’re suggesting the following additional maintenance: 1. Run DBCC DBREINDEX on the database once/week
2. Update statistics every day for highly used tables and once/week for the entire database Should I do this as part of my maintenance plan? Thanks for the help!
In my opinion, yes.
For 1) take the script from http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=444
With that script from Thomas you can choose a fragmentation %; below that, the script runs DBCC DBREINDEX.
For 2) when you create a maintenance plan, ask for update statistics; if you check that, change it from 10% (standard) to 100% to get a full statistics update. When DBREINDEX runs, it updates statistics automatically. But, because I use the script and don’t know which tables were defragmented and which were not, I run UPDATE STATISTICS as in 2). HTH Luis Martin
Moderator
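The "rebuild only when fragmented" idea behind Thomas's script can be sketched by hand for a single table. The table name `Users` and the 80% threshold are illustrative, not from the thread; the actual script from the linked topic automates this across all tables.

```sql
-- Inspect fragmentation (results appear in the messages pane);
-- look at "Scan Density" and "Logical Scan Fragmentation":
DBCC SHOWCONTIG ('Users')

-- If Scan Density is low (say below 80%), rebuild that table's
-- indexes, then refresh its statistics, since DBREINDEX only
-- updates statistics for the indexes it rebuilds:
DBCC DBREINDEX ('Users', '', 0)
UPDATE STATISTICS Users WITH FULLSCAN
```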
On a production environment, ensure you maintain the full backup cycle if the recovery model is changed from FULL to SIMPLE. In the simple recovery model you cannot recover the database up to a point in time; your only source is the maintained backups. Refer to the Books Online topic on RECOVERY MODELS. As you now have the recovery model set to simple, it’s better to schedule the DBCC reindexes and other DBCC tasks every week. This will ensure a better performance gain compared to the previous schedule. Satya SKJ
Moderator