Small database… huge log use | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Small database… huge log use

I'm running SQL Server 2000 (SP4) on a Windows Server 2003 box, with dual 3.2 GHz Pentium Xeon CPUs and drives configured as RAID-5 (standard in our environment). We're using a product called Worksite, by Interwoven. The actual data in the data file is tiny: production is ~200 MB, while dev is about 80 MB. The developer has written some VB.NET code (below):

Dim myTemp As Object
Dim instance As IEnumerator
instance = Array.GetEnumerator()
While instance.MoveNext()
    myTemp = instance.Current
    Console.WriteLine(myTemp.Name)
    Try
        myFolders.RemoveByObject(myTemp)
    Catch ex As Exception
        Console.WriteLine(ex.Message)
    End Try
End While

...that calls the exposed Interwoven API. At that point, for the developer, it's a black box. The application runs as middleware on another server, with a 10-connection pool (configured by the developer). What I can see in Profiler, however, is that the application is taking a list of around 1,800 "folders" (of ~4,000) stored in Worksite (which is a document-management utility) and trying to delete them. The problem is that it fills up the transaction log extremely quickly, uses dynamic SQL to fetch each row for deletion one at a time, and uses SELECT with NOLOCK to find the next one to delete. Several problems: it does an update to at least one table each time, to record which folders are "left" and "right" (not actually sure what that means, but...) for each row, and it deletes from multiple tables. E.g.:

declare @P1 int
set @P1=164
exec sp_prepexec @P1 output, N'@P1 float', N'SELECT TREE_ID, LEFT_VISIT, RIGHT_VISIT, PRJ_PID FROM MHGROUP.PROJECTS (NOLOCK) WHERE PRJ_ID = @P1 ', 1.594400000000000e+005
select @P1

declare @P1 int
set @P1=169
exec sp_prepexec @P1 output, N'@P1 char(6)', N'SELECT SID FROM MHGROUP.SEARCH_PROFILES (NOLOCK) WHERE (SCOPE = ''FOLDER'') AND (NAME = @P1) ', '159440'
select @P1
The problem is that we're getting Error 601, Severity 12, State 3: "Could not continue scan with NOLOCK due to data movement", because the application code uses NOLOCK whenever it tries to find anything. And, even better, it cycles through all 10 SPIDs each time it gets this error before finally timing out. And yet, the developer says they can do something similar in Oracle with no issues. Of course, we run Oracle on Unix with very well-appointed CPUs and disk arrays, while we run SQL Server on individual Intel boxes. So... yeah. Any suggestions? The developers are looking to me to fix it, but I can't change their code, and they don't know of anything that can change the vendor code. So... anything?
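(For reference, a quick way to watch how full the log actually gets while a process like this runs; the database name, logical log file name, and backup path below are made-up placeholders, not the real Worksite database:)

```sql
-- Sketch only: WorksiteDb, WorksiteDb_log, and the path are hypothetical.
-- Show size and percentage used of every transaction log on the server:
DBCC SQLPERF(LOGSPACE)

-- In Full recovery, only a log backup lets the log truncate:
BACKUP LOG WorksiteDb TO DISK = 'E:\Backups\WorksiteDb_log.bak'

-- After the backup, reclaim the space (logical file name, target size in MB):
DBCC SHRINKFILE (WorksiteDb_log, 500)
```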
The black box might be a poorly chosen library for interfacing with SQL Server. Also, IIRC, RAID-5 is not optimal for lots of updates and deletions. Finally, I'm not sure why they would want to set up their own connection pooling; SQL Server usually does a very good job of that all by itself.
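(On the pooling point: with ADO.NET clients, a pool like the developer's 10-connection one is normally declared in the connection string rather than anywhere on the server; the server and database names here are made-up placeholders:)

```
Server=AppSqlBox;Database=WorksiteDb;Integrated Security=SSPI;Min Pool Size=0;Max Pool Size=10
```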
Well, as I expected, actually, it was the indexing. They were trying to delete those ~1,800 "folders", but multiple tables had multiple indexes. I finally convinced the developer to let me drop the indexes (having already written and tested a drop-and-recreate script myself) and run his process again. Gee, shock: it's not hammering the tran log, and it actually worked in a reasonable amount of time... Mind you, I suggested this on Monday and they didn't want to do it "because of what the vendor might think". So, mystery solved. That's what I figured it was, but they originally didn't want me to drop the indexes... um... okay...
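(The drop-and-recreate script mentioned above would look roughly like this; the index and column names are hypothetical stand-ins, since the real definitions would come from sp_helpindex on the Worksite tables:)

```sql
-- Sketch only: IX_PROJECTS_PRJ_ID is a placeholder, not Worksite's real schema.
-- 1. Record the existing indexes so they can be recreated exactly:
EXEC sp_helpindex 'MHGROUP.PROJECTS'

-- 2. Drop the nonclustered indexes the row-by-row deletes would maintain:
DROP INDEX MHGROUP.PROJECTS.IX_PROJECTS_PRJ_ID

-- 3. (Run the application's delete process here.)

-- 4. Recreate the indexes from the saved definitions:
CREATE INDEX IX_PROJECTS_PRJ_ID ON MHGROUP.PROJECTS (PRJ_ID)
```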