Insert performance slows after several hours | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Insert performance slows after several hours

An application that is 99% inserts runs fine for 6 to 12 hours, then performance degrades dramatically. Roughly every 2 minutes the application inserts 40,000+ records, normally taking less than 1 minute to do so. After 6 to 12 hours that time increases to 15 minutes. Restarting the database restores performance. There is a clustered primary key on the table's identity column, and 2 non-unique single-column indexes on the table (1 on a foreign key, 1 on a time field). I'm stumped as to what to look for. Misc info: the database grows ~1.5GB per day and is maintained between 75GB and 150GB (purged monthly).

Does the time increase gradually or suddenly?
I assume it is not due to file growth?
File sizes are not increasing. I made a graph and the slope was fairly steep, but it still took a couple of hours to level off at 15 minutes.
Exactly what is the pattern?
On start it is 1 min.
At each hour after start, what is it until it reaches 15 min?
quote:Originally posted by Zzak File sizes are not increasing. I made a graph and the slope was fairly steep but still took a couple hours to level off at 15 minutes.
Both transaction log files and data files? How about tempdb file sizes?
File sizes are not increasing; the db is reusing space from previous purges.

Here is a graph of the performance: http://home.columbus.rr.com/yunn/images/InsertPerformance4.jpg

Why isn't this image link working?
If I had to guess based on the info provided, I would guess this is a MemToLeave issue. If so, setting -g512 would give you more time before bad things happen.
What does the insert code look like? Are there xprocs, sp_cursor calls, etc.?
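For reference, -g is a documented sqlservr.exe startup parameter; 32-bit SQL Server 2000 reserves 256MB for MemToLeave by default, and -g512 doubles that. A sketch of where it goes, assuming Enterprise Manager and a default instance:

-- Add to: right-click the server > Properties > General > Startup Parameters
-- (takes effect on the next service restart)
-g512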
Pretty straightforward, it's a stored procedure with an insert statement:

CREATE PROCEDURE [dbo].[spInsFloatDataXMArchive0107]
    @fkpoint int,
    @value float,
    @ts datetime
AS
BEGIN
    SET NOCOUNT ON
    INSERT INTO FloatDataXMArchive0107 (fkpoint, data, datatime)
    VALUES (@fkpoint, @value, @ts)
    RETURN
END
Try CHECKPOINT
Why don’t you limit the insert operation by limiting rowcount to 500 or 1000 something to avoid that contention. Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing. The greatest discovery of my generation is that a human being can alter his life by altering his attitudes of mind.
One item to consider is whether this table has a clustered index. If not, the slowdown could be because SQL must spend time finding pages with empty space; of course, restarting SQL would not fix that problem.
The table has a clustered index on an identity column and a non-clustered index on a foreign key. Restarting this week no longer fixed the problem. This morning, running a DBCC DBREINDEX did make it run faster, but it took an hour and a half to complete (during which the system was offline). Now I'm waiting to see how long it lasts before slowing down again, while using DBCC SHOWCONTIG to monitor the index.
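For anyone following along, a minimal sketch of that monitoring (the table name is taken from the procedure above; WITH TABLERESULTS makes the output easy to capture):

-- Scan Density and Logical Scan Fragmentation are the columns to watch
DBCC SHOWCONTIG ('FloatDataXMArchive0107') WITH ALL_INDEXES, TABLERESULTS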
You should not have to reindex a clustered index leading with an integer identity, but you could just reindex the nonclustered indexes, since this might be a disk I/O issue.
Run PerfMon for the disk I/O counters, by individual disk:
include Reads/sec, Read Bytes/sec, Avg. Disk sec/Read, Avg. Disk Queue Length,
and the same for writes.
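If you go that route, a minimal sketch of reindexing only one nonclustered index (the index name here is an assumption; check sp_helpindex 'FloatDataXMArchive0107' for the real names):

-- Third argument is the fill factor; 0 keeps the index's existing setting
DBCC DBREINDEX ('FloatDataXMArchive0107', 'IX_FloatDataXMArchive0107_fkpoint', 0)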
This behaviour is really strange, so I am not sure where to start. Besides an I/O issue, it could also be a locking issue if there is another long-running process reading data from the same table. Also, record CPU usage. Another wild guess is that you may be calling the stored procedure without specifying the owner (dbo). If you insert 40,000+ rows in two minutes row by row, you may try to do it in batches of 1,000 or so rows instead.
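A rough sketch of what that batching could look like on the loading side (the loop, commit interval, and sample values are assumptions; the point is one log flush per ~1,000 rows instead of one per insert):

SET NOCOUNT ON
DECLARE @i int
SET @i = 0
BEGIN TRAN
WHILE @i < 40000
BEGIN
    -- sample values; in the real loader these come from the application
    EXEC dbo.spInsFloatDataXMArchive0107 @fkpoint = 1, @value = 0.0, @ts = '20070101'
    SET @i = @i + 1
    IF @i % 1000 = 0  -- commit every 1,000 rows
    BEGIN
        COMMIT TRAN
        BEGIN TRAN
    END
END
COMMIT TRAN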
Do you get page splits on the table?
What is the fill factor on this table?
Try lowering the fill factor when you reindex; this will increase the size of the table but reduce page splits.
Mohammed U.
As suggested by Joe, it is a good idea to check the resource usage on the server to get more information on this behaviour. Satya SKJ
Well, I know from viewing the server that the CPU usage drops from 30%-40% when running properly to 0%-1% when slow.

Here is a graph of performance from this weekend, starting Friday morning at midnight. You can see the effect of the DBREINDEX at around 9:00am Friday and the 8 hours of great performance. It's running now between 5 & 6 minutes, which is an improvement. I did index defrags at midnight over the weekend, so you can see spikes in performance at 24, 48, & 72 hours.

http://home.columbus.rr.com/yunn/images/HistorianWithIndexDefrag.gif

Disk totals looked like this today:

http://home.columbus.rr.com/yunn/images/DiskCounters.gif

and by disk:

http://home.columbus.rr.com/yunn/images/HistorianPerformanceByDisk.GIF

which sadly means nothing to me.
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx should give you the information about making these tasks a regular job. http://www.microsoft.com/technet/prodtechnol/sql/2005/scddrtng.mspx for SQL 2005. Satya SKJ
Anyone have a good method to determine the best fill factor and pad (PAD_INDEX) for an index? What is the indicator for going higher or lower? Looking today, 9 hours after a defrag, the scan density is already less than 90%.
What is the page split count?
The right fill factor depends on server activity. If you lower the fill factor, your INSERT/UPDATE performance will improve, but at the same time READ (SELECT) performance will degrade. I would start with 80% on a high INSERT/UPDATE server. MohammedU.
Moderator
SQL-Server-Performance.com
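One way to read the page split counter from T-SQL on SQL 2000 (sysperfinfo mirrors the PerfMon counters; the value is cumulative since startup, so sample it twice and take the difference; the object name below assumes a default instance):

SELECT cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Access Methods'
  AND counter_name = 'Page Splits/sec'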
Agreed, given that the insert operation in this case is frequent rather than massive.
When the table's data is modified very often, you can decrease the 'fill factor' option to 70 percent, for example. Having explained page splits, I would warn you against overdoing this, because more free space means that SQL Server has to traverse more pages to get the same amount of data. Hence, try to strike a balance and arrive at an appropriate value.
Satya SKJ
I would add: do regular transaction log backups (or truncations), as well as the CHECKPOINT recommendation. You also don't have any explicit transaction handling (or error checking, for that matter). Check for blocking/locking as time progresses (including in tempdb). Also analyze the various waits to see exactly what is causing the delays; this varies between the 2000 and 2005 versions. Are you CERTAIN that it is SQL Server's fault? Is the calling application bogging down? Memory leaks, maybe? Are other users accessing this table at the same time? Not sure of your SQL version, but check your procedure cache: SQL 2000 has a bug that can blow it out and cause the server to slow/stop responding. SQLGuru
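A sketch of how to look at those accumulated waits on each version:

-- SQL Server 2000: snapshot of accumulated wait statistics
DBCC SQLPERF(WAITSTATS)

-- SQL Server 2005: top waits since the instance started
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC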
My company sometimes has a similar problem during busy times. The problem happens when, like you, we need to put a lot of records into some of our tables. In our case, the problem was that the statistics on the indexes got out of date. This in turn caused the query optimizer to suddenly begin making poor decisions, and yes, it can affect inserts (at least with SQL 2000). In theory, SQL Server is supposed to recognize that a lot of rows have been added and recalculate the index statistics; we found, however, that it does not always do so soon enough. If this is actually what is happening to your database, the cure is simple: just set up a SQL Agent job with the following 2 statements. To test the concept, you can run them in Query Analyzer when things get slow and see what happens.

update statistics MyTableName
exec sp_recompile MyTableName

During business hours we schedule this job to run several times per hour, targeting 3 tables that are important for us. Since UPDATE STATISTICS typically works by sampling the table, the job doesn't use all that many resources, even on a big table. Overnight, we run the job over all the tables in the database. However, even before doing all that, you could just check your database properties. For SQL 2000, in Enterprise Manager, right-click the database name and select "Properties", then click the "Options" tab. I would suggest that you check the box labelled "Auto Update Statistics". If it's not checked now, try it and see if that is sufficient.
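Adapted to the table named earlier in this thread (a sketch; adjust the name if the monthly archive tables rotate):

UPDATE STATISTICS FloatDataXMArchive0107
EXEC sp_recompile 'FloatDataXMArchive0107'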
Have you looked at the links posted earlier in this thread so far?
quote:Originally posted by Zzak Anyone have a good method to determine the best fill factor and pad for an index? What is the indicator for higher or lower? Looking today at 9 hours after a defrag the scan density is already less than 90%.

Satya SKJ