Greetings again. I am plagued by a new problem that is driving me crazy, and I would appreciate any suggestions. I have a stored procedure on Win2k SP3 / SQL Server 2000 SP3 (it existed under SP2) that typically (99% of the time) runs in 20 seconds to 2 minutes. This variation is expected due to variances in the data load. Occasionally, typically every few days and in an unpredictable fashion, the job will not complete. Normally this is noticed after the job has been running for 1-2 hours, at which point we must stop and restart it because the data is very time sensitive. Presumably if we allowed the merge job to run indefinitely it would finish, although we have waited in excess of 2 hours with no completion. I had hoped applying SP3 would correct the problem, but it has not. Yes, I have read the 'SP with different run times - help, please' post from a few days ago.

What I have checked so far:

- I do not see any locks that could account for the delay
- the data is the same type during these periods
- the outage periods are not times of particularly high volume
- the previous job run is normally within normal time expectations, although it is occasionally very high (14 min today)
- I upgraded to SQL Server SP3 and the problem has recurred
- due to the inconsistent nature of the problem I have not been able to isolate it in Profiler, although that is my next step
- there are no errors in the SQL logs or in the Event Viewer
- CPU usage is usually low even during these periods

Environment details: Win2k, Standard SQL Server 2000 SP3. There are 3 database servers running separate instances of the same application. For simplified analysis of the data, a few key tables (5) are transferred every 2 minutes (the servers are offset from each other) to a single analysis server. The transfer is done via DTS and appends a database ID (tinyint) column to the start of each row, allowing identification of the source. The data ends up in a 'temporary' holding database.
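For concreteness, the append-a-database-ID transfer step described above looks roughly like the sketch below. This is only an illustration of the pattern, not the actual DTS package: it uses Python's sqlite3 as a stand-in for the real servers, and all table and column names are hypothetical.

```python
import sqlite3

# Stand-ins for one source server's table and the central holding table.
# The real transfer is a DTS package running every 2 minutes; all names
# here (source_orders, holding_orders, db_id, ...) are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO source_orders VALUES (?, ?)",
                 [(1, 10.0), (2, 25.5)])
conn.execute(
    "CREATE TABLE holding_orders (db_id INTEGER, order_id INTEGER, amount REAL)")

DB_ID = 2  # tinyint identifying which of the three source servers this batch came from

# Copy the source rows, prefixing each row with the source database ID
conn.execute(
    "INSERT INTO holding_orders SELECT ?, order_id, amount FROM source_orders",
    (DB_ID,))
conn.commit()

for row in conn.execute("SELECT * FROM holding_orders"):
    print(row)  # each row now carries the source ID, e.g. (2, 1, 10.0)
```

The per-minute merge job then reads from the holding table, does its extra calculations, moves the rows to the real analysis database, and deletes them from holding.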
A separate job, running every minute, performs a few additional calculations, adds a few extra rows, and then moves the data to the 'real' analysis database. Once the data is moved, it is purged from the temporary holding database. Only a few users connect to the system (< 5) and their workload does not seem to correlate with the problems. If the job is stopped and restarted, it will complete in a 'normal' time - slightly longer due to the accumulated data.

Any thoughts?

-- Tom

Motivation: If a pretty poster and a cute saying are all that it takes to motivate you, you probably have a very easy job. The kind robots will be doing soon.