SQL Server Performance Forum – Threads Archive
SP fast in QA, slow in C#

A SP (the one from my last thread) is called: a) via two layers of SPs executed in QA, where it takes approx. 2 min the first time (I'm guessing it gets compiled) and is much faster after that; b) via three layers of SPs from C#, where it takes 20 min. Reading previous threads, this seems to be a common problem. I tried some of the suggested solutions: "DBCC FREEPROCCACHE" and forcing a recompile.
Since it is the first time the procedure runs, these "solutions" did not seem reasonable to me, but I figured I should try them anyway. No data is moving over a network or anything, so that can't be the error source. The server is running at 100%, so it seems to be doing something, but evidently not the same thing. Any ideas? /HL
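For reference, the cache-flush and forced-recompile attempts mentioned above typically take one of these forms (a sketch; the procedure name is hypothetical, not from the thread):

```sql
-- Flush all cached query plans (server-wide -- affects every database)
DBCC FREEPROCCACHE;

-- Mark one procedure so it gets a fresh plan on its next execution
EXEC sp_recompile 'dbo.MyProc';  -- hypothetical procedure name

-- Or force a new plan to be compiled on this particular call
EXEC dbo.MyProc @uiKey WITH RECOMPILE;
```

As the poster notes, neither helps when the slow run is already the first compilation of the plan.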
Could have to do with parameter sniffing and bad query plans. This might help:
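A common workaround for parameter sniffing is to copy the incoming parameters into local variables, so the optimizer cannot build the plan around the first caller's specific values. This is only a sketch with hypothetical procedure, table, and column names, not code from the thread:

```sql
CREATE PROCEDURE dbo.GetTransactions  -- hypothetical procedure
    @uiKey uniqueidentifier
AS
BEGIN
    -- The optimizer cannot sniff a local variable's value at compile
    -- time, so it estimates from column statistics instead of the
    -- specific value supplied by the first caller.
    DECLARE @key uniqueidentifier;
    SET @key = @uiKey;

    SELECT FinancialTransactionID
    FROM tFinancialTransaction
    WHERE AccessKey = @key;  -- hypothetical column
END
</imports>
```

The trade-off is that the plan is built for an "average" value, which can be better or worse than a sniffed plan depending on how skewed the data is.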
Is the C# app going over the network while QA is local, i.e., could it be a network issue? Run Profiler and look for any differences between how QA and C# submit the call.
From QA, everything, including SPs, will arrive as SQL Batch.
Depending on how you did things in C#, it could arrive as either SQL Batch or RPC. I gather you are running a large query? 2 min from QA?
Does the execution plan show a hash join? If so, were there more than 10K rows in the hash joins?
If so, was there disk activity to tempdb?
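One quick way to see whether tempdb is being hit (e.g. by hash joins spilling to disk) is to check its size and free space before and after the run. A minimal sketch:

```sql
-- Report tempdb's allocated size and unallocated space;
-- compare before and after the slow execution.
USE tempdb;
EXEC sp_spaceused;
```

Watching the PhysicalDisk counters for the drive holding tempdb in Performance Monitor during the run gives the same answer more directly.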
Argyle:

Thanks for the tip. It did not help in this case, but it was a very interesting topic. The knowledge will certainly be useful as I continue to test "my" system.

Joe:

Both C# and QA are used over the (intra-)net, but since very little data is sent, I do not think it is a factor either way.

Large query: yes.

There were no large hash joins in the original query. Hmm, I guess that depends on what you mean; there were four 100k x 10 hash joins (not precise figures). A pretty big merge join, though: 1.3M x 33k. All this according to the estimated execution plan in QA.

The SP, when executed through C#, can't very well be using this execution plan, since the processor flats out at 100% for approx. 20 min. All parameters passed are exactly the same.

What is (the) tempdb?

I've changed my query a lot, with much success (see below), so I'm not going to run Profiler.

My original query:

INSERT tSigmundTransactionID (AccessKey, TransactionID)
SELECT @uiKey, tFinancialTransaction.FinancialTransactionID
FROM tFinancialTransaction
INNER JOIN tSigmundArticleSet
    ON tFinancialTransaction.ArticleNumber = tSigmundArticleSet.ArticleNumber
    AND tSigmundArticleSet.AccessKey = @uiKey
INNER JOIN tSigmundCustomerSet
    ON tSigmundCustomerSet.CustomerID = tFinancialTransaction.CustomerID
    AND tSigmundCustomerSet.AccessKey = @uiKey
INNER JOIN vSigmundTransactionGroupsByName
    ON vSigmundTransactionGroupsByName.TransactionTypeID = tFinancialTransaction.TransactionCode
LEFT OUTER JOIN tSigmundFTExtension
    ON tFinancialTransaction.FinancialTransactionID = tSigmundFTExtension.TransactionID
WHERE vSigmundTransactionGroupsByName.GroupName = 'AfterDrawnSales'
    AND tFinancialTransaction.CustomerID IS NOT NULL
    AND tFinancialTransaction.ArticleNumber IS NOT NULL
    AND tFinancialTransaction.TransactionCode IS NOT NULL
    AND tSigmundFTExtension.TransactionID IS NULL

1) I replaced the view (vSigmundTransactionGroupsByName) with a table (AccessKey [GUID], TransactionTypeID [int]) and joined with this instead. This decreased the time to 14 min.

2) I tried splitting the join into 5 parts, using 4 well-indexed tables. This brought the execution time down to 3 min. There was only one problem... I run this SP 15 times in a row (using different parameters). After 1), the total time was something like 20 min (some other stuff was also done), meaning that the last 14 runs were rather faster than the first. With 2), the SP executed in 3 min every time, thus increasing the total time substantially. The hard drive on the (crappy) server also made very interesting noises with this solution.

3) I belatedly realized that the first table (tSigmund_PI_1, based on tFinancialTransaction) I used for splitting the join was exactly the same during all 15 executions, so instead of populating it 15 times I populated it once, one level up. Then I started "joining" the joins, testing different combinations, all the while keeping good indexes on my "waystation" tables.

4) It turns out that the optimal solution is splitting the join in 2 parts: one part joining tSigmund_PI, tSigmundArticleSet, tSigmundTransactionSet and tSigmundCustomerSet; the other a join between the first part and tSigmundFTExtension. Interestingly enough, tSigmundFTExtension contains nothing the first time the SP is run, but it somehow changes the execution plan fatally.

End result: all SPs executed in 8.5 min, with the time evenly spread out between the calls. This is good enough.
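The staged-join approach described in steps 2)–4) can be sketched roughly like this. The staging-table and index names are assumptions for illustration, not the author's exact code, and @uiKey is assumed to be a parameter of the enclosing procedure:

```sql
-- Stage 1: materialize the selective inner joins into an indexed
-- work table, so the optimizer deals with a small, known input next.
CREATE TABLE #Stage1 (FinancialTransactionID int NOT NULL);
CREATE CLUSTERED INDEX IX_Stage1 ON #Stage1 (FinancialTransactionID);

INSERT #Stage1 (FinancialTransactionID)
SELECT ft.FinancialTransactionID
FROM tFinancialTransaction AS ft
INNER JOIN tSigmundArticleSet AS a
    ON a.ArticleNumber = ft.ArticleNumber AND a.AccessKey = @uiKey
INNER JOIN tSigmundCustomerSet AS c
    ON c.CustomerID = ft.CustomerID AND c.AccessKey = @uiKey;

-- Stage 2: apply the anti-join against tSigmundFTExtension separately,
-- so its (initially empty) contents can't derail the plan above.
INSERT tSigmundTransactionID (AccessKey, TransactionID)
SELECT @uiKey, s.FinancialTransactionID
FROM #Stage1 AS s
LEFT OUTER JOIN tSigmundFTExtension AS e
    ON e.TransactionID = s.FinancialTransactionID
WHERE e.TransactionID IS NULL;

DROP TABLE #Stage1;
```

Splitting a big join this way trades optimizer freedom for predictability: each stage gets accurate cardinality from a real, indexed table instead of a mid-plan estimate, which matches the poster's observation that a nearly empty tSigmundFTExtension was fatally skewing the single-statement plan.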