
SQL Server Performance Forum – Threads Archive

IF this

Hello! I read an article in a magazine that said if you have a sproc like the following:

create procedure abc
IF (…)
    SELECT …statement1
ELSE
    SELECT …statement2

you might get better performance if you do:

create procedure abc
IF (…)
    EXEC pqr …
ELSE
    EXEC xyz …

where pqr and xyz would do the different SELECT statements. Is this true, and if so, to what extent should it be used?
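For illustration, here is the second pattern filled out as runnable T-SQL; the table, columns, and procedure names are invented for the sake of the example:

create procedure dbo.GetActiveOrders
as
    -- statement1: one fixed query, so its plan can be reused as-is
    SELECT OrderID, Qty
    FROM dbo.Orders
    WHERE Status = 'A'
go

create procedure dbo.GetClosedOrders
as
    -- statement2: the other branch's query, with its own plan
    SELECT OrderID, Qty
    FROM dbo.Orders
    WHERE Status = 'C'
go

create procedure dbo.GetOrders
    @active bit
as
    -- the controlling sproc only branches; it contains no SELECTs itself
    IF @active = 1
        EXEC dbo.GetActiveOrders
    ELSE
        EXEC dbo.GetClosedOrders
go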
I have procedures that do a whole bunch of processing back and forth; should they be split into pieces as small as possible, with one controlling sproc that calls the different ones? Cheers /Linus —
http://anticAPSLOCK.com
As stored procedures are precompiled, this can enhance performance. But you should not overdo it, or the code will become very difficult to manage.
If a parameter value comparison is part of the IF condition, or a parameter value is used in the statements mentioned, this approach may decrease the number of recompilations or the amount of text that is recompiled.
Here’s a fun test. If you run your example, you will find that you usually do get a short recompile of the master query when it chooses which sproc to run. The recompile will be quick, though, and relatively painless. The underlying queries, if written correctly, will have no recompile. If you use functions instead of stored procedures, you will often find there is no recompile at any level. This only provides a very slight increase in performance, but it’s something worth considering. We’ve used this technique quite a bit where I work. I thought my coworker was crazy when he told me about this, but we tested it pretty extensively and he was right. MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
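One guess at the function-based variant mentioned above, using inline table-valued functions in place of the sub-procedures (the post doesn’t say which kind of function, so this is an assumption, and the names are the same invented ones as before); whether it actually avoids the recompile is worth testing in your own environment:

create function dbo.ActiveOrders()
returns table
as
return (SELECT OrderID, Qty FROM dbo.Orders WHERE Status = 'A')
go

create function dbo.ClosedOrders()
returns table
as
return (SELECT OrderID, Qty FROM dbo.Orders WHERE Status = 'C')
go

create procedure dbo.GetOrders2
    @active bit
as
    -- branches select from the functions instead of EXECing sub-procedures
    IF @active = 1
        SELECT OrderID, Qty FROM dbo.ActiveOrders()
    ELSE
        SELECT OrderID, Qty FROM dbo.ClosedOrders()
go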
Thanks for your replies, it seems worth evaluating further! But what about sproc calling overhead, is this something to worry about? I once rewrote two sprocs

SPROC1:
    DECLARE cursor1 ... SELECT A
    WHILE ...
        -- some A stuff
        EXEC SPROC2

SPROC2:
    DECLARE cursor2 ... SELECT B
    WHILE ...
        -- some B stuff

to

SPROC1:
    DECLARE cursor1 ... SELECT A joined B
    WHILE ...
        IF @someprevvalue <> @currentvalue
            -- some A stuff
        -- some B stuff

i.e., I merged the two nested cursors into one and used variables to check whether I was on the first/last "A row".
This gave a significant performance improvement, but perhaps this was more due to the cursors than to the sprocs? /L —
http://anticAPSLOCK.com
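A rough sketch of that merge, with hypothetical tables A and B: a single cursor over the join, and a variable remembering the previous A key so the "A stuff" runs only once per A row:

DECLARE @currentvalue int, @b int, @someprevvalue int

DECLARE cursor1 CURSOR LOCAL FORWARD_ONLY READ_ONLY FOR
    SELECT A.ID, B.Val
    FROM A
    JOIN B ON B.AID = A.ID
    ORDER BY A.ID

OPEN cursor1
FETCH NEXT FROM cursor1 INTO @currentvalue, @b
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @someprevvalue IS NULL OR @someprevvalue <> @currentvalue
    BEGIN
        -- some A stuff: runs on the first B row of each new A row
        SET @someprevvalue = @currentvalue
    END
    -- some B stuff: runs for every joined B row
    FETCH NEXT FROM cursor1 INTO @currentvalue, @b
END
CLOSE cursor1
DEALLOCATE cursor1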
Getting rid of the cursors was probably your biggest performance gain in that particular instance. What kind of cursor were you using? In general, cursors suck and should be avoided at all costs. :)

MeanOldDBA
[email protected]

When life gives you a lemon, fire the DBA.
The cursor I used was "LOCAL FORWARD_ONLY READ_ONLY" on a #-table. I’ve heard cursors have worse performance on temporary tables, but I’ve never examined this further.

What I have is a table like

row  qty  assigned
1    10   0
2     4   0
3    12   0
4     8   0
5     9   0
etc.

and I have a quantity, say 19. Rows 1 and 2 should get the assigned field set to 10 and 4, and row 3 to 5 (i.e., the quantity is assigned across the different rows).

If there were such a thing as

SELECT MAX(row)
FROM table
HAVING ROWSUM(qty) < 19

or equivalent, that would’ve been great.

Currently, I’m looping the rows to find the break point, and with half a million rows it takes some time ;)

I’ve done some tests using a search routine, i.e. first doing SUM(qty) on row < MAX(row)/2, then checking the upper or lower half’s sum, and so on, to find the breaking point. It’s faster, but since the thing is slightly more complicated in real life I’ve yet to complete those changes.

Any other approaches to this problem?

/L

--
http://anticAPSLOCK.com
Not sure that you will gain performance, but you can give it a try:

1. Create an additional column in your temp table that will hold the cumulative unassigned qty.
2. Update the temp table to put the correct value in that column.

The rest you can figure out yourself, I guess. The code for step 2 would be something like this:

declare @unassigned int
set @unassigned = 0

update #t
set @unassigned = unassigned = @unassigned + qty - assigned
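Filled out end to end against the sample table from earlier in the thread (a sketch; it assumes assigned starts at 0, and that the update walks the rows in clustered-index order, which is what this trick relies on):

CREATE TABLE #t ([row] int PRIMARY KEY CLUSTERED, qty int, assigned int, unassigned int)
INSERT INTO #t VALUES (1, 10, 0, 0)
INSERT INTO #t VALUES (2,  4, 0, 0)
INSERT INTO #t VALUES (3, 12, 0, 0)
INSERT INTO #t VALUES (4,  8, 0, 0)
INSERT INTO #t VALUES (5,  9, 0, 0)

-- step 2: running total of unassignable qty, one pass, no loop
DECLARE @unassigned int
SET @unassigned = 0
UPDATE #t
SET @unassigned = unassigned = @unassigned + qty - assigned

-- "the rest": assign 19 across the rows using the cumulative column
DECLARE @toassign int
SET @toassign = 19
UPDATE #t
SET assigned = CASE
        WHEN unassigned <= @toassign THEN qty                                  -- fully consumed rows
        WHEN unassigned - qty < @toassign THEN @toassign - (unassigned - qty)  -- break row gets the remainder
        ELSE 0                                                                 -- rows past the break point
    END

SELECT [row], qty, assigned FROM #t
-- rows 1, 2, 3 end up with assigned = 10, 4, 5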

I’ve tried something similar; the problem is the performance loss when calculating the accumulated column.
Either I have to loop to set it (and then we’re back to looping) or do an update with a subquery summing the preceding rows.
Neither solution worked much better than the original loop, unfortunately. /L —
http://anticAPSLOCK.com
Not quite sure I understand your explanation. If you update the accumulated totals, you don’t need to loop afterwards, and you don’t need to sum the preceding rows again; you can easily identify the break point (row). Still not sure this is faster, but you don’t need to do the processing you mentioned, so…
Presuming you are ordering by one column, this will also give you kind of a row counter:

SELECT Col1, Col2,
       (SELECT COUNT(*)
        FROM table AS TMP
        WHERE TMP.OrderByCol < table.OrderByCol)
FROM table
ORDER BY table.OrderByCol

You can get group counters like this:

SELECT Col1, Col2, GroupCol,
       (SELECT COUNT(*)
        FROM table AS TMP
        WHERE TMP.GroupCol = table.GroupCol
          AND TMP.OrderByCol < table.OrderByCol)
FROM table
ORDER BY table.GroupCol, table.OrderByCol

I guess you can do a running SUM just by replacing COUNT with SUM, etc. etc.
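Applied to the assignment problem from earlier in the thread, the running-SUM version finds the break row in one statement (a sketch against the same #t table; note the correlated subquery makes this roughly O(n^2) in rows, so on half a million rows it may not beat the cumulative-column update above):

SELECT MIN(t.[row]) AS breakrow
FROM #t AS t
WHERE (SELECT SUM(tmp.qty)
       FROM #t AS tmp
       WHERE tmp.[row] <= t.[row]) >= 19
-- returns 3 for the sample data: rows 1 and 2 cover 14, row 3 takes the remaining 5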