
SQL Server Performance Forum – Threads Archive

Long Transactions

If, in between my transactions, there are a few stored procedures that need to be run, plus additional code, would that affect performance compared to creating a transaction per block of code (where applicable) and committing it? Will it run faster if I create a transaction per stored proc? Also, isn't it the case that when you commit a transaction, it commits everything from the very beginning? Say I have this:
BEGIN TRAN Tran1
SAVE TRAN tran1_x
-- code goes here
BEGIN TRAN Tran2
SAVE TRAN tran2_x
-- code goes here
IF @@ERROR <> 0 ROLLBACK TRAN tran2_x ELSE COMMIT TRAN Tran2   -- (B)
IF @@ERROR <> 0 ROLLBACK TRAN tran1_x ELSE COMMIT TRAN Tran1
I think if I commit at (B) it will commit to the outermost BEGIN TRAN, am I right?
If the stored procedure is cached with the best execution plan, then there will not be any difference in performance. In such cases it's better to run UPDATE STATISTICS and recompile the stored procedure intermittently to get optimum performance. You can check the estimated execution plan in QA (Query Analyzer).
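For example, the maintenance could look something like this (the table and procedure names below are just placeholders, not from the original post):

UPDATE STATISTICS dbo.Orders WITH FULLSCAN    -- refresh statistics on a table the proc reads
EXEC sp_recompile 'dbo.usp_ProcessOrders'     -- force a fresh plan on the next execution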
HTH Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
OK, thank you, and forget about the question on nested transactions, I got it now.
True, SQL Server allows you to nest transaction operations by issuing nested BEGIN TRAN commands. The @@TRANCOUNT automatic variable can be queried to determine the level of nesting: 0 indicates no nesting, 1 indicates nesting one level deep, and so forth. Savepoints offer a mechanism to roll back portions of transactions. A rollback to a savepoint (not a transaction) doesn't affect the value returned by @@TRANCOUNT, either. However, the rollback must explicitly name the savepoint: using ROLLBACK TRAN without a specific name will always roll back the entire transaction.
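A quick sketch of that behaviour, reusing the names from the first post:

BEGIN TRAN Tran1          -- @@TRANCOUNT = 1
BEGIN TRAN Tran2          -- @@TRANCOUNT = 2
SAVE TRAN tran2_x         -- savepoint; @@TRANCOUNT is still 2
-- work goes here
ROLLBACK TRAN tran2_x     -- undoes only the work after the savepoint; @@TRANCOUNT is still 2
COMMIT TRAN Tran2         -- the inner commit only decrements @@TRANCOUNT to 1
COMMIT TRAN Tran1         -- the outermost commit is what actually makes the work permanent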
Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Hi there again! But what about the transaction? If all SPs are optimized, would a long transaction be a factor? Will it help if each SP has its own transaction (considering that for now I don't really need a transaction for each SP)? Thanks!
quote: But what about the transaction? If all SPs are optimized, would a long transaction be a factor? Will it help if each SP has its own transaction (considering that for now I don't really need a transaction for each SP)?

A long transaction may affect concurrency. The connection executing the code and other connections could compete for the same resources (rows, pages, tables), and some of them might have to wait until resources are released. The question is: do you need everything to be done in one transaction? If you don't have to make changes in an "all or nothing" manner, then don't use one. If a partial change would leave your data in an inconsistent state, then you have a choice: either use a transaction or redesign your process (and maybe the db).
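A rough sketch of the trade-off (table and procedure names here are just placeholders):

-- One long transaction: locks taken on dbo.Orders and dbo.Invoices are held
-- until the final COMMIT, so other connections may block for the whole duration.
BEGIN TRAN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = 1
    EXEC dbo.usp_OtherWork        -- placeholder for the other stored procs / code
    UPDATE dbo.Invoices SET Paid = 1 WHERE OrderID = 1
COMMIT TRAN

-- Separate short transactions: each lock is released at its own COMMIT,
-- but the two changes are no longer made in an "all or nothing" manner.
BEGIN TRAN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = 1
COMMIT TRAN
BEGIN TRAN
    UPDATE dbo.Invoices SET Paid = 1 WHERE OrderID = 1
COMMIT TRAN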
Long transactions can also be the cause of deadlocks. Look at any documentation on avoiding deadlocks and it will always say 'keep transactions short'.
Tom Pullen
DBA, Oxfam GB
To mmarovic and Thomas: thank you both for the substantial info, you've been a great help.
