SQL Server Performance

CURSORS

Discussion in 'T-SQL Performance Tuning for Developers' started by rajoo, Jan 14, 2003.

  1. rajoo New Member

    Hi

    I read the FAQ on cursors and much other material on the subject elsewhere, and cursors are consistently discouraged because of their heavy resource usage and poor performance. However, we have situations where we have to perform updates on result sets of more than 100,000 rows, e.g.:


    UPDATE table1
    SET table1.columb = table2.columnt
    FROM table1
    INNER JOIN table2 ON (table1.id = table2.id)
    WHERE <some condition>

    table1 above can contain more than a million rows. The problem with this type of update is that it quickly fills up the syslogs table and the process aborts. I read a tip somewhere which recommended using a filter to process about 1000 rows at a time, e.g. the above could be rewritten as



    UPDATE table1
    SET table1.columb = table2.columnt
    FROM table1
    INNER JOIN table2 ON (table1.id = table2.id)
    WHERE <some condition>
    AND table1.id BETWEEN 1 AND 1000

    UPDATE table1
    SET table1.columb = table2.columnt
    FROM table1
    INNER JOIN table2 ON (table1.id = table2.id)
    WHERE <some condition>
    AND table1.id BETWEEN 1001 AND 2000

    UPDATE table1
    SET table1.columb = table2.columnt
    FROM table1
    INNER JOIN table2 ON (table1.id = table2.id)
    WHERE <some condition>
    AND table1.id BETWEEN 2001 AND 3000

    etc.
    etc.
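    The repeated-range approach above can also be written as a loop, so the ranges don't have to be hard-coded. A rough sketch, assuming SQL Server 2000-era syntax (SET ROWCOUNT still limits UPDATE there) and assuming that already-updated rows can be skipped by comparing the two columns (an assumption; NULLs would need extra handling):

    ```sql
    -- Sketch only: SET ROWCOUNT caps each UPDATE at 1000 rows
    -- (affects DML on SQL Server 2000 and earlier).
    SET ROWCOUNT 1000

    WHILE 1 = 1
    BEGIN
        UPDATE table1
        SET table1.columb = table2.columnt
        FROM table1
        INNER JOIN table2 ON (table1.id = table2.id)
        WHERE <some condition>
          AND table1.columb <> table2.columnt  -- assumed: skips rows already done

        IF @@ROWCOUNT = 0 BREAK  -- nothing left to update
    END

    SET ROWCOUNT 0  -- restore the default
    ```

    Each pass through the loop is its own statement, so the log only has to hold 1000 rows' worth of changes at a time instead of the whole million.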

    However, the above solution is sometimes not feasible, or there is no suitable column on which to base the filter.

    My question, then, is: how do you avoid overfilling the syslogs table while still using single UPDATE statements as an alternative to cursors?

    Thanks
  2. HarryArchibald New Member

    It is a good idea to try and break down large updates. I generally use the DATEPART function to split the update into smaller batches. If the table has no date column, then a character column and the LIKE operator could be used instead.
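    As a rough sketch of that idea, assuming table1 has a datetime column (here called created_date, a hypothetical name) and reusing the update shape from the original post:

    ```sql
    -- Hypothetical sketch: one batch per month via DATEPART.
    -- created_date is an assumed column name.
    DECLARE @m int
    SET @m = 1
    WHILE @m <= 12
    BEGIN
        UPDATE table1
        SET table1.columb = table2.columnt
        FROM table1
        INNER JOIN table2 ON (table1.id = table2.id)
        WHERE <some condition>
          AND DATEPART(month, table1.created_date) = @m

        SET @m = @m + 1
    END
    ```

    The same shape works with a character column by replacing the DATEPART filter with something like `table1.name LIKE 'A%'`, one letter range per batch.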
  3. bradmcgehee New Member

    A large cursor will be even worse when it comes to the transaction log, as each separate UPDATE will have to be written to the log. As HarryArchibald has suggested, splitting up large transactions is generally your best bet.
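    One way to put that into practice, as a sketch only (assumes SQL Server 7.0/2000 BACKUP LOG syntax, a hypothetical database name mydb, and that you are not relying on log backups for point-in-time recovery):

    ```sql
    -- Hypothetical sketch: run each batch in its own transaction,
    -- then free log space before the next batch.
    BEGIN TRANSACTION
        UPDATE table1
        SET table1.columb = table2.columnt
        FROM table1
        INNER JOIN table2 ON (table1.id = table2.id)
        WHERE <some condition>
          AND table1.id BETWEEN 1 AND 1000
    COMMIT TRANSACTION

    -- Truncate the inactive portion of the log between batches.
    -- CAUTION: this breaks the log backup chain.
    BACKUP LOG mydb WITH TRUNCATE_ONLY
    ```

    Because each batch commits separately, the log never has to hold more than one batch's changes as active, which is what keeps it from filling up.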


    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
