SQL Server Performance

transaction log grows huge

Discussion in 'General DBA Questions' started by avinashram, Apr 27, 2006.

  1. avinashram New Member

    Hi,

    I have a job that copies around 73 million records from the primary to the reporting server. The job is scheduled to execute at 2:30 AM. Below is the job:
    step 1
    ------
    CHECKPOINT
    DBCC SHRINKFILE (c1_LOG)
    DBCC SHRINKFILE (c2_LOG)
    DBCC SHRINKFILE (c3_LOG)
    DBCC SHRINKFILE (c4_LOG)
    DBCC SHRINKFILE (c_LOG)
    DBCC SHRINKFILE (c5_LOG)
    DBCC SHRINKFILE (c6_LOG)

    step 2
    ------
    drop table cards
    go
    select top 0 * into cards from do_not_delete_cards with (nolock)
    go
    insert cards (batch,serial,pin,state,credit,initial,first_date,last_date,chg_tot,calls_tot,last_cli,used,term_date)
    select * from openquery ([100.100.115.51],'select batch,serial ,pin ,state ,credit , initial ,first_date ,last_date ,chg_tot,calls_tot,last_cli,used,term_date from cheers.dbo.cards (nolock)')


    step 3
    -------
    CHECKPOINT
    DBCC SHRINKFILE (c1_LOG)
    DBCC SHRINKFILE (c2_LOG)
    DBCC SHRINKFILE (c3_LOG)
    DBCC SHRINKFILE (c4_LOG)
    DBCC SHRINKFILE (c_LOG)
    DBCC SHRINKFILE (c5_LOG)
    DBCC SHRINKFILE (c6_LOG)

    step 4
    ------
    go
    create index cards_idx1 on cards(serial,first_date,last_date,batch,state)
    go

    step 5
    ------
    CREATE INDEX [BATCH_IDX] ON [dbo].[CARDS]([BATCH]) ON [PRIMARY]
    go
    CREATE UNIQUE INDEX [PIN_IDX] ON [dbo].[CARDS]([PIN]) ON [PRIMARY]
    go
    CREATE UNIQUE INDEX [SERIAL_IDX] ON [dbo].[CARDS]([SERIAL]) ON [PRIMARY]
    GO



    The job was executing fine until this morning, when I found out that it had failed due to lack of disk space.
    The log file is on the E drive, which has 30 GB of free space. But when I execute the job again, the log file grows and eats up the disk, eventually failing the job.
    Is my script above OK? Can you please tell me what might be the problem?

    Thanks
  2. mmarovic Active Member

    What is the recovery model on your reporting server?

    You need to split the insert into smaller batches (e.g. 10,000 rows in each iteration) and also schedule frequent transaction log backups if the recovery model is full. You should also back up the transaction log after each index creation if the recovery model is full.
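
    For example, something along these lines. This is only a sketch: it assumes serial is an integer key on the source table, uses the four-part linked-server name instead of openquery for readability (whether the provider pushes the WHERE clause to the remote side is worth checking), and the 10,000-row step is just a starting point:

    declare @lo int, @hi int, @max int, @batchsize int
    select @batchsize = 10000
    select @lo = min(serial), @max = max(serial)
    from [100.100.115.51].cheers.dbo.cards

    while @lo <= @max
    begin
        select @hi = @lo + @batchsize - 1

        insert cards (batch,serial,pin,state,credit,initial,first_date,last_date,chg_tot,calls_tot,last_cli,used,term_date)
        select batch,serial,pin,state,credit,initial,first_date,last_date,chg_tot,calls_tot,last_cli,used,term_date
        from [100.100.115.51].cheers.dbo.cards
        where serial between @lo and @hi

        -- each iteration commits on its own, so under SIMPLE recovery the
        -- checkpoint lets the log space from the finished batch be reused
        checkpoint

        select @lo = @hi + 1
    end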
  3. avinashram New Member

    Recovery model - SIMPLE
  4. mmarovic Active Member

    Then just splitting the insert into smaller batches should solve the problem.
  5. avinashram New Member

    OK, I shall try splitting and see if it works. Meanwhile, any other solutions are also most welcome.
    Thanks, Marovic
  6. vbkenya New Member

    Try backing up each log WITH TRUNCATE_ONLY before the DBCC SHRINKFILE command.
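
    For example (the database name here is only a placeholder for your reporting database):

    BACKUP LOG reporting_db WITH TRUNCATE_ONLY
    DBCC SHRINKFILE (c1_LOG)
    DBCC SHRINKFILE (c2_LOG)
    -- and so on for the other log files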

    Nathan H. Omukwenyi
    MVP [SQL Server]
  7. satya Moderator

    Also run intermittent BACKUP LOG ... WITH TRUNCATE_ONLY in order to reduce the pressure on the transaction log. But make sure to perform a full database backup once the process is completed, for recoverability purposes.
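
    Roughly like this (the database name and backup path are placeholders). TRUNCATE_ONLY discards log records, which is why the full backup at the end matters:

    BACKUP LOG reporting_db WITH TRUNCATE_ONLY

    -- once the copy and index steps have completed:
    BACKUP DATABASE reporting_db TO DISK = 'E:\backups\reporting_db.bak' WITH INIT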

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  8. mmarovic Active Member

    Satya, Nathan,

    the recovery model is simple.
  9. avinashram New Member

    can i do a backup log with reuncate onlyon a simple recovery model?
  10. avinashram New Member

    sorry, didn't type that properly

    can I do a BACKUP LOG WITH TRUNCATE_ONLY on a simple recovery model?
  11. satya Moderator

    In the simple recovery model, a continuous sequence of transaction log backups is not maintained. All log records before the MinLSN can be truncated at any time, except while a BACKUP statement is being processed. NO_LOG and TRUNCATE_ONLY are the only BACKUP LOG options that are valid for a database that is using the simple recovery model.
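
    If you want to double-check which model the database is actually using (the database name is a placeholder):

    SELECT DATABASEPROPERTYEX('reporting_db', 'Recovery') AS recovery_model   -- returns FULL, BULK_LOGGED or SIMPLE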

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  12. avinashram New Member

    Hi Satya,

    I tried to execute BACKUP LOG <> WITH TRUNCATE_ONLY intermittently, but that didn't help either. The job, while executing step 2, dies as usual as a result of disk space not being available, due to the growth of the transaction log.

    Any advice please?

    Thanks
    Avinash
  13. mmarovic Active Member

    Satya, Nathan: I didn't know about the possibility of backing up the log with truncate.

    avinashram: If you try to truncate the log during the insert or index rebuild, it won't help, because the long-running transaction is not complete yet, so the space in the log can't be recycled. That is the reason I recommended splitting the insert into smaller batches. My guess is that inserting 73 million rows in one transaction is what makes the log so huge.
  14. avinashram New Member

    Marovic,

    I shall try to do the insert in batches and update this topic.

    Thanks
    Avinash
  15. satya Moderator

    That is the first thing you need to try: as the batch inserts the rows, each row is logged in the transaction log, and the log outgrows the disk, generating the error about no disk space.

    Also, you can put the database in the BULK_LOGGED recovery model with intermittent BACKUP LOG in order to keep the size down, but by all means you need to cut the insert process down into smaller batches.
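
    The switch would look something like this (the database name is a placeholder). Note that under BULK_LOGGED the index creation and SELECT INTO steps are minimally logged, but a plain INSERT ... SELECT is still fully logged, so the batching remains the main fix:

    ALTER DATABASE reporting_db SET RECOVERY BULK_LOGGED
    -- run the batched insert and the index builds here
    ALTER DATABASE reporting_db SET RECOVERY SIMPLE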

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  16. avinashram New Member

    Hi Satya, Nathan, Marovic,

    I tried the job by doing the insert in batches: 8 batches in all.

    The job worked fine.

    After the insert for every batch I also run DBCC SHRINKFILE. Is it advisable to do this, or should I avoid it? Otherwise, I was able to copy 75 million records with the batched insert.

    Thanks
    Avinash
  17. mmarovic Active Member

    quote:Originally posted by avinashram

    Hi Satya, Nathan, Marovic,

    I tried the job by doing the insert in batches: 8 batches in all.

    The job worked fine.

    After the insert for every batch I also run DBCC SHRINKFILE. Is it advisable to do this, or should I avoid it? Otherwise, I was able to copy 75 million records with the batched insert.

    Thanks
    Avinash
    No, you should not shrink the file. You spend time, CPU, and I/O to shrink it, but the next batch will force SQL Server to expand it again. Better to leave it as it is. The first checkpoint after the insert will cause the log entries to be marked free for reuse, so the next insert will use the same space.
  18. satya Moderator

    I second Mirko's suggestion. Now monitor the log growth for a period of time and set the transaction log to that size; if disk space is a problem, then you can move the transaction log to another drive that has more free space.
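
    One way to watch the usage and then fix the size (the file name is the one from the job; the database name and the 5 GB figure are placeholders):

    DBCC SQLPERF (LOGSPACE)   -- shows the size and percentage used of each database's log

    ALTER DATABASE reporting_db
    MODIFY FILE (NAME = c1_LOG, SIZE = 5120MB)   -- set to the observed peak plus some headroom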

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  19. sunilmadan New Member

    After every batch try BACKUP LOG <> WITH TRUNCATE_ONLY followed by SHRINKFILE.

    Also, use the bulk-logged recovery model.

    --
    Sunil
  20. mmarovic Active Member

    No, shrinking the file is a bad recommendation.
  21. satya Moderator

    Why do you want to shrink the transaction log if the bulk load and other processes keep increasing its size? Rather, set a suitable size and avoid such back-and-forth operations.

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  22. avinashram New Member

    Thank you all for the suggestions. They really helped me in finding a solution to this recurring problem.

    Avinash
