SQL Server Performance

Log usage and VLFs

Discussion in 'General DBA Questions' started by techbabu303, Apr 10, 2007.

  1. techbabu303 New Member

    Hi Folks,

    I initially posted the results of the log space currently in use, from DBCC SQLPERF(LOGSPACE). After getting some leads and pointers, I began a two-week monitoring process, having first made some changes to the maintenance plan:

    Step 1: Implemented a transaction log backup at a one-hour interval.
    Step 2: Moved the shrink database task to once every two weeks; I will soon remove it from the plan entirely.
    Step 3: Monitored the log usage and also the DBCC LOGINFO status of the VLFs.

    Results of DBCC SQLPERF(LOGSPACE):

    Database    Log Size (MB)  Log Space Used (%)
    ----------  -------------  ------------------
    master      2.2421875      37.28223
    tempdb      0.7421875      54.736843
    model       0.7421875      84.013161
    msdb        0.7421875      54.473682
    demo_test   0.9921875      38.828739
    OnePlan     15.992188      23.598558
    pro_db      1.7421875      41.928249
    serv        14.804688      24.16227
    sample      12.367188      24.32091

    Though the log usage values look high, when I run DBCC LOGINFO(<database name>) I do not see many VLFs with status 2 for the above databases, so I did not shrink them.

    I also checked the logical fragmentation level of the indexes in these databases; they are well below the 20 mark.

    Questions:

    1. When you say log usage is high, what is the recommended threshold?

    2. If the above results make sense, can I safely ignore the log usage % (since I do not have many active VLFs at the moment) and skip shrinking the databases?

    Cheers
    Sat
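    For reference, a minimal sketch of the two checks discussed above (the database name is illustrative; DBCC LOGINFO is undocumented, so treat its output as version-dependent):

    -- One row per virtual log file (VLF) in the database's transaction log.
    -- Status = 2 means the VLF is active and cannot be reused yet; Status = 0 means reusable.
    DBCC LOGINFO ('OnePlan')

    -- Companion check: log size (MB) and percent of the log in use, for every database.
    DBCC SQLPERF (LOGSPACE)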
  2. sql_er New Member

    I am not sure whether I understand your question completely. However, we had issues with transaction log as well, and I did learn a thing or two.

    1. It is not recommended to shrink the database, in general (unless you are running out of hard disk space and cannot afford to get more space). And I'm not even sure why you are talking about db shrink. Are you referring to the transaction log shrinking? If so, it is not recommended either, generally (I can provide more details here upon request).

    2. I believe it is suggested to back up the transaction log every 10-15 mins, although it depends on how much data you can afford to lose and how much you can afford your transaction log to grow (since if you don't back up, once %used reaches 100%, the transaction log will begin to grow physically)

    3. I'm not sure what you mean by ignoring '% used'. Are you asking whether this info is useful/credible? I believe it is. And if you have some job running (e.g. defragmentation) which is growing the trans log, you can see how %used keeps growing; if it reaches 100%, the trans log will begin to grow physically

    We set up a mechanism recently to monitor trans log growth every 5 mins and back up trans log if it exceeds a certain threshold (I can share the script if you are interested - I find it very useful).

    Also, we noticed that having the INDEXDEFRAG job running on the Publisher (Transactional Replication) caused replication to fail every other time, as the transaction log kept growing very fast during the execution of this job. As such, we now run the INDEXDEFRAG job manually in the morning, monitoring transaction log growth; we stop it half-way if %used approaches 80%, and then restart it later on, after a trans log backup has occurred and %used is back to ~1%. It works great for now (although I'm not particularly fond of manual work, it is still better than having replication fail and not being able to restart it for a while)
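    In the meantime, a simplified sketch of that threshold check (the table layout matches the DBCC SQLPERF(LOGSPACE) result set on SQL 2000/2005; the database name, threshold, and path are illustrative, and the full version of our script is posted later in this thread):

    -- Capture DBCC SQLPERF(LOGSPACE) output into a temp table.
    CREATE TABLE #logspace (
        DBName varchar(128),
        LogSize float,
        PrcntUsed float,
        status int
    )

    INSERT INTO #logspace
    EXEC ('DBCC SQLPERF(LOGSPACE)')

    -- Back up the log only when it is filling up past the threshold.
    IF EXISTS (SELECT 1 FROM #logspace
               WHERE DBName = 'MyDB' AND PrcntUsed > 80.0)
    BEGIN
        BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn'
    END

    DROP TABLE #logspace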

    Hope it helps ...
  3. techbabu303 New Member

    Hi,

    1. Yes, shrink DB is not recommended, as you mentioned; got that point. Someone before me had taken a lot of liberty in applying it without understanding it.

    2. You are right. Currently we have a transaction log backup every hour; I would move it to every 10-15 minutes, but I have to wait until we get upgraded from Developer Edition to Enterprise Edition. (Yes, crazy isn't it, but we have a stingy budget here.)

    3. I know the log usage % is credible data. It would be great if you could mention what a good recommended threshold is for log usage %.

    4. But say the threshold reaches 80% log usage, and DBCC LOGINFO(<database name>) shows only one "2" in the "status" column, which means just the start of the active log. Do I still need to shrink the database?

    In between, it would be great if you could share your script to monitor log usage.

    I hope I am clear in stating my questions. I am also aware most of you may have answered these questions before, and I have read a few posts but did not get a clear understanding of this.

    Cheers
    Sat
  4. MohammedU New Member

    Log usage depends on database activity, size, db maintenance, log backups, and checkpoints...

    I don't see any reason to shrink the log...

    Use the following script to check the log usage...

    CREATE TABLE #tlogspace (
    tdbname varchar (128) NOT NULL ,   -- varchar(128) rather than char(30), so longer database names don't truncate
    tlogsize float NOT NULL ,          -- log size in MB
    tlogspaceused float NOT NULL ,     -- percent of the log in use
    tstatus tinyint NOT NULL,
    tlogspaceused_MB as (tlogsize*tlogspaceused)/100   -- computed column: used log space in MB
    )

    insert into #tlogspace (tdbname, tlogsize, tlogspaceused, tstatus)
    exec ('dbcc sqlperf(logspace)')

    select * from #tlogspace
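    For example, to flag only the databases whose logs are filling up (the 70% threshold is illustrative):

    select tdbname, tlogsize, tlogspaceused, tlogspaceused_MB
    from #tlogspace
    where tlogspaceused > 70
    order by tlogspaceused desc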


    MohammedU.
    Moderator
    SQL-Server-Performance.com

  5. techbabu303 New Member

    Hi,

    Mohammed, I appreciate the script. Can you shed some light on question 4 above, though it may sound like a stupid open-ended question?

    4. Say the threshold reaches 80% log usage, but DBCC LOGINFO(<database name>) shows only one "2" in the "status" column, which means just the start of the active log. Do I still need to shrink the database?

    Cheers
    Sat
  6. MohammedU New Member

    When you see 80% log usage, run the BACKUP LOG job if the database is in FULL recovery mode; if it is in SIMPLE recovery mode, run BACKUP LOG WITH TRUNCATE_ONLY to remove the inactive log...

    Then, if you want to shrink the log file to a smaller size, use DBCC SHRINKFILE. But I would not advise running SHRINKFILE all the time, because it impacts your server performance: the log will just have to auto-grow again while active transactions are running...

    Note: To keep the log file smaller, you can change your tlog backup schedule from every 60 minutes to every 30 minutes...
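    A minimal sketch of that sequence (the database name, backup path, logical log file name, and target size are all illustrative):

    -- FULL recovery: back up the log so the inactive portion becomes reusable.
    BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn'

    -- Then, only if you really need the disk space back, shrink the log file once.
    USE MyDB
    DBCC SHRINKFILE ('MyDB_Log', 100)   -- logical file name, target size in MB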


    MohammedU.
    Moderator
    SQL-Server-Performance.com

  7. techbabu303 New Member

    Thx Mohamed for the inputs.

    I think I am on my way.

    Cheers
    Sat
  8. sql_er New Member

    Sat,

    I am actually not familiar with the DBCC LOGINFO(dbname) command, but am looking into it now.

    As for the script, it is actually partially not mine (so I don't take credit for all of it):

    For this "Monitoring Transaction Log Size" process to work, we need 4 items, all included below:

    1. Table to hold transaction log history
    2. Stored procedure which performs transaction log backup
    3. Stored procedure, which, once executed, inserts the transaction log size information of every database into the transactionlog_history table
    4. A job that runs every 5 minutes and backs up the transaction log if its physical size exceeds a certain predefined threshold and it uses more than a certain percentage of its file size.

    1.
    /*
    This table is created to hold the transaction
    log history information
    */

    CREATE TABLE [dbo].[Logspace_History](
    [UID] [int] IDENTITY(1,1) NOT NULL,
    [DBName] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [LogSize] [float] NULL,
    [PrcntUsed] [float] NULL,
    [status] [int] NULL,
    [CreateTS] [datetime] NULL DEFAULT (getdate()),
    CONSTRAINT [PK_Logspace] PRIMARY KEY CLUSTERED
    (
    [UID] ASC
    ) ON [Index1]
    ) ON [Data1]   -- [Index1] and [Data1] are filegroups in our database; adjust to your environment

    2. Backup trans log

    CREATE PROCEDURE BackupTransLog
    @dbName VARCHAR(255) ,
    @physicalTLogBackupFileName VARCHAR(255)

    AS

    BEGIN

    DECLARE @physicalTLogBackupFileNameFinal VARCHAR(255)
    DECLARE @DateTime VARCHAR(255)
    DECLARE @logicalTLogBackupFileName VARCHAR(255)
    DECLARE @backupDeviceName VARCHAR(255)

    SET @logicalTLogBackupFileName = @dbName+'_Log'
    SET @backupDeviceName = @dbName+'LogBackup'

    -- Build a timestamp like 200704101530 (yyyymmddhhmm) by stripping '-', ' ', and ':' from the style-120 datetime
    SET @DateTime = (SELECT REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(17), GETDATE(), 120), '-', ''), ' ', ''), ':', ''))

    SET @physicalTLogBackupFileNameFinal = @physicalTLogBackupFileName + @dbname+'_tlog_' + @DateTime + '.TRN'

    -- Drop and recreate the backup device so it points at the new timestamped file
    IF EXISTS(SELECT TOP 1 * FROM master..sysdevices WHERE name = @backupDeviceName)
    BEGIN
    EXEC sp_dropdevice @backupDeviceName
    END

    EXEC sp_addumpdevice 'DISK', @backupDeviceName, @physicalTLogBackupFileNameFinal

    BACKUP LOG @dbName TO @backupDeviceName

    END

    GO
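    A quick usage sketch (the database name and path are illustrative; note the trailing backslash, since the procedure concatenates the file name directly onto the location):

    EXEC BackupTransLog
    @dbName = 'MyDBName',
    @physicalTLogBackupFileName = 'E:\SQLDUMP\'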


    3. Insert into Trans log history table and backup, if necessary:

    CREATE PROCEDURE [dbo].[INS_LOGSPACE](@maxAllowSize FLOAT, @maxAllowPerc FLOAT, @DBName VARCHAR(255) , @TransLogBackupLocation VARCHAR(255))
    AS

    BEGIN
    /*
    This stored procedure, once executed, inserts the current information about
    the transaction log sizes of every database, into the logspace_history table, and backs up the transaction log if its size > some threshold and %used > some other threshold
    */

    DECLARE @logsize FLOAT
    DECLARE @prcntused FLOAT

    CREATE TABLE #temp_logspace
    (
    DBName varchar( 100),
    LogSize float,
    PrcntUsed float,
    status int
    )

    -- populate temp table with trans log info of every db
    INSERT INTO #temp_logspace
    EXEC ('DBCC sqlperf( logspace)')

    -- transfer log size info from temp table into permanent history table
    INSERT INTO logspace_History(DBNAME, LOGSIZE, PRCNTUSED, STATUS)
    SELECT DBNAME, LOGSIZE, PRCNTUSED, STATUS
    FROM #temp_logspace
    WHERE DBNAME NOT IN('PUBS', 'NORTHWIND')

    SET @logsize = (SELECT LOGSIZE FROM #temp_logspace WHERE DBNAME = @DBName)
    SET @prcntused = (SELECT PRCNTUSED FROM #temp_logspace WHERE DBNAME = @DBName)

    /*
    If log size of database in question is greater than a certain size and
    filled up above a certain threshold, backup the database in question,
    and store the trans log information before and after
    */
    IF(@logsize > @maxAllowSize AND @prcntused > @maxAllowPerc)
    BEGIN
    EXEC BACKUPTRANSLOG @DBName, @TransLogBackupLocation

    DELETE FROM #temp_logspace

    INSERT INTO #temp_logspace
    EXEC ('DBCC sqlperf( logspace)')

    INSERT INTO logspace_History(DBNAME, LOGSIZE, PRCNTUSED, STATUS)
    SELECT DBNAME, LOGSIZE, PRCNTUSED, STATUS
    FROM #temp_logspace
    WHERE DBNAME NOT IN('PUBS', 'NORTHWIND')
    END

    -- Clean up
    DROP TABLE #temp_logspace

    END


    4. Job: just set up a job to execute the stored procedure from item 3 every 5 minutes or so ...

    Script to execute in the job, for example: EXEC INS_LOGSPACE @maxAllowSize = 6000.0, @maxAllowPerc = 30.0, @DBName = 'MyDBName', @TransLogBackupLocation = 'E:\SQLDUMP\'
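    If it helps, one way to wire that up as a SQL Agent job (the job name, schedule, and command are illustrative; run in msdb and point @database_name at the database that holds INS_LOGSPACE):

    USE msdb
    GO
    EXEC sp_add_job @job_name = 'Monitor TLog Size'
    EXEC sp_add_jobstep @job_name = 'Monitor TLog Size',
    @step_name = 'Check and back up',
    @subsystem = 'TSQL',
    @database_name = 'master',   -- use the database that holds INS_LOGSPACE
    @command = 'EXEC INS_LOGSPACE @maxAllowSize = 6000.0, @maxAllowPerc = 30.0, @DBName = ''MyDBName'', @TransLogBackupLocation = ''E:\SQLDUMP\'''
    EXEC sp_add_jobschedule @job_name = 'Monitor TLog Size',
    @name = 'Every 5 minutes',
    @freq_type = 4,             -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,      -- units of minutes
    @freq_subday_interval = 5   -- every 5 minutes
    EXEC sp_add_jobserver @job_name = 'Monitor TLog Size'   -- target the local server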



    Let me know if you have questions, or if anyone has suggestions for improvements.

    Thanks
