SQL Server Performance

Database and Logs growing quickly

Discussion in 'Performance Tuning for DBAs' started by pohlsch77, Nov 30, 2004.

  1. pohlsch77 New Member

    I have two databases in SQL Server that are growing very quickly, and the transaction logs for these databases are growing quickly as well. I don't have autoshrink turned on, as I have heard it is bad for file fragmentation. What are some things I can do to control the database size and TX log size? One database is almost 20GB, and that eats up a lot of disk space when I do backups of the logs and the database. Thanks for any help.
  2. mmarovic Active Member

    I guess your databases use the full recovery model. You should regularly back up the databases (e.g. once per day) and probably the transaction logs (the frequency depends on your usage pattern; it can be every 5 minutes or hourly). Log backups mark the completed (inactive) portion of the log as reusable, so new transactions can reuse that space. For more details read BOL.
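
    For illustration, a minimal sketch of the two backup types (the database name and file paths below are placeholders, not from this thread):

    -- Full database backup, e.g. nightly
    BACKUP DATABASE YourDb
        TO DISK = 'D:\Backups\YourDb_full.bak'
        WITH INIT   -- overwrite the previous backup file instead of appending

    -- Transaction log backup, e.g. hourly; this marks the inactive part
    -- of the log as reusable so the log file stops growing
    BACKUP LOG YourDb
        TO DISK = 'D:\Backups\YourDb_log.trn'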
  3. satya Moderator

    Are any of the database maintenance plans enabled for daily execution?
    Check what kind of activity there is on the database. As suggested, you can schedule transaction log backups in order to reduce the logical size of the Tlog and keep it minimal.
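
    For example, a quick way to see what is running against a database (illustrative only; the database name is a placeholder):

    -- Who is connected and what they are doing
    EXEC sp_who2

    -- Sessions working in a specific database
    SELECT spid, loginame, hostname, program_name, cmd, cpu, physical_io
    FROM master..sysprocesses
    WHERE dbid = DB_ID('YourDb')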

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  4. pohlsch77 New Member

    How can you tell what activity is going on in the database? Sorry, I am very new to this. Also, can I shrink a database down without losing any tables, etc.? I guess I am asking if there are any consequences of shrinking the database size down. We have a 20GB database that I want to shrink down, because when I do backups of the database and TX log, it also eats up my disk space.


  5. Luis Martin Moderator

    Could you explain what kind of backups you run?
    BTW: with a shrink you don't lose anything; you lose the defrag work, but no data.


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  6. pohlsch77 New Member

    I am running a database maintenance plan that backs up the database and the TX log to disk every night. In turn, with a 20GB database and TX log, that eats up a considerable amount of disk space. These would be full online backups, because the .bak file for the database is 20GB as well.






  7. Luis Martin Moderator

    OK. If you are doing a full backup once a day, the recovery model could be set to simple.
    That way, the log will not grow to those values.

    The full recovery model is necessary when you back up the transaction log frequently, say every hour.
    Of course, with the simple model, if there is a crash you can only restore the last full backup.
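
    As a rough sketch (the database name and the logical log file name are placeholders; check yours with sp_helpfile):

    -- Switch to the simple recovery model
    ALTER DATABASE YourDb SET RECOVERY SIMPLE

    -- Optionally shrink the now mostly inactive log file to a sensible size (MB)
    USE YourDb
    DBCC SHRINKFILE (YourDb_log, 500)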


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  8. mmarovic Active Member

    When you run regular backups, you can expect your transaction log to stay at the size it reaches during your peak activity period. Maybe data modification activity on your server has been increasing recently. You can try:

    1. A one-time db shrink. Before that, check index fragmentation and see whether rebuilding fragmented indexes before the shrink can save you some space (a sketch follows below).
    2. Increasing the frequency of transaction log backups to keep the (active part of your) transaction log file smaller.

    Another possibility is the one mentioned by Luis, but I am afraid your tran log might be backed up for a good reason. I mean: is it acceptable to lose all the data changes after the last backup? If it is, then go for the simple recovery model.
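
    An illustrative sketch of the check-rebuild-shrink sequence (table and database names are placeholders):

    -- 1. Check fragmentation for a large table (SQL Server 2000 syntax)
    DBCC SHOWCONTIG ('YourLargeTable')

    -- 2. Rebuild its indexes if scan density is poor
    DBCC DBREINDEX ('YourLargeTable')

    -- 3. One-time shrink of the whole database, leaving 10% free space
    DBCC SHRINKDATABASE (YourDb, 10)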

  9. satya Moderator

    What kind of activity do you have on that database? In other words, what is the purpose of the application?
    Is it a banking application or a normal back-office application that is connecting to the database?

    If there are a lot of queries with few inserts, then you can deploy the SIMPLE recovery model as suggested. Otherwise the FULL recovery model is suggested so that you can recover the database up to the point of failure.

    Refer to this link for further information on monitoring database growth: http://vyaskn.tripod.com/track_sql_database_file_growth.htm
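
    For a quick look at how the files are growing, a small example (run it in the database you want to watch; sysfiles reports sizes in 8 KB pages):

    -- Current size and used space per file, in MB
    SELECT name,
           size / 128                            AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sysfiles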

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  10. pohlsch77 New Member

    I want to do a shrink on this database that I have been asking questions about. There are a lot of inserts into the database every day, and that is why it grows so much. What is the best way to do a shrink? If there are commands, those commands would be appreciated.

    Thanks!!
  11. pohlsch77 New Member

    Also, how far down can I shrink this database? I need to free up actual usable disk space on the partition that the database is on.


  12. satya Moderator

    I don't think those inserts alone are creating such a size. BTW, do you store any images in the databases?
    If so, it is better to move them to a file location and store only the pointer in the database.

    You should also clarify which recovery model is deployed on the database and what kind of database maintenance activities are scheduled.

    Ensure frequent transaction log backups are performed, and as an exercise you may need to size the database for the next 2 years at least.

    You can use DBCC SHRINKFILE to shrink the data files, but you may lose performance if a reindexing job is scheduled as part of the optimization tasks.
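
    As an illustrative sketch (find your actual logical file names with sp_helpfile first; the names and target sizes below are only examples):

    USE csdata
    EXEC sp_helpfile          -- lists logical file names and physical paths

    -- Shrink the data file and the log file; target sizes are in MB,
    -- and you cannot shrink below the space actually in use
    DBCC SHRINKFILE (csdata_data, 15000)
    DBCC SHRINKFILE (csdata_log, 500)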

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  13. pohlsch77 New Member

    So if I use the DBCC SHRINKFILE command to shrink the data file for the database called "csdata", would the syntax be DBCC SHRINKFILE (csdata_data, 200)? The data file in this database is what is getting so large. I need to shrink that file and reclaim the disk space it is taking up. Thanks for your help.






  14. Luis Martin Moderator

    You can use that command, but the question is why it grows so much.
    If you shrink and afterwards rebuild the indexes, the data will grow again.


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  15. pohlsch77 New Member

    These databases get customer files dumped into them that can have millions of records. These text files are loaded into our SQL database, and the load creates a whole bunch of tables; the more records in the file, the more tables it creates. There is no good way for me to estimate how big the database can get, because these files vary in size and we need to run them through the SQL databases for production. We do direct mailing here, and these files contain things like names, addresses, cities, streets, and ZIP codes. I looked at doing the shrink in Enterprise Manager, and it will only let me shrink to a certain size. I need to be able to shrink beyond the minimum size it shows in Enterprise Manager.





  16. Twan New Member

    Hi ya,

    You can shrink the file size only down to the space that has been allocated to the tables. If you have lots of space reserved but unused, then you may want to check for fragmentation within the tables to rectify that. You say that your database grows due to large numbers of inserts, but if you're looking at shrinking files, then the data must get deleted again at some point?

    Have you thought about creating a separate database for each set of files that you're importing? Once you've massaged the data and put it into your own database structure, you can delete that database again. It would be worth putting this 'intermediary type' database into simple mode to avoid huge logs.
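
    A minimal sketch of that staging-database idea (the database name is invented, and the bulk load step is only hinted at):

    -- Throwaway staging database for one customer file
    CREATE DATABASE ImportStage
    GO
    ALTER DATABASE ImportStage SET RECOVERY SIMPLE   -- keep logging minimal for the bulk load
    GO

    -- ...bulk load the text file into ImportStage, clean and massage the data,
    -- then copy the result into the production database...

    DROP DATABASE ImportStage   -- discard once the import is done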

    Cheers
    Twan
  17. mmarovic Active Member

    I have to support similar functionality, and our solution is:

    We have a separate staging db in simple mode. We neither drop nor shrink this db, because the space will be used again next time anyway and we don't want to spend resources reacquiring it. We also don't want fragmentation, so we set the db size to the maximum estimated size. Having the db in simple mode reduces the log size (compared with full mode). When we use the data in the staging db to insert/update the main db, we batch the changes. This way we don't have long-running transactions, so when the next transaction log backup of the main db starts, it is able to mark most of the log produced as inactive (with the exception of the current, not too long, transaction). This way we keep the main db log small enough too.
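
    A rough sketch of that batching idea, using SET ROWCOUNT (SQL Server 2000 style); all database, table, and column names here are invented:

    SET ROWCOUNT 10000          -- limit each INSERT to 10,000 rows

    WHILE 1 = 1
    BEGIN
        INSERT INTO MainDb.dbo.Customers (Id, Name, Address, City, Zip)
        SELECT s.Id, s.Name, s.Address, s.City, s.Zip
        FROM   StageDb.dbo.Customers s
        WHERE  NOT EXISTS (SELECT 1 FROM MainDb.dbo.Customers c WHERE c.Id = s.Id)

        IF @@ROWCOUNT = 0 BREAK -- nothing left to copy

        -- each short batch commits on its own, so a log backup running
        -- in between can mark that log space inactive
    END

    SET ROWCOUNT 0              -- restore the default
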
  18. pohlsch77 New Member

    The tables that are created in the database when customer files are run through it can be dropped after a month. We deleted a bunch of tables two days ago, and the size of the databases has not grown since. So that looks like our clue to controlling the size of the database that way, because we are not sure we want to put a maximum size on it; if the database reached that size, it would stop the files from running through. Another quick question: does SQL Server work like MS Exchange, in that if you delete a bunch of items and free up space in the database, that space just gets reused? I am guessing this is how SQL Server works as well, since I have not seen the database file grow since we removed a bunch of tables and data.
  19. mmarovic Active Member

    That was a good guess. Yes, SQL Server reuses the space.
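
    You can see how much of the allocated space is actually in use (and therefore available for reuse) with sp_spaceused; just an example, the table name is a placeholder:

    EXEC sp_spaceused                        -- database size and unallocated space
    EXEC sp_spaceused 'dbo.SomeLargeTable'   -- reserved/data/index/unused for one table
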
  20. pohlsch77 New Member

    My transaction logs for the databases continue to grow, though. What is the best method for controlling the growth of the transaction log?
  21. mmarovic Active Member

    Have you tried the solution I mentioned?
  22. satya Moderator

    The first thing you need to do is perform regular transaction log backups, which take care of the logical size of the transaction log file, and ensure the database maintenance tasks are scheduled for optimum performance.

    Use DBCC SQLPERF(LOGSPACE) to track the percentage of log space used over a period of time.

    You should also consider the tips and tricks given in the earlier replies to your question.
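
    A small sketch of capturing that output over time (the history table name is invented; the DBCC command is wrapped in EXEC so its result set can be inserted):

    CREATE TABLE LogSpaceHistory (
        DatabaseName    sysname,
        LogSizeMB       float,
        LogSpaceUsedPct float,
        Status          int,
        CapturedAt      datetime DEFAULT GETDATE()
    )

    -- Schedule this as a job step to build a growth history
    INSERT INTO LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
    EXEC ('DBCC SQLPERF(LOGSPACE)')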

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  23. pohlsch77 New Member

    If you back up transaction logs every hour during production, that adds up to a lot of files. If the transaction log backups are a day or two old, then in theory you do not want to keep log files that old. So do you just delete the old ones then?
  24. pohlsch77 New Member

    Also, what are the hazards of regularly having to do a DBCC SHRINKFILE on the TX logs? From what I understand, it wipes out everything in the log except the active transactions, correct? Sorry for all these questions, but I am very new to this and trying to understand the correct way of maintaining these databases and log files.
  25. satya Moderator

    What are the hazards of regularly having to do a DBCC SHRINKFILE on the TX logs?
    Nothing specifically, but if the database is updated regularly there will be a performance hit from the Tlog having to grow again and again. To avoid this, it is better to keep transactions controlled and broken into small pieces.

    How do you perform the backup of the transaction log: are you using the WITH INIT clause or appending the logs each time?

    Once you perform the BACKUP LOG statement, SQL Server writes the transactions to the backup and the logical space in the Tlog is cleared for the next set of transactions on the database, so the whole scenario depends on the transaction log backup schedule.

    Ensure the drive where the Tlog file is stored has enough free disk space to avoid database downtime.
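
    For example (the database name comes from earlier in the thread; the paths and the logical log file name are placeholders):

    -- Overwrite the same backup file every time
    BACKUP LOG csdata TO DISK = 'D:\Backups\csdata_log.trn' WITH INIT

    -- Or append each log backup to the existing backup set
    BACKUP LOG csdata TO DISK = 'D:\Backups\csdata_log.trn' WITH NOINIT

    -- If the physical log file itself must be reduced after a log backup
    -- (the logical file name is a guess; check with sp_helpfile):
    USE csdata
    DBCC SHRINKFILE (csdata_log, 500)   -- target size in MB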


    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  26. pohlsch77 New Member

    I have a TX log backup maintenance plan set to run each day that backs up the TX log every 8 hours. I adjusted it to back up every 8 hours due to the recommendations I received here about doing regular TX log backups. Each night I do a full backup to tape of these databases with Veritas Backup Exec 9.1. So I have TX log backups during the day, and if for some reason I need to restore a database, I can do it from tape. In the maintenance plan, I have it set to remove TX log backup files older than 2 days. I just set this plan up yesterday, so I should see TX log files two days old or older being removed from the backup directory then?
  27. satya Moderator

    I suggest reducing the interval to 4 hours at least and testing the process; more frequent Tlog backups mean fewer issues when recovering a database to a point in time after a failure.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
