Large Database Backup and Recovery | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Large Database Backup and Recovery

Hello, I am currently in charge of researching a new backup/recovery strategy for our company database. The database is currently 80GB, it resides on a quad-processor box, and I am supposed to plan for growth to 500GB over the next 4 years.
C: operating system > RAID 0
D: database > RAID 5
L: log > RAID 1

We back up the database to file, and then back up the file to tape. The database backup job takes a little over an hour. The restore takes almost 4 hrs. If we have to retrieve a backup from tape, we are talking 4 hrs to restore the file from tape and 4 hrs to restore the file to the database (we don't restore directly from tape). I am thinking of dividing the database into three filegroups, each on a separate RAID 5 array. I am having trouble locating documentation on dealing with SQL databases of this size, so any information or recommendations would be greatly appreciated.
–NAS or SAN is currently not in the budget.
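For anyone trying the filegroup split described above, here is a minimal T-SQL sketch of how data can be spread across separate arrays. The database name, file paths, drive letters, and sizes are all assumptions for illustration, not the poster's actual layout:

```sql
-- Add two new filegroups to spread data across separate RAID 5 arrays
ALTER DATABASE CompanyDB ADD FILEGROUP FG2;
ALTER DATABASE CompanyDB ADD FILEGROUP FG3;

-- Place one data file per filegroup on its own array (paths are hypothetical)
ALTER DATABASE CompanyDB
ADD FILE (NAME = CompanyDB_FG2, FILENAME = 'E:\Data\CompanyDB_FG2.ndf', SIZE = 20GB)
TO FILEGROUP FG2;

ALTER DATABASE CompanyDB
ADD FILE (NAME = CompanyDB_FG3, FILENAME = 'F:\Data\CompanyDB_FG3.ndf', SIZE = 20GB)
TO FILEGROUP FG3;
```

Note that adding filegroups does not move existing data by itself; tables have to be created on (or rebuilt onto) the new filegroups, e.g. by rebuilding their clustered indexes with an ON FG2 clause.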
I have similar problems with my production databases (15 databases of 10GB each) and I'm considering using a compression tool like SQLZIP to reduce the restore time. SQLZIP reduces the overall size of the backup file, which reduces both the time taken to restore the file from tape and the time taken to copy the file to the database server over the network. And to reduce the time taken for backup, I've changed my backup strategy to take one full backup once a week and a differential backup every night. I believe filegroup-level backup may reduce the backup time and the size of individual backup files, but I don't believe it's going to reduce the restore time, because you have to restore all the backup files anyway.
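The weekly-full-plus-nightly-differential strategy above looks roughly like this in T-SQL (database name and backup paths are assumptions):

```sql
-- Weekly full backup (e.g. Sunday night); WITH INIT overwrites the old file
BACKUP DATABASE CompanyDB
TO DISK = 'G:\Backup\CompanyDB_full.bak'
WITH INIT;

-- Nightly differential: backs up only extents changed since the last full
BACKUP DATABASE CompanyDB
TO DISK = 'G:\Backup\CompanyDB_diff.bak'
WITH DIFFERENTIAL, INIT;
```

A restore then needs only the last full backup plus the most recent differential, rather than a week's worth of nightly fulls.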
Where do you write the backups to disk? If at all possible, it is best to put these onto a separate volume on their own RAID channel. Write speed is imperative, so RAID 0 or RAID 1+0 would likely be best. I personally hate RAID 5 for database volumes, and try to opt for RAID 1+0 if the budget allows it. This would likely help the performance of the restores as well. We have generally found that dividing large database files into 3 to 5 separate physical files helps with performance, and that may have some effect on your restores too.
And I do have these databases on an EMC SAN. But that's not helping me avoid taking these backups, because in the event of any disk failure I can't afford to lose more than an hour's worth of data, and the only way I can achieve this is by taking hourly log backups.
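An hourly log backup of the kind described here is a one-liner; a sketch, again with a hypothetical database name and path (the database must be in full or bulk-logged recovery mode for log backups to work):

```sql
-- Hourly transaction log backup; caps data loss at roughly one hour
BACKUP LOG CompanyDB
TO DISK = 'G:\Backup\CompanyDB_log.trn'
WITH NOINIT;  -- append, so one file accumulates the day's log chain
```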
Thanks for the input. I will try to answer the questions I have received. I write my backups to the RAID 5 arrays for the data and the log. I have read a great deal on the benefits of RAID 10, but my senior DBA seems reluctant to switch; I'm not sure if it is worth the hassle of the change. I was under the impression that by using multiple filegroups, each on different arrays, I could restore from them at the same time. Please let me know. We have considered differentials, but worry about getting a corrupt diff and losing an entire week. Thoughts, please.
impression that by using multiple filegroups each on different arrays, that I could restore from them at the same time.
Yes, you can do that with the normal RESTORE statement.
_________
Satya SKJ
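For reference, filegroup-level backup and restore with the standard statements looks roughly like this; names and paths are hypothetical, and this sketch does not imply the filegroup restores run in parallel:

```sql
-- Back up a single filegroup
BACKUP DATABASE CompanyDB
FILEGROUP = 'FG2'
TO DISK = 'G:\Backup\CompanyDB_FG2.bak';

-- Restore just that filegroup, leaving the database in a restoring state
RESTORE DATABASE CompanyDB
FILEGROUP = 'FG2'
FROM DISK = 'G:\Backup\CompanyDB_FG2.bak'
WITH NORECOVERY;

-- Roll the restored filegroup forward with the log backups, then recover
RESTORE LOG CompanyDB
FROM DISK = 'G:\Backup\CompanyDB_log.trn'
WITH RECOVERY;
```

A filegroup restore still has to be brought consistent with the rest of the database via log backups, which is one reason it doesn't always shorten the total restore time.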

One last question: if you were a DBA for my company and you had to plan for a 500 GB SQL database, what backup plan would you recommend? It is critical that I can restore the database in less than 4 hrs.
I would thoroughly follow KB article Q307775 to maintain a DR facility. As far as backup strategy is concerned, I would take a T-log backup every 30 mins (if not every 15 mins) and a full backup every night.
_________
Satya SKJ
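The restore path for a nightly-full-plus-frequent-log-backup strategy like this is a fixed sequence; a sketch with hypothetical names and paths:

```sql
-- 1. Restore last night's full backup, leaving the database restoring
RESTORE DATABASE CompanyDB
FROM DISK = 'G:\Backup\CompanyDB_full.bak'
WITH NORECOVERY;

-- 2. Replay each transaction log backup taken since the full, in order
RESTORE LOG CompanyDB
FROM DISK = 'G:\Backup\CompanyDB_log_0800.trn'
WITH NORECOVERY;

RESTORE LOG CompanyDB
FROM DISK = 'G:\Backup\CompanyDB_log_0830.trn'
WITH NORECOVERY;

-- 3. Recover and bring the database online
RESTORE DATABASE CompanyDB WITH RECOVERY;
```

The total restore time is therefore the full restore plus every log replay since it, which is worth rehearsing against the 4-hour target.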

Think about using the SqlLite tool from dbassociate. It works great: not only does it reduce the backup size by 50-80%, it also reduces the backup time by about 50%. It is not very expensive and is easy to manage. I tested this tool on one of our test systems with a 28 GB database; the backup size was reduced from 26 GB to 5.1 GB, and the backup time also dropped from 30 minutes to 18 minutes.
In a RAID 5 environment, you don't gain much performance by using filegroups.
In my case we were not able to spend the money on the tool, so I came up with the idea of creating another database on the same server, which now holds the historical data only. I picked some of the historical tables from the production system, moved them to the new database, and created views with the same names in the production database, saving me the hassle of asking developers to change their code or even the stored procedures. This way the 28 GB database was split into 3 GB + 25 GB, with the 3 GB part getting backed up every day plus every hour, and the 25 GB historical piece getting backed up only once a week. So far it's working great.
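The archive-and-view trick described here can be sketched in T-SQL like this; the database, table, and schema names are made-up placeholders, not the poster's actual objects:

```sql
-- Copy a historical table into the archive database
SELECT *
INTO CompanyDB_History.dbo.OrderHistory
FROM CompanyDB.dbo.OrderHistory;

-- Drop the original once the copy is verified
DROP TABLE CompanyDB.dbo.OrderHistory;
GO

-- Re-create the old name as a view, so existing code and procedures
-- keep working without modification
CREATE VIEW dbo.OrderHistory
AS
SELECT * FROM CompanyDB_History.dbo.OrderHistory;
GO
```

The payoff is in the backup schedule: only the small active database needs the frequent full and log backups, while the large, mostly static archive is backed up weekly.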
True, but make sure the third-party tool fits in your environment; sometimes these tools prove to be another overhead on big databases.
_________
Satya SKJ

Hi guys, I have been reading this discussion and find it very useful for the related subject. The only part I do not understand is RAID: could anyone explain RAID in detail, and what RAID configuration would be ideal for a 20 GB database with incremental data of 1 GB every day and heavy data manipulation? I also want to know how to install and configure SQL Server 2000 on a server with a specific RAID system.
Thanks....

Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
This posting is provided "AS IS" with no rights for the sake of knowledge sharing.