Should I change File Growth size? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Should I change File Growth size?

I’m inserting 4 GB of data into a permanent table. Should I change the file growth size for both the data and log files before executing the INSERT statement? Is it correct that if I insert 4 GB of data, the log will also grow by 4 GB? Right now I’m seeing wait type WRITELOG in Process Info. I have already set the database to SIMPLE recovery. What else can I do besides the following: adding additional I/O bandwidth, balancing I/O across other drives, or placing the transaction log on its own drive? I got the above information from http://sqldev.net/misc/waittypes.htm.
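For reference, pre-sizing both files before the load can be done with ALTER DATABASE. A minimal sketch; the database and logical file names (MyDB, MyDB_Data, MyDB_Log) and the sizes are placeholders to adjust for your own system:

```sql
-- Pre-grow the data and log files before the bulk insert,
-- so SQL Server does not have to autogrow repeatedly mid-load.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 8192MB, FILEGROWTH = 1024MB)

ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Log, SIZE = 4096MB, FILEGROWTH = 512MB)
```

Growing the files once up front replaces many small autogrow pauses during the insert with a single allocation before it starts.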

If the files are set to autogrow, it is not necessary to grow them manually.
Balancing I/O across other drives is good if you are talking about physical drives. In that case you can put tempdb and the log on a different drive than the data drive.
Luis Martin
Moderator
SQL-Server-Performance.com
Hi Luis, since I’m inserting 4 GB of data, if the database is set to autogrow but the growth increment is small, like 100 MB, I guess this will hurt performance since SQL Server needs to keep allocating space during the insert. If I set the growth increment to something like 2 GB or even 4 GB temporarily, I guess this may speed things up. Is there any performance tuning I can do for this simple insert:

insert into table1
select * from table2 order by column2, column3

Does it matter whether table2 is a permanent table or a temporary table for this particular insert statement if there is only one hard drive? If table1 and table2 are both permanent tables, I guess where tempdb is located should not matter.
You are right about space. I don’t know how big your database is, but if you have 10% automatic growth, do the math to find out how many MB it will grow the next time it needs space.
If the database is small and you have no problem with physical space, then grow it manually to fit your needs. About the tables, the question is whether that SELECT will execute more than once. If not, I prefer to have a permanent table to work with (in terms of indexes and so on), whether you have one drive or more. About tempdb: it is not used only by this query. Tempdb is used by the whole system, so I prefer to place it on a different drive. Luis Martin
Moderator
SQL-Server-Performance.com
To answer the original question: it is advisable to grow the database in advance rather than letting autogrow slow down the insert operation. I personally recommend this, as I want to decrease the time of the operation as much as possible, which decreases the possibility of the operation failing for any reason. About the log file size: if you keep taking regular transaction log backups and the whole operation is made up of small transactions, the log file will not grow that much. A few things that can be done:
* Place the tables and the log file on as many physical drives as possible.
* Drop any indexes on the destination table.
* Do the insert operation in batches instead of one big batch; this keeps each transaction, and hence the transaction log space, small, provided you are taking backups.
* Since there is an ORDER BY operation, keep tempdb on a separate drive. Having table2 as a permanent table should help, as processing data from temp tables is not as efficient. Gaurav
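The batching advice above can be sketched roughly as follows. Table and column names are placeholders, and it assumes table2 has an integer key column (here called id) that can be used to walk the data in ranges:

```sql
-- Insert in batches of ~100,000 rows instead of one 4 GB transaction,
-- so each transaction (and its log usage) stays small.
DECLARE @lo int, @hi int, @batch int
SELECT @lo = MIN(id), @hi = MAX(id) FROM table2
SET @batch = 100000

WHILE @lo <= @hi
BEGIN
    INSERT INTO table1
    SELECT * FROM table2
    WHERE id >= @lo AND id < @lo + @batch
    ORDER BY column2, column3

    -- Under FULL or BULK_LOGGED recovery, back up the log between batches
    -- to keep it from growing (backup path is a placeholder):
    -- BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_log.bak'

    SET @lo = @lo + @batch
END
```

Under SIMPLE recovery the explicit log backups are unnecessary; the committed batches let the log space be reused automatically.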
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
gaurav_bindlish, I guess you mean I should execute BACKUP LOG … WITH TRUNCATE_ONLY (my understanding is that NO_LOG and TRUNCATE_ONLY are the same). Which one has better performance:
1. Insert 4 GB of data and then BACKUP LOG … WITH TRUNCATE_ONLY
2. Insert some data and then BACKUP LOG … WITH TRUNCATE_ONLY, repeating the steps until all 4 GB of data is inserted.
Are you saying that the ORDER BY clause will cause SQL Server to use tempdb? Does this depend on whether the machine has enough RAM and virtual memory? Does SELECT * from a permanent table run faster than SELECT * from a temporary table, assuming the permanent table and temporary table are the same in structure and size? If yes, why?
I guess you mean I should execute BACKUP LOG … WITH TRUNCATE_ONLY (my understanding is that NO_LOG and TRUNCATE_ONLY are the same).
I am not saying to use the TRUNCATE_ONLY option. This option is useful when you don’t want to maintain the log file and don’t care that you would not be able to restore the database to the point of failure. If that is the case, then use the option.
Which one has better performance: 1. Insert 4 GB of data and then BACKUP LOG … WITH TRUNCATE_ONLY
2. Insert some data and then BACKUP LOG … WITH TRUNCATE_ONLY, repeating until all 4 GB of data is inserted.
Inserting data in batches and taking transaction log backups will guarantee that the log file size does not increase too much. As file expansion is a big overhead, doing the work in batches is definitely going to help.
Are you saying that the ORDER BY clause will cause SQL Server to use tempdb? Does this depend on whether the machine has enough RAM and virtual memory?
SQL Server uses tempdb for sorting and grouping operations. If the data is too much for SQL Server to handle in memory, it will use tempdb, in which case physical memory will make a difference. SQL Server does not use the page file until it is under very severe memory pressure.
Does SELECT * from a permanent table run faster than SELECT * from a temporary table, assuming the permanent table and temporary table are the same in structure and size? If yes, why?
In my tests, I have seen operations on tempdb tables perform significantly worse than the same operations on tables in permanent databases. However, I don’t have an explanation why. Gaurav
Moderator
Gaurav, for my case I don’t care about failure, so I use TRUNCATE_ONLY. I don’t quite understand what you mean by the statement in italics: SQL Server uses tempdb for sorting and grouping operations. If the data is too much for SQL Server to handle in memory, it will use tempdb, in which case physical memory will make a difference. SQL Server does not use the page file until it is under very severe memory pressure.

The parts in italics are your questions from the post before.
Sorry, man. Luis Martin
Moderator
SQL-Server-Performance.com
Luis, the entire paragraph is from Gaurav’s reply. I just italicized one statement out of the paragraph so he knows which statement I’m referring to.
Okay, here is the explanation. If the data SQL Server needs to sort is of a size greater than the buffer pool, the data will be written to tempdb and sorted from there. Actually, I think this always happens, whether buffer pool space is available or not. Gaurav
Moderator
Gaurav, thank you for the clarification. I hope I’m not asking too much. :D
No problem! Gaurav
Moderator