If you set your SQL Server 7.0, 2000, or 2005 databases and transaction logs to grow automatically, keep in mind that every time this feature kicks in, it consumes extra CPU and I/O resources. Ideally, you want to minimize how often automatic growth occurs. One way to do this is to size the database and transaction logs as accurately as possible to their “final” size. Sure, it is virtually impossible to get this right on target. But the more accurate your estimates (and sometimes it takes some time to come up with a good estimate), the less often SQL Server will have to automatically grow its database and transaction logs, helping to boost the performance of your application.
This recommendation is particularly important to follow for transaction logs. The more times SQL Server has to increase the size of a transaction log, the more virtual log files it has to create and maintain, which increases recovery time should your transaction log need to be restored. A virtual log file is used by SQL Server to internally divide and manage the physical transaction log file. [7.0, 2000, 2005] Updated 2-20-2006
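As a rough sketch of what pre-sizing looks like (the database name, file names, paths, and sizes below are all assumptions you would replace with your own estimates), you can specify the initial size and growth increment when the database is created:

```sql
-- Hypothetical example: pre-size the data and log files close to their
-- expected "final" size so automatic growth rarely has to kick in.
CREATE DATABASE Sales
ON PRIMARY
(
    NAME = Sales_Data,
    FILENAME = 'D:\MSSQL\Data\Sales_Data.mdf',
    SIZE = 5000MB,        -- estimated final data size
    FILEGROWTH = 250MB    -- fixed increment if growth is still needed
)
LOG ON
(
    NAME = Sales_Log,
    FILENAME = 'E:\MSSQL\Log\Sales_Log.ldf',
    SIZE = 1000MB,        -- a pre-sized log avoids accumulating many virtual log files
    FILEGROWTH = 100MB
);
```

Sizing the log in one step up front, rather than letting it grow in many small increments, keeps the number of virtual log files low.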
In SQL Server 7.0 and later, database and log files can be set to grow automatically. The default growth increment is 10%, which may or may not be ideal for your database. If you find that your database is growing automatically often (such as daily or several times a week), consider changing the growth increment to a larger value. Each time the database grows, SQL Server suffers a small performance hit; the larger each growth increment, the less often growth has to occur.
If your database is very large, 10GB or larger, you may want to use a fixed growth amount instead of a percentage, because a percentage growth amount can be very large on a big database. For example, a 10% growth rate on a 10GB database means that each time the database grows, it will increase by 1GB, which may or may not be what you want. In that case, a fixed growth amount, such as 100MB at a time, might be more appropriate. [7.0, 2000, 2005] Updated 2-20-2006
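On SQL Server 2000 and 2005, you can switch an existing file from percentage growth to a fixed growth amount with ALTER DATABASE ... MODIFY FILE. The database and logical file names below are hypothetical; you can list your own logical file names with sp_helpfile:

```sql
-- Change the data file's growth increment from the default 10% to a
-- fixed 100MB ('Sales' and 'Sales_Data' are hypothetical names).
ALTER DATABASE Sales
MODIFY FILE
(
    NAME = Sales_Data,
    FILEGROWTH = 100MB
);
```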
One of the downsides of letting SQL Server automatically grow or shrink database and transaction log files is that it leads to file fragmentation. I am not talking about internal SQL Server fragmentation, but operating system file fragmentation.
For example, every time a SQL Server file automatically grows, it must find unallocated space on the disk array. More often than not, the space it allocates is not contiguous with the current file; in fact, it may come from multiple locations on the disk array. And if you have SQL Server automatically shrink its files, the space it de-allocates leaves fragments of unallocated disk space scattered throughout the array. Most likely, these fragments will be filled by other files as time goes by. The more this happens, the worse the file fragmentation becomes, which in turn lowers SQL Server’s performance: the more fragmented the files, the longer it takes the array to locate and read the data.
One option is to turn these two features off. This way, you don’t have to worry about fragmentation caused by files automatically growing or shrinking. In fact, I always turn off the automatic shrink feature. I do this not only to help reduce fragmentation, but to reduce the overhead incurred by having this feature turned on.
On the other hand, I prefer to leave the automatic growth feature enabled. Why? Because I don’t want any database or transaction log to fill up unexpectedly, causing major headaches. So how do I help prevent disk fragmentation if I leave the automatic growth feature on? I do two main things. First, I create the database originally as close to its final size as I can guess. Of course, you can’t always guess accurately, and the automatic growth feature may need to kick in at some time. So the second thing I do is to set the automatic growth increment to a larger value than the default. Depending on the size of the database, I have it grow by about 20% each time. This way, the automatic growth feature won’t kick in as often, and because of this, file fragmentation will be less (although not eliminated).
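On SQL Server 2000 and 2005, this combination of settings can be applied with ALTER DATABASE (on 7.0, the shrink option is set with sp_dboption instead). The database and file names here are hypothetical:

```sql
-- Turn off automatic shrinking to avoid both its runtime overhead and
-- the file fragmentation it causes ('Sales' is a hypothetical name).
ALTER DATABASE Sales SET AUTO_SHRINK OFF;

-- Leave automatic growth on, but with a larger increment (here 20%)
-- so it kicks in less often.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, FILEGROWTH = 20%);
```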
Of course, another way to deal with file fragmentation is to run a defragmenter tool periodically.
So if I run a defragmenter tool regularly, why do I even need to bother with the two suggestions above about how to configure automatic file growth and shrinking? Because fragmentation still occurs with my suggestions, just at a much slower rate. I still need to run the defragmenter tool, but since there is less to defragment, I don’t have to run the tool as often, and when it does run, it doesn’t have to run as long.
In any event, if you currently have both automatic growth and shrinking turned on, and you aren’t using a defragmenter, then your database’s I/O performance is probably being hurt because of file fragmentation. [7.0, 2000, 2005] Updated 2-20-2006
If a database will be used for read-only purposes only, such as reporting, consider turning the “read-only” setting on (the default setting is off). This eliminates the overhead of locking and, in turn, can boost the performance of queries run against the database. If you need to modify the database, you can turn the setting off, make your changes, and then turn it back on. [6.5, 7.0, 2000, 2005] Updated 2-20-2006
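On SQL Server 2000 and 2005, the setting is toggled with ALTER DATABASE (on 6.5 and 7.0, use sp_dboption with the 'read only' option instead). The database name below is hypothetical:

```sql
-- Mark a reporting database read-only to eliminate locking overhead
-- ('Reports' is a hypothetical database name).
ALTER DATABASE Reports SET READ_ONLY;

-- When you need to modify the data, switch back temporarily:
ALTER DATABASE Reports SET READ_WRITE;
-- ... make your modifications ...
ALTER DATABASE Reports SET READ_ONLY;
```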
When “auto create statistics” is turned on for a database (which it is by default), statistics are automatically created on all columns used in the WHERE clause of a query. This occurs when a query is optimized by the Query Optimizer for the first time, assuming the column doesn’t already have statistics created for it. The addition of column statistics can greatly aid the Query Optimizer so that it can create an optimum execution plan for the query.
If this option is turned off, then missing column statistics are not automatically created, which can mean that the Query Optimizer may not be able to produce the optimum execution plan and the query’s performance may suffer. You can still manually create column statistics if you like, even when this option is turned off.
There is really no downside to using this option. The very first time that column statistics are created, there will be a short delay while they are built before the query runs, potentially making that first execution take a little longer. But once the column statistics exist, each subsequent run of the same query should be more efficient than if the statistics did not exist in the first place. [7.0, 2000, 2005] Updated 8-21-2006
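On SQL Server 2000 and 2005, the option is controlled with ALTER DATABASE, and statistics can also be created by hand with CREATE STATISTICS when the option is off. The database, table, and column names below are hypothetical:

```sql
-- Ensure automatic statistics creation is on (it is by default);
-- 'Sales' is a hypothetical database name.
ALTER DATABASE Sales SET AUTO_CREATE_STATISTICS ON;

-- If the option is off, statistics can still be created manually,
-- for example on a column frequently used in WHERE clauses:
CREATE STATISTICS stats_Orders_CustomerID
ON dbo.Orders (CustomerID);
```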