SQL Server sizing – memory, tempDB | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

SQL Server sizing – memory, tempDB

Hello All,
I am starting up with a SQL Server database, and I am wondering whether there are any rules of thumb or formulae for the memory and tempdb that SQL Server requires.
Is there any relationship between the number of connections (or any other parameter) and memory, and likewise for tempdb?
Thanks in advance
Phani

Welcome to the forum!
No.
Generally, more memory means better performance. For tempdb, one recommendation is to set a large initial size; that avoids wasting time whenever SQL Server needs to grow tempdb.
Also, leave the memory configuration dynamic.
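As a sketch of that advice (the logical file names below are the SQL Server defaults; the sizes and the memory cap are illustrative, not recommendations):

```sql
-- Presize tempdb so it doesn't have to autogrow under load.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 256MB);

-- Cap SQL Server's memory rather than leaving it unbounded,
-- but let it manage allocation dynamically below that cap.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;  -- illustrative value
RECONFIGURE;
```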

The optimal amount of memory and tempdb size depends entirely on your data, schema and queries. In general, the more data you have, the more memory helps. Some queries use tempdb intensively, others don’t use it at all.
You can get a handle on these numbers for your particular system by looking at SQL Server's performance counters.
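For example, SQL Server exposes its own counters through a DMV (available from SQL Server 2005 onwards), so you can sample a few memory-related ones without leaving Management Studio:

```sql
-- A few memory-related counters exposed by SQL Server itself.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Buffer cache hit ratio',
                       'Total Server Memory (KB)',
                       'Target Server Memory (KB)');
```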

Set the tempdb size to 40% of the largest database on the server.

Agree with that!

Thanks much, guys. I was away on a trip for a week and couldn't reply.
I agree that tempdb requirements depend on database size, queries, schema, etc. But suppose a user has a very large database, say 25 TB with an OLAP workload; what would be a good starting point then? Can I take 40% in that case as well?
Phani

The size of the main DB doesn’t directly affect how much tempdb space you need.
It depends on the queries that you’re issuing, including things like whether you’re using static or keyset cursors, temporary tables, explicit tables, sorting, etc.
Here’s a link to a whitepaper about tempdb that might help:
http://technet.microsoft.com/en-us/library/cc966545.aspx
The best place to get an estimate is from the current production system, if there is one.
If you’re just looking for a guess, 40% of the main DB size seems on the high side to me, assuming your queries are reasonably well optimized. OLAP / ROLAP should involve bulk table accesses, and ideally should avoid tempdb for best performance. I would probably start out with more like 15%, and then increase it later if that wasn’t enough.

The number of tempdb data files should be at least equal to the number of processors, to reduce resource contention.
Note that tempdb always uses the Simple recovery model (this cannot be changed), so log space is reclaimed automatically.
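Adding a tempdb data file looks like this (the file path and sizes here are illustrative; keep all tempdb data files equally sized so allocations stay balanced across them):

```sql
-- Add an extra tempdb data file; 'T:\tempdb\' is a hypothetical path.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 2048MB,
          FILEGROWTH = 512MB);
```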
Mani

Tempdb is one of the most important performance factors from SQL Server 2005 onwards. If the instance's tempdb is spread across multiple files, you need to adjust that percentage based on the speed of the I/O subsystem on which the tempdb files are located. To avoid latch timeouts during file growth, limit autogrow operations according to your disk setup/configuration. As you may be aware, SQL Server can initialize data files instantaneously (instant file initialization): file operations run fast because the space is claimed without first being filled with zeros. See this blog post: http://sqlserver-qa.net/blogs/perftune/archive/2009/07/28/5269.aspx
Related tags for your understanding: http://sqlserver-qa.net/blogs/perftune/archive/tags/tempdb/default.aspx
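To review the current tempdb file layout and autogrow settings before changing anything, one option is:

```sql
-- Review tempdb's file layout and growth settings.
-- size and growth are stored in 8 KB pages, hence the * 8 / 1024.
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM tempdb.sys.database_files;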

It is not necessarily the case that you should create as many tempdb data files as you have CPU cores. Whilst having multiple files can be helpful under certain circumstances, they can also be a hindrance. If a query experiences memory pressure (e.g. with very large tables or large sorts within a QEP), it may need to write data to tempdb as memory becomes scarce. If you have lots of tempdb data files, the writes are scheduled on a round-robin basis; because these files are still within a single filegroup (they have to be with tempdb), the round-robin scheduler has to calculate constantly which file to write to next, and this can become a noticeable overhead. If there is only a single tempdb file in the tempdb filegroup, this scheduling problem doesn't arise.

Another problem comes from the rather random I/O patterns that arise when the buffer pool has to free up space using the lazywriter. This can put strain on the I/O subsystem, which only gets worse as you create more tempdb files, and can noticeably slow things down.

Multiple files can be useful if you see evidence of lots of PAGELATCH waits on tempdb, but really the best advice is to start small where multiple files are concerned, and only increase if your performance counters suggest that you really need to.
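A quick way to look for that kind of evidence is to check for sessions currently waiting on PAGELATCH latches for tempdb pages (database ID 2):

```sql
-- Sessions waiting on PAGELATCH_* for pages in database 2 (tempdb)
-- suggest allocation-page contention that more data files might relieve.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```

If this query consistently returns rows under load, adding files may help; if it stays empty, extra files are probably solving a problem you don't have.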
