SQL 2000 Containers | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Hello, I am trying to determine if I should use one or multiple containers for a new SQL Server I am bringing online at my company. I will have about 150 users completing several transactions at a high rate, as well as FTP runners updating tables at a medium rate. 65% of the users will hit the same tables as the FTP runners. Looking at this from a programmer's point of view, I'd like to keep them in the same container, but will I see a performance loss as transactions are continually being attempted? My concerns are:
1. Performance
2. Connections
3. Maintenance
The new server is a quad processor with 2GB of RAM. Any help is greatly appreciated.

What do you mean by "Container"? (I'm not familiar with that term in SQL Server 2000.)
Sorry, when I say Container, I mean Database, under the expandable menu in Enterprise Manager. Thanks.
1. Performance: Many simultaneous transactions affecting the same objects may lead to locking and/or deadlocking of resources at various levels of granularity, depending on the actual operations being performed. I can suggest putting the indexes on separate filegroups from the tables, and using lower transaction isolation levels and/or table hints in your queries to mitigate this. Multiple containers per se will not greatly improve performance unless you follow it up with an appropriate RAID implementation, good queries that can take advantage of parallelism and disparate storage, and proper server tuning.

2. Connections: You might not have a problem here unless your applications do not take advantage of connection pooling, or your applications and FTP runners do not close connections as required. Connections on their own have nothing to do with the containers.

3. Maintenance: Backups and database file shrinking would benefit from having multiple containers (again, this is really about filegroups, though it may apply to databases as well). Using multiple databases will force you to think about consolidation and partitioning later.
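To make point 1 concrete, here is a minimal T-SQL sketch of the two suggestions above: moving indexes to their own filegroup, and using a table hint to reduce blocking on reads. The database, table, filegroup, and file names are all made up for illustration; adjust paths and names for your environment.

```sql
-- Hypothetical database "Sales": add a filegroup on another disk for indexes
ALTER DATABASE Sales ADD FILEGROUP IndexFG;
ALTER DATABASE Sales ADD FILE
    (NAME = Sales_Ix1, FILENAME = 'E:\Indexes\Sales_Ix1.ndf')
TO FILEGROUP IndexFG;
GO

-- Create a nonclustered index on the separate filegroup,
-- so index I/O does not compete with table I/O on the same spindles
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON IndexFG;
GO

-- For reporting-style queries that can tolerate dirty reads,
-- the NOLOCK table hint avoids taking shared locks (and being blocked)
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerID = 42;
```

Note that NOLOCK (equivalent to READ UNCOMMITTED) accepts the risk of reading uncommitted data, so it is only appropriate where approximate results are acceptable.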
Nathan H. Omukwenyi
MVP [SQL Server]

Now we're cooking!!! In most cases, a database should represent a single logical set of tables that corresponds to some business function. It sounds like you are talking about a single logical business function, so I would think one database is the right way to go. Splitting things into multiple databases doesn't impact performance by itself (unless I'm missing something, and if I am, I'm sure that others will correct me). Performance comes from the design of the database and the way that it is accessed.

There are several good articles in this forum that will help you understand many of the issues that go into making things perform well, if you search for "performance". The biggest is going to be indexing: you should thoroughly understand how to create the proper indexes for the tables that are going to be used most frequently. Secondly, if you have the physical environment to support it, you can take advantage of FILEGROUPS (see Books Online) to split the indexes for your tables onto a different disk than the one your data resides on. You should also create the database so that the LOG FILE is on a different disk than the data or the indexes.

If this is a system that is being written internally, you can also give the development team feedback that they should write as much as they possibly can in stored procedures, and have the application call the stored procedures instead of just sending ad-hoc queries. Again, there are tons of articles you'll be able to find about INDEXES, PERFORMANCE, and using multiple disks/FILEGROUPS on this site by doing a search, or on Google. I hope this helps,
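As a rough sketch of the layout described above, here is what creating a database with data, indexes, and log on separate disks might look like, plus a trivial stored procedure of the kind the application could call instead of ad-hoc SQL. All names and drive letters here are hypothetical.

```sql
-- Hypothetical layout: data on D:, indexes on E:, transaction log on F:
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Data, FILENAME = 'D:\Data\Sales_Data.mdf'),
FILEGROUP IndexFG
    (NAME = Sales_Ix, FILENAME = 'E:\Indexes\Sales_Ix.ndf')
LOG ON
    (NAME = Sales_Log, FILENAME = 'F:\Logs\Sales_Log.ldf');
GO

-- A simple stored procedure (made-up table/columns) that the
-- application can call instead of sending an ad-hoc query
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID int
AS
    SELECT OrderID, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
GO
```

Separating the log onto its own disk matters because log writes are sequential; keeping them off the randomly-accessed data disks avoids head contention on SQL 2000-era storage.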