
SQL Server Performance Forum – Threads Archive

Performance problem for a planning application

Sorry for the long post… We are developing a shrink-wrap planning application using SQL Server as our data store and are considering alternative designs to optimize performance. Here is some background on the app:
–> Conceptually, the product is quite simple; the end user enters/imports data to create a model describing their business and we optimize on several variables to make suggestions about how to improve efficiency.
–> The end user will create many different models to do "what if" analysis. Some of these may be small, but some may have millions of records.
–> There will be hundreds of models, but the models do not share data with each other – each is its own universe as far as the application is concerned.
–> Each system will have about 10 users, but the users will be able to perform bulk operations (import, export, etc.).

From what I have read (on this site at least), this is not a typical SQL Server application. There are two competing database designs:

1) In the first design, all the models reside together in a single database, with a MODEL_ID column in each necessary table to separate the planning models from each other. This is nice from an administrative point of view because there is only one database (it's easy to back up, optimize indexes, etc.). However, performance is very poor as the tables become very large, even when the model actually being manipulated by the user is small. There is only so much that good database design can do to fix this; for example, indexing on the MODEL_ID key does not seem to help performance.

2) In the second design we are considering, each model resides in its own database. This is very fast as the tables are much, much smaller. However, there are several problems with this approach. All stored procedures, UDFs and views have to be duplicated, and we will have to require the user to install a dedicated SQL Server instance for our data because we will potentially be creating hundreds of databases dynamically.

I wonder if I am overlooking any way to make the first design work? I can't seem to come up with an indexing scheme that makes it perform well (a rough sketch of what I have tried is below), but is there something else I could use? Do you think DBAs will be horrified by the second option? There are good performance reasons to make each model its own database, but at first glance it will look like a big mess. It will certainly make administration more difficult, though nothing insurmountable. Have you heard of other applications that use hundreds of databases – is this common?

Thanks! Regards,
Alex Brown
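
A rough sketch of design 1, with invented table and column names – every table carries MODEL_ID, and one obvious indexing scheme puts it first in a composite clustered key:

-- Design 1 sketch: MODEL_ID separates models within shared tables.
-- Table and column names are invented for illustration.
CREATE TABLE PLAN_DETAIL (
    MODEL_ID  int           NOT NULL,
    DETAIL_ID int           NOT NULL,
    QUANTITY  decimal(18,4) NOT NULL,
    CONSTRAINT PK_PLAN_DETAIL
        PRIMARY KEY CLUSTERED (MODEL_ID, DETAIL_ID)
)

-- Every query filters on the leading key, so only one model's rows are touched:
DECLARE @model_id int
SET @model_id = 42
SELECT DETAIL_ID, QUANTITY
FROM PLAN_DETAIL
WHERE MODEL_ID = @model_id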

"Have you heard of other applications that use hundreds of databases – is this common?" Hundreds of databases in one application? Not for me.
If you are sure that one database causes poor performance, how about something in the middle?
I mean, could you consider putting several planning models in one database, but not all of them?

Luis Martin

...Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
Bertrand Russell
I would design the tables with columns that accommodate multiple databases, and multiple models within one database. There is some overhead associated with each database – not much for 2-20 databases, but several hundred could be an issue. This is not only a performance issue, but also one of consuming the 32-bit address space (unless you are on a 64-bit platform).
For performance and efficiency, pack as many models into a single database as you can, until the server is heavily loaded or the database grows large enough that backup time becomes an issue; then separate out into additional databases, and servers, as necessary. (A sketch of a catalog table for this hybrid layout follows.)
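
One way to support that hybrid layout is a small catalog in a control database that maps each model to the database currently holding it, so the application can start with one database and spill over to more as they fill up. This is only a sketch; all names are invented:

-- Hypothetical catalog mapping each model to the database that holds it.
CREATE TABLE MODEL_CATALOG (
    MODEL_ID      int          NOT NULL PRIMARY KEY,
    MODEL_NAME    varchar(100) NOT NULL,
    DATABASE_NAME sysname      NOT NULL  -- database currently holding this model
)

-- The application looks up the location first, then targets that database:
SELECT DATABASE_NAME
FROM MODEL_CATALOG
WHERE MODEL_ID = 42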
Thanks for both replies. Specifically, what is the issue with 32-bit address space? My understanding is that there is no practical limit on the number of databases SQL Server allows (32K or some such). I guess there would be memory overhead associated with each database, but I am not sure how much of an issue this would be. In practice, most models will probably end up on the server essentially as archives. The most likely usage of the product is that only a few models will be actively used at a time. One option that I hadn't thought of until now is to detach the databases for models that are not frequently used until they are requested by the user (something like the sketch below). That seems like a reasonable compromise…
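
A possible shape for that compromise, assuming SQL Server 2000's attach/detach procedures (the database name and file paths are examples only):

-- Take an idle model offline so it stops consuming server resources:
EXEC sp_detach_db 'Model042'

-- Later, when a user requests that model again, bring it back:
EXEC sp_attach_db 'Model042',
     'C:\Data\Model042.mdf',
     'C:\Data\Model042_log.ldf'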

32K may not seem like much, but SQL Server needs most of the address space for the buffer cache, and preferably a very large chunk of the lower 3GB; this means you need to be very economical in the use of address space for other items.
Even though each database only consumes 32K, each database will likely be hit with the same sprocs, but each will need its own procedure cache entries because the plans are not in the same database. Pretty soon you're burning a large chunk of the lower 3GB on non-data buffers. (The query below is one way to watch this happen.)
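
A quick way to see how much procedure cache each database is holding, assuming SQL Server 2000's syscacheobjects view:

-- Cached plan count and cache size per database (SQL Server 2000).
SELECT dbid,
       COUNT(*)           AS cached_plans,
       SUM(pagesused) * 8 AS cache_kb
FROM master.dbo.syscacheobjects
WHERE cacheobjtype = 'Compiled Plan'
GROUP BY dbid
ORDER BY cache_kb DESC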