SQL Server Performance Forum – Threads Archive
SQL Server 2000 Enterprise: Optimization of a table
Hello, I have a question about SQL Server 2000 (Enterprise Edition): is there a limit on the number of records in a table? If not, what do I have to take care of when the number of records reaches 1 or 2 million? How can I optimize the table design so that simple SELECT statements execute in an acceptable period of time (choice of field data types, indexes, etc.)? I hope I'll find somebody here who knows this situation and can give me advice.

Additional information:
– The SELECT statements will only involve this one table (no JOINs – I hope so …).
– However, the WHERE clause could include 6 or 7 fields.
– Data types of the queried fields: mixed! (DATETIME, INTEGER, CHAR, VARCHAR, etc.)
– The WHERE clause will be generated by the user of an ASP (3.0) application. The user gets a form to select different filters and specify the search terms, then submits the form by clicking a button to get the result list.

Thanks in advance. Regards Thomas
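(For illustration only – the table and column names below are made up – a statement generated from such a form might look like this:)

  SELECT OrderID, CustomerName, OrderDate, Amount
  FROM dbo.Orders
  WHERE OrderDate >= '2005-01-01'
    AND Country = 'DE'
    AND Amount > 100
    AND CustomerName LIKE 'Schm%'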
Have a look at "Maximum Capacity Specifications" in Books Online (the topic is called "Spezifikationen der maximalen Kapazität" in the German BOL, I believe). Theoretically, only the available storage space limits the number of rows in a table. You'll find that 1 or 2 million rows is not much of an issue for a modern RDBMS these days anyway. You'll also find that "acceptable period of time" is a very broad term and each user defines it differently.
Usually you would design a table as compactly as possible. That means no VARCHAR(100) columns for zip code or country, for example. But to say anything specific, we would need more information; everything we can say now is just generalities that may or may not hold true for you.

Regarding your WHERE clause: this sounds like a case for dynamic SQL, too. It is usually frowned upon, but for complex searches like this it is mostly the clear winner in terms of performance. Do you know this one? http://www.insidesql.de/content/view/295/3/
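A minimal sketch of that dynamic SQL pattern with sp_executesql is below; the table and column names are hypothetical, and passing the values as parameters (instead of concatenating them into the string) keeps the statement injection-safe and lets SQL Server reuse the plan:

  -- Build only the filters the user actually supplied in the form.
  DECLARE @sql NVARCHAR(4000)
  DECLARE @country CHAR(2), @minAmount INT

  SET @country = 'DE'        -- values taken from the ASP form
  SET @minAmount = 100       -- (NULL = filter not used)

  SET @sql = N'SELECT OrderID, CustomerName, OrderDate, Amount
               FROM dbo.Orders WHERE 1 = 1'

  IF @country IS NOT NULL
      SET @sql = @sql + N' AND Country = @country'
  IF @minAmount IS NOT NULL
      SET @sql = @sql + N' AND Amount >= @minAmount'

  -- Parameters whose filter was skipped may still be declared;
  -- sp_executesql simply never references them.
  EXEC sp_executesql @sql,
       N'@country CHAR(2), @minAmount INT',
       @country, @minAmount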
—
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today yet? http://www.insidesql.de/blogs
The biggest factor affecting your performance is going to be the indexes. Even in ad hoc situations you should be able to identify the fields that will most likely be filtered on, and you can create indexes on those. Even if the user supplies values for 7 different fields, as long as you've indexed one of them, the system can quickly narrow the data down from millions of rows to a more reasonable subset using the index, then look up the rows in the table and filter on the remaining fields.

If this table will see very few inserts/updates/deletes, you can be more liberal in creating indexes. If you have a lot of inserts/updates/deletes, keep in mind that every additional index costs time during those operations, so you'll have to balance the two.
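For example, if OrderDate and Country turned out to be the most selective and most frequently used filters, a composite index like this (hypothetical names again) would let the engine narrow the scan before filtering the remaining fields:

  -- One seek on this index replaces a full scan of the 1-2 million rows
  -- whenever the user filters on OrderDate (alone or together with Country).
  CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Country
  ON dbo.Orders (OrderDate, Country)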
Hello, thanks! In a few days I'll know more… then the ASP (3.0, "classic ASP") frontend will be ready for performance testing. Maybe I can reduce the length of the fields… I will see. At the moment there is only one index – the primary key.

But I'm going to check which additional fields could be indexed. The problem could be that many fields of the table are "text fields" (CHAR, VARCHAR, NVARCHAR). Are there any general guidelines for choosing the data type and length of text columns in tables containing many rows? Regards Thomas