SQL Server Performance

Table size versus performance?

Discussion in 'SQL Server 2008 General Developer Questions' started by Poul-Erik Jensen, May 16, 2011.

  1. Poul-Erik Jensen New Member

    I have to register some messages that originate from 2 systems.
    The messages would fit into the design of 1 table. On the other hand, if I make 2 tables (1 for each system), then each table will only hold half the records. Which design is best if I have to consider performance first of all?
    regards
    Poul-Erik
  2. FrankKalis Moderator

    Welcome to the forum!
    Generally speaking, it is advisable to keep data that logically belongs together in just one place. It makes coding, querying, and maintenance a lot easier. There certainly might be exceptions to this rule, but unless you have a very good reason to deviate from it, you should stick to it.
    As for performance: how many rows do you expect? With an appropriate structure and indexes, even a very large number of rows should give acceptable performance if you keep it in one table (see the sketch below). If not, then there are other options, such as partitioning.
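
    A minimal sketch of what a single combined table could look like; the table, column, and index names here are hypothetical, not from the original question:

    -- One table for messages from both systems, with a column that
    -- identifies the originating system and an index to filter on it.
    CREATE TABLE dbo.SystemMessage
    (
        MessageId    bigint        IDENTITY(1,1) NOT NULL PRIMARY KEY,
        SourceSystem tinyint       NOT NULL,   -- 1 or 2: which system sent the message
        ReceivedAt   datetime2(3)  NOT NULL DEFAULT SYSUTCDATETIME(),
        Payload      nvarchar(max) NULL
    );

    CREATE NONCLUSTERED INDEX IX_SystemMessage_SourceSystem_ReceivedAt
        ON dbo.SystemMessage (SourceSystem, ReceivedAt);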
  3. Poul-Erik Jensen New Member

    We will have about 1 million records.
    But the customer requires that we be able to insert 1,000 records per second.
  4. FrankKalis Moderator

    With "only" 1.000.000 rows there is even no need to think about such optimisations such as partitioning. We have +1.4 billion rows in a single table and, at peak times, insert tens of thousands of rows a second. For most of our queries the response time is < 2 seconds for returning a few thousand rows. And this all runs on commodity hardware.
  5. Jahanzaib Member

    Consider table partitioning. If you are doing heavy DML on this table, you can take advantage of partitioning: when lock escalation occurs, SQL Server can take a lock on the affected partition instead of the whole table, so the other partitions remain available to queries. You can also query the data for a specific partition/filegroup, which should respond faster. To allow partition-level lock escalation you have to run:

    ALTER TABLE TableName SET (LOCK_ESCALATION = AUTO);

    If you do not set this, lock escalation (when it happens) goes to a table lock rather than a partition lock.
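
    An illustrative sketch of setting this up, assuming a hypothetical message table partitioned by month (the function, scheme, table, and column names are made up for the example; partitioning requires Enterprise edition on SQL Server 2008):

    -- Partition the message table by month so DML and queries touch
    -- only the relevant partition (hypothetical names).
    CREATE PARTITION FUNCTION pfMessageMonth (datetime2(3))
        AS RANGE RIGHT FOR VALUES ('2011-01-01', '2011-02-01', '2011-03-01');

    CREATE PARTITION SCHEME psMessageMonth
        AS PARTITION pfMessageMonth ALL TO ([PRIMARY]);

    CREATE TABLE dbo.SystemMessagePartitioned
    (
        MessageId    bigint        IDENTITY(1,1) NOT NULL,
        SourceSystem tinyint       NOT NULL,
        ReceivedAt   datetime2(3)  NOT NULL,
        Payload      nvarchar(max) NULL
    ) ON psMessageMonth (ReceivedAt);

    -- Let lock escalation stop at the partition level instead of the whole table
    ALTER TABLE dbo.SystemMessagePartitioned SET (LOCK_ESCALATION = AUTO);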
  6. satya Moderator

    There is no easy answer that guarantees the level of optimization you need. As Frank explained, it is the programming methods you must tune to insert the required number of rows, together with a table schema designed to take full advantage of them.
  7. mmarovic Active Member

    Totally agree with Frank. No need for partitioning or anything special: 1 million rows is a small table nowadays, and 1,000 inserts per second is very achievable.
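
    As a hedged illustration of one way to reach that insert rate (reusing the hypothetical dbo.SystemMessage table from the sketch above): batch the rows into a single INSERT inside one transaction instead of issuing 1,000 individual autocommitted statements.

    -- Insert 1,000 rows as one set-based statement in one transaction.
    BEGIN TRANSACTION;

    INSERT INTO dbo.SystemMessage (SourceSystem, ReceivedAt, Payload)
    SELECT 1, SYSUTCDATETIME(), N'message ' + CAST(n AS nvarchar(10))
    FROM (SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
          FROM sys.all_objects) AS numbers;

    COMMIT TRANSACTION;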
