SQL Server Performance

write intensive architecture

Discussion in 'SQL Server 2005 Performance Tuning for DBAs' started by Jack Vamvas, May 28, 2007.

  1. Jack Vamvas Member

    I'm looking at setting up a SQL Server 2005 app that deals with a very write-intensive system. Currently it's a very simple table structure: TableA, TableB, TableC. No indexes.
    There will be approximately 100,000 flat rows coming in every 10 minutes, for a total of 10 batches - so by the end of the process there will be approximately 1 million rows.

    The data will arrive as XML files, and there will be a BULK INSERT to get the rows into TableA.

    The data will be processed and INSERTed into TableB, and there will be some more filtering to INSERT into TableC.

    All this needs to occur within 3 minutes. There is no plan to have any indexes, as the overhead isn't worth it.

    Question: from an architecture perspective, is it better to split the 10 batches into 10 different tables and then concatenate the results, or to leave all the results (i.e. 1 million rows) in the one table and then run the SQL statements?

    Also, are there any other good tips for very write-intensive systems such as the above?
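
    A rough sketch of the three-stage pipeline described above. The file paths, format file, column names, and filtering predicates are all hypothetical placeholders - only the TableA/TableB/TableC flow comes from the post:

    ```sql
    -- Stage 1: bulk load the incoming batch into TableA.
    -- The data file and format file paths are hypothetical.
    -- TABLOCK helps BULK INSERT qualify for minimal logging.
    BULK INSERT TableA
    FROM 'C:\feeds\batch_01.dat'
    WITH (FORMATFILE = 'C:\feeds\batch.fmt', TABLOCK);

    -- Stage 2: process the staged rows into TableB
    -- (Col1/Col2 and the WHERE clause are placeholders).
    INSERT INTO TableB (Col1, Col2)
    SELECT Col1, Col2
    FROM TableA
    WHERE Col2 IS NOT NULL;

    -- Stage 3: apply further filtering into TableC.
    INSERT INTO TableC (Col1, Col2)
    SELECT Col1, Col2
    FROM TableB
    WHERE Col1 > 0;
    ```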

    Need an IT job? -http://www.ITjobfeed.com

  2. joechang New Member

    3 min is not a problem; 10 seconds might be a problem.

    See my posts on storage configuration.
    If you do not have enough disks, that could be a problem.

    I assume A & B are intermediate results, i.e. you will delete them?
    Then use temp tables, and drop the tables after each use.

    There does not appear to be any reason the final table cannot be a single table.
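
    Joe's temp-table suggestion could look something like the following sketch (column names, file paths, and predicates are placeholders). SELECT ... INTO creates each temp table on the fly, and dropping it afterwards is far cheaper than DELETE:

    ```sql
    -- Load the batch into a temp table instead of a permanent TableA.
    -- OPENROWSET(BULK ...) is available from SQL Server 2005 onward;
    -- the file and format-file paths are hypothetical.
    SELECT Col1, Col2
    INTO #StageA
    FROM OPENROWSET(BULK 'C:\feeds\batch_01.dat',
                    FORMATFILE = 'C:\feeds\batch.fmt') AS src;

    -- Intermediate processing into a second temp table.
    SELECT Col1, Col2
    INTO #StageB
    FROM #StageA
    WHERE Col2 IS NOT NULL;

    -- Only the final filtered rows go to the single permanent table.
    INSERT INTO TableC (Col1, Col2)
    SELECT Col1, Col2
    FROM #StageB
    WHERE Col1 > 0;

    -- Dropping the temp tables is cheap compared with deleting their rows.
    DROP TABLE #StageB;
    DROP TABLE #StageA;
    ```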

  3. Jack Vamvas Member

    Yes, A & B are intermediate results. Good idea about deleting them to free up space.
    Assuming the table rows are about 2000 characters each, is there an "optimal level" beyond which performance starts to degrade? i.e. is there a suggested maximum number of rows for a SQL Server table?


  4. joechang New Member

    It's not about space.
    It's about the option of deleting rows versus truncating or dropping the table.
    Deleting rows, i.e. DELETE TableA WHERE col = X,

    with or without the WHERE clause, is very, very expensive.
    Truncating or dropping is relatively inexpensive.

    SQL Server can handle very large tables; I have seen >3TB.

    The BULK INSERT into A is not a problem, because it is bulk logged, assuming the recovery model is simple.

    How is B populated? SELECT xx INTO B FROM A?

    At 2000 bytes per row and 100K rows, that's 200MB per batch.
    I am thinking that with a reasonable storage configuration, that should be OK;
    if not, it will require special considerations, i.e. not free advice.
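
    To illustrate the difference (BatchId is a hypothetical column): DELETE writes every removed row to the transaction log, while TRUNCATE TABLE only deallocates pages, so for clearing a whole staging table the latter is the right tool:

    ```sql
    -- Fully logged: each deleted row is written to the transaction log.
    DELETE TableA WHERE BatchId = 1;   -- BatchId is a hypothetical column
    DELETE TableA;                     -- still row-by-row logging, even without a WHERE

    -- Minimally logged: only the page deallocations are recorded.
    TRUNCATE TABLE TableA;

    -- Cheaper still if the table itself is disposable.
    DROP TABLE TableA;
    ```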
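
    One way to confirm the bulk load will stay minimally logged is to check the recovery model first (the database name here is a placeholder):

    ```sql
    -- Inspect the current recovery model (sys.databases, SQL Server 2005+).
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'StagingDB';   -- hypothetical database name

    -- Switch to simple recovery so BULK INSERT and SELECT ... INTO
    -- qualify for minimal logging.
    ALTER DATABASE StagingDB SET RECOVERY SIMPLE;
    ```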

  5. Jack Vamvas Member
