write intensive architecture | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

write intensive architecture

I’m looking at setting up a SQL Server 2005 app that deals with a very write-intensive system. Currently it’s a very simple table structure: TableA, TableB, TableC. No indexes.
There will be approximately 100,000 flat rows coming in every 10 minutes, for a total of 10 batches, so by the end of the process there will be approximately 1 million rows. The data will arrive as XML files, and there will be a BULK INSERT to get the rows into TableA. The data will be processed and INSERTed into TableB, and there will be some more filtering to INSERT into TableC. All this needs to occur within 3 minutes. There is no plan to have any indexes, as the overhead would be overkill.
Question: from an architecture perspective, is it better to split the 10 batches into 10 different tables and then concatenate the results, or leave all the results (i.e. 1 million rows) in the one table and then run the SQL statements? Also, are there any other good tips for very write-intensive systems such as the above? ___________________________________
Need an IT job? –http://www.ITjobfeed.com
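A minimal sketch of one 100K-row batch of the pipeline described above. All table names, column names, the file path, and the processing/filter rules are assumptions for illustration; note also that BULK INSERT expects a delimited flat file, so if the feed is raw XML it must be shredded first (e.g. via OPENROWSET(BULK ... SINGLE_BLOB) and the xml .nodes() method):

```sql
-- Stage the incoming batch into TableA. TABLOCK allows minimal logging
-- under the SIMPLE or BULK_LOGGED recovery model.
BULK INSERT dbo.TableA
FROM 'D:\feeds\batch_01.dat'   -- hypothetical staging file
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

-- Process the staged rows into TableB.
INSERT INTO dbo.TableB (Col1, Col2)
SELECT Col1, Col2
FROM dbo.TableA
WHERE Col1 IS NOT NULL;        -- placeholder processing rule

-- Apply the further filtering into the final TableC.
INSERT INTO dbo.TableC (Col1, Col2)
SELECT Col1, Col2
FROM dbo.TableB
WHERE Col2 > 0;                -- placeholder filter
```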
3 min is not a problem; 10 seconds might be a problem. See my posts on storage configuration.
If you do not have enough disks, that could be a problem. I assume A & B are intermediate results, i.e. you will delete them?
Then use temp tables, and drop the tables after each use. There does not appear to be a reason the final table cannot be a single table.
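The temp-table suggestion above might be sketched as follows, so that per-batch cleanup is a cheap DROP rather than a fully logged DELETE (table and column names are assumptions):

```sql
-- Intermediate stages live in tempdb; only the final table is permanent.
CREATE TABLE #StageA (Col1 int, Col2 varchar(100));
CREATE TABLE #StageB (Col1 int, Col2 varchar(100));

-- ...bulk load #StageA, transform into #StageB, then append the batch
-- to the single permanent final table...
INSERT INTO dbo.TableC (Col1, Col2)
SELECT Col1, Col2 FROM #StageB;

-- Dropping the temp tables after each batch is nearly free compared
-- with deleting their rows.
DROP TABLE #StageB;
DROP TABLE #StageA;
```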
Yes, A & B are intermediate results. Good idea about deleting them to free up space.
Assuming the table rows are about 2,000 characters each, is there an "optimal level" beyond which performance starts to degrade? i.e. is there a suggested maximum number of rows for a SQL Server table?
It's not about space.
It's about the option of deleting rows versus truncating or dropping the table.
Deleting rows, i.e. DELETE TableA WHERE col = X, with or without the WHERE clause, is very expensive.
Truncating or dropping is relatively inexpensive. SQL Server can handle very large tables; I have seen >3TB. The BULK INSERT into A is not a problem, because it is bulk logged, assuming the recovery model is simple. How is B populated? SELECT xx INTO B FROM A,
or: INSERT B SELECT xx FROM A? At 2,000 bytes per row and 100K rows, that's 200MB per batch.
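The two population options mentioned above differ mainly in logging, which matters inside a 3-minute window. A sketch, with Col1/Col2 standing in for the unspecified column list "xx":

```sql
-- Option 1: SELECT ... INTO creates TableB on the fly and can be
-- minimally logged under the SIMPLE or BULK_LOGGED recovery model.
SELECT Col1, Col2
INTO dbo.TableB
FROM dbo.TableA;

-- Option 2: INSERT ... SELECT into a pre-existing table; on SQL Server
-- 2005 this is fully logged, so it writes more to the transaction log.
INSERT INTO dbo.TableB (Col1, Col2)
SELECT Col1, Col2 FROM dbo.TableA;

-- Cleanup between batches: TRUNCATE deallocates whole pages rather than
-- logging each row, so it is far cheaper than DELETE.
TRUNCATE TABLE dbo.TableA;
```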
I am thinking that with a reasonable storage config, that should be OK;
if not, it will require special considerations, i.e. not free advice.
Thanks for the information.