queries number per minute | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

queries number per minute

Hi, I was wondering whether SQL Server could handle about 100,000 INSERT / DELETE queries in 5 minutes? I have an application with this amount of output to write to the DB (insert 100,000 records in 5 minutes). If SQL Server CAN handle these numbers, will it be possible to add another search (SELECT) query for each of the 100,000 INSERT queries, or will this slow down or kill the SQL Server? My records have about 15 columns, half of which are text (up to 65K characters in length) and the rest are short numeric fields.
The queries are as efficient as possible, using indexes etc. Thanks
Yafit
What sort of budget do you have to spend on hardware to handle this volume?
Of course SQL Server can handle it; check TPC.ORG for the latest benchmark information. _________
Satya SKJ

Like satya has said, SQL Server will have no problem handling this load, if the hardware and the database design are up to the task. You will want top-speed disk I/O for the best performance. I don’t fully understand your second question about SELECT. Can you elaborate on it? —————————–
Brad M. McGehee, MVP
Webmaster
SQL-Server-Performance.Com
Thanks for your reply.
What I meant in my second question is this:
I would like to execute a search query (read) over the whole table before each INSERT (write) command.
Talking about 100,000 search queries and 100,000 INSERT queries in 5 minutes, will SQL Server have problems writing, or some kind of bottleneck or crazy overhead?
Is there a way to optimize this kind of task? OLAP / layers etc.
Thanks a lot
yafit
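One common way to cut the overhead of a SELECT before every INSERT is to fold the existence check and the insert into a single statement, so each row costs one round trip instead of two. This is a minimal sketch of the pattern using Python's built-in sqlite3 module as a stand-in; the table and columns are hypothetical, and on SQL Server the same shape would typically be written as an IF NOT EXISTS / INSERT (or later, MERGE) batch rather than run through SQLite.

```python
import sqlite3

# Hypothetical table standing in for the poster's schema; the pattern is
# what matters here, not the columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT)")

def insert_if_absent(conn, rec_id, body):
    # One statement: the existence check and the insert run together,
    # instead of a separate SELECT followed by an INSERT.
    conn.execute(
        "INSERT INTO records (id, body) "
        "SELECT ?, ? WHERE NOT EXISTS "
        "(SELECT 1 FROM records WHERE id = ?)",
        (rec_id, body, rec_id),
    )

insert_if_absent(conn, 1, "first")
insert_if_absent(conn, 1, "duplicate, silently skipped")
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(count)  # 1
```

Doing the check inside the statement also lets the database take locks once per row rather than twice, which matters at 333 rows/sec.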
quote: Originally posted by bradmcgehee
SQL Server will have no problem handling this load, if the hardware and the database design are up to the task. … I don’t fully understand your second question about SELECT. Can you elaborate on it?

Oh yeah, Analysis Services will handle that sort of query without any issues, but it depends purely on the hardware and the SARG conditions you use. Optimisation of queries is the most important thing for getting good performance at such a volume. _________
Satya SKJ

I can tell you that a 2P Xeon 2.0GHz can handle over 2,000 single-row inserts/sec from a client running one thread with small column sizes,
and over 5,000/sec with multiple threads,
more if multiple inserts are consolidated into a single batch. However, your issue is with row size.
100,000 rows / 5 min = 333/sec, which is no big deal. But 7 columns of 65K each is about 455K per row, which is a big deal:
you will be sending something like 150MB/sec over the network, plus overhead.
(Am I correct in assuming the data is coming from a different system?)
This will probably take multiple gigabit Ethernet links in parallel, figuring about 30-50MB/sec each.
Furthermore, network load balancing cannot distribute a single stream over multiple links, so your client needs to be multi-threaded. Both the data and the log disks need to be able to support the same 150MB/sec write load plus overhead. My thinking is 4-5 of the new-model disk drives striped together in RAID-0 (double that for RAID-1+0); don’t bother with RAID 5.
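The figures above can be reproduced with quick arithmetic. The 15 columns, 65K-character fields, and 100,000-rows-in-5-minutes come from the thread; one byte per character and 40MB/sec per gigabit link are assumptions for the sketch.

```python
# Back-of-the-envelope check of the numbers in the thread.
rows = 100_000
window_sec = 5 * 60
text_cols = 7                 # roughly half of the 15 columns
bytes_per_col = 65_000        # "up to 65K characters", ~1 byte each assumed

rows_per_sec = rows / window_sec
row_bytes = text_cols * bytes_per_col
mb_per_sec = rows_per_sec * row_bytes / 1_000_000

print(round(rows_per_sec))    # 333 rows/sec
print(row_bytes // 1000)      # 455 KB/row
print(round(mb_per_sec))      # ~152 MB/sec, before protocol overhead

# At an assumed ~40 MB/sec of usable throughput per gigabit link,
# that works out to roughly 4 links in parallel.
links_needed = mb_per_sec / 40
```

The point of the arithmetic is that the row rate is trivial; it is the aggregate byte rate that dictates the network and disk design.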
Assuming that the INSERT/DELETE statements depend in some way on the interspersed SELECT queries you will be sending to the server, your worries will not be limited to server hardware optimization, but will also include:
(a) The efficiency of your client application (threading, how you are connecting to the database, etc.)
(b) How you manage locks in the database(s)
(c) Database physical architecture (files, filegroups, growth parameters, etc.)
Optimize everything else before looking for bigger, faster hardware. By the way, have you tried running these 100,000 queries on your current hardware setup?
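The "have you tried it?" suggestion is cheap to dry-run locally before touching production hardware. This sketch uses Python's sqlite3 module in memory as a stand-in, so the absolute numbers will differ wildly from a real SQL Server, but it shows the shape of such a test and the gap between single-row inserts and a consolidated batch.

```python
import sqlite3
import time

# Rough local dry run: time N single-row inserts vs. one batched call.
N = 10_000
rows = [(i, "x" * 100) for i in range(N)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, body TEXT)")

start = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO t VALUES (?, ?)", r)
conn.commit()
single = time.perf_counter() - start

conn.execute("DELETE FROM t")
start = time.perf_counter()
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)  # one batch
conn.commit()
batched = time.perf_counter() - start

final_count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"single-row: {N / single:.0f} inserts/sec")
print(f"batched:    {N / batched:.0f} inserts/sec")
```

Swapping the connection for a driver against the real server and the synthetic rows for realistic 455K rows would give a far more honest answer than any forum estimate.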