Tons of single INSERTs….

Greetings. I have an application that parses huge flat log files, similar to the ones IIS produces. From time to time the application has to dump partially aggregated data to a SQL Server database, and that results in about 100k+ inserts per log file.
Such a large number of inserts hurts performance quite badly: while the application's resources are freed after the log file is parsed, SQL Server's memory only grows and is never freed, even after the application shuts down.
I've created temporary files to be used as a BULK INSERT source for some of the tables, but some tables are referenced by others, so BULK INSERT most likely won't work there.
Given what I've described, what would be your general performance considerations? Or maybe there are some obvious ways to bang this out?
Thanks in advance!
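One note on the foreign-key concern: if the parsed files can carry the keys that the child rows reference, BULK INSERT itself can cope with referencing tables; load the referenced (parent) table first, then the child table with CHECK_CONSTRAINTS so the foreign keys are validated during the load instead of being left marked as not trusted. A rough sketch, with hypothetical file and table names (dbo.Hosts is referenced by dbo.Hits):

-- Load the parent table first.
BULK INSERT dbo.Hosts
FROM 'C:\logs\hosts.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);

-- Then the child; CHECK_CONSTRAINTS makes the load validate the FKs.
BULK INSERT dbo.Hits
FROM 'C:\logs\hits.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', CHECK_CONSTRAINTS);

This only works when the files already contain the key values; if the child rows need identity values generated at load time, the staging approach below is the better fit.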
Consider dumping the data into a staging table using BULK INSERT, and then move the rows into the target tables in batches. Roji. P. Thomas
http://toponewithties.blogspot.com
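A minimal sketch of that staging pattern, in case it helps; the table, column, and file names here are invented, so adjust them to the real schema:

-- Heap staging table, no indexes, so the bulk load stays cheap.
CREATE TABLE dbo.LogStaging (
    ClientHost varchar(255),
    HitCount   int
);

BULK INSERT dbo.LogStaging
FROM 'C:\logs\parsed.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);

-- Add a row number after the load so the move can be batched by key range.
ALTER TABLE dbo.LogStaging ADD RowId int IDENTITY(1,1);

DECLARE @min int, @max int, @batch int;
SET @batch = 10000;
SELECT @min = MIN(RowId), @max = MAX(RowId) FROM dbo.LogStaging;

-- Move the rows in batches so each transaction stays short.
WHILE @min <= @max
BEGIN
    INSERT INTO dbo.LogData (ClientHost, HitCount)
    SELECT ClientHost, HitCount
    FROM dbo.LogStaging
    WHERE RowId >= @min AND RowId < @min + @batch;

    SET @min = @min + @batch;
END

TRUNCATE TABLE dbo.LogStaging;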

how many inserts/sec are you getting
Just tried BULK INSERT. I generated a sample table with 2 columns, no indexes or anything special, and bulk tab-delimited data, about 150 MB of it. When loading with the default options my transaction log grows from blank to 147(!) MB, the primary filegroup from 31 MB to 340(!) MB, and SQL Server consumes about 300 MB of RAM, which is not freed after the bulk insert is over and the client (Query Analyzer) has disconnected.
What am I doing wrong?
UPD: I found out how to avoid the transaction log growth (it depends on the batch size), but the RAM problem is still so much pain…
quote: Originally posted by Roji. P. Thomas
Consider dumping the data into a staging table using BULK INSERT, and then move the rows into the target tables in batches.
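On the log growth: a batch size alone splits the load into separately committed batches; combined with the conditions for minimal logging (bulk-logged or simple recovery model, TABLOCK, and a heap with no indexes) the log should stay small. A sketch, with a hypothetical database name:

-- Switch to bulk-logged recovery so the load is minimally logged.
ALTER DATABASE LogDB SET RECOVERY BULK_LOGGED;

BULK INSERT dbo.SampleTable
FROM 'C:\data\sample.txt'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR   = '\n',
    TABLOCK,            -- table lock is required for minimal logging
    BATCHSIZE = 50000   -- each batch commits on its own
);

-- Switch back, then take a log backup to restart the log chain.
ALTER DATABASE LogDB SET RECOVERY FULL;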

quote: Originally posted by joechang
how many inserts/sec are you getting

It depends on log file density; I'm not sure I can come up with a definite value. Let me count :)
After replacing all the inserts for 2 tables with BULK INSERT, my application makes about 35k inserts in 54 seconds, so about 650 inserts/sec on the sample data; typically it's 1.5-3 times more. I should also mention that I wrap the inserts in stored procedures, so the code actually calls an SP which does the insert. Each SP is responsible for inserting a unique value into a comparatively small table (no more than 3-5k rows), so it first checks whether the object already exists and returns its PK if it does; otherwise it inserts and returns SCOPE_IDENTITY(). (A sketch of such a procedure follows the quote below.)
quote: Originally posted by joechang
how many inserts/sec are you getting
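A minimal sketch of that lookup-or-insert procedure, with invented table and column names; note that under concurrent writers you would also want a unique constraint on the name column, and that for tables this small, caching the existing values on the client side would save most of the round trips:

CREATE PROCEDURE dbo.GetOrInsertValue
    @Name varchar(255),
    @Id   int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Return the existing PK if the value is already there.
    SELECT @Id = Id FROM dbo.LookupValues WHERE Name = @Name;

    IF @Id IS NULL
    BEGIN
        INSERT INTO dbo.LookupValues (Name) VALUES (@Name);
        SET @Id = SCOPE_IDENTITY();
    END
END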

why are you concerned about SQL Server memory usage?
it is supposed to use memory,
if you do not want it to use too much, set an upper bound
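For reference, the upper bound is set with sp_configure; the value is in MB, and 'max server memory' is an advanced option, so it has to be made visible first:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool at 256 MB.
EXEC sp_configure 'max server memory', 256;
RECONFIGURE;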
I am because this application is going to be used in a very tight environment, such as a VPS with 400-500 MB of total RAM.
I saw the option to limit memory growth, but given the situation, won't SQL Server run out of memory and stop serving requests, or will it reallocate it somehow, or is it not my problem? :) I mean, if I were to bulk insert 300 MB of data with a 300 MB SQL Server RAM limit, would that be possible?

quote: Originally posted by joechang
why are you concerned about SQL Server memory usage?
it is supposed to use memory,
if you do not want it to use too much, set an upper bound
what is vps?
there is no hard relation between the memory for sql and the size of the data,
ie, no hard need to keep all data in memory if you have a reasonable disk system
A VPS is a virtual machine, for example MS Virtual Server 2005. So what I take from this is that once SQL Server reaches its max memory limit, it will rely on disk I/O more?
quote: Originally posted by joechang
what is vps?
there is no hard relation between the memory for sql and the size of the data,
ie, no hard need to keep all data in memory if you have a reasonable disk system

running sql with disk io in a vps is seriously bad news
how come?
quote: Originally posted by joechang
running sql with disk io in a vps is seriously bad news

Any more ideas? Anyone?
Any help or suggestions would be much appreciated.
Thanks in advance!
sql is about doing disk io
every vm performance analysis i have seen says it sucks at io