SQL Server Performance

Performance on insert

Discussion in 'SQL Server 2005 T-SQL Performance Tuning' started by schalling, Nov 15, 2006.

  1. schalling New Member


    How should I go about inserting data into a table when inserts happen every second or so?

    Should I fire a stored procedure every time, or collect data for, say, 100 rows and insert those through dynamic SQL, or use DTS/SSIS?

    Performance is my main issue.

    I'm using SqlServer 2005.

    Hope someone can help.
  2. cmdr_skywalker New Member

    Can you specify what type of insert is running (batch or single; from the application, a trigger, or a stored procedure) and where the performance issue is (writes or reads)?

    May the Almighty God bless us all!
  3. schalling New Member

    I'm not sure what you mean, but my situation is as follows:

    An application inserts a single row (with around 10-40 KB of data) into a single table every second or so.

    I need the best possible performance on the insert commands. Select commands (and others) are executed by an agent that runs through the rows in the table after the application has stopped and the data has been added - at that point performance has little relevance (though good performance is always nice to have).

    I've been told that bulk inserts are always preferred over single inserts, no matter the kind of database. I was thinking of collecting a few hundred rows and inserting them in one bulk insert. The data is in memory (in objects). Do I really need to write the data to a text file first to make a bulk insert, or is there some other way?

    To recap the questions: "Should I use bulk insert in the described scenario for best performance? And if so, do I really need to write the data to a file first before bulk inserting?"

    Thanks,

    schalling
  4. jezemine New Member

    If you are coding against the CLR 2.0, you can use the SqlBulkCopy class to do bulk inserts without the need to write a file. Its WriteToServer method takes an IDataReader, among other overloads.

    http://msdn2.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx

    Since you have the data in memory as a collection of objects, it would be a simple matter to implement an IDataReader that wraps your collection and pass it to SqlBulkCopy.

    Also, if I understand you correctly, nobody is reading from the table while the inserts are happening. If that's truly the case, you could drop all indexes on the table for the inserts (this will improve insert performance) and then recreate them when you need to select from the table. Inserts are fastest when there are no indexes to maintain.

    However, I have the feeling you haven't prototyped this yet. Until you have written some code and tested it, it's really impossible to know where the performance problems will be, or even whether there is a performance issue at all. The only way to know is to measure, which in SQL Server means using Profiler and showplan. 99% of the time, even experienced devs are wrong when you ask them where the most time is spent in their code - and the 1% who are right are only right because they profiled their code.

    SqlSpec - a fast, cheap, and comprehensive data dictionary generator for SQL Server 2000 and 2005 and Analysis Server 2005 - www.elsasoft.org
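    A minimal sketch of the SqlBulkCopy approach (the connection string, table name, and columns are illustrative assumptions; a DataTable stands in here for the custom IDataReader wrapper, since WriteToServer accepts either):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class BulkLoader
{
    // Flushes a buffered batch of rows to SQL Server in one bulk copy -
    // no intermediate file is written.
    static void Flush(DataTable buffer, string connectionString)
    {
        using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.SensorLog"; // hypothetical table
            bulk.BatchSize = buffer.Rows.Count;
            bulk.WriteToServer(buffer);
        }
        buffer.Clear();
    }

    static void Main()
    {
        DataTable buffer = new DataTable();
        buffer.Columns.Add("CapturedAt", typeof(DateTime));
        buffer.Columns.Add("Payload", typeof(byte[]));

        // Collect a few hundred rows in memory, then flush in one round trip.
        for (int i = 0; i < 200; i++)
            buffer.Rows.Add(DateTime.UtcNow, new byte[10240]);

        Flush(buffer, "Server=.;Database=Test;Integrated Security=SSPI;");
    }
}
```

    Implementing IDataReader over your object collection instead of copying into a DataTable would avoid the extra in-memory copy, at the cost of more code.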
  5. schalling New Member

    You are quite right about my prototype. It's still only in "brainwave" code.

    These questions are just so that my prototype needs as little redesigning as possible. I'll take your advice on the SqlBulkCopy and the indexes and take it from there.

    Thanks for all the help.
  6. TheSQLGuru New Member

    Dropping and re-adding indexes is impractical once the table has a sizable number of rows. Why drop and rebuild the indexes on a 5M-row table (both the drop and the add are logged, btw) just to add a few thousand rows?

    I don't think SSIS is useful here either, unless you collected a very large number of rows and then fired off a bulk-load type process. But that could just as easily be done with the BULK INSERT T-SQL statement.
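    If the rows were staged to a file first, that T-SQL alternative could be fired from ADO.NET along these lines (the table, file path, and options are illustrative assumptions):

```csharp
using System.Data.SqlClient;

class BulkInsertExample
{
    static void Main()
    {
        string sql =
            "BULK INSERT dbo.SensorLog " +        // hypothetical target table
            "FROM 'C:\\staging\\rows.dat' " +     // pre-written data file
            "WITH (TABLOCK, BATCHSIZE = 1000)";   // TABLOCK helps load speed

        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=Test;Integrated Security=SSPI;"))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

    Note that, unlike SqlBulkCopy, this route does require writing the data to a file first, which is exactly what the original poster wanted to avoid.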

    As for the SqlBulkCopy usage, that could be effective if you do it entirely from memory. I would compare that model against a simple stored-procedure call model, as well as against a direct ADO/ADO.NET execute of a crafted INSERT statement, to see which wins for you.
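    A sketch of how the two per-row models might be timed against each other (the connection string, table, columns, and the usp_InsertRow procedure are illustrative assumptions):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

class InsertComparison
{
    const string ConnStr = "Server=.;Database=Test;Integrated Security=SSPI;";

    // Model 1: direct parameterized INSERT per row.
    static void DirectInsert(DateTime capturedAt, byte[] payload)
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO dbo.SensorLog (CapturedAt, Payload) " +
            "VALUES (@at, @data)", conn))
        {
            cmd.Parameters.AddWithValue("@at", capturedAt);
            cmd.Parameters.AddWithValue("@data", payload);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Model 2: stored-procedure call per row (usp_InsertRow is hypothetical).
    static void SprocInsert(DateTime capturedAt, byte[] payload)
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand("dbo.usp_InsertRow", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CapturedAt", capturedAt);
            cmd.Parameters.AddWithValue("@Payload", payload);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 100; i++)
            DirectInsert(DateTime.UtcNow, new byte[10240]);
        Console.WriteLine("direct INSERT: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        for (int i = 0; i < 100; i++)
            SprocInsert(DateTime.UtcNow, new byte[10240]);
        Console.WriteLine("sproc: " + sw.ElapsedMilliseconds + " ms");
    }
}
```

    A third timing loop around the SqlBulkCopy model would complete the comparison; as jezemine says, only measuring on your own hardware and schema settles which wins.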

    For maximum INSERT performance you should NOT have a clustered index of any sort on the table. A heap works best in this case, although certain queries and other activity on the table may then perform worse. You should also keep non-clustered indexes on the table to the minimum required to optimize the other important operations. Avoid foreign keys and other constraints as well, if practical.
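    Put together, the table design described above might look like this, run once at setup (all names are illustrative; the single non-clustered index supports the agent's later scan and could instead be created after the load finishes):

```csharp
using System.Data.SqlClient;

class CreateHeapTable
{
    static void Main()
    {
        // A heap: no PRIMARY KEY or clustered index on the insert-heavy table.
        string ddl =
            "CREATE TABLE dbo.SensorLog (" +
            "  CapturedAt datetime NOT NULL," +
            "  Payload varbinary(max) NOT NULL); " +
            // One minimal non-clustered index for the agent's read pass.
            "CREATE NONCLUSTERED INDEX IX_SensorLog_CapturedAt " +
            "  ON dbo.SensorLog (CapturedAt);";

        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=Test;Integrated Security=SSPI;"))
        using (SqlCommand cmd = new SqlCommand(ddl, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```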

