SQL Server Performance

General Performance Test Results

Discussion in 'SQL Server 2005 General Developer Questions' started by bitspinner, May 21, 2006.

  1. bitspinner New Member

    I'm looking for links that will provide general information on actual SQL performance.

    I need to answer questions like: how will transaction speeds be impacted if I scale CRUD operations on a flat file from 1,000 1000-byte records to 100,000 1000-byte records?

    I respect that "it all depends," but I'm hoping there's some general info out there to support high-level DB design decisions.

    Any guidance on this would be appreciated.

    Thanks!
  2. Adriaan New Member

    Flat files?
    Microsoft SQL Server is a database server, not a file-sharing application - not sure where and how flat files would come into play. Do you need to import, process and finally output data in an external format?

    Transaction speeds?
    Perhaps you mean execution time?

    If you are working with properly designed staging tables, and you use stored procedures to do the processing, then I would expect the import/export routines to take up a bigger share of the execution time as the number of rows grows. Things would then depend more on the speed of the import/export routines than on the actual processing (if properly designed).
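
    A minimal sketch of the pattern I mean, assuming SQL Server 2005; the table and procedure names here are made up for illustration:

        -- Staging table: imported rows land here before processing.
        CREATE TABLE dbo.ImportStaging (
            RowId   INT IDENTITY(1,1) PRIMARY KEY,
            Payload CHAR(1000) NOT NULL  -- the ~1000-byte record body
        );
        GO

        -- Final table the processed rows end up in.
        CREATE TABLE dbo.Records (
            RecordId INT IDENTITY(1,1) PRIMARY KEY,
            Payload  CHAR(1000) NOT NULL
        );
        GO

        -- Processing is set-based inside a stored procedure, so the
        -- costly part that remains is the import/export itself.
        CREATE PROCEDURE dbo.ProcessImportStaging
        AS
        BEGIN
            SET NOCOUNT ON;

            INSERT INTO dbo.Records (Payload)
            SELECT Payload
            FROM dbo.ImportStaging;

            TRUNCATE TABLE dbo.ImportStaging;
        END;
        GO

    Import into the staging table with bcp or BULK INSERT, run the procedure, then export the results; the row-by-row work never leaves the server.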
  3. bitspinner New Member

    Thanks for the message. I should have put a "newbie alert" (with respect to DBs) at the front of my post. As you can see, I don't even know the terminology to frame the question.

    I'm building an app that I hope to productize, with limited development resources (my ability and budget). I would like to build a pilot that has sufficient capacity to achieve certain business goals (number of users, amount of data they can access). I've got one shot to get the DB design right.

    I'm looking for some raw performance data to understand, literally, how scaling from 1,000 1000-byte records to 100,000 1000-byte records is going to affect response time. The specific problem is extracting 1,000 records at a time, working on them in VS2005 DataSets, and then updating them back to disk through DataTables. I've got flexibility to break the 100,000 records into smaller units if the file size is going to contribute to response-time issues.
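
    To make the access pattern concrete, here's roughly what I have in mind; the table and column names are invented, since I haven't built anything yet:

        -- Pull the next batch of 1,000 rows past the last key processed
        -- (keyset paging, so each batch is a cheap indexed range scan).
        DECLARE @LastId INT;
        SET @LastId = 0;  -- highest RecordId handled so far

        SELECT TOP (1000) RecordId, Payload
        FROM dbo.Records
        WHERE RecordId > @LastId
        ORDER BY RecordId;

        -- The app fills a DataSet from this result, edits the rows, and
        -- a DataAdapter.Update() writes the changes back; @LastId then
        -- advances to the highest RecordId in the batch.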

    I don't have the first clue about how these numbers might impact response time. For all I know, it could make no difference, or a critical difference.

    I'll build my own performance tests if necessary, though I'm hoping there's something out there that reports on performance in different scenarios that I can extrapolate from.

    Any suggestions on that?

  4. joechang New Member

    Predicting performance for anything other than the TPC-C and TPC-H benchmarks is a very complicated matter.

    There is no meaningful material other than mine on this topic, so unless you want to spend big bucks, it's cheaper to try it out for yourself.

    Look on sqldev.net for a PPT on bulk loads.
  5. bitspinner New Member

    Thanks for your message. That was very helpful. I did set up my own test, and going through it gave me a chance to contemplate all of the complicating factors. I found acceptable response time in my test, and then realized I was on a "dedicated" machine (my test box) with no other apps running. I fired up a bunch of program loads and certainly got different results! However, now I know two things, one of them for sure: I'm probably going to be OK (because my target numbers are very aggressive, and my response-time tolerance is big), and when it gets to serious production, I'm going to have to add some more expertise to the mix.
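
    For anyone curious, the test boiled down to something like this (simplified; the actual names on my box differ):

        -- Load 100,000 rows of ~1000 bytes each into a scratch table.
        CREATE TABLE dbo.PerfTest (
            Id      INT IDENTITY(1,1) PRIMARY KEY,
            Payload CHAR(1000) NOT NULL
        );
        GO

        INSERT INTO dbo.PerfTest (Payload)
        SELECT TOP (100000) REPLICATE('x', 1000)
        FROM master.sys.columns c1
        CROSS JOIN master.sys.columns c2;
        GO

        -- Time a 1,000-row read plus a 1,000-row update.
        DECLARE @t DATETIME;
        SET @t = GETDATE();

        SELECT Id, Payload
        FROM dbo.PerfTest
        WHERE Id BETWEEN 1 AND 1000;

        UPDATE dbo.PerfTest
        SET Payload = REPLICATE('y', 1000)
        WHERE Id BETWEEN 1 AND 1000;

        SELECT DATEDIFF(ms, @t, GETDATE()) AS elapsed_ms;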

    Thanks!
