SQL Server Performance

Does 1 thread = 1 cpu hit?

Discussion in 'Performance Tuning for Hardware Configurations' started by cyclone_dba, Jul 24, 2006.

  1. cyclone_dba New Member

    Does anyone know if 1 thread always equals one cpu hit?

    My company is highly batch intensive, but the jobs running in batch are serialized so that only one job is running at a time. On our 4 processor servers, looking at task manager or any other monitoring tool, it appears that we hit only one cpu. When a job finishes and another starts up, it cycles to the next cpu. The server registers almost nothing on any other cpu during batch. The server is set up to use all available processors, but it really looks as though throwing additional cpu at my problem is a waste of resources.

    We are having issues at night getting through our batch window before the system needs to be up in the morning. My network admins want to throw bigger hardware at it, but given what I am seeing via monitoring, I am unsure it will fix the problem. I am surprised that SQL Server does not handle multi-threading across the processors by default, but I can find no reference out there on the inner architecture of the database engine, nor on how to tune for this.

    Any help on things to try would be greatly appreciated.

  2. merrillaldrich New Member

    You say that they are "serialized" ... how / why? If each one doesn't get a parallel execution plan, and you are forcing them to run in order, then it seems logical that only one CPU would get hit hard.
  3. merrillaldrich New Member

    FWIW - One worker process in SQL Server will run on one or more CPUs depending on its execution plan, and whether parallelism is enabled inside SQL Server. The query optimizer decides for each one if it seems to be economical to "parallelize" the query.

    If you have queries that are CPU bound (that is, there's enough capacity for disk IO, etc. but the CPU is the limiting factor) then it probably makes sense to try running them simultaneously to make use of the processors. If the process is limited by IO, then running more at the same time probably would not help as the disks can't keep up anyway.

    You can look at the execution plan for your specific queries to see if they are each using more than one CPU, or you can look at the process list and see if there are multiple entries for the same SPID, which will also show if you have parallel processing for single queries.
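
    A sketch of the second check, assuming SQL Server 2000/2005: each worker of a parallel query appears in `master..sysprocesses` as a separate row for the same SPID, distinguished by the `ecid` column.

    ```sql
    -- Sketch: spot parallel queries by looking for multiple rows per SPID.
    -- ecid = 0 is the parent task; ecid > 0 are parallel sub-threads.
    SELECT spid, ecid, status, cpu, physical_io
    FROM master..sysprocesses
    WHERE spid > 50            -- skip system SPIDs
    ORDER BY spid, ecid;
    ```

    If your batch SPID only ever shows a single row with ecid = 0, the optimizer is giving it a serial plan, which would match the one-cpu behavior you are seeing.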
  4. cyclone_dba New Member

    What I mean by serialized is that we only run one job at a time.

    An example of what our job schedule could look like:
    1. Pull in and load orders into the system.
    2. Initial process of orders for validity.
    3. Autoprocess orders through the system.
    4. Generate letters, invoices, and accounting transactions on the ledger.
    5. Export reports and output datasets to other systems.

    Each job runs by itself, due to the locking and updating that it does on the system. Our business is very process oriented and data from one day must be processed that day (e.g. all orders must be processed on the day they are received).

    What a typical trace of SQL Server would result in is (batch run from 6PM to 6AM):
    Job 1: ran 1 hour and used cpu 1, cpu 2-4 remain untouched
    Job 2: ran 3 hours and used cpu 3, cpu 1,2,4 remain untouched
    Job 3: ran 5 hours and used cpu 3, cpu 1,2,4 remain untouched
    Job 4: ran 2 hours and used cpu 1, cpu 2-4 remain untouched
    Job 5: ran 1 hour and used cpu 4, cpu 1-3 remain untouched
  5. joechang New Member

    assuming someone did a good job in making the batch step efficient,
    then it is probably too small to get a parallel plan
    even then, it can be difficult to get a parallel plan to actually give you a significant gain

    ideally, you would like to configure the process to allow multiple simultaneous jobs each processing a separate set

    what processor and freq are you on?
    if you cannot change your batch process, then you could only gain by going to the fastest possible processor, which is usually found in a 1- or 2-processor system
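
    For reference, the server-wide knobs that govern whether the optimizer even considers a parallel plan are the standard `sp_configure` options below; the values shown are just illustrative defaults, not a recommendation for this workload.

    ```sql
    -- Sketch: the two sp_configure settings that control parallel plans.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 0;     -- 0 = use all available CPUs
    EXEC sp_configure 'cost threshold for parallelism', 5; -- min estimated cost before parallelizing
    RECONFIGURE;
    ```

    As joechang notes, even with these open, a small or efficient batch step may never cross the cost threshold, so it stays serial.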
  6. merrillaldrich New Member

    It would require some coding effort, but one approach could be to structure the jobs you list so that they can run in parallel:

    1. Load SOME orders
    2. Process those for validity while loading more
    3. Auto-process the first set, check the second for validity, load a third set

    etc.

    If the incoming orders have some logic to their order, it seems like this wouldn't be too hard: you could make a job that processes a specific sub-set of the orders, and kick that off, then a second job with the same logic on another subset, then a third, and so on up to the number of physical CPUs.

    This assumes of course that the jobs can be made to not block one another, and that the advantage of running them in parallel is large enough to net an overall speed increase.
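
    One common way to carve out those sub-sets is a bucket parameter, so each copy of the job touches a disjoint slice of the rows. This is only a hypothetical sketch; `Orders`, `OrderID`, and `Status` are placeholder names, not the poster's actual schema.

    ```sql
    -- Sketch: run four copies of the same step, each with a different
    -- @bucket value (0-3, one per physical CPU). Modulo on the key keeps
    -- the row sets disjoint, so the copies should not block each other
    -- on the same rows.
    DECLARE @bucket int;
    SET @bucket = 0;   -- copy N of the job gets bucket N

    UPDATE Orders
    SET Status = 'Validated'
    WHERE OrderID % 4 = @bucket
      AND Status = 'Loaded';
    ```

    The same `WHERE OrderID % 4 = @bucket` predicate can be reused across each step of the chain, so the per-bucket pipelines stay independent end to end.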
  7. cyclone_dba New Member

    I am seeing the issue on two different servers.

    The first server is a quad server running 2GHz Xeons, 16GB RAM, and SAN attached storage with separate RAID sets/disks for data, log, and tempdb. Storage equates to about 1TB.

    The second server is a dual server running 3GHz Xeons, 8GB RAM, and SAN attached storage with separate RAID sets/disks for data, log, and tempdb. Storage equates to about 750GB.

    The network admins have the DBAs testing a quad server running 2.8GHz dual-core Xeons, 16GB RAM, and SAN attached storage with separate RAID sets/disks for data, log, and tempdb. However, because of the issues with serialized batch jobs, there is little improvement in processing times.

    I have been trying to get our developers to modify the batch programs to allow multiple versions of the same job to be running at once. However, too much work and too few people mean it is not getting done.
  8. joechang New Member

    in this case, i do not think you can gain much on the brute force hardware approach

    even if you cannot divide and parallelize the steps per merrill's suggestion, it is possible you could improve the efficiency of each individual step.
    start with #3, then #2
