SQL Server Performance

Dual CPU always 50% total

Discussion in 'Performance Tuning for DBAs' started by darrenwhite, Aug 22, 2003.

  1. darrenwhite New Member


    I have a 1GB database on a dual 2.4GHz Xeon server with 2.5GB memory, running Windows 2000 Server SP3 and SQL Server 7.0 SP4.

    What I am finding is that when very long queries or views are run, I see the total CPU usage (CPU 0 + CPU 1) only ever reaching 50% during that time.

    For example, if CPU 0 were at 61%, then CPU 1 would be at 39%, so the average for both is 50% (100% / 2). Then if CPU 0 were to dip to, say, 19%, CPU 1 would be at 81%.

    Looking at the perfmon trace over time, the two CPUs seem to exactly mirror each other, only ever touching each other's trace lines at the 50% mark.

    The total CPU usage (0+1) isn't a solid line at 50%, though it NEVER goes above the 55% mark. Of course, when the query finishes, both CPUs settle down and seem to become independent of one another.

    Anyone know why the above happens? Is there a reason why I never get near 100% utilisation from both processors? Is this a known issue?

    I get the results above on our test server also.

    Darren
  2. satya Moderator

    Have you got any performance bottleneck with the current behaviour?

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  3. gaurav_bindlish New Member

    Even if there are any performance issues, have you eliminated the locking in the database and disk I/O as bottlenecks? What is the Processor Queue Length during these periods? Is it more than 2?

    I would recommend an audit of the server as suggested in http://www.sql-server-performance.com/sql_server_performance_audit.asp

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  4. darrenwhite New Member

    I have been running a particular task for the last 24 hours and monitoring the performance.

    Processor queue average is 1.03.
    Disk IO is also low.
    How do I measure locking in perfmon (or otherwise) whilst this task is running?

    Cheers,

    Darren



    quote:Originally posted by gaurav_bindlish

    Even if there are any performance issues, have you eliminated the locking in the database and disk I/O as bottlenecks? What is the Processor Queue Length during these periods? Is it more than 2?

    I would recommend an audit of the server as suggested in http://www.sql-server-performance.com/sql_server_performance_audit.asp

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

  5. satya Moderator

    Capture the SQL Server:Locks object, and also run SP_LOCK, SP_WHO or SP_WHO2 from Query Analyzer to see the current activity.
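    For example, from Query Analyzer something like this should do (sp_who2 is undocumented but ships with SQL 7.0; the SPID value below is only a placeholder for illustration):

        EXEC sp_who2      -- current sessions; the BlkBy column shows who is blocking whom
        EXEC sp_lock      -- locks currently held or requested, listed by SPID

        -- Drill into one suspect SPID (55 is just an example value):
        EXEC sp_lock 55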

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  6. darrenwhite New Member

    I've had a look at the locks object.

    It would appear that "Lock Requests/sec" is the only counter that is above zero for any of the items listed.
  7. gaurav_bindlish New Member

    What are the values of Average Wait Time and Lock Waits/sec?
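    If Perfmon is awkward to leave running, the same counters can (as far as I know) also be read from T-SQL on SQL 7.0 through master..sysperfinfo; counter names are as I remember them, and the values are cumulative, so sample twice and take the difference for a per-second rate:

        SELECT instance_name, counter_name, cntr_value
        FROM master..sysperfinfo
        WHERE object_name = 'SQLServer:Locks'
          AND counter_name IN ('Average Wait Time (ms)', 'Lock Waits/sec', 'Lock Requests/sec')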

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  8. satya Moderator

    This parameter shows the number of lock requests per second that resulted in a deadlock.
    To resolve it you need to improve I/O or add memory, and also try to improve those slow/deadlocking queries.

    HTH

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  9. darrenwhite New Member

    Average wait time = 0
    Lock waits/sec = 0

    Darren




    quote:Originally posted by gaurav_bindlish

    What are the values of Average Wait Time and Lock Waits/sec?

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

  10. darrenwhite New Member

    Are you saying that lock requests per second means "lock requests per second that resulted in a deadlock"?

    SQL is only using (dynamically) 1.8GB out of the 2.5GB on the server.
    I/O isn't a bottleneck.

    Are you saying that I can get rid of this odd dual processor issue solely by changing the queries that I am running?

    Darren



    quote:Originally posted by satya

    This parameter shows the number of lock requests per second that resulted in a deadlock.
    To resolve it you need to improve I/O or add memory, and also try to improve those slow/deadlocking queries.

    HTH

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com

  11. joechang New Member

    If you are just running a single expensive query, and you have disabled parallel plans or for some other reason a parallel plan was not used, then the query will probably run on just a single CPU.

    If you are doing inserts, updates or deletes, the main thread will be on a single processor, but the log writes may be on the other CPU.
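    To check whether that is what is happening, something along these lines should show the relevant settings (a sketch: the options and the MAXDOP hint exist in SQL 7.0, but the table name is made up):

        -- Server-level settings: 'max degree of parallelism' = 1 means no parallel plans
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        GO
        EXEC sp_configure 'max degree of parallelism'
        EXEC sp_configure 'cost threshold for parallelism'
        GO

        -- Parallelism can also be allowed or limited per query with a hint:
        SELECT COUNT(*)
        FROM BigTable            -- hypothetical table, just for illustration
        OPTION (MAXDOP 2)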
  12. gaurav_bindlish New Member

    Coming to the original question. What makes you think the CPU usage should be more than this? Do you face any performance issues?

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  13. darrenwhite New Member

    I don't think there are any true performance issues, but I was wondering if this 'inverse-mirrored' behaviour is to be expected in any way.

    Or is it indicative of a problem that, when solved, would increase performance?

    What it boils down to is that SQL Server is never using more than 50% of the available CPU time. I know that on a single-processor machine it uses nearly 100%. Therefore, is there a way of utilising more of the two CPUs?

    Darren



    quote:Originally posted by gaurav_bindlish

    Coming to the original question. What makes you think the CPU usage should be more than this? Do you face any performance issues?

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

  14. gaurav_bindlish New Member

    Try running another batch / bulk operation parallely. See if the processor utilization goes above 50%.

    BTW, are you monitoring the processor utilization for all the processors together? What does the performance monitor show for the _Total counter?

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  15. darrenwhite New Member

    The total is always 50%.

    Can I not assume that because I am seeing activity on both processors, the current job is running in parallel?




    quote:Originally posted by gaurav_bindlish

    Try running another batch / bulk operation parallely. See if the processor utilization goes above 50%.

    BTW, are you monitoring the processor utilization for all the processors together? What does the performance monitor show for the _Total counter?

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

  16. gaurav_bindlish New Member

    What I meant was running another job while this job is running. What I meant by the word 'Parallely' was 'Concurrently'.

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  17. Luis Martin Moderator

    What about this: maybe SQL simply doesn't need more than 50 or 55% of the processor?

    Luis Martin
  18. Twan New Member


    "Can I not a ssume that because I am seeing activity on both processors, that the current job is running in parallel?"

    No, you can't assume that... a single thread may skip between the two processors if there is any I/O occurring or its timeslice ends...

    I agree with others that the most likely reason is that your query does not use a parallel plan.

    Are you able to post:
    - an example query that you think should use both processors
    - the output of a SHOWPLAN_ALL
    - the output of sp_configure (a sketch of how to capture these follows below)
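    Something like this in Query Analyzer should capture the last two (the SELECT is only a placeholder for the actual slow query or view):

        SET SHOWPLAN_ALL ON
        GO
        -- replace this placeholder with the real long-running query
        SELECT * FROM MySlowView
        GO
        SET SHOWPLAN_ALL OFF
        GO

        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        GO
        EXEC sp_configure     -- lists every option, including max degree of parallelism
        GO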

    Cheers
    Twan
  19. Leon New Member

    Darren, I don't know if you are still monitoring this topic, but I have run into this many times. I have built huge queries that pull together over 40 tables. Usually these queries run on a dual-processor machine. I found that when the proc % was at a constant 50 percent, SQL was creating a query plan. I ran a query once that I had to stop after 60 hours.

    The way I have found to get around this is to make sure all tables that are being joined have primary keys based on the join statement. Then I update the statistics prior to running these huge queries. Also, instead of using a join statement of a.fld1 = b.fld1 and a.fld2 = b.fld2, I use a.fld1+a.fld2 = b.fld1+b.fld2. These joins usually end up being hash joins.
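    To make that concrete, the two join styles look something like this (table and column names are made up for illustration):

        -- Column-by-column join (the usual form):
        SELECT a.fld1, a.fld2, b.fld3
        FROM TableA a
        JOIN TableB b
          ON a.fld1 = b.fld1
         AND a.fld2 = b.fld2

        -- Concatenated-expression join (the form described above; the expression
        -- is not index-friendly, so it usually ends up as a hash join):
        SELECT a.fld1, a.fld2, b.fld3
        FROM TableA a
        JOIN TableB b
          ON a.fld1 + a.fld2 = b.fld1 + b.fld2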

    Now let me say that I am not a SQL expert, and some of you reading this will probably be able to shoot lots of holes in it. But I get some great response from these queries.

    The one thing that I would love to find out is how to make that 50% a higher number. I usually run the queries on a test machine that is doing nothing else. Any help from you SQL brains would be appreciated.

    Leon
