SQL Server Performance

High CPU usage process: % Processor Time

Discussion in 'Performance Tuning for DBAs' started by ranjan_scjd, Jul 27, 2005.

  1. ranjan_scjd New Member

    Hi All

    Could anyone please help me out with this issue? I have a production database of 15 GB. I have a scheduled job that runs at midnight and takes 8 hours to complete, doing around 50 lakh record insert operations. All queries running at that time are optimized, because almost all of them use the clustered indexes for their search criteria.

    But during this 8-hour interval I am getting a MOM alert from that server every 5 minutes saying:

    Severity: Error
    Status: New
    Source: Process: % Processor Time: sqlservr
    Name: SQLSERVR Process > 90% CPU for 15 minutes
    Description: Process: % Processor Time: sqlservr value = 92.5761777071311. The average over last 3 samples is 92.5762.

    Can anyone please help me out with this issue, as I am facing a lot of difficulties.

    Thanks
    Ranjan Rashmi Sahoo



    Ranjan
  2. satya Moderator

    I presume the insert is performed using BULK INSERT; if not, could you please make sure to insert the records in small batches so that the operation does not stress the server's CPU.


    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. Twan New Member

    Hi Ranjan,

    You'll need to give us more details about what the inserts are doing, etc. Is this a stored proc, an external program, any code that you can show? How heavily indexed are the tables that you are inserting into?

    50 million rows in 8 hours is around 1736 per second, which seems pretty quick if this is a single process.


    Is this a single-CPU box? If not, then there must be some SELECTs using parallelism; otherwise only one CPU would max out on inserts.

    Cheers
    Twan
  4. deepakontheweb New Member

    Ranjan: If your queries are correctly optimized, then insert speed depends mainly on disk activity (spindle RPM, RAID level), the placement of the data/log files, and CPU capacity, and we don't know those details as of now. Still, if you want to reduce CPU usage, consider faster disks or running the process in batches instead of all in one go (use WAITFOR DELAY between inserts); see the sketch below.

    quote:Originally posted by Twan

    50 million rows in 8 hours is around 1736 per second, which seems pretty quick if this is a single process.

    50 lakhs means 5 million, which is about 173 records per second. :)

    Deepak Kumar

    --An eye for an eye and everyone shall be blind
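    A minimal T-SQL sketch of the batching idea above, assuming a hypothetical #SecurityTemp source table with an identity column RowID, a hypothetical dbo.SecurityMaster target table, and made-up column names, batch size, and delay (all illustrative, not from the thread):

        DECLARE @BatchSize int, @LastID int, @MaxID int
        SET @BatchSize = 10000

        SELECT @LastID = MIN(RowID) - 1, @MaxID = MAX(RowID) FROM #SecurityTemp

        WHILE @LastID < @MaxID
        BEGIN
            -- copy one batch from the temp table to the master table
            INSERT INTO dbo.SecurityMaster (EmployeeID, ComponentID, PermissionLevel)
            SELECT EmployeeID, ComponentID, PermissionLevel
            FROM #SecurityTemp
            WHERE RowID > @LastID AND RowID <= @LastID + @BatchSize

            SET @LastID = @LastID + @BatchSize

            -- short pause between batches to ease CPU and disk pressure
            WAITFOR DELAY '00:00:05'
        END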
  5. gurucb New Member

    Since you are inserting a high number of rows, it should almost certainly be a disk issue that is being reflected as high CPU usage.

    But here is one pointer that I can think of: you have a clustered index on the table, and when you are inserting rows, then apart from the inserts themselves I would think it also has to take care of page splits...

    Just to check if this is the issue (I am not sure either :)): before running this job, reindex the table with a low fill factor to minimise page splits (see the sketch below).
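    A minimal sketch of that reindex step on SQL Server 2000, assuming the same hypothetical dbo.SecurityMaster table; the 80 percent fill factor is just an illustrative value, not a recommendation from the thread:

        -- rebuild all indexes on the table, leaving roughly 20% free space per page
        DBCC DBREINDEX ('dbo.SecurityMaster', '', 80)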
  6. ranjan_scjd New Member

    Hi All

    Thanks to all for your quick reply. Let me describe my problem in more details:


    We are running an ERP on a SQL Server 2000 database. In it we have one job that refreshes security permissions.
    The DB server configuration is:
    RAID 5 controller disks, 2 GB RAM, 4 dual Xeon processors at 2 GHz.

    The job description:
    This job is a stored procedure. It checks employees' security permissions for the components of the ERP. It first keeps records in a temporary table while processing. After processing is complete, it inserts all the records from the temporary table into the master table.

    1. There are bulk insert operations happening.
    2. Is there any way to bulk insert from one table to another table (not from a table to a file)? All the batch insert or bulk insert operations take files as input when processing a huge set of data.
    3. For heavy insert operations on a table, is a high or a low fill factor value better? Which kind of fill factor will avoid page splits?




    Ranjan
  7. ranjan_scjd New Member

    Hi Satya

    This job is a stored procedure, and here only an insert operation is happening, from one temporary table to one master table.

    Regards
    Ranjan...


    quote:Originally posted by satya

    I presume the insert is performed using BULK INSERT; if not, could you please make sure to insert the records in small batches so that the operation does not stress the server's CPU.


    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.

    Ranjan
  8. satya Moderator

    You can take the help of BCP in this case instead of BULK INSERT, as you can use BCP OUT and BCP IN to export/import the data (a sketch is below). The fill factor only matters during reindexing and if you have minimal disk space to manage.
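    A minimal command-line sketch of the BCP route, assuming a hypothetical ERPDB database, a permanent dbo.SecurityStaging source table, the dbo.SecurityMaster target table, an ERPSERVER instance, and a trusted connection (all names are illustrative):

        REM export the staging rows to a native-format file
        bcp ERPDB.dbo.SecurityStaging out D:\bcp\security.dat -n -S ERPSERVER -T

        REM load the file into the master table, committing every 10,000 rows
        bcp ERPDB.dbo.SecurityMaster in D:\bcp\security.dat -n -S ERPSERVER -T -b 10000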

    How often do the database consistency check jobs run?

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  9. mmarovic Active Member

    A lower fill factor may help, but I doubt that you have a clustered index that makes data get written randomly all over the table; my guess is that you have a composite clustered index.
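    Two quick checks that could confirm or rule out that guess, again assuming the hypothetical dbo.SecurityMaster table name used above (SQL Server 2000 syntax):

        -- list the indexes and their key columns to see whether the clustered index is composite
        EXEC sp_helpindex 'dbo.SecurityMaster'

        -- check fragmentation after the nightly job has run;
        -- heavy page splitting shows up as low scan density / low average page density
        DBCC SHOWCONTIG ('SecurityMaster')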
  10. satya Moderator

    A composite clustered index will make things worse if the fill factor is not addressed properly; in this case, monitoring disk space during the process is a must.
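    For that disk space monitoring, a minimal check (again assuming the hypothetical dbo.SecurityMaster table):

        -- reserved, data, index and unused space for the table
        EXEC sp_spaceused 'dbo.SecurityMaster'

        -- overall database size and unallocated space
        EXEC sp_spaceused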

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
