SQL Server Performance

SQL Server CPU and Java Stored Procedures

Discussion in 'Performance Tuning for DBAs' started by patr667, Jul 3, 2008.

  1. patr667 New Member

    We are currently experiencing a problem where CPU fluctuates but holds a steady 80%. One application runs on WebSphere, so the JDBC connections call and use dynamically created stored procedures. Initially the stored procedures were coming and going faster than I could count...but I have that fixed and recompiles are now 0. RPC:Completed durations are usually around 100 ms, occasionally 300 ms, all of which is way too high. Even 7 ms response times, when invoked hundreds of times per page, add up to a user experience of 20-30 seconds.
    I can't run Profiler traces through the Index Tuning Wizard, as it can't read the stored procedures and make suggestions.
    We reindex nightly and update statistics with a 100% sample.
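    On SQL 2000 that maintenance typically boils down to something like this (a sketch, using the RELATION table from later in this thread as the example):
    -- Rebuild every index on the table, keeping the 90% fill factor
    DBCC DBREINDEX ('nbapend.RELATION', '', 90)
    -- Refresh statistics with a 100% sample
    UPDATE STATISTICS nbapend.RELATION WITH FULLSCAN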
    Buffer cache hit ratio has held constant at > 99.8% for the past 4 days, and disk queue length is 0 for all disks.
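    The hit ratio can be read straight from sysperfinfo on SQL 2000; a minimal sketch (the raw counter has to be divided by its base counter):
    SELECT r.cntr_value * 100.0 / b.cntr_value AS buffer_cache_hit_pct
    FROM master.dbo.sysperfinfo r
    JOIN master.dbo.sysperfinfo b
      ON b.object_name = r.object_name
    WHERE r.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base'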
    Microsoft SQL Server 2000 - 8.00.2187 (Intel X86), running on Windows 2003 Enterprise with 4 GB RAM and the /PAE, AWE, and /3GB options enabled.
    Any help would be great...just as an example, here's a query...
    SELECT PARENTIDKEY,CONTRACTKEY,COMPANYKEY,BACKENDKEY,RELATIONKEY,ORIGINATINGOBJECTTYPE,
    RELATEDOBJECTTYPE,RELATIONROLECODE,RELATIONDESCRIPTION,NAMEFROMRELATEDOBJECT,KEYFROMRELATEDOBJECT,
    STARTDATE,ENDDATE,DEPENDENT,INTERESTPERCENT,VOLUMESHAREPCT,SEQUENCE,RELATEDREFID,RELATEDREFIDTYPE,
    PRIMARYADDRESSID,PRIMARYPHONETOCALLID,PRIMARYEMAILID,BENEFICIARYSEQNUM,BENEFICIARYSHAREMETHOD,
    BENEFICIARYCOMMONDISASTERPERIO,BENEFICIARYDESIGNATION,IRREVOKABLEIND,DURATION,ID,ORIGINATINGOBJECTID,
    RELATEDOBJECTID,PARTICIPANTID,SECONDARYADDRESSID,DATAREP
    FROM "RELATION"
    WHERE PARENTIDKEY = @P1 AND CONTRACTKEY = @P2 AND COMPANYKEY = @P3 AND BACKENDKEY = @P4
    Profiler shows the following for this query: Duration 219 ms, CPU 15 ms, and Reads 22,492.
    The RELATION table has a measly 406,400 rows and a clustered index on parent, contract, company, backend, and id.
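    The same numbers can be reproduced outside the app by running the query from Query Analyzer with statistics on (a sketch; the '...' values are placeholders for real keys and the select list is abbreviated):
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    DECLARE @P1 varchar(100), @P2 varchar(100), @P3 varchar(100), @P4 varchar(100)
    SELECT @P1 = '...', @P2 = '...', @P3 = '...', @P4 = '...'  -- substitute real key values
    SELECT PARENTIDKEY, CONTRACTKEY, COMPANYKEY, BACKENDKEY    -- etc., full column list as above
    FROM RELATION
    WHERE PARENTIDKEY = @P1 AND CONTRACTKEY = @P2 AND COMPANYKEY = @P3 AND BACKENDKEY = @P4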
    Any help would be fantastic!
  2. martins New Member

    Hi,
    It looks like you have eliminated the possibility of a hardware bottleneck. That Reads figure is still very high, though (22,492 logical reads for a single query), so I would suggest you now look at your indexes to improve performance. If you could post the layout of the above table, all existing indexes, and the execution plan of the above query, we could try to make suggestions accordingly.
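    Something like this sketch will gather all of it; SET SHOWPLAN_TEXT makes SQL Server return the plan instead of running the query:
    EXEC sp_help 'RELATION'        -- table layout
    EXEC sp_helpindex 'RELATION'   -- all existing indexes
    SET SHOWPLAN_TEXT ON
    GO
    -- paste and run the problem SELECT here; its plan comes back instead of the rows
    GO
    SET SHOWPLAN_TEXT OFF
    GO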
  3. patr667 New Member

    The optimizer is choosing a clustered index seek this morning...and there is only one index.
    Here are the table and index build statements. Thanks for your help.
    CREATE TABLE [nbapend].[RELATION] (
    [PARENTIDKEY] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [CONTRACTKEY] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [COMPANYKEY] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [BACKENDKEY] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [RELATIONKEY] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [ORIGINATINGOBJECTTYPE] [bigint] NULL ,
    [RELATEDOBJECTTYPE] [bigint] NULL ,
    [RELATIONROLECODE] [bigint] NULL ,
    [RELATIONDESCRIPTION] [bigint] NULL ,
    [NAMEFROMRELATEDOBJECT] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [KEYFROMRELATEDOBJECT] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [STARTDATE] [datetime] NULL ,
    [ENDDATE] [datetime] NULL ,
    [DEPENDENT] [tinyint] NULL ,
    [INTERESTPERCENT] [float] NULL ,
    [VOLUMESHAREPCT] [float] NULL ,
    [SEQUENCE] [smallint] NULL ,
    [RELATEDREFID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [RELATEDREFIDTYPE] [bigint] NULL ,
    [PRIMARYADDRESSID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [PRIMARYPHONETOCALLID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [PRIMARYEMAILID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [BENEFICIARYSEQNUM] [smallint] NULL ,
    [BENEFICIARYSHAREMETHOD] [bigint] NULL ,
    [BENEFICIARYCOMMONDISASTERPERIO] [smallint] NULL ,
    [BENEFICIARYDESIGNATION] [bigint] NULL ,
    [IRREVOKABLEIND] [tinyint] NULL ,
    [DURATION] [smallint] NULL ,
    [ID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [ORIGINATINGOBJECTID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [RELATEDOBJECTID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [PARTICIPANTID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [SECONDARYADDRESSID] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [DATAREP] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]
    GO
    ALTER TABLE [nbapend].[RELATION] WITH NOCHECK ADD
    CONSTRAINT [PK_RELATION] PRIMARY KEY CLUSTERED
    (
    [PARENTIDKEY],
    [CONTRACTKEY],
    [COMPANYKEY],
    [BACKENDKEY],
    [ID]
    ) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO
  4. moh_hassan20 New Member

    Is COLLATE SQL_Latin1_General_CP1_CI_AS the same as the server collation, or different?
    Why do you use COLLATE on most of the fields?
  5. martins New Member

    The fact that your query is using a clustered index seek is exactly what you want for good performance.
    The possible reasons I can see for the poor performance are the following:
    • The composite clustered index on your table consists of fairly wide varchar fields. Varchars will not perform as well as numeric data types when used in an index, especially when you have five of them as you do. I am not saying this primary key is invalid (or that changing it is practical), but it can be a contributing factor; you can measure the actual key width with the sketch after this list.
    • The query posted above selects every field in the table. That means no narrower non-clustered index could ever cover it, so each execution has to read the full clustered rows. If possible, select only the fields required from the table and not everything.
    • Your table is very wide, meaning that not many records fit on an 8 KB page, and I would say this could very well be the main reason for all the reads. Unfortunately there isn't an easy way around this. You can possibly pin the table in memory, but this might affect other queries negatively, or you can look at refining the design of this table, like normalizing it further if possible; the DBCC SHOWCONTIG call in the sketch below will show the actual page density.
    Unfortunately no easy solutions here, but I hope this helps point you in the right direction towards a possible one...
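    A quick way to put numbers on the first and third points, as a sketch against the table you posted:
    -- Average width of the five-column clustered key, in bytes
    SELECT AVG(DATALENGTH(PARENTIDKEY) + DATALENGTH(CONTRACTKEY)
             + DATALENGTH(COMPANYKEY) + DATALENGTH(BACKENDKEY)
             + DATALENGTH(ID)) AS avg_clustered_key_bytes
    FROM nbapend.RELATION
    -- Page count and density: how many rows actually fit per 8 KB page
    DECLARE @id int
    SET @id = OBJECT_ID('nbapend.RELATION')
    DBCC SHOWCONTIG (@id) WITH TABLERESULTS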
  6. patr667 New Member

    The collation is the server default.
    I totally agree with the three points you have made and have quoted them to managers...unfortunately this is a half-canned app, meaning a vendor provided the base code and we run mods and inputs on top of it. The queries that are bad are all base code. I'm looking for sub-second query response times, but a change has definitely occurred in the past month regarding performance. CPU runs at 80% whether I have 1 user or 100...of course when no one is in, CPU is normal...with the marker being the SQL Server executable. I did order an additional 4 GB of RAM that will be here Wednesday or Thursday at the latest.
    If there is a stat that would help identify what SQL is doing, I haven't been able to find it. I've pinned key tables in memory in QA but haven't done so in production, as it wasn't doing disk I/O anyway. I've tried adding some indexes, which reduced I/O but increased CPU consumption, as expected. The few indexes that were put on are really duplicates anyway, as the PK makes up the where clause...meaning any additional keys were simply duplicates that weren't being chosen anyway.
    Basically I'm in a bad way, with what appears to be no way out...and worst of all, they are looking at backending the thing with Oracle if I can't figure it out.
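    For reference, pinning on SQL 2000 follows the standard DBCC PINTABLE pattern; a sketch, with the database name as a stand-in:
    DECLARE @db_id int, @tbl_id int
    SELECT @db_id = DB_ID('MyAppDb'),               -- stand-in database name
           @tbl_id = OBJECT_ID('nbapend.RELATION')
    DBCC PINTABLE (@db_id, @tbl_id)                 -- DBCC UNPINTABLE reverses it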
  7. martins New Member

    Yes, you are in a bit of a bad situation. I'm not convinced that extra memory will solve the problem, but it might allow you to pin this table to memory as well.
    Given all of this, going to Oracle will not solve anything either. One thing you might still be able to do is place the data for this table on a dedicated local drive, if possible. Can you make the database read-only...or add WITH (NOLOCK) to the queries?
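    Either option is a one-liner; a sketch, with the database name as a stand-in:
    -- Option 1: make the whole database read-only (only if nothing ever writes)
    ALTER DATABASE MyAppDb SET READ_ONLY
    -- Option 2: allow dirty reads on the hot table, query by query
    SELECT COUNT(*) FROM RELATION WITH (NOLOCK)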
  8. patr667 New Member

    I could add the NOLOCK hint; I have been trying to avoid any changes to the base code so the developers wouldn't need to be involved. Making the leap to the next upgrade becomes that much more intensive when all the queries have been modified...there are roughly 1,500 dynamic queries in the base alone. We wouldn't be able to go read-only, as data is inserted into the db toward the end of the transactions.
    I might be able to create two datasources: one that sets the transaction isolation level to READ UNCOMMITTED for the reads, and one that does the updates...keeping the individual queries the same.
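    The read-only datasource would just need to issue this once per connection (or set the equivalent JDBC isolation level, TRANSACTION_READ_UNCOMMITTED), which has roughly the same effect as putting WITH (NOLOCK) on every statement:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED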
  9. martins New Member

    Adding "with nolock" or setting the isolation level might help things for a start, but as long as the query requires the whole record from the database you won't get around the huge amount of physical reads.
    Gotta love it...databases getting blamed for badly written applications.
