SQL Server Performance


Discussion in 'Performance Tuning for DBAs' started by bfarr23, Aug 31, 2004.

  1. bfarr23 New Member

    We have a main web site SP that performs at an average of 300ms. Every now and then it will jump to 5 seconds, 6, 7, etc. When it gets to these high exec times, I just run a query from Profiler (any SP at all) and performance goes back to normal.

    I know something coming into the DB (some combination of parameters) is causing this issue.

    How do I track this problem down?

  2. satya Moderator

    How about routine DBCC & database health checks?
    Try scheduling recompilation of SPs over the weekend for a performance gain.
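
    For example, a minimal sketch (the proc and table names here are just placeholders):

    -- Mark one stored procedure for recompilation on its next run
    -- (sp_recompile does not execute the proc, it only flags it)
    EXEC sp_recompile 'dbo.usp_MainSearch'

    -- Or flag every proc and trigger that references a given table
    EXEC sp_recompile 'dbo.SomeTable'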

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. bfarr23 New Member

    DBCC CHECKDB reported no errors.

    This SP has dynamic SQL, and it rarely recompiles.


  4. FrankKalis Moderator

    Could it be that SQL Server needs the time to create a new execution plan for the dynamic query?

    --
    --Frank
    http://www.insidesql.de
  5. bfarr23 New Member

    When it creates a new execution plan, that means it has recompiled.

    This SP rarely recompiles.

  6. FrankKalis Moderator

  7. satya Moderator

  8. bfarr23 New Member

    A full reindex is run daily at 4am.

    This SP was fine for the last 6 months. Now it's acting up like this about 2-3 times a day.

  9. Chappy New Member

    What does the stored procedure actually do?

    If it's inserting data, could it be that your database or transaction log is set to grow by an unsuitable amount (i.e., the default is 10%, I think)? This could cause a performance delay if your database is large and it needs to grow the files. Most likely not the cause, but worth checking.
  10. Luis Martin Moderator

    Shot in the dark: does the database have auto create statistics on?
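
    You can check with something like this (a sketch; substitute your database name):

    -- Returns 1 if the option is on, 0 if off
    SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoCreateStatistics')
    SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoUpdateStatistics')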


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  11. bfarr23 New Member

    The SP selects data. And we log the parameters in the same DB. That's it. Pretty simple.

    Auto_Create is on.
    Auto_Update is off.

    I run full re-index plus sp_updatestats daily at 4am.
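
    (Roughly, that job amounts to the following sketch; the table name is a placeholder, and in practice it loops over all tables:)

    -- Rebuild all indexes on a table ('' = all indexes, 0 = keep original fillfactor)
    DBCC DBREINDEX ('dbo.SomeTable', '', 0)
    -- Then refresh statistics across the database
    EXEC sp_updatestats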

    DB is 1500MB. Has 3GB free to grow. (does not grow much)
    DB log is 20MB. 2GB free to grow. Simple recovery Model. (we do not do transactions; mainly selects).
    TempDB 10MB. 90MB free.


    So, as I said, this was fine for 6 months. Now, 2-3 times a day, the SP will go from 300ms to 5-7 seconds. It stays there until I run a query from Profiler. Sometimes it resolves itself over time; 90% of the time I have to run a query to get it back to the usual exec times.

  12. Luis Martin Moderator

    Last week I had a similar issue with a query.
    I looked at the execution plan and found a new index to implement.
    When I tried to create the new index on one table, the message was: up to 254 indexes/statistics per table are permitted (Windows 2000, SQL Standard).
    So I removed some old statistics, and the query went back to normal time without the new index.
    Maybe one of your tables has old statistics and there is no room for a new one, but like I said, it's a shot in the dark.


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  13. Chappy New Member

    What does Performance Monitor show during the time of poor performance? Monitor a few basic CPU/disk/memory counters, and then you can get more specific if these flag something up.
  14. bfarr23 New Member

    Auto_Create stats is on. I run sp_updatestats daily at 4:15am.
    Why would a table have old stats?


    The CPU averages 30%. When the SP goes to 5-7 seconds, the CPU goes to 100% and stays there until I run a query. Then the SP is back to the 300ms execution times, and the CPU is back to its 30% average.

  15. bfarr23 New Member

    I observed the execution plan while it's at the 300ms times,
    and also during the 5-7 second times.

    Nothing stands out here to me. Should I be looking for something in particular?


  16. Chappy New Member

    If the execution plan does not appear to change, then it's probably related to physical hardware (such as disk queueing), or maybe blocking. Run sp_who2 during the bad performance to investigate.
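
    For example:

    -- Lists active sessions; the BlkBy column shows which spid is blocking which
    EXEC sp_who2 'active'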
  17. Luis Martin Moderator

    About old statistics:
    I mean, if one table has indexes + statistics = 254, there is no room for new statistics. That's why I deleted the old (earliest created) statistics.
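
    A quick way to count them per table (a sketch; in SQL Server 2000, sysindexes holds a row for each index and each column statistic):

    SELECT COUNT(*)
    FROM sysindexes
    WHERE id = OBJECT_ID('Table-Name')
        AND indid BETWEEN 1 AND 254   -- indexes + statistics; 255 is text/image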


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  18. derrickleggett New Member

    bfarr, I've seen the exact same problem that Luis is talking about. You might just try deleting all the old statistics and running UPDATE STATISTICS. Then I would take the auto_create statistics off and just do it once a day.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  19. bfarr23 New Member

    OK. I will try this.

    So if one table has index + stats = 254:
    - How do I find out if this is the case?
    - Does 254 mean the total of "unique indexes or constraints per table"? (I assume stats are a part of this, as I definitely do not have close to 254 indexes or constraints.)

    How do I delete old stats?

    "I would take the auto_create statistics off and just do it once a day."
    - Do what? Run sp_updatestats? That only 'updates' the stats. What about the 'create' part?


    thx.

  20. derrickleggett New Member

    bfarr, I'm smoking crack. How's that for an answer.

    Leave the auto_create on. Turn the auto_update off. (You might already have this off.) Delete the statistics you don't need, or you want to recreate. Once a day, run sp_updatestats on the database.

    To delete stats, you use DROP STATISTICS.
    To look at information on statistics, use DBCC SHOW_STATISTICS.
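
    For example (the table, index, and statistic names are placeholders):

    -- Show the stored distribution statistics for one index or stats object
    DBCC SHOW_STATISTICS ('dbo.SomeTable', 'IX_SomeTable_SomeColumn')

    -- Drop one statistic (name it as table.statistic)
    DROP STATISTICS dbo.SomeTable._WA_Sys_SomeColumn_075714DC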

    I have had only a few instances where I have manually manipulated statistics; outside of cases like these, it's not necessary:

    1. Nightly maintenance that runs UPDATE STATISTICS.

    2. When I have a new table that will be hit a LOT and grow extremely rapidly the first couple of weeks. (See the sketch after this list.)
    --We had a few times where we had one table that grew millions of rows in the first month of release. Statistics are significantly different at the beginning of this growth stage. They later even out as the data gets bigger. We ran UPDATE STATISTICS on the table every 30 minutes for the first week. After that, we ran it every couple of hours. It's now in the nightly process.

    3. Inability to create a new index or constraint on a table. This normally only happens on very poorly designed tables anyway. The "DBA" before me had a table with over 180 columns. He then had a table_expanded with several more. Needless to say, just the single-column statistics prevented us from doing anything on the stupid table.

    4. Had this issue. Dropped all the statistics off the table and let them be recreated. I believe there's a post on www.sqlteam.com about the issue if you want to do a search there.
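
    The sketch mentioned in point 2 is just this, scheduled as a job (the table name is a placeholder):

    -- Refresh statistics on one fast-growing table, scanning every row
    UPDATE STATISTICS dbo.FastGrowingTable WITH FULLSCAN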

    Also, you have looked at blocking and are absolutely sure there's no blocking going on when this is happening, correct?

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  21. bfarr23 New Member

    Thanks, Derrick.
    Appreciate this.

    I will look again for blocking. I check through EM and sp_who but have seen no blocking before.

    This does seem like it could be a blocking issue. Then again, why would running an SP fix the problem? Interesting.

    I will keep looking and trying different things.

  22. derrickleggett New Member

    You sure it is, or is it just the timing?

    SELECT * FROM sysprocesses WHERE blocked <> 0
    --Will give you blocked processes. You can use better ways, but I'm lazy and tired tonight.

    SELECT * FROM sysprocesses WHERE status NOT IN ('background','sleeping')
    --Will give you active processes.
    --You can also use our favorite tool Profiler to help you out.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  23. Luis Martin Moderator

    This will help you find how many statistics and indexes you have per table.

    select SO.Name as Tabla, SI.name as 'Index', SFG.groupname as 'Filegroup'
    from sysobjects as SO
    join sysindexes as SI
    on SO.Id = SI.id
    join sysfilegroups as SFG
    on SI.GroupId = SFG.GroupId
    where so.name = 'Table-Name'


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  24. bfarr23 New Member

    select SO.Name as Tabla, SI.name as 'Index', SFG.groupname as 'Filegroup'
    from sysobjects as SO
    join sysindexes as SI
    on SO.Id = SI.id
    join sysfilegroups as SFG
    on SI.GroupId = SFG.GroupId
    where so.name = 'Table-Name'

    - returns nothing.


    SELECT * FROM sysprocesses WHERE blocked <> 0
    - no issues seen here; I ran it many times.

  25. Luis Martin Moderator

    Did you replace 'Table-Name' with your table name?


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  26. bfarr23 New Member

    Oh boy. I looked at that hard.

    I went through my tables. The largest row count returned was 33 rows,
    far from the 254 max.
  27. bfarr23 New Member

    I know I can pinpoint this with parameter isolation and testing, etc.,
    but we get over 3 million hits per day,
    with about 25-30 parameter choices for the search.

    It's hard to nail down a pattern there.

  28. bfarr23 New Member

    I really do not want to drop the stats, Derrick, unless I have proof that the current ones are bad. Our data has not grown much at all.
    Dropping them will affect the optimizer's plan and could make things very bad for us.

    The problem is seen only on our live DB servers.

    Risky for me.
  29. joechang New Member

    A recompile can be caused by inserting 6 rows, then 500 rows, then every further 20% of rows into a temp table (presumably the same for a permanent table?), and by other stuff like DDL and some SET conditions.
    Compiles could be the initial compile, or a plan aging out of cache.
    Presumably a dynamic SQL statement needs to be compiled every time, but I do not know if that is reported against the stored proc or just the statement?
    Has anyone compared straight dynamic SQL in a stored proc against a parameterized SQL statement? That might enable SQL Server to cache at least a few plans.
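
    For comparison, the two forms look roughly like this (a sketch; the table and parameter names are made up):

    DECLARE @CustomerID INT, @sql NVARCHAR(4000)
    SET @CustomerID = 42

    -- Straight dynamic SQL: the value is baked into the text, so every
    -- distinct value produces a different statement to compile
    SET @sql = 'SELECT * FROM dbo.Orders WHERE CustomerID = '
        + CAST(@CustomerID AS NVARCHAR(10))
    EXEC (@sql)

    -- Parameterized: the text is constant, so the cached plan can be
    -- reused across calls with different values
    SET @sql = 'SELECT * FROM dbo.Orders WHERE CustomerID = @CustID'
    EXEC sp_executesql @sql, N'@CustID INT', @CustID = @CustomerID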

    Back to the topic: if a single table has a combined 33 indexes + stats, and you are executing a dynamic SQL statement on this table with arbitrary search parameters (or search columns that change from call to call),
    SQL Server must compile for the best plan each time.
    It will check plans against all relevant indexes and statistics to determine the estimated row count and best plan,
    so with a lot of indexes & stats, that could be one expensive compile.
    Still, none of this explains why the problem goes away.
    I assume you are copying a proc out of Profiler and executing it from QA?
    Are there different SET conditions that cause a recompile when you do this?
    I am inclined to go with Luis on deleting old statistics, or even all stats not associated with indexes, then disabling auto-create (just on this table?). Then I would look at how the indexes are actually used and delete the unused ones;
    every unused index just makes compiling a plan that much (2x?) more difficult.
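
    A sketch for finding the auto-created column statistics on one table (SQL Server 2000; the table name is a placeholder):

    -- Auto-created statistics are flagged by INDEXPROPERTY
    SELECT name
    FROM sysindexes
    WHERE id = OBJECT_ID('dbo.SomeTable')
        AND INDEXPROPERTY(id, name, 'IsAutoStatistics') = 1

    -- then drop each one by name, e.g.:
    -- DROP STATISTICS dbo.SomeTable._WA_Sys_SomeColumn_075714DC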
  30. bfarr23 New Member

    Joe, you are exactly right.

    When I take an SP from Profiler and run it in QA, there are recompiles right after. The SP in question recompiles twice, and the nested SP recompiles about 10 times.

    So this action puts the SQL Server optimizer back into its 99% plan choice (the 1% being when the SP times go from 300ms to 5-7 seconds).

    So the question now is: what is causing SQL Server to change its plan?

    I will set up a Profiler trace on RPC:Completed and SP:Recompile. I can only trace stuff over 2 seconds; if I go below that, way too much data will be collected.

    I assume that when things go slow, there will be some combination of parameters that causes SQL Server to recompile and choose a different execution plan.

  31. bfarr23 New Member

    I am also curious why this SP is not recompiling each and every time. There is a ton of dynamic SQL in there.
  32. derrickleggett New Member

    The nested SP recompiles 10 times???? What in the world is this SP doing if it has that much dynamic SQL and a nested SP that recompiles that many times? You're not also running a cursor, are you? Would it be possible to post the SP(s) and explain what they are doing, along with a sample execution plan?

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  33. bfarr23 New Member

    No cursor.

    Actually, the nested SP does not have that much SQL.

    The main SP has more dynamic SQL. Why is SQL Server keeping this main SP's cached plan if it has so much dynamic SQL? (I assume it's because I am using sp_executesql.)

    EXEC sp_executesql @SQL, N'@PermitShowPublicICX INT, @ApplicationID INT...


    So neither of these 2 is recompiling often. As of right now, I have only seen recompiles when it's run through QA. The trace is still running to look for more recompiles; hopefully that will turn up the problem.

  34. joechang New Member

    What does the Profiler SP:CacheInsert event show? The dynamic SQL must cause compiles, just not recompiles.
  35. bfarr23 New Member

    I will check that out, Joe. Thanks.

    One thing I noticed today was a clustered index scan occurring on our Address table when it joins with the Organization table for a few queries. This scan came with a hash join, which is very CPU intensive. I noticed a good boost in performance when I put an index hint on this join; now the optimizer does a nested loop with a bookmark lookup. The hint is indeed faster in this case, and SET STATISTICS IO and SET STATISTICS TIME show that is so. Based on my analysis, I want the optimizer to always use that index. So I put it live today, and the site has been performing better.
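
    The hint looks something like this (a sketch; the index and column names are made up, and the real dynamic SQL is more involved):

    -- Force the optimizer to use a specific index on Address; with few
    -- rows this gives a nested loop + bookmark lookup instead of the
    -- clustered index scan + hash join
    SELECT o.OrganizationID, a.City
    FROM dbo.Organization AS o
    JOIN dbo.Address AS a WITH (INDEX (IX_Address_OrganizationID))
        ON a.OrganizationID = o.OrganizationID
    WHERE o.OrganizationID = 1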


    I am still awaiting the slowdown, though, unless this change stops it from happening.

  36. joechang New Member

    What was the estimated row count (rows actually needed) from the Address and Org tables?
    The SQL Server optimizer overweights the cost of loop joins and bookmark lookups when more than 131 rows are involved (for big tables), basically assuming a random disk I/O is required for each row.
    More specifically, is there a skewed data distribution, with some queries estimating < 130 rows and some much more?
    Potentially you could delete the statistics and disable auto create for the source table (the first table a SARG is applied to) to fake out the optimizer into using a plan that assumes 1 row, which then defaults to the loop join & bookmark lookup.
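
    The toggles for that look like this (a sketch; note that AUTO_CREATE_STATISTICS is database-wide in SQL Server 2000, while sp_autostats controls automatic updates per table; names are placeholders):

    -- Stop automatic statistics updates for one table
    EXEC sp_autostats 'dbo.SomeTable', 'OFF'

    -- Database-wide switch for auto-creation of column statistics
    ALTER DATABASE YourDatabase SET AUTO_CREATE_STATISTICS OFF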
  37. derrickleggett New Member

    You might want to use INNER LOOP JOIN instead of an index hint. See if this has the same effect. If it does, use it instead, so you aren't dependent on the index name and can change or drop the index if you need to. Try this if you have done what Joe suggested and it didn't work.
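
    That is, something like this (a sketch, with the same made-up columns as above):

    -- Force a nested-loop join without naming any index
    SELECT o.OrganizationID, a.City
    FROM dbo.Organization AS o
    INNER LOOP JOIN dbo.Address AS a
        ON a.OrganizationID = o.OrganizationID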

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  38. bfarr23 New Member

    I tried 2 examples, Joe.

    ex. 1
    Address - row count = 12
    Organization - row count 12.

    ex. 2
    Address - row count = 60
    Organization - row count 60

    So both of these cases are below the 131-row cost point.

    I also tried replacing the index hint with the INNER LOOP JOIN, as Derrick suggested. SQL Server did not like that change at all. I did not check IO or time stats, but the SP examples went from about 500ms to 11 seconds after this change. I will try making that change again, then re-index and run sp_updatestats to see if that makes it faster, but I doubt it will drop from 11 secs to 500ms. I will test this.


    I am not a fan of putting in hard-coded items, as Derrick pointed out. But the index hint is working for me right now.
  39. derrickleggett New Member

    It does work sometimes when all else fails. We use it on maybe 4 or 5 procs here out of 7,000 or so.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  40. bfarr23 New Member

    The site has not had any perf drops since the index hint was implemented.
    Actually, perf is a little better.

    The average search is now 200ms.

    Pretty good for over 3 million web site searches a day.

    Still not a fan of the hard-coded index hint. But hey, it is working for me.

  41. cosaco New Member

    Just a note:
    if it is a network or hardware problem, or locking, whatever it may be, it is random, not
    tied to the table or the query; otherwise, you would get the same bad result, forever.

    Look at the system event log; maybe you can find some recurrent events there,
    like a disk problem or a heat problem. I suggest some hardware diagnostics,
    especially the controller and disks.

    You didn't say whether your server is dedicated or has additional functions.

    I suggest you run Performance Monitor, with disk queues, paging, network errors, etc.,
    at 5-second intervals; maybe you will get a surprise.
  42. bfarr23 New Member

    Been there, done that, Cosaco.

    Thanks, though.


