SQL Server Performance

Stored Procedure Performance

Discussion in 'T-SQL Performance Tuning for Developers' started by rajasekharcj@gmail.com, Dec 14, 2005.

  1. I have a performance issue in my production database. I have a global search procedure that all users call very frequently, for all types of searches. Initially this procedure takes 10-20 seconds, but after 2-3 weeks the same procedure takes 2 to 5 minutes!

    Once I restart the SQL Server services, the procedure returns to its normal time. I tried clearing the log, but that made no difference; only restarting the services solves the problem. For now I have added "Restart the services every 2 weeks" to my task list :)

    Can anyone tell me what the reason behind this is and how to resolve the problem?

    Thanks for your help in advance
    Raj
  2. Luis Martin Moderator

    How big is tempdb after two weeks? Do you have free space on the tempdb disk?


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important
    Bertrand Russell


    All postings are provided “AS IS” with no warranties for accuracy.



  3. druer New Member

    Does it just abruptly go from 10-20 seconds to 2-5 minutes? Or does it creep up to 30 seconds after a day or two, then to 1 minute after 5-6 days, and so on?

    Before resorting to restarting the services, try recompiling the stored procedure and see if that resolves the issue.
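    As a sketch of that suggestion, a recompile can be forced without changing the procedure itself; the procedure and parameter names below are hypothetical placeholders, not Raj's actual objects:

    ```sql
    -- Mark the procedure so its cached plan is discarded and rebuilt on next use.
    -- 'dbo.usp_GlobalSearch' is a placeholder name for the real search procedure.
    EXEC sp_recompile 'dbo.usp_GlobalSearch';

    -- Or force a fresh plan for a single diagnostic run:
    EXEC dbo.usp_GlobalSearch @SearchText = N'test' WITH RECOMPILE;
    ```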
  4. Hi Martin,

    Thanks for your reply.

    I have enough space on the disk, almost 500 GB. The tempdb data file is set to grow automatically with unrestricted file size, and the transaction log size is unrestricted as well.
  5. The time is increasing gradually. I recompiled the procedure, but got the same result.
  6. Luis Martin Moderator

    Now, do you have a maintenance plan that includes index defragmentation and statistics updates?


  7. No, I don't have any maintenance plans.
  8. Luis Martin Moderator

    I suggest running a maintenance plan to defragment your indexes.
    The easy way to do that is to find the article (in our forum) by Tom Pullem about automatically defragmenting indexes.

    My suggestion is to run Tom's procedure during non-working hours, 2 or 3 times a week.
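    On the SQL Server versions of this era, a minimal defrag-and-update-statistics pass could look like the sketch below (the table and index names are placeholders; Tom Pullem's article automates the same idea across all tables):

    ```sql
    -- Report fragmentation for one table (SQL Server 2000-style):
    DBCC SHOWCONTIG ('dbo.SearchTable') WITH FAST;

    -- Defragment one index without taking the table offline:
    DBCC INDEXDEFRAG (0, 'dbo.SearchTable', 'IX_SearchTable_Key');

    -- Refresh the optimizer's statistics afterwards:
    UPDATE STATISTICS dbo.SearchTable WITH FULLSCAN;
    ```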





  9. druer New Member

    Luis,

    I had a strong suspicion that indexes and/or statistics could be at issue, and that there was likely no maintenance plan in place. However, my head kept asking the same question: why would restarting the services correct the problem and make the stored procs run faster again? I don't think a shutdown/restart of services would correct index fragmentation in any way, but I don't know.

    Is it possible that disk I/O is backing up so much that the tran log holds a lot of uncommitted work, locks pile up and cause blocking, and the shutdown simply lets everything catch up and get committed during the shutdown and startup? I have zero proficiency with Performance Monitor statistics, so I don't know which counters to watch to see whether that is the case.

    I understand that the sysindexes table is not 100% accurate in its statistics about tables. Is it possible for those statistics to drift so far out of line with reality that they cause this kind of problem, and that during the shutdown they are brought back in line, so the problem goes away?

    Could it be some type of memory problem where things constantly get swapped out to disk and the system can't keep anything stable in cache for long? I can understand how things would run faster right after a startup, but once the paging in/out starts occurring I can't see why performance would keep degrading instead of reaching a point of thrashing and just staying there.
  10. joechang New Member

    If restarting the service fixes the problem, it is most likely a problem with the virtual address space. Look in the SQL Server logs for entries like "failed to allocate 64K-128K".
    Also run Perfmon at a 30-60 second interval from the time SQL Server starts until the problem is evident, logging, under the Process object -> the sqlservr instance, the Virtual Bytes, Private Bytes, and Working Set counters.
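    As a quick first check along these lines, the error log can also be read from T-SQL; a sketch (the output layout varies by version, so scan the text column for the allocation messages mentioned above):

    ```sql
    -- Dump the current SQL Server error log and look for entries such as
    -- "failed to allocate ... 64K-128K" indicating virtual address space pressure.
    EXEC master.dbo.xp_readerrorlog;
    ```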
  11. mmarovic Active Member

    Of all the suggestions, I think the first and the last have the best chance of being true. The first one, because a SQL Server restart recreates tempdb, which may have become heavily fragmented in the meantime. What are tempdb's initial size and file growth settings?
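    Those settings can be read straight out of tempdb; a minimal sketch (note that sizes are reported in 8 KB pages):

    ```sql
    USE tempdb;

    -- name: logical file name; size: current size in 8 KB pages;
    -- growth: autogrow increment (pages, or a percentage, per the file's settings).
    SELECT name, size, maxsize, growth
    FROM dbo.sysfiles;
    ```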
  12. vsnreddi New Member

    If any modifications have happened in your procedure, try to save the latest execution plans.
    If new data has been added to the tables the procedure uses, try recreating the indexes or updating statistics.
    If the above doesn't work, try creating the procedure under a different name and see.

    Thanks

    SURYA
  13. Thanks for all your replies

    Based on your comments and my own observation, I understand that tempdb is the cause of the problem. The tempdb size increases by almost 20 MB in one day. I guess that after a few weeks it reaches a huge size (I have enough space on the disk, 500 GB, and the database size is unrestricted) and causes the poor performance. When I restarted the services, SQL Server cleared tempdb back to its normal size. Is this correct?

    I don't have any maintenance plans on the tempdb database. How can I clear tempdb without restarting the services?

    Raj
  14. mmarovic Active Member

    Tempdb might be the cause of your problem: not tempdb's size itself, but the way you handle it. A 20 MB increase is not much. Is it actually 20 GB?
  15. Luis Martin Moderator

    There is no way to reduce tempdb without restarting the services.

    But suppose you have no good indexes for that stored procedure, and suppose the sp generates a lot of sorts in tempdb.
    I still think you need to review your indexes and set up a maintenance plan.




  16. Hi Martin,

    I found this article http://www.sql-server-performance.com/tp_automatic_reindexing.asp on reindexing. A quick question: will reindexing the production database (when no users are connected) create any other problems?

    Also, from my observation the procedure is taking so long because of one FULL OUTER JOIN to a VIEW, which contains one complex query with unions over the main transaction tables. I am thinking of using a temp table instead of applying the FULL OUTER JOIN directly to the view. That is, first I will insert the data from the VIEW into a temp table, applying the conditions, and then I will use the temp table for the OUTER JOIN. Is this the correct way of doing it, or is there another way to avoid the FULL OUTER JOIN?

    Thanks for your help
    Raj
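    That temp-table idea might be sketched like this (all object and column names below are hypothetical placeholders for the real view and tables):

    ```sql
    -- 1) Materialize only the needed rows from the complex view:
    SELECT KeyCol, Amount
    INTO #ViewRows
    FROM dbo.vComplexSearchView
    WHERE SearchDate >= '20051101';   -- apply the filtering conditions here

    -- An index on the join key can help the join that follows:
    CREATE CLUSTERED INDEX IX_ViewRows_Key ON #ViewRows (KeyCol);

    -- 2) Join the small temp table instead of the whole view:
    SELECT m.KeyCol, m.Description, v.Amount
    FROM dbo.MainTransactionTable AS m
    FULL OUTER JOIN #ViewRows AS v
      ON v.KeyCol = m.KeyCol;

    DROP TABLE #ViewRows;
    ```

    Whether this beats the direct join depends on how many rows the filter removes: if the view returns most of its rows anyway, the extra write to tempdb can cost more than it saves.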
  17. Luis Martin Moderator

    I have that sp running at 4 clients on production servers (of course during off hours, with no users at all) and have had no problems.

    Also, the first time it may take a little longer, but after that it only defrags those indexes above, say, 90% fragmentation.

    One example: one of my clients has a 70 GB database. I run the sp 3 times during the work week and once on the weekend. Average time: 2 hours.




