SQL Server Performance

What I have learned and am still learning

Discussion in 'Performance Tuning for DBAs' started by v1rtu0s1ty, Dec 14, 2004.

  1. v1rtu0s1ty New Member

    Hey guys,

    I just want to give back some of what you have taught me here. I have learned how to troubleshoot and find culprits, but I am still learning why certain things affect performance. This quick HOWTO should let newbies like me start investigating slow performance. I put the document together from all the links that SATYA gave me before. ;)

    But guys, I also need your input on why the CPU on our TEST server sits at 100% when running that archive tool. You will see what I mean once you read the whole document below.

    http://restricted.dyndns.org/troubleshootmssql.html

    Please help.

    TIA,

    V1rt
  2. satya Moderator

    Well-documented process. My initial thoughts on the lack of performance are:

    How about TEMPDB placement and any restricted settings on growth? (A quick check is sketched below.)
    How about memory settings?
    How about data and log files placement on hard disk?
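
    A quick way to check the first and third points (just a sketch, using the SQL Server 2000 sysfiles catalog; run it in tempdb and again in your user database):

    -- Sketch: file placement and growth settings for the current database (SQL Server 2000)
    USE tempdb
    GO
    SELECT name,
           filename,                                    -- physical placement on disk
           size / 128 AS size_mb,                       -- size is stored in 8 KB pages
           CASE WHEN status & 0x100000 = 0x100000
                THEN CAST(growth AS varchar(10)) + ' percent'
                ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
           END AS growth_setting,
           CASE maxsize WHEN -1 THEN 'unrestricted'
                ELSE CAST(maxsize / 128 AS varchar(10)) + ' MB'
           END AS max_size
    FROM sysfiles
    GO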

    Since this is a third-party tool, only the vendor can give insider information about how the application handles connections and queries.

    As you're aware, PROFILER and PERFMON are the best available tools for capturing activity during performance stress. Moreover, I would suggest running server-side traces to capture the activity rather than running the PROFILER tool from a client machine or on the server itself. This will help reduce additional contention on resources that are already under pressure.
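
    For example, a minimal server-side trace can be set up with the sp_trace_* procedures (just a sketch, assuming SQL Server 2000; the output path is illustrative and must be writable by the SQL Server service account):

    -- Sketch: minimal server-side trace written to a file on the server (SQL Server 2000)
    DECLARE @TraceID int, @maxfilesize bigint, @on bit
    SET @maxfilesize = 100      -- MB per trace file before rollover
    SET @on = 1

    -- N'C:\Traces\archive_run' is an illustrative path; SQL Server appends .trc
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\archive_run', @maxfilesize, NULL

    -- Event 12 = SQL:BatchCompleted; columns 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
    EXEC sp_trace_setevent @TraceID, 12, 1, @on
    EXEC sp_trace_setevent @TraceID, 12, 12, @on
    EXEC sp_trace_setevent @TraceID, 12, 13, @on
    EXEC sp_trace_setevent @TraceID, 12, 16, @on
    EXEC sp_trace_setevent @TraceID, 12, 17, @on
    EXEC sp_trace_setevent @TraceID, 12, 18, @on

    EXEC sp_trace_setstatus @TraceID, 1     -- start the trace
    SELECT @TraceID AS TraceID              -- keep this to stop (status 0) and close (status 2) the trace later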

    I shall look at the document in detail and come back with any feedback, and I'm sure you will get valuable suggestions from our peers.

    HTH


    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. v1rtu0s1ty New Member

    About the tempdb, I will look at the growth increments later since I need to VPN in to do that. Any recommendations? Am I correct that it's preferable to use a fixed MB growth increment rather than % growth?
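
    Something like this is what I had in mind (just a sketch; 'tempdev' and 'templog' are the default tempdb logical file names, so I'd confirm them with sp_helpfile first):

    -- Sketch: use a fixed 50 MB growth increment instead of percent growth
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILEGROWTH = 50MB)

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILEGROWTH = 50MB)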

    Memory is well configured. PAE and AWE are set properly. Peak memory usage is well below the physical memory, which is really nice.

    Log file and data file are on separate disk.

    Server trace? I thought running Profiler also counted as a server trace. Btw, I'm the only one connected to the db server, and I see very little traffic in Profiler. If I recall correctly, I have only seen about 1000 rows from 4pm to present, and the last row I am seeing there is the INSERT into ENCLOSURE.

    Thanks!
  4. v1rtu0s1ty New Member

  5. Twan New Member

    Hi ya,

    it would be useful to know the statement that you suspect is causing the problem, and how long that statement takes by itself.

    if it is quick by itself then it would be useful to see the procedure code (or app logic) around this statement, i.e. if it is in a loop then to see all the code for that loop

    the html page is great for correlating the spid, kpid, etc. but doesn't have any info about the actual statement to troubleshoot it further...?

    Cheers
    Twan
  6. satya Moderator

    Can you give details about the hardware of the SQL Server and the RAID configuration? For the performance aspect, it's better to run server-side traces if you're already having issues with long-running queries.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  7. mmarovic Active Member

    As already mentioned, the code of the statement causing the problem would be the most useful. Also, if this table is not queried by anything else during the process, taking a table lock would be helpful.
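
    For example (just a sketch with placeholder table and column names, not the vendor's actual statement):

    -- Sketch: hold a shared table lock on the source for the whole transaction,
    -- assuming nothing else needs the table during the archive run
    -- ArchiveTable / SourceTable / the columns are placeholder names
    INSERT INTO ArchiveTable (KeyCol, DataCol)
    SELECT KeyCol, DataCol
    FROM SourceTable WITH (TABLOCK, HOLDLOCK)
    WHERE CreateDate <= '2004-09-10'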
  8. v1rtu0s1ty New Member

    Ok, this is the last query that Profiler is showing. Currently, it's still stuck on the same query; I verified it from Process Info in EM.

    SC is the source database -> db ID 6
    SCARC is the target database -> db ID 7

    I escalated this query to the vendor last week and asked them why they were doing a SELECT against the target table. They told me it's fine. But your TABLE LOCK suggestion makes sense to me. Unfortunately, they do a COMMIT TRAN at the end; that's what they told me. So if we do a TABLE LOCK, then I might get the old value.

    INSERT INTO SCARC.SCArc.ENCLOSURE
        (BARCODE, ENCLOSURENUMBER, MEDIUMNUMBER, ENCLOSUREDESCRIPTION, ENCLOSURELOCATION,
         FIELDE1, FIELDE2, FIELDE3, EDITDATE, EDITEDBY, ISARCHIVED, DOCUMENTLINK)
    SELECT ENCLOSURE.BARCODE, ENCLOSURENUMBER, MEDIUMNUMBER, ENCLOSUREDESCRIPTION, ENCLOSURELOCATION,
           FIELDE1, FIELDE2, FIELDE3, ENCLOSURE.EDITDATE, ENCLOSURE.EDITEDBY, -1, DOCUMENTLINK
    FROM ENCLOSURE, FOLDERS, TUAUXILIARY
    WHERE FOLDERS.BARCODE = TUAUXILIARY.BARCODE
      AND FOLDERS.CREATEDATE <= '9/10/2004'
      AND TUAUXILIARY.CURRLOCATION = 'FS/F/O'
      AND ENCLOSURE.BARCODE = FOLDERS.BARCODE
      AND ENCLOSURENUMBER NOT IN
          (SELECT E.ENCLOSURENUMBER
           FROM SCARC.SCArc.ENCLOSURE E, FOLDERS, TUAUXILIARY
           WHERE FOLDERS.BARCODE = TUAUXILIARY.BARCODE
             AND CREATEDATE <= '9/10/2004'
             AND TUAUXILIARY.CURRLOCATION = 'FS/F/O'
             AND E.BARCODE = FOLDERS.BARCODE)

    Thanks guys for helping me again.
  9. v1rtu0s1ty New Member

    Btw, I also set up the blocker script found in this link: http://support.microsoft.com/kb/271509/EN-US/

    It did not find any blocking.
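
    As a quicker spot check (just a sketch, reading the SQL Server 2000 sysprocesses view instead of running the full KB script):

    -- Sketch: list any spids that are currently blocked and who is blocking them
    SELECT spid, blocked, waittype, lastwaittype, waitresource, cmd, hostname, program_name
    FROM master..sysprocesses
    WHERE blocked <> 0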
  10. v1rtu0s1ty New Member

    Hi Satya,

    The server is a 1.6 GHz Pentium with 8 GB of RAM. About the RAID, I have no idea how it was configured. Will there be any difference in the capture if I do server-side tracing as compared to client-side tracing?

    Going back to the growth in the tempdb, what do you think about the current settings?
  11. mmarovic Active Member

    Their explanation of why they don't use a table lock doesn't make sense to me. I'll take a look at the query. What I've got from it so far is bad table design (they join on the barCode column) and bad process design. It looks like they select all rows from the source tables up to the specified date, filter out rows that are already in the destination, and then insert the rest. It would be much more efficient if they used some kind of control structure to mark already-processed rows in the source db (flagging source table rows, or better, recording the last processed date or enclosure number), so that next time they archive only the rows not processed before; see the sketch below.
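
    Roughly what I mean (just a sketch; the ArchiveControl table and its columns are made-up names, and the real INSERT would need the full column list from their query):

    -- Sketch: high-water-mark control table so each run archives only rows added since the last run
    CREATE TABLE ArchiveControl (
        TableName    sysname  NOT NULL PRIMARY KEY,
        LastEditDate datetime NOT NULL
    )

    DECLARE @LastRun datetime
    SELECT @LastRun = LastEditDate FROM ArchiveControl WHERE TableName = 'ENCLOSURE'

    INSERT INTO SCARC.SCArc.ENCLOSURE (BARCODE, ENCLOSURENUMBER, EDITDATE /* ...remaining columns... */)
    SELECT BARCODE, ENCLOSURENUMBER, EDITDATE /* ...remaining columns... */
    FROM ENCLOSURE
    WHERE EDITDATE > @LastRun               -- only rows added/changed since the last run

    UPDATE ArchiveControl
    SET LastEditDate = (SELECT MAX(EDITDATE) FROM ENCLOSURE)
    WHERE TableName = 'ENCLOSURE'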
  12. v1rtu0s1ty New Member

    When you say control structure, is it going to be a temporary table where their script can query if a barcode has already been inserted previously?
  13. mmarovic Active Member

    Some apps use an archive flag in the tables they archive, so they archive rows where the archive flag is set to 0 (false). Another possibility is to store the last ID processed (if there are no updates) or the last archive time in a control table, so next time you archive just the rows added since the last time you performed archiving. As far as I know, this approach is used the most.

    Anyway, I'm not quite sure they are going to redesign the process at your request ;)
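
    The archive-flag variant would look roughly like this (just a sketch; it assumes an ISARCHIVED flag column, default 0, added to the source ENCLOSURE table, which the vendor's schema may not have):

    -- Sketch: archive-flag approach on the source table
    BEGIN TRAN

    -- TABLOCKX keeps other writers out while the batch runs,
    -- so the UPDATE below flags exactly the rows that were just copied
    INSERT INTO SCARC.SCArc.ENCLOSURE (BARCODE, ENCLOSURENUMBER, EDITDATE /* ...remaining columns... */)
    SELECT BARCODE, ENCLOSURENUMBER, EDITDATE /* ...remaining columns... */
    FROM ENCLOSURE WITH (TABLOCKX, HOLDLOCK)
    WHERE ISARCHIVED = 0

    UPDATE ENCLOSURE
    SET ISARCHIVED = 1
    WHERE ISARCHIVED = 0

    COMMIT TRAN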
  14. satya Moderator

    In theory, SQL Server escalates row locks and page locks into table locks when transactions exceed a dynamically determined escalation threshold. This threshold is not user-configurable.

    In practice, we've all seen SQL Server transactions acquire hundreds or even thousands of page locks during an update transaction without ever seeming to escalate to table locking. And table locking must be used with care, otherwise other processes will be affected.
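
    One way to see what the archive run is actually holding (just a sketch; replace 55 with the spid you see in Process Info):

    -- Sketch: inspect the locks held by the archiving spid (55 is a placeholder)
    EXEC sp_lock 55

    -- or summarise lock types per object for that spid
    SELECT rsc_dbid, rsc_objid, rsc_type, COUNT(*) AS lock_count
    FROM master..syslockinfo
    WHERE req_spid = 55
    GROUP BY rsc_dbid, rsc_objid, rsc_type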

    On the tempdb side I have nothing to add, and I would suggest this link for better performance: http://www.sql-server-performance.com/tempdb.asp

    HTH

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  15. mmarovic Active Member

    Forgot to mention: in any incremental archiving scenario, deletes need to be marked, and the actual deletes have to be done by the archiving process.
  16. v1rtu0s1ty New Member

    Follow-up question: when archiving, what should the recovery model be on the target database?
  17. Twan New Member

    The recovery model will depend on the amount of data you're prepared to lose...

    If you have SIMPLE then you can only go back to the last backup. If between the last backup and a failure you've also deleted the original data from the source database, then you can't re-archive that data.

    Basically, if the data changes, is important to you, or is difficult to recreate, then use FULL or BULK_LOGGED.

    Cheers
    Twan
  18. satya Moderator

    For a huge data import the BULK_LOGGED recovery model is suitable, and make sure to take frequent transaction log backups to keep the size of the T-log file in check.
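
    For example (just a sketch; SCARC is the archive target mentioned above and the backup path is illustrative):

    -- Sketch: put the archive target in BULK_LOGGED and back up its log regularly
    ALTER DATABASE SCARC SET RECOVERY BULK_LOGGED

    -- schedule this as often as the log growth demands; the path is a placeholder
    BACKUP LOG SCARC TO DISK = 'D:\Backups\SCARC_log.trn'

    -- after the archive run, switch back to FULL if that is your normal model
    -- ALTER DATABASE SCARC SET RECOVERY FULL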

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
