SQL Server Performance

Slow Performance when server at (near) max memory

Discussion in 'Performance Tuning for DBAs' started by ebol, Apr 29, 2004.

  1. ebol New Member

    I mean REALLY slow. In one case, a normally 15 min batch query ran in 57 hrs. The difference was that when the SQL Server service was "fresh", meaning it had been recycled and was below its max memory, everything ran great. However, when the memory grew to its max (as SQL Server never lets go of memory once it's allocated) it suddenly ran like a dog. The dev I'm working with thinks it may have resorted to table-scan joins rather than index-based ones.

    Other info:
    This is a test-only box...so for the most part the load is very controlled and not excessive...other than the fact that I'm running some intensive batches that call a series of SPROCs and do two large (650k or so) inserts on a 5-column table.

    SQL Cache Memory:
    In my efforts to trouble-shoot this problem, I have discovered my event log is riddled with the following event every few seconds:

    Event Type: Information
    Event Source: SysmonLog
    Event Category: None
    Event ID: 2031
    Date: 4/29/2004
    Time: 9:46:45 AM
    User: N/A
    Computer: QATST05
    Description:
    Counter: \\QATST05\SQLServer:Memory Manager\SQL Cache Memory (KB) has tripped its alert threshold. The counter value of 888 is under the limit value of 5000.

    I see this both in "dog mode" and when the service is fresh. I cannot find any documentation on how I can affect the size of this cache...only the total SQL Server address space. Any ideas what might cause this problem and how to fix it?

    Eric
    SQL back-end tester
  2. Luis Martin Moderator

    What SQL and SP do you have?


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
  3. derrickleggett New Member

    Set the maximum size of SQL Server's memory low enough that you still allow room for everything else to operate. Increase the size of your paging file. See if this helps. The big thing is to make sure you're not choking out everything else with SQL Server.
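
    For example, capping SQL Server at a fixed amount looks something like this (the 1536 MB figure is only an illustration; pick a value that leaves enough for the OS and anything else on the box):

    -- 'max server memory' is an advanced option, so expose those first.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Cap the buffer pool; the 1536 MB value below is just an example.
    EXEC sp_configure 'max server memory (MB)', 1536
    RECONFIGURE
    GO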

    How much RAM do you have? What kind of box? What sizes of databases? What kinds of t/sec are you looking at?

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  4. ebol New Member

    Sorry, forgot to mention my environment...
    Win 2k Server sp2
    SQL 8.00 sp3a (Standard Edition)
    Dual Xeon 2.4 Ghz with 4gb RAM

    I have maxed out the memory at 4gb (ok, 3.7...) but since it is Standard Edition, it will only go up to 1.7. Since I have 3.7 on the server, that leaves me with plenty of RAM for the OS.

    THIS JUST IN: I was reading in an earlier thread (sp_recompile) about UPDATE STATISTICS and realized this may be my problem. As the day progressed, my 30 min. or so query began taking 2+ hrs. So I decided to throw an UPDATE STATISTICS call into my loop (it loops every 100k records searched ... not 100k inserts though) and reran the query...I'm back at 35 minutes! However, my server is not in the max memory state yet, so I'm still holding off on getting my hopes up.
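
    Roughly, the change amounts to this inside the interval loop (the numbers and the table name here are illustrative only; the real sproc builds the statement against its target table):

    DECLARE @CurrID INT, @MaxID INT, @DocInterval INT
    SELECT @CurrID = 1, @MaxID = 650000, @DocInterval = 100000   -- illustrative values

    WHILE @CurrID < @MaxID
    BEGIN
        -- ... the existing 100k-record search/insert step runs here ...

        -- refresh statistics on the target table so later passes don't keep
        -- reusing a plan based on stale row counts
        UPDATE STATISTICS PSearchJoinDocument

        SET @CurrID = @CurrID + @DocInterval
    END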

    Eric
    SQL back-end tester
  5. derrickleggett New Member

    I would leave at least half a gig, but that's just me. Is this database pretty new? Sometimes this happens because the datasets are really small on a new database and grow rapidly. It really messes with the statistics.

    When we released our last system, we had UPDATE STATISTICS run every hour for the first month. You wouldn't believe how much it sped up the system. After the first month we were able to go to the daily schedule.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  6. ebol New Member

    Oh, my current test db is about 30gb in terms of the backup image.

    t/sec ... transactions per second, I'm assuming (sorry, I'm a bit new to the hard-core SQL world). Dunno...I probably should know, though. This leads to a new question...any idea why my SQL counters in perfmon sometimes just disappear? Literally, they are not in the listbox in the "Add Counters" dialog.

    thanks for the help,

    Eric
    SQL back-end tester
  7. ebol New Member

    derrickleggett,
    Thanks for the info. I always get confused about what a GB is (nominal or binary...10^9 bytes or 2^30 bytes, respectively), but let's just speak nominal. So I have 4gb of physical RAM on Win2k. I'm using up 2gb for SQL. That would leave me with 2gb for the OS et al, right? Or am I missing something?

    Eric
    SQL back-end tester
  8. derrickleggett New Member

    This is a test-only box...so for the most part the load is very controlled and not excessive...other than the fact that I'm running some intensive batches that call a series of SPROCs and do two large (650k or so) inserts on a 5-column table.


    What kind of indexes do you have on this table? Can you post the table design, indexes, and the procedure?

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  9. gaurav_bindlish New Member

    Okay here is the thing.

    For the memory issue, SQL Server takes up all the memory available on the server until it can't take any more. This is by design, but it releases that memory as soon as some other application on the same box or the OS needs it, which does not seem to be the case here.

    For the high-memory operation, I think you have found one of the clues: the statistics need to be updated. Maybe because the statistics were wrong, the query optimizer decided to use a bad plan. The plan can be verified by running the query in QA (Query Analyzer).
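
    In Query Analyzer you can check the estimated plan without actually executing anything, e.g. (the SELECT below is only a stand-in for the real batch query):

    -- Show the estimated execution plan as text instead of running the query.
    -- (Ctrl+L in Query Analyzer gives the graphical version of the same thing.)
    SET SHOWPLAN_TEXT ON
    GO
    -- replace this placeholder with the actual query from the batch
    SELECT d.documentid
    FROM document d
    WHERE d.converted < 50
    GO
    SET SHOWPLAN_TEXT OFF
    GO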

    How are you doing the insert? For a big insert like this, I would recommend inserting the data in batches and calling CHECKPOINT and a transaction log backup after each batch. This means less data and log have to be handled in memory, so you should see a big perf gain.
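
    A rough sketch of that pattern (the table names, column names, database name, and backup path below are all placeholders for illustration, not your actual schema):

    DECLARE @StartID INT, @EndID INT, @MaxID INT, @BatchSize INT
    SET @BatchSize = 100000

    -- SourceDocs is a hypothetical staging table holding the rows to load
    SELECT @StartID = MIN(DocumentID), @MaxID = MAX(DocumentID) FROM SourceDocs

    WHILE @StartID <= @MaxID
    BEGIN
        SET @EndID = @StartID + @BatchSize - 1

        INSERT INTO PSearchJoinDocument (SearchId, DocumentId, KID, StatusId, AuditId)
        SELECT SearchId, DocumentId, KID, StatusId, AuditId
        FROM SourceDocs
        WHERE DocumentID BETWEEN @StartID AND @EndID

        CHECKPOINT                                               -- flush dirty pages
        BACKUP LOG TestDb TO DISK = 'D:\Backup\TestDb_log.bak'   -- keep the log from ballooning

        SET @StartID = @EndID + 1
    END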

    If you are inserting this much data, I think it would be advisable to drop the indexes on the target tables before doing the insert, as this will make the inserts faster.

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

    The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
  10. ebol New Member

    This table that is inserted to is a simple one...
    PK, FK int SearchId
    PK, FK int DocumentId
    PK, FK int KID
    FK, int StatusId
    FK int AuditId

    Indexes on the clustered primary key only.
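
    In DDL terms that works out to roughly the following (the table name and key column order are assumed from the description above; foreign key constraints omitted):

    CREATE TABLE PSearchJoinDocument (
        SearchId   INT NOT NULL,   -- PK, FK
        DocumentId INT NOT NULL,   -- PK, FK
        KID        INT NOT NULL,   -- PK, FK
        StatusId   INT NOT NULL,   -- FK
        AuditId    INT NOT NULL,   -- FK
        CONSTRAINT PK_PSearchJoinDocument
            PRIMARY KEY CLUSTERED (SearchId, DocumentId, KID)
    )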

    The sproc builds a query string and executes it using sp_EXECUTESQL. It basically does a join in several steps, dumping each step to a temp table (this was done to remedy this problem, because the dev thought the query might have gotten too complex and resorted to table scans on the joins). The query is itself executed in steps of 100k records (records searched, that is, not inserted). Within this loop is where I added my UPDATE STATISTICS.

    Oh, actually there are two parts to the above-mentioned query... the primary hit (a search routine that dumps results to the above table) and then a "family group" insert that inserts "family members" of the primary hits for that interval pass. In this case, it does a join on the above table with DocumentId, SearchId, and KID. But that's already the clustered index, so I'm not sure another index should be created. However, I'm not an expert, as I am still learning. That said, I believe this is an efficient index because the primary hits will all be physically adjacent (they are inserted first in the loop) and there will not be intermittent "group" docs (in this case a group doc is defined as KID = 0).

    Eric
    SQL back-end tester
  11. derrickleggett New Member

    You should really post the procedure. This:

    The sproc builds a query string and executes it using sp_EXECUTESQL. It basically does a join in several steps, dumping each step to a temp table (this was done to remedy this problem, because the dev thought the query might have gotten too complex and resorted to table scans on the joins). The query is itself executed in steps of 100k records (records searched, that is, not inserted). Within this loop is where I added my UPDATE STATISTICS.

    is really an inefficient way to write a query. Using dynamic SQL is a security risk and does not perform as well as regular SQL. You can probably write it as one insert statement with all the criteria built in. This is harder to read, but performs much faster. We've had some processes that were written like this where I work now. When we converted them, they ran much faster. The indexes were also in much better shape than they were with the multiple inserts, updates, etc.
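
    As a loose illustration of the shape I mean (the table and column names here are placeholders, not your actual schema, and this isn't a drop-in replacement for the sproc):

    -- One set-based INSERT with the criteria built in, instead of staging
    -- rows through temp tables and dynamic SQL.
    DECLARE @SearchId INT, @AuditId INT
    SELECT @SearchId = 1, @AuditId = 1      -- placeholder values

    INSERT INTO PSearchJoinDocument (SearchId, DocumentId, KID, StatusId, AuditId)
    SELECT DISTINCT @SearchId, d.DocumentId, k.KID, 1, @AuditId
    FROM document d
    JOIN KeywordHits k                      -- hypothetical table of keyword hits
      ON k.DocumentId = d.DocumentId
    WHERE d.converted < 50
      AND NOT EXISTS (SELECT 1
                      FROM PSearchJoinDocument x
                      WHERE x.SearchId   = @SearchId
                        AND x.DocumentId = d.DocumentId
                        AND x.KID        = k.KID)

    Harder to read, like I said, but the optimizer gets a single statement to plan and there are no temp-table recompiles along the way.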

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  12. ebol New Member

    Ok, here it is... remember, I'm QA, not the dev so please blast away :)

    Eric

    CREATE PROCEDURE adspPSearchRecipient

    @PSearchID INT

    AS

    /*
    ***************************************************************************************
    **
    ** Description:
    **
    ** Performs recipient search and builds a collection
    **
    ***************************************************************************************
    */
    BEGIN
    SET NOCOUNT ON

    DECLARE
    @ReturnCode INT

    /* search variables */
    ,@IncludeGroups BIT
    ,@ParentSearch BIT
    ,@TargetTable NVARCHAR(128)
    ,@TargetId INT
    ,@keyset INT

    ,@Delay CHAR(9)
    ,@MessageLevel INT
    ,@DocInterval INT
    ,@DocIDMin int
    ,@DocIDMax int
    ,@DateMin datetime
    ,@DateMax datetime
    ,@UploadIdIN nvarchar(128)
    ,@SourceIdIN nvarchar(128)
    ,@PSearchIdIN nvarchar(128)
    ,@CollectionIdIN nvarchar(128)
    ,@UploadExclude bit
    ,@SourceExclude bit
    ,@PSearchExclude bit
    ,@CollectionExclude bit

    /* throttling variables */
    ,@CurrID INT
    ,@MaxID INT
    ,@EndID INT

    /* audit variables */
    ,@InsertAuditID INT
    ,@moddate DATETIME
    ,@moduser NVARCHAR(128)

    /* dynamic sql strings */
    ,@sqlINSERT NVARCHAR(4000)
    ,@sqlSELECT NVARCHAR(4000)
    ,@sqlDELETE NVARCHAR(4000)
    ,@sqlFROM NVARCHAR(4000)
    ,@sqlWHERE NVARCHAR(4000)
    ,@sqlString NVARCHAR(4000)
    ,@sqlStringGroup NVARCHAR(4000)

    /* initialize search settings */
    SELECT @keyset = keyset
    ,@DocInterval = DocInterval
    ,@Delay = Delay
    ,@MessageLevel = MsgLvl
    ,@includegroups = includegroups
    ,@ParentSearch = ParentSearch
    ,@TargetTable = ISNULL(TargetTable,'PSearchJoinDocument')
    ,@TargetId = CASE WHEN TargetId > 0
    THEN TargetId
    ELSE PSearchID
    END
    ,@DocIDMin = DocIDMin
    ,@DocIDMax = DocIDMax
    ,@DateMin = DateMin
    ,@DateMax = DateMax
    ,@UploadIdIN = UploadIdIN
    ,@SourceIdIN = SourceIdIN
    ,@PSearchIdIN = PSearchIdIN
    ,@CollectionIdIN = CollectionIdIN
    ,@UploadExclude = UploadExclude
    ,@SourceExclude = SourceExclude
    ,@PSearchExclude = PSearchExclude
    ,@CollectionExclude = CollectionExclude
    FROM PSearch
    WHERE PSearchID = @PSearchID

    /* Get documentids */

    /* Build SQL statements */
    IF @TargetTable = 'DocumentJoinCollection'
    BEGIN
    SET @sqlINSERT = 'INSERT INTO ' + @TargetTable + ' (documentid, auditid, CollectionId) '
    SET @sqlSELECT = 'SELECT DISTINCT tq.documentid, @insertAuditID, '
    + CONVERT(VARCHAR(10),@TargetID) + ' '
    SET @sqlFROM = 'FROM #TempDocID tq ' + CHAR(13)
    + 'JOIN document d WITH (NOLOCK) ' + CHAR(13)
    --+ 'ON tq.ifilename = d.locationfilename ' + CHAR(13)
    + 'ON tq.documentid = d.documentid ' + CHAR(13)
    SET @sqlWHERE = 'WHERE d.documentid between @StartID and @EndID and d.converted < 50 ' + CHAR(13)
    + 'AND (d.documentid NOT IN (SELECT documentid FROM ' + @TargetTable
    + ' WITH (NOLOCK) WHERE CollectionID = ' + CONVERT(VARCHAR(10),@TargetID) + ') '
    + ') ' + CHAR(13)
    END
    ELSE
    BEGIN
    SET @sqlINSERT = 'INSERT INTO ' + @TargetTable + ' (documentid, KID, auditid, PSearchID) '
    SET @sqlSELECT = 'SELECT DISTINCT tq.documentid, tq.KID, @insertAuditID, '
    + CONVERT(VARCHAR(10),@TargetID) + ' '
    SET @sqlFROM = 'FROM #TempDocID tq ' + CHAR(13)
    + 'JOIN document d WITH (NOLOCK) ' + CHAR(13)
    --+ 'ON tq.ifilename = d.locationfilename ' + CHAR(13)
    + 'ON tq.documentid = d.documentid ' + CHAR(13)
    SET @sqlWHERE = 'WHERE d.documentid between @StartID and @EndID and d.converted < 50 ' + CHAR(13)
    + 'AND (d.documentid NOT IN (SELECT documentid FROM ' + @TargetTable
    + ' WITH (NOLOCK) WHERE KID = tq.KID and PSearchID = ' + CONVERT(VARCHAR(10),@TargetID) + ') '
    + ') ' + CHAR(13)
    END

    IF @ParentSearch = 1
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.ParentID = 0 ' + CHAR(13)
    END

    IF @DocIDMin IS NOT NULL
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.DocumentID >= '
    + CONVERT(VARCHAR(10),@DocIDMin) + CHAR(13)
    END
    ELSE
    BEGIN
    SET @DocIDMin = 0
    END

    IF @DocIDMax IS NOT NULL
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.DocumentID <= '
    + CONVERT(VARCHAR(10),@DocIDMax) + CHAR(13)
    END
    ELSE
    BEGIN
    SELECT @DocIDMax = MAX(documentid) from document
    END

    IF (@DateMin IS NOT NULL) AND (@DateMax IS NOT NULL)
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'and ((d.datecreated BETWEEN ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' AND ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + 'or d.datemodified BETWEEN ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' AND ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + 'or d.datesent BETWEEN ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' AND ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + 'or d.datereceived BETWEEN ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' AND ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + ') or (d.datecreated IS NULL and d.datemodified IS NULL and d.datesent IS NULL and d.datereceived IS NULL)) '
    END
    ELSE
    IF @DateMin IS NOT NULL
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'and ((d.datecreated >= ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' '
    + 'or d.datemodified >= ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' '
    + 'or d.datesent >= ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' '
    + 'or d.datereceived >= ''' + CONVERT(VARCHAR(10),@DateMin,101) + ''' '
    + ') or (d.datecreated IS NULL and d.datemodified IS NULL and d.datesent IS NULL and d.datereceived IS NULL)) '
    END
    ELSE
    IF @DateMax IS NOT NULL
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'and ((d.datecreated <= ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + 'or d.datemodified <= ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + 'or d.datesent <= ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + 'or d.datereceived <= ''' + CONVERT(VARCHAR(10),@DateMax,101) + ''' '
    + ') or (d.datecreated IS NULL and d.datemodified IS NULL and d.datesent IS NULL and d.datereceived IS NULL)) '
    END

    IF @UploadIdIN IS NOT NULL
    BEGIN
    IF @UploadExclude = 1
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.UploadID not in (' + @UploadIdIN + ') '
    END
    ELSE
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.UploadID in (' + @UploadIdIN + ') '
    END
    END

    IF @SourceIdIN IS NOT NULL
    BEGIN
    IF @SourceExclude = 1
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.SourceID not in (' + @SourceIdIN + ') '
    END
    ELSE
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.SourceID in (' + @SourceIdIN + ') '
    END
    END

    IF @CollectionIdIN IS NOT NULL
    BEGIN
    IF @CollectionExclude = 1
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.DocumentID not in (SELECT Documentid from documentjoincollection (nolock) where collectionid in (' + @CollectionIdIN + ')) '
    END
    ELSE
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.DocumentID in (SELECT Documentid from documentjoincollection (nolock) where collectionid in (' + @CollectionIdIN + ')) '
    END
    END

    IF @PSearchIdIN IS NOT NULL
    BEGIN
    IF @PSearchExclude = 1
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.DocumentID not in (SELECT DISTINCT Documentid from PSearchjoindocument (nolock) where PSearchid in (' + @PSearchIdIN +
    ') UNION SELECT DISTINCT Documentid from PSearchjoindocumentstage (nolock) where PSearchid in (' + @PSearchIdIN + ')) '
    END
    ELSE
    BEGIN
    SET @sqlWHERE = @sqlWHERE
    + 'AND d.DocumentID in (SELECT DISTINCT Documentid from PSearchjoindocument (nolock) where PSearchid in (' + @PSearchIdIN +
    ') UNION SELECT DISTINCT Documentid from PSearchjoindocumentstage (nolock) where PSearchid in (' + @PSearchIdIN + ')) '
    END
    END

    EXECUTE adspPSearchBuildSQL @PSearchID = @PSearchID
    ,@sqlFROM = @sqlFROM OUT
    ,@sqlWHERE = @sqlWHERE OUT

    IF DATALENGTH(@sqlINSERT + @sqlSELECT + @sqlFROM + @sqlWHERE) < 8000
    BEGIN
    SET @SqlString = @sqlINSERT + @sqlSELECT + @sqlFROM + @sqlWHERE
    END
    ELSE
    BEGIN
    PRINT @sqlINSERT + @sqlSELECT + @sqlFROM + @sqlWHERE
    SET @ReturnCode = 100001
    PRINT 'RAISE ERROR 100001 - "Sql string exceeds length" '
    RETURN @ReturnCode
    END

    IF @includegroups > 0
    BEGIN
    /* Get the family groups */

    --get groupid's
    SET @sqlSELECT = 'SELECT distinct dg.groupid into #group FROM '
    + @TargetTable + ' j (NOLOCK) '
    + 'JOIN documentsourcegroup dg (NOLOCK) '
    + 'ON j.documentid = dg.documentid '
    + CASE WHEN @TargetTable = 'DocumentJoinCollection'
    THEN ' AND j.collectionid = '
    + CONVERT(VARCHAR(10),@TargetID)
    ELSE ' AND j.KID > 0 AND j.PSearchID = '
    + CONVERT(VARCHAR(10),@TargetID)
    + ' WHERE j.DocumentID between @StartID and @EndID '
    END
    + CHAR(13)

    --get docid's
    SET @sqlSELECT = @sqlSELECT
    + 'SELECT DISTINCT dsg.documentid '
    + 'into #doc '
    + 'FROM DocumentSourceGroup dsg (NOLOCK) '
    + 'WHERE dsg.isDuplicate = 0 '
    + 'AND dsg.groupid IN (SELECT groupid FROM #group) '
    + CHAR(13)
    + 'drop table #group '
    + CHAR(13)

    --delete removed docs (converted >= 50)
    SET @sqlDELETE = 'DELETE #doc where documentid in '
    + '(SELECT documentid '
    + 'FROM document WITH (NOLOCK) '
    + 'where converted >= 50) '
    + CHAR(13)

    --delete prior selected docs
    SET @sqlDELETE = @sqlDELETE
    + 'DELETE #doc where documentid in '
    + '(SELECT documentid '
    + 'FROM ' + @TargetTable + ' (NOLOCK) '
    + CASE WHEN @TargetTable = 'DocumentJoinCollection'
    THEN 'WHERE CollectionId = '
    ELSE 'WHERE PSearchID = '
    END
    + CONVERT(VARCHAR(10),@TargetID) + ') '
    + CHAR(13)

    SET @sqlINSERT = 'INSERT INTO ' + @TargetTable
    + ' (documentid, auditid, '
    + CASE WHEN @TargetTable = 'DocumentJoinCollection'
    THEN 'CollectionId) '
    ELSE 'PSearchID, KID) '
    END
    + CHAR(13)
    + 'SELECT documentid, auditid=@insertAuditID, '
    + CASE WHEN @TargetTable = 'DocumentJoinCollection'
    THEN 'id=' + CONVERT(VARCHAR(10),@TargetID)
    + ' '
    ELSE 'id=' + CONVERT(VARCHAR(10),@TargetID)
    + ', kid=0 '
    END
    + 'from #doc '
    + CHAR(13)
    + 'drop table #doc '

    IF DATALENGTH(@sqlSELECT + @sqlDELETE + @sqlINSERT) < 8000
    BEGIN
    SET @SqlStringGroup = @sqlSELECT + @sqlDELETE + @sqlINSERT
    END
    ELSE
    BEGIN
    PRINT @sqlINSERT + @sqlDELETE + @sqlWHERE
    SET @ReturnCode = 100001
    PRINT 'RAISE ERROR 100001 - "Sql string exceeds length" '
    RETURN @ReturnCode
    END

    END

    IF @MessageLevel > 0
    BEGIN
    PRINT '********************************Insert'
    PRINT @SqlString

    IF @includegroups > 0
    BEGIN
    PRINT '********************************Insert Groups'
    PRINT @SqlStringGroup
    END

    END

    /* Quit if msglvl = 2 - "no exec" */
    IF @MessageLevel = 2
    BEGIN
    RETURN 0
    END

    /* Build recipient list */
    CREATE TABLE #recipientstring
    (recid INT IDENTITY, leftx NCHAR(1), recipientString NVARCHAR(4000))

    INSERT INTO #recipientstring (leftx, recipientstring)
    SELECT DISTINCT LEFT(d.displayname,1), d.displayname
    FROM personjoindocument d WITH (NOLOCK)
    UNION
    SELECT DISTINCT LEFT(d.emailAddress,1), d.emailAddress
    FROM person d WITH (NOLOCK)

    SELECT @ReturnCode = @@ERROR

    IF @ReturnCode <> 0
    BEGIN
    PRINT 'ReturnCode = ' + CONVERT(VARCHAR(10),@ReturnCode)
    RETURN @ReturnCode
    END

    CREATE CLUSTERED INDEX idx2 on #recipientstring (leftx)
    CREATE UNIQUE INDEX idx1 on #recipientstring (RecId)

    /* Throttling for online concurrency */
    WAITFOR DELAY @Delay

    /* Build Recipient String Hit List */
    CREATE TABLE #temp
    (RecId INT, KID INT)

    /* Build Document List */
    DECLARE @leftx NCHAR(1)

    SET @leftx = ''

    WHILE (SELECT COUNT(*)
    FROM #recipientstring
    WHERE leftx > @leftx
    ) > 0
    BEGIN

    SELECT @leftx = MIN(LEFT(leftx,1))
    FROM #recipientstring
    WHERE leftx > @leftx

    INSERT #temp
    SELECT DISTINCT
    t.Recid
    ,k.KeywordsID
    FROM #recipientstring t (NOLOCK)
    JOIN keywords k (NOLOCK)
    ON CHARINDEX(k.keyword,t.recipientString,1) > 0
    AND keyset = @keyset
    WHERE t.leftx = @leftx

    SELECT @ReturnCode = @@ERROR

    IF @ReturnCode <> 0
    BEGIN
    PRINT 'First! = ' + @leftx
    PRINT 'ReturnCode = ' + CONVERT(VARCHAR(10),@ReturnCode)
    END

    PRINT 'Progress: ' + CONVERT(NVARCHAR(25),GETDATE(),120) + ' - Recipientstring First Character: ' + @leftx

    /* Throttling for online concurrency */
    WAITFOR DELAY @Delay

    END

    CREATE INDEX idx1 on #temp (RecId, KID)

    SELECT @ReturnCode = @@ERROR

    IF @ReturnCode <> 0
    BEGIN
    PRINT 'create index!'
    PRINT 'ReturnCode = ' + CONVERT(VARCHAR(10),@ReturnCode)
    RETURN @ReturnCode
    END

    /* Throttling for online concurrency */
    WAITFOR DELAY @Delay

    CREATE TABLE #tempDocID (DocumentID INT, KID INT)

    /* Insert DocumentID from #recipientstring */
    INSERT INTO #TempDocID (DocumentID, KID)
    SELECT DISTINCT pd.DocumentID, t.KID
    FROM #temp t
    JOIN #recipientstring ts
    ON t.Recid = ts.Recid
    JOIN personjoindocument pd WITH (NOLOCK)
    ON ts.recipientstring = pd.displayname
    UNION
    /* Build Email Address query */
    SELECT DISTINCT pd.DocumentID, t.KID
    FROM #temp t
    JOIN #recipientstring ts
    ON t.Recid = ts.Recid
    JOIN person p WITH (NOLOCK)
    ON ts.recipientstring = p.emailaddress
    JOIN personjoindocument pd WITH (NOLOCK)
    ON p.personid = pd.personid

    DROP TABLE #recipientstring
    DROP TABLE #temp

    CREATE INDEX idx1 on #TempDocID (DocumentID, KID)

    SELECT @ReturnCode = @@ERROR

    IF @ReturnCode <> 0
    BEGIN
    PRINT 'create index! on #tempdocid'
    PRINT 'ReturnCode = ' + CONVERT(VARCHAR(10),@ReturnCode)
    END

    /* Remove excluded Documents */
    DELETE #TempDocID
    FROM #TempDocID t
    JOIN document d (NOLOCK)
    ON t.documentid = d.documentid
    AND d.converted >= 50

    /* Remove pre-existing Documents */
    IF @TargetTable = 'DocumentJoinCollection'
    BEGIN

    DELETE #TempDocID
    FROM #TempDocID t
    JOIN DocumentJoinCollection tg WITH (NOLOCK)
    ON t.documentid = tg.documentid
    --AND t.CollectionId = tg.CollectionId

    END
    ELSE
    IF @TargetTable = 'PSearchJoinDocument'
    BEGIN

    DELETE #TempDocID
    FROM #TempDocID t
    JOIN PSearchJoinDocument tg WITH (NOLOCK)
    ON t.documentid = tg.documentid
    AND tg.PSearchID = @TargetID
    AND t.KID = tg.KID

    END
    IF @TargetTable = 'PSearchJoinDocumentStage'
    BEGIN

    DELETE #TempDocID
    FROM #TempDocID t
    JOIN PSearchJoinDocumentStage tg WITH (NOLOCK)
    ON t.documentid = tg.documentid
    AND tg.PSearchID = @TargetID
    AND t.KID = tg.KID

    END

    SELECT @CurrID = MIN(DocumentID)
    ,@MaxID = MAX(DocumentID)
    FROM #TempDocID

    /* Doc Interval Loop */
    WHILE @CurrID < @MaxID
    BEGIN

    SET @EndID = @CurrID + @DocInterval - 1
    PRINT 'Doc Interval: ' + CONVERT(VARCHAR(10),@CurrID) + ' - ' + CONVERT(VARCHAR(10),@EndID) + ' Start: ' + CONVERT(VARCHAR(20),GETDATE(),121)

    EXECUTE adspiAudit @insertAuditID OUT, 'SM', 'ST', @modUser OUT, @modDate OUT

    BEGIN TRANSACTION

    EXECUTE sp_EXECUTESQL @stmt = @SqlString, @params = N'@StartID INT, @EndID INT, @insertAuditID INT', @StartID = @DocIDMin, @EndID = @EndID, @insertAuditID = @insertAuditID

    SELECT @ReturnCode = @@ERROR

    IF @ReturnCode <> 0
    BEGIN
    PRINT @SqlString
    PRINT 'ReturnCode = ' + CONVERT(VARCHAR(10),@ReturnCode)
    ROLLBACK TRANSACTION
    RETURN @ReturnCode
    END

    COMMIT TRANSACTION

    INSERT PSearchJoinAudit values(@PSearchID,@insertAuditID)

    /* throttling */
    WAITFOR DELAY @Delay

    IF @includegroups > 0
    BEGIN

    EXECUTE adspiAudit @insertAuditID OUT, 'SM', 'SG', @modUser OUT, @modDate OUT

    BEGIN TRANSACTION

    EXECUTE sp_EXECUTESQL @stmt = @SqlStringGroup, @params = N'@StartID INT, @EndID INT, @insertAuditID INT', @StartID = @DocIDMin, @EndID = @EndID, @insertAuditID = @insertAuditID

    SELECT @ReturnCode = @@ERROR

    IF @ReturnCode <> 0
    BEGIN
    PRINT @SqlStringGroup
    PRINT 'ReturnCode = ' + CONVERT(VARCHAR(10),@ReturnCode)
    ROLLBACK TRANSACTION
    RETURN @ReturnCode
    END

    COMMIT TRANSACTION

    /* throttling */
    WAITFOR DELAY @Delay

    END

    SET @CurrID = @EndID + 1

    exec ('UPDATE STATISTICS ' + @TargetTable)

    END
    /* END Doc Interval Loop */

    /* Cleanup */
    DROP TABLE #TempDocID

    SET NOCOUNT OFF

    END

    GO

    Eric
    SQL back-end tester
  13. derrickleggett New Member

    ummmm, I really don't know what to say about this query. :) You are using dynamic SQL, which is a big security risk and will never have an efficient plan the way you are using it. You are using temp tables on top of that, which will cause recompiles. The UPDATE STATISTICS, I think, might just be the icing on the cake.

    The first time you run this, look at the query and see what the execution plan is. Where are you seeing the greatest costs? Can you post where those areas are and what the cost is?

    After you run this several times, see what the plan is again. I would be interested in seeing if your plan gets messed up over time. You might want to look at setting a forced plan once you have an efficient plan, to see if that performs better over time.

    You really should rewrite this so it doesn't use dynamic SQL and minimizes the use of temp tables, even if you have to use several subqueries to accomplish the same task.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  14. ebol New Member

    I realize you probably hear this a lot from people embarrassed about their code, but this really isn't my code...I'm the QA guy, but I have a background in dev (front end) and so dig in a lot more than the average QA guy. BTW, thanks for all of your input; this has been a great learning experience for me.

    That said, I'm going to sound like I'm defending the code for a second (but only a brief one). The BOL does state:

    sp_executesql can be used instead of stored procedures to execute a Transact-SQL statement a number of times when the change in parameter values to the statement is the only variation. Because the Transact-SQL statement itself remains constant and only the parameter values change, the Microsoft® SQL Server™ query optimizer is likely to reuse the execution plan it generates for the first execution.

    ... also, this is run in a loop that works through (typically) 100k records at a time and is run on db's with well over a million records. Therefore, wouldn't running with sp_executeSQL effectively be the same thing as a sproc with *static* code? Also, I'm not really sure how someone could write *static* code in this case, where it's not a simple matter of just plugging in parameter values but rather of deciding whether to join tables or add filters... without a case statement from hell. I suppose the first thing is to break this thing down logically into separate sprocs as you suggest. Then it might become clearer.
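
    For what it's worth, the parameterized form BOL is describing looks like this (a trimmed-down stand-in, not the actual statement the sproc builds):

    -- The statement text stays constant and only the parameter values change,
    -- so the plan built on the first execution can be reused on later calls.
    DECLARE @sql NVARCHAR(4000)
    SET @sql = N'SELECT d.documentid
                 FROM document d
                 WHERE d.documentid BETWEEN @StartID AND @EndID'

    EXECUTE sp_executesql @sql,
            N'@StartID INT, @EndID INT',
            @StartID = 1, @EndID = 100000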

    Eric
    SQL back-end tester
  15. ebol New Member

    So I ran the execution plan and got 347 queries returned! In your experience, would you say that's a bit much for one sproc?

    My results are a bit odd... it looks like most of the time is consumed making a copy of the #recipientString temp table. Perhaps this is just by design...perhaps moving from RAM to drive, but what I'm seeing is that the query that does the initial insert into #recipientString (meaning the one I can see in the sproc) is followed by two mystery copies of that table into a table named something like

    #recipientString________________________________________________________________________ yada yada yada.

    Also, this action comprises 53% of my entire batch. Oh wait, this is the clustered and unique indexes he's building, right? Is there a reason to wait until after the table has been populated to create these...or at least the clustered one? If the clustered key were defined up front, it could simply order things properly during the insert, right?
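
    In other words, something like this ordering of steps (just a sketch; whether it actually wins depends on the data volume, since the insert then has to maintain the indexes as it goes):

    -- Create #recipientstring with its indexes in place, then load it,
    -- instead of loading first and building the clustered index afterwards.
    CREATE TABLE #recipientstring (
        recid           INT IDENTITY,
        leftx           NCHAR(1),
        recipientString NVARCHAR(4000)
    )
    CREATE CLUSTERED INDEX idx2 ON #recipientstring (leftx)
    CREATE UNIQUE INDEX idx1 ON #recipientstring (recid)

    INSERT INTO #recipientstring (leftx, recipientString)
    SELECT DISTINCT LEFT(d.displayname, 1), d.displayname
    FROM personjoindocument d WITH (NOLOCK)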

    Eric
    SQL back-end tester
  16. derrickleggett New Member

    How much information are you putting into this temp table?

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  17. ebol New Member

    Well, there are a couple of temp tables...some hold only a couple thousand rows for a particular test db; the #tempDocId table probably gets around 300k records. The interesting thing is that formerly no temp tables were used...only a very complex query with something like 5 joins. Going to temp tables slowed down the server's slide into "dog mode" following a sqlsrvr recycle, but it seems UPDATE STATISTICS really fixed it for me. It would be interesting to put the query code back to the pre-temp-table method and see how it works with the UPDATE STATISTICS...but I don't think that will happen with our release schedule.

    thanks,

    Eric
    SQL back-end tester
  18. derrickleggett New Member

    :) It sounds like a good developer touched it, then a bad developer who didn't understand SQL trashed it. Transact-SQL is a "set based" language. This is absolutely critical to understanding performance and development of stored procedures. The temp tables require recompiles when this thing runs and will starve the memory system and tempdb space with records that big.

    Big, complex SELECT statements using CASE statements to aggregate things can look really ugly and be hard to understand. They will beat a series of temp tables like this hands down though. Look at the old code and figure out how it worked. See if you can't find a way to do what you are doing in this procedure with one or two statements. If you need help after you've worked on that, post it and we can look at it.

    If you take the time to do this, you'll be shocked at the results and be a much better SQL person for it.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
