SQL Server Performance Forum – Threads Archive
Complex Query Tuning

I’m running into a problem with the following query, but before I overwhelm you with the actual query, I need to explain some things:
- The problem I am actually running into with this example in particular is "[Microsoft][ODBC SQL Server Driver][SQL Server]There is insufficient system memory to run this query."
- The actual environment generating and executing the query is ColdFusion and before anyone says CF sucks, changing the environment IS NOT AN OPTION (unless, of course, you are willing to port close to a million lines of code to ASP or another environment and debug it for free!)
- The entire where clause is dynamic so I’m not sure how much benefit would be gained by creating a stored procedure to execute the select
- The query normally works with narrower WHERE criteria (like fewer MID or ArchiveID values)
- I’ve eliminated similar "insufficient memory" errors on other queries by creating temporary tables to hold the IN values, but I’m not certain this is the best approach
- What would be the "best" way to eliminate the immediate problem of "insufficient memory"?
- Is there a better solution for the case clauses?
- Lastly, assume that all the required indexes exist and are optimized (we’ve done quite a bit of performance tuning on the indexes)
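For reference, the temp-table workaround I mentioned looks roughly like this (a sketch only — the #MidList name and the varchar type for MID are illustrative; the actual value list is generated by the CF code):

create table #MidList (MID varchar(20) primary key)
insert into #MidList (MID) values ('...')  -- one insert per value from the dynamic list
-- the big query then references the temp table instead of a long literal IN list:
-- ... where t.MID in (select MID from #MidList)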
Here is the big ugly offending query:
select
  ArchivesID, Id, t.Mid, Invoice, CardAbb, CardNumber, CardExpDate, TerminalID, AuthOnly,
  case
    when CurrencyCode is null or t.CurrencyCode = '' then
  end as CurrencyCode,
  PrimaryAmount, SecondaryAmount, CustInfo, Clerk, BusDate, TranDate, TranTime, TotalAuth, Tolerance,
  Auth3Resp, Auth3Code, PrimaryAmount + SecondaryAmount as TotalAmount, MerchantName,
  case
    when CardAbb = '??'
      or (PrimaryAmount + SecondaryAmount = 0)
      or (PrimaryAmount + SecondaryAmount > TotalAuth * Tolerance
          and PrimaryAmount + SecondaryAmount - TotalAuth >= 1.00)
      or (Abs(PrimaryAmount + SecondaryAmount) > 99999.99)
      or (PrimaryAmount + SecondaryAmount = 0) then
  end as Problem
from [InBank]..[ArchTrans] t (NOLOCK)
  left outer join [InBank_0000001495]..[MERCHANT] m (NOLOCK) on (m.MID = t.MID)
where t.MID in (
  and AuthOnly in ('N')
  and ArchivesId in (
  and m.MID = t.MID
quote:I’ve eliminated similar "insufficient memory" errors on other queries by creating temporary tables to hold the IN values, but I’m not certain this is the best approach

I agree with this approach, except I would use a table variable instead. Other than that, I suggest replacing "AuthOnly in ('N')" with "AuthOnly = 'N'" and removing "and m.MID = t.MID" from the where clause. From which table are the columns without a table alias in front?
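The table-variable version would look something along these lines (a sketch only; the @MidList name and the MID type are assumptions):

declare @MidList table (MID varchar(20) primary key)
insert into @MidList (MID) values ('...')  -- repeat per value from the dynamic list
-- then: ... where t.MID in (select MID from @MidList)

Unlike a temp table, the table variable lives only for the duration of the batch and involves minimal recompilation, which tends to work well for small lists of values.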
quote:Originally posted by mmarovic I agree with this approach, except I would use a table variable instead. Other than that, I suggest replacing "AuthOnly in ('N')" with "AuthOnly = 'N'" and removing "and m.MID = t.MID" from the where clause. From which table are the columns without a table alias in front?

I’ll have to research table variables; I’ve never used them. I will definitely remove the "and m.MID = t.MID" — it is leftover from a prior version that did not use the "left outer join" to join the tables. Good catch, thanks. I’ll see about optimizing the "AuthOnly in ('N')" clause. The items in the list are actually dynamic: in this case only 'N' is selected, but the user can add other values. I’ve never found any reference that "xyz=x" is any faster than "xyz in (x)"; does it make a difference? Steve Sommers
quote:I’ve never found any reference that "xyz=x" is any faster than "xyz in (x)"; does it make a difference?

Maybe not. I was just not sure whether the query optimizer is able to figure out that there is just one value in the list, so it is equivalent to xyz = <only value from the list>. In that case it can use an index on xyz, or any combination of SARGs including one on xyz.
quote:I’ll have to research table variables; I’ve never used them.

It looks like they are the better choice for a small number of rows.
The couple of times I’ve run into "insufficient memory" errors were when the result set was huge (as in many wide rows). Do you have a way to run this query returning count(*) instead of the select list? This might help pinpoint whether the problem is with the execution or with the results. My guess is that the error occurs either with the initial result set, or with the sort operation that occurs once all the records are found.
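Something like this — same FROM and WHERE as your query, just swapping out the select list (the WHERE is elided here since yours is dynamic):

select count(*)
from [InBank]..[ArchTrans] t (NOLOCK)
  left outer join [InBank_0000001495]..[MERCHANT] m (NOLOCK) on (m.MID = t.MID)
where ...  -- your original WHERE clause

If count(*) succeeds where the full select fails, the problem is the width of the rows being returned rather than locating them.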
There is a good chance that the result set is huge. I’m hoping to find a way to get around the result set size limitation, and if there are multiple ways, which I suspect there are, to determine the best one. I also think this error is triggered by more factors than simply result set size: I can remove the WHERE clause and get a result set without issue every time, and that result set is bigger than the one the failed query would have returned. Thanks. Steve Sommers
Any success with table variables?
quote:Originally posted by mmarovic Any success with table variables?

Have not tried it yet. Last week was a hectic week. Hopefully tomorrow (Mondays are always hectic). Steve Sommers
You may also want to check that you have a valid ODBC driver between CF & SQL Server.
How about memory settings on SQL?
Hope the latest service pack is applied to SQL to address a few issues with CF compatibility. Satya SKJ
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Okay, is this voodoo or is there a reason for this? In the example query that started this thread, I changed the WHERE clause to the following and POOF!, no more out of memory error:
t.MID in ( select MID from [InBank_0000001495]..[MERCHANT] where MID in (
and AuthOnly in ('N')
and ArchivesId in ( select ID from [InBank_0000001495]..[ARCHIVES] where ID in (
Basically, I changed the explicit value lists to sub-queries using the same value lists, and my problem went away. In my mind, the sub-queries would seem more memory intensive, but I guess my mind is not in sync with the SQL gurus who wrote MSSQL. Anyone have any comments, especially if there are side-effects associated with the above that I have not thought out?… Steve Sommers
Have you checked the execution plan? My guess is that you get the same effect here that you would get with a table variable. t.MID in (select MID from …) forces the query optimizer to build a worktable of IDs first (the same as for ArchivesId in ( select ID …)). Then join logic is probably applied instead of scanning through the entire table. I don’t get why you use an outer instead of an inner join when you require MID to be from the Merchant table anyway. Is it again due to generating dynamic sql that may include this condition but doesn’t have to?
quote:Originally posted by mmarovic Have you checked execution plan…I don’t get it why you use outer instead of inner join when you require MID to be from Merchant table anyway…

I’ll definitely compare the two execution plans; maybe this will get me more into the mindset of the SQL gurus. As to outer vs. inner join, in this particular case I could have used an inner join, and the reason has more to do with habit than with the dynamic SQL. In most of my queries, outer is the preferred method because the data being brought in from the secondary tables (again, in most cases) is optional. This is another area I’ll look at optimizing. Thanks. Steve Sommers
This is not a particularly good habit where performance is concerned. This way you force SQL Server to select rows from the ArchTrans table first, which might not be the better execution plan.
Look at the execution plan. See if it was creating a temporary table and doing a hash join in the old query plan. It would be fun to see both execution plans if you can post them. MeanOldDBA
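One way to capture the plan as text without actually executing the query (handy for posting it here) is SHOWPLAN_TEXT, which has to be toggled in its own batch:

set showplan_text on
go
-- paste the query here; SQL Server returns the estimated plan instead of running it
go
set showplan_text off
go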
[email protected] When life gives you a lemon, fire the DBA.