Memory issues | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


I’ve written a client/server app which revolves around a very large stored proc in SQL 2000. The SP is 500 lines long and uses about 20 different parameterized select queries to populate a temp table, the results of which are then consolidated and sent back to the client. This takes about 10 seconds to run and return data. The initial spec of the server was a 2GHz Pentium with 512MB RAM, and it worked fine until 2 or more users hit the stored proc concurrently, at which point it would come up with connection timeout errors. All timeout settings in SQL were set to 0. We have boosted the RAM to 2GB, but it still seems to max out with 4 concurrent users and returns a timeout to one of the clients. How can I manage the memory to get round this, or do I just need more memory? In reality the maximum number of concurrent users would be 10. Any suggestions gratefully received. Ian
Leave the memory settings as DYNAMIC on the SQL Server side.
And recompile the SP to get optimum performance, and try to control the statements batch-wise in order to control the resource usage when fine-tuning the SP. Satya SKJ
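For reference, a recompile on SQL 2000 can be forced either by marking an existing procedure or per-definition; the procedure and parameter names below are placeholders, not from the thread:

```sql
-- Mark an existing procedure so a fresh plan is built on its next execution
EXEC sp_recompile 'dbo.usp_CalcPL'   -- placeholder procedure name

-- Or force a fresh plan on every call by defining it WITH RECOMPILE
CREATE PROCEDURE dbo.usp_CalcPL
    @SkuID     int,
    @StartDate datetime,
    @EndDate   datetime
WITH RECOMPILE
AS
    -- procedure body goes here
    SELECT 1
```

WITH RECOMPILE trades repeated compile cost for a plan tailored to each call’s parameters, which can help when parameter values vary as widely as one SKU versus a whole customer.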
This posting is provided “AS IS” with no warranties, and confers no rights, for the sake of knowledge sharing.
Do the timeouts always occur after the same duration? 300-305 sec, for example?

Yes, the timeouts always seem to be around 30 seconds. In the meantime I have managed to trim the SP down to using only 2 temp tables instead of 5, which seems to have made a big improvement, but I haven’t been able to stress-test it yet. Ian
You might want to post the query so we can look at it for optimization. Many times you can tune a query instead of buying more hardware. Having said that, 512MB is not very much RAM at all. SQL Server is very memory intensive, and RAM is cheap; when considering performance for the dollar, it’s the best investment you can make. Some things to think about:
1. Have you tried using table variables instead?
2. How many rows are in the temp tables? Have you thought about putting indexes on the temp tables?
3. Can you get rid of the temp tables completely by using CASE? The fewer sets you process in SQL Server, the more efficient it is. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
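To illustrate suggestion 1 above: in SQL 2000 a table variable is declared and populated like this (the column names and source table are invented for the example). Note that a table variable cannot have indexes created on it afterwards; the only indexes it gets are the ones implied by a PRIMARY KEY or UNIQUE constraint in the declaration itself:

```sql
DECLARE @StartDate datetime, @EndDate datetime
SET @StartDate = '20040101'
SET @EndDate   = '20040131'

DECLARE @PL TABLE (
    SkuID    int      NOT NULL,
    CalcDate datetime NOT NULL,
    MetricID int      NOT NULL,
    Amount   money    NOT NULL,
    PRIMARY KEY (SkuID, CalcDate, MetricID)  -- the only index a table variable can carry
)

INSERT INTO @PL (SkuID, CalcDate, MetricID, Amount)
SELECT SkuID, SaleDate, 1, Units * Price
FROM dbo.Sales                               -- placeholder source table
WHERE SaleDate BETWEEN @StartDate AND @EndDate
```

Table variables avoid some recompilation triggers that #temp tables cause, but with no statistics on them they can be slower than indexed #temp tables at the 60k-row-plus sizes mentioned later in this thread, so it is worth testing both.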
The SP is over 500 lines long, so it’s probably best not to post it! What it does is calculate a P&L per SKU (product) by day for a specified period of up to 3 years. The tool allows sales people to simulate promotions on products and factor in uplifts and various levels of discounts. It starts off with base unit sales, and each P&L line adds in either unit uplifts or discounts. The temp table is needed because each subsequent line of the P&L is calculated from the value in the previous one, so I have to build up each line through a separate SQL statement and insert it into the temp table. These calculations are done through self joins on the temp table. For example, the normal sales on a day would be 200 units, the seasonal variance would add 30 units, the promotion another 100…
The total units for the day would be 330; the standard price would be £2.00 a unit, customer discount £0.20, promotional discount £0.40, net price £1.40. Thus gross sales are 330 * £2.00, net sales 330 * £1.40, etc. This can be done for one SKU, a whole brand (multiple SKUs) or a whole customer (multiple brands and SKUs). This means the number of rows in the temp table can be anywhere between 21 (one day, 1 SKU) and 2.3 million (3 years by day, 100 SKUs, 21 metrics). Indexing has no effect on the average call of the SP (averages about 60k rows).
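The “each line builds on the previous one” step described above might be sketched like this, with table, column and metric names invented for illustration: each new metric row is derived by joining back to the rows already inserted into the temp table.

```sql
-- Metric 1 rows (total units) are assumed to be in #PL already.
-- Metric 2 (net sales value) is computed from metric 1 via a self-referencing insert:
INSERT INTO #PL (SkuID, CalcDate, MetricID, Amount)
SELECT prev.SkuID,
       prev.CalcDate,
       2,                                            -- net sales metric
       prev.Amount * (p.StdPrice - p.CustDiscount - p.PromoDiscount)
FROM #PL AS prev
JOIN dbo.Pricing AS p ON p.SkuID = prev.SkuID        -- placeholder pricing table
WHERE prev.MetricID = 1
```

With 21 such chained inserts, every later statement rescans the rows produced by the earlier ones, which is exactly why the indexing question raised below matters.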
I’ve not come across table variables before – I’ll look into this. It’s not feasible to use CASE. My other thought is that although I use DROP TABLE to clear the temp tables as soon as I don’t need them, SQL doesn’t seem to release the memory quickly. Is there any way to force this? Thanks, Ian
“Indexing has no affect on the average call on the SP (averages about 60k rows).” ?? I seriously doubt this is the case if you have it set up correctly. Have you actually tried the indexes yet? If you are joining back to this temp table repeatedly, it would be orders of magnitude faster with the proper indexes. This is especially true if you are averaging 60k rows. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
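For what it’s worth, indexing a #temp table is a one-liner once it exists; keying it on the columns used in the self-joins is the likely win. The table and column names below are assumptions carried over from the earlier sketches, not from the original SP:

```sql
-- Create the temp table, load metric 1, then index it before the chained
-- self-join inserts run. On SQL 2000, creating the index after the initial
-- load keeps the load itself fast and gives the later joins a seek path.
CREATE TABLE #PL (
    SkuID    int      NOT NULL,
    CalcDate datetime NOT NULL,
    MetricID int      NOT NULL,
    Amount   money    NOT NULL
)

-- ... initial INSERT of base-unit rows goes here ...

CREATE CLUSTERED INDEX IX_PL ON #PL (MetricID, SkuID, CalcDate)

DROP TABLE #PL
```

Leading the key with MetricID suits the pattern above, where each insert filters on `prev.MetricID = n` and then joins on SkuID; a different statement mix might favour a different column order, so it is worth checking the plans with both.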