SQL Server Performance Forum – Threads Archive
SQL Memory, does it need more? Hi all, I am looking at a SQL 2000 Std server running on Windows 2000 Std. The box is a quad HTT with 4 GB RAM, H/W RAID 10 across 4 disks, with a separate page-file disk. SQL is now configured to use all 4 processors (it was only using 3 until 30 mins ago!!) and a fixed amount of 2 GB RAM. The application uses 4 DBs, two of which are 9 GB in size; the other 2 are tiny. The system is performing slowly and processor usage is bouncing between 50-100%, I would say an avg of 70%. My hunch is that SQL needs more RAM, so I am thinking Windows 2003 Std or 2000 Adv (with /3GB) and SQL 2000 Ent (the app is not certified for SQL 2005 yet), then allocating 3 GB to SQL from the 4 GB on the box, or maybe even upping the RAM to 8 GB. What is the best way to determine whether SQL requires more RAM, as the server OS itself does not? I have just switched SQL to use dynamic memory from 0-2 GB to see what it tries to use.
I have looked at:
Disk queue: although there are spikes, the average across 8 hrs was 1.6.
Server work queue is also below 2.
% read and % write time, but I didn't really know how to interpret them, and from some reading up they seem to provide poor statistics anyway??
Memory pages/sec is below 1.
So I guess I am asking how best to tell if SQL in this setup requires more than 2 GB, and therefore an OS and SQL version upgrade? And if anyone has any suggestions on where best to look next. Cheers, Stuart.
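One way to look at memory pressure from inside SQL 2000 itself, rather than from Perfmon, is to query the buffer-manager counters in master..sysperfinfo. A sketch (counter names as they appear for a default instance; the hit ratio must be divided by its base counter to get a percentage):

```sql
-- Buffer-manager counters from inside SQL Server 2000.
-- A page life expectancy well below ~300 seconds, or a hit ratio
-- persistently under ~95%, would suggest the buffer pool is too small.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Buffer Manager'
  AND counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base',
                       'Page life expectancy')
```

If page life expectancy stays high while the queries are still slow, more RAM is unlikely to be the fix.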
Hi ya, first port of call should always be to run SQL Profiler to find long-running transactions. Trace for RPC:Completed and SQL:BatchCompleted with a duration or CPU > 100ms. 80-90% of all performance issues are related to poor queries/indexing. Cheers
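If the trace is saved to a table, the slow statements can be pulled out afterwards with a query along these lines (dbo.MyTrace is a hypothetical table name; Duration in SQL 2000 traces is in milliseconds):

```sql
-- Slow statements from a Profiler trace saved to a table with the
-- default columns. EventClass 10 = RPC:Completed, 12 = SQL:BatchCompleted.
SELECT TextData, Duration, CPU, Reads, Writes, StartTime
FROM dbo.MyTrace
WHERE EventClass IN (10, 12)
  AND (Duration > 100 OR CPU > 100)
ORDER BY Duration DESC
```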
Hi Twan, thx for the fast response. I'm not much of an SQL man, but I have started Profiler for an hour. This app is a POS system so we don't need data from a long period of time. This would be a time of moderate usage for them. When I ran Profiler the events you advised were already there; I went and added TransactionID. Was this the correct thing to do? I also got the server to process the data?
Stuart
Just watching Profiler record the data, over 90% of the transactions have a duration of 0; the rest are anywhere from 16 up to 62k? I take it the odd spike is OK, but I am seeing a lot in the 2k-5k range.
Hi ya, in the Profiler properties, go to the Filter tab and set duration and CPU to both be greater than 100ms. In a well-tuned OLTP database there should be nothing taking anywhere near that long… a lot in the 2k-5k range is way too long, since that is 2 to 5 seconds, especially if CPU is also that high. Cheers
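If the trace was saved to a file instead of a table, the same filtering can be done without re-running it, using fn_trace_gettable (which ships with SQL 2000; the file path below is just an example):

```sql
-- Read a saved .trc file directly and keep only the slow events.
-- Duration and CPU are in milliseconds.
SELECT TextData, Duration, CPU, Reads, StartTime
FROM ::fn_trace_gettable('C:\traces\pos_trace.trc', DEFAULT)
WHERE Duration > 100 OR CPU > 100
ORDER BY Duration DESC
```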
primary: follow Twan’s line on Profiler & expensive queries secondary: exactly what is your disk configuration?
how many disks in the whole system
what are each of the RAID arrays (disks & RAID level)
where are the main databases' data and log files
Thx for the info on Profiler, Twan. I am on another job for a couple of days but will follow your info asap. Joechang, the system is RAID 10, 4 disks, 15k rpm. There is a 5th disk for the page file and backups. The logs are not on a separate disk, just a partition, but as it is on the same RAID 10 there is no perf gain.
then get another disk, mirror it with the 5th disk, and put the logs on this mirror; it can still share with the backups. Page file location is not important. Like I said to wingszeto and on other posts:
as for disk counters, monitor:
Avg Disk sec/read
Avg Disk Read Queue Length,
and the same for writes,
do not waste time with % Disk Time; it is a worthless counter on server systems with SCSI disks
Hi Joe, the server only has 5 bays; it's a Dell 6650. Looking at the disk counters, there does not look to be an issue with the single RAID 10. It looks like there are some transactions taking too long due to some indexing that needs doing, or just poor queries. But it also looks like I have been given the go-ahead for an upgrade to the OS and SQL, and to get additional RAM for the server.
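For the indexing side, SQL 2000's DBCC SHOWCONTIG gives a quick read on fragmentation before deciding on a rebuild (the database and table names below are hypothetical examples):

```sql
-- Fragmentation report for all indexes on one table. Scan density well
-- below 100%, or high logical scan fragmentation, suggests a
-- DBCC DBREINDEX or DBCC INDEXDEFRAG is due.
USE PosDb   -- hypothetical database name
DBCC SHOWCONTIG ('dbo.Sales') WITH FAST, ALL_INDEXES
```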
Depending on the counter details, the disks are not the main issue, even though it is not good to have the main DB log on the same disk as the data, given that your disk queue is 1.6.
What is your disk reads/sec average? If this number is below 200-300 and disk latency is < 10ms, I would think memory is not the main issue,
but rather, like you suggested, queries and indexes, in which case the OS & Ent Ed upgrade will do very little for you.
run a Profiler trace per Twan, and add the DatabaseID and EndTime columns.
Get RML from MS and run Read80Trace to identify the expensive queries, weighted relative to the others, and see if indexes & query rewrites will help.
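As a rough manual stand-in for Read80Trace, the per-statement cost can be totalled from a trace loaded into a table (dbo.MyTrace is a hypothetical name; this groups on exact text only, whereas Read80Trace normalises parameter values, so it will split parameterised variants):

```sql
-- Top 20 statements by total elapsed time across the trace.
-- TextData is ntext in a trace table, so convert before grouping.
SELECT TOP 20
       CONVERT(varchar(200), TextData) AS stmt,
       COUNT(*)      AS executions,
       SUM(Duration) AS total_ms,
       SUM(CPU)      AS total_cpu_ms,
       SUM(Reads)    AS total_reads
FROM dbo.MyTrace
GROUP BY CONVERT(varchar(200), TextData)
ORDER BY total_ms DESC
```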
do this before the upgrade, which costs money, downtime, etc.