Hi, newb here so go easy please! I have a write-intensive batch application which I've just moved to a new SQL Server 2005 box with 2GB of RAM and a 4-core processor, and I've been having performance problems.

When 'max server memory' is left at its default (effectively the full 2GB), my process takes over 24 hours and one of the CPUs averages nearly 100%. I also notice that about 1GB of memory is actually being used and there is no heavy paging (still quite a few MB of RAM free).

When I change max server memory to 750MB (and min server memory to 8MB; it was 0 previously), the process takes 2 hours, which for this type of operation is acceptable (this was the configuration on the old machine). CPU usage is also much lower and better spread across the cores.

My question is: why is this? You'd think more available memory would give at least equal performance compared with the lower setting, wouldn't you? Any thoughts greatly appreciated.
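In case the exact settings matter, the change I made amounts to something like the following via sp_configure (a rough sketch of the equivalent commands, not a copy of exactly what I ran):

-- Allow the advanced memory options to be changed
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server's memory at 750MB and set the minimum to 8MB
EXEC sp_configure 'max server memory (MB)', 750;
EXEC sp_configure 'min server memory (MB)', 8;
RECONFIGURE;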