We are running a processor-intensive process that performs combinatorial operations to combine tables (pure math) as well as Monte Carlo-style re-sampling techniques to combine tables. This process is, and seems to have always been, bound by processor capacity. We recently moved to a new, dedicated server and saw something strange. The new server has 30+ GB of RAM, and we've capped SQL at 27 GB.

The more RAM SQL uses (easily tracked), the lower the processor utilization we see, and the worse the system performs. This process runs for days on end, so our observation window is pretty solid:

- When we stop the process and reset the memory, we get a boost back to 100% performance.
- Within an hour, SQL is using 4 GB or so and our performance has fallen to 80%.
- Within 6 hours, SQL is using 12-15 GB and our performance falls to 65%.
- 24 hours later, we are using all 27 GB and our performance is down to 50%.

Over this time, % processor usage falls from the mid-to-high 90s down to the mid 50s, tracking our performance statistics.

Is there a connection here? Does managing this obscene amount of active RAM somehow hurt my SQL instance's performance? I have looked at all of the classic perfmon metrics… Has anyone else ever heard of something like this? What else might be the cause of my performance degradation?
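For context, the 27 GB cap was set via `sp_configure` in the usual way; the statement below is a sketch of that configuration (27 GB expressed in MB), not a transcript of exactly what we ran:

```sql
-- Cap SQL Server's memory usage at 27 GB (27 * 1024 = 27648 MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 27648;
RECONFIGURE;
```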
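In case it helps with diagnosis, this is the sort of DMV check I can run alongside perfmon to see which memory clerks actually hold the RAM as usage climbs (it assumes SQL Server 2012 or later, where `sys.dm_os_memory_clerks` exposes a `pages_kb` column):

```sql
-- Top memory consumers by clerk type; normally the buffer pool
-- (MEMORYCLERK_SQLBUFFERPOOL) dominates as memory fills up.
SELECT TOP (10)
       [type],
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY memory_mb DESC;
```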