Microsoft SQL Server & Solid State Accelerators

  • You have tuned your database for peak performance and your queries run optimally, yet transaction or query volumes are pushing processing demands beyond your hardware's capabilities, and you need to explore other options.
  • You do not have time before the rollout of a new SQL Server-based application to fully investigate your performance problems, but you know you have them, and you are seeking a quick, permanent hardware solution that requires no changes to the server or the code.
  • You know you have an I/O bottleneck, perhaps even know where, and want to know specifically how (or if) Solid State memory technology can help you.
  • In this article, we will discuss when Solid State memory technology is the correct solution to your Microsoft SQL Server performance problems, and specifically how to implement it.

    When is Hardware the Answer?

    Generally, people in the software industry like to look to software for solutions. Many experienced DBAs, this author included, feel a sense of defeat if an application cannot be made to perform suitably within the constraints of the existing hardware configuration. That attitude is silly, of course: you can tune an application only within limits, and no amount of software tuning will overcome a genuine hardware limitation. Sometimes the software itself points to a hardware limitation (there is an example below), but often an experienced DBA simply has to know when to stop tinkering with the software and examine the hardware.

    Tuning is a process of identifying and eliminating bottlenecks. By definition, you will always have some bottleneck that limits your application's bandwidth; successful tuning shifts the bottleneck to a resource with enough bandwidth to handle your data throughput needs. Sometimes, though, a database performance problem simply cannot be resolved through software.

    The traditional reaction to a perceived hardware problem is to buy additional hardware, most commonly memory and/or CPUs. I have been in a variety of shops that had doubled their CPUs (from 4 to 8 processors) or their memory (from 4GB to 8GB) and seen no significant improvement in performance. Often, adding CPUs is a knee-jerk reaction to a perception of insufficient user throughput. More often still, adding memory is seen as the panacea for I/O problems, on the theory that more memory permits more caching, which improves performance. There are a number of reasons why these upgrades can fail to help, but they all boil down to this: if you buy memory or CPUs to solve an I/O problem, you may be wasting your money. (See Mike Pluta's white paper, "The Tragedy of Throwing Memory at an I/O Problem.") To summarize that paper: adding memory may alleviate the symptoms on the surface, but it masks rather than solves the underlying problem.
To make matters worse, whenever the server or operating system decides it needs that memory for something else, your bottleneck shifts suddenly and unpredictably. It may even shift back and forth many times within a short interval, making it difficult to track down. The trick is to solve the specific bottleneck, and to solve it in a way that scales, so that as your processing needs grow you can apply the same (or a comparable) solution to future bottlenecks. In other words, once you identify a problem, you want to solve it, not hide it.

In summary, there are two basic reasons to choose a hardware solution to a performance problem. First, you have specifically identified a hardware bottleneck, in which case you should choose the hardware resource that addresses it (processor, memory, or Solid State memory). Second, you have identified an application problem but determined that you are short on the time or resources to correct it (or that hardware will solve the problem less expensively in the short run).
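Before reaching for any hardware purchase, it helps to confirm that the bottleneck really is I/O. As a rough sketch (assuming SQL Server 2005 or later, where the `sys.dm_os_wait_stats` DMV is available; on SQL Server 2000, `DBCC SQLPERF(WAITSTATS)` reports similar figures), you can rank the server's cumulative wait types and see whether I/O-related waits dominate. The exclusion list below is illustrative, not exhaustive:

```sql
-- Sketch: rank wait types since the last restart (or counter reset)
-- to see whether I/O waits dominate.
SELECT TOP 10
    wait_type,
    wait_time_ms,
    CAST(100.0 * wait_time_ms
         / SUM(wait_time_ms) OVER () AS DECIMAL(5,2)) AS pct_of_total
FROM sys.dm_os_wait_stats
-- Exclude a few benign idle waits (illustrative list only)
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

If waits such as `PAGEIOLATCH_*` (data-page reads) or `WRITELOG` (log flushes) sit at the top of the list, the bottleneck is likely disk I/O rather than CPU or memory pressure, and that is the situation where moving the hot files to Solid State storage pays off; if CPU- or lock-related waits dominate instead, new disks will not help.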

    Continues…
