Microsoft SQL Server & Solid State Accelerators

Identifying Your Hardware Bottleneck

The easiest way to understand your hardware problem is to use the tools provided by Microsoft, specifically Performance Monitor. With Performance Monitor, you can identify issues within a set of performance objects. The specific contents of the performance objects will differ based on the version of NT/Windows you are running.

Regardless of your operating system version, you will want to monitor your physical disks. Note that monitoring the disks creates some system overhead. Historically, this has been an increase of 3-5% in CPU usage, but with later versions of NT/Windows 2000 this monitoring seems to have less impact. If you haven't done so already, you will have to turn the disk counters on: at the command prompt, type "diskperf -y", press ENTER, and then reboot.

Each disk on your system will have a separate disk object, which lets you see how frequently each disk is being accessed. For each physical disk object, you can look at % Disk Time, which will tell you how busy that disk is. Note that you may become hardware bound before you hit 100% busy.

If you are seeing a line across the top of the Performance Monitor chart, rather than along the bottom, you have an I/O device that is not keeping up with the system requests. In general, if you are regularly above 70-75% utilization, you do not have sufficient capacity to handle peak data surges.

Additionally, check out your Avg. Disk Queue Length. This should always be less than 1.0 and usually is zero. As this average queue length number increases, disk contention increases. Contention means that the system is waiting while another I/O is accessing the disk. Occasional waits are normal, but if this is a steady condition, you have an I/O bottleneck. [Editor’s Note: SQL-Server-Performance.Com recommends that the Average Disk Queue Length be less than 2.0 for best disk array performance.]

You can look at other indicators, but these should be sufficient to identify the problem.
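One additional indicator can be gathered from inside SQL Server itself. As a sketch, assuming you are on SQL Server 2000, the fn_virtualfilestats system function reports cumulative I/O counts and stall times per database file (the database ID 2 and file ID 1 below, which point at tempdb's first file, are examples you would replace with your own):

```sql
-- Cumulative I/O statistics for tempdb's first data file (dbid 2, fileid 1).
-- IoStallMS is the total time, in milliseconds, that users waited on I/O
-- against this file; a large, fast-growing value points at a bottleneck.
SELECT NumberReads,
       NumberWrites,
       BytesRead,
       BytesWritten,
       IoStallMS
FROM ::fn_virtualfilestats(2, 1)
```

Comparing IoStallMS across your databases' files (sp_helpfile lists the file IDs within the current database) shows which files are waiting on disk the most, and so which would benefit most from faster storage.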

Solving the Specific Problem

Once you have determined that you have an I/O bottleneck that needs to be solved, you need to decide on the best approach. It would be easy to replace the entire disk subsystem with a Solid State Accelerator, but this may not be needed. Typically, just a portion of the data on the disk would need to be placed on Solid State memory in order to provide significant performance benefit. The key is identifying the right subsystem and files to place on the Solid State Accelerator that will give your stressed system the I/O relief it needs.

Equally important is determining the amount of Solid State memory you need to achieve the desired result, since Solid State memory directly replaces conventional rotating disk capacity. You want to choose the most active files to place on the Solid State Accelerator, of course, to take the strain off of the disks where the I/O bottlenecks are the worst. Good examples of files to put on Solid State include:

  • Tempdb, a very frequent bottleneck for complex online applications as well as decision support applications that use aggregate functions (and hence, tempdb). Tip: You may have 5 GB of tempdb but use only 1-2 GB except during the busiest times; in this case, you can configure the first device fragments of tempdb to exist on the Solid State and overflow onto other areas. We recommend that you do not use the "autogrow" feature here, as it will have a negative performance impact just when your system is the busiest.

  • Transaction logs of very busy (write-intensive) databases. The same tip applies here: you do not need to do this for your entire transaction log area; placing just the front of it on Solid State can be very effective.

  • Databases that are hit very heavily, which might contain catalog information, lookup tables, or other data that is accessed constantly but is perhaps too small to be worth spreading out, are also excellent candidates. Note that while SQL Server’s data cache may work well with this, all it might take is a large table scan to flush all of the useful, accumulated pages from memory.
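As a concrete sketch of the placement advice above, assume SQL Server 2000 and assume the Solid State Accelerator is presented to Windows as drive S: (the drive letters, sizes, and the "Sales" database below are hypothetical). On SQL Server 2000, tempdb can be moved with ALTER DATABASE, while a user database's files are relocated by detaching and re-attaching:

```sql
-- Move tempdb's primary data file onto the Solid State volume
-- (takes effect at the next SQL Server restart), and disable
-- autogrow on it, per the tip above.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'S:\tempdb.mdf')

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILEGROWTH = 0)

-- For a busy user database, relocate the transaction log by
-- detaching, moving the .ldf file to S: at the OS level, and
-- re-attaching with the new path.
EXEC sp_detach_db 'Sales'
-- (copy D:\logs\Sales_log.ldf to S:\Sales_log.ldf, then:)
EXEC sp_attach_db 'Sales',
     'D:\data\Sales.mdf',
     'S:\Sales_log.ldf'
```

Because the tempdb change only takes effect at restart, schedule it for a maintenance window, and verify the new file locations afterward with sp_helpfile.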

High Availability Solid State Accelerators have built-in UPS systems that make them non-volatile. In the unlikely event of a power fluctuation, the built-in UPS powers the system and the built-in hard disk drive is used to back up the contents of the Solid State memory. When stable power is re-established, user data from the disk drive is reloaded to the Solid State memory.
