Microsoft SQL Server & Solid State Accelerators

Cool Options for Maximizing your Hardware Dollar

What happens when your needs change on a predictable basis? For instance, you may have high-volume OLTP processes running from 8 to 5, and huge batch processing in the evening. Your bottleneck shifts twice a day.

The concept of Storage Virtualization in a Storage Area Network (SAN) lets you resolve these shifting bottlenecks almost on demand.

For example, let’s assume you have 5GB of Solid State memory as part of your SAN. During the day, you might place tempdb on the Solid State memory to eliminate your I/O bottleneck. At night, you might re-allocate the Solid State memory to your batch processing, which has grown to the point where it stretches the batch update window beyond its limits.
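
If the SAN is not remapping storage transparently underneath the server, the server-side equivalent is to point tempdb’s files at the Accelerator-backed volume yourself. A minimal sketch, assuming SQL Server 7.0-or-later syntax and a hypothetical S:\ mount point for the Solid State volume:

    -- Move tempdb's data and log files to the Solid State volume.
    -- (S:\ is a hypothetical mount point; tempdev and templog are
    -- tempdb's default logical file names.)
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = N'S:\tempdb\tempdb.mdf');
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = N'S:\tempdb\templog.ldf');
    -- The new locations take effect the next time SQL Server restarts.

The SAN-mirroring approach described next avoids even this step, since the server never sees the files move.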

Through the SAN’s management layer, you mirror tempdb off of the Solid State Accelerator onto disk, then mirror the batch database onto the Accelerator for the evening’s processing. When the batch run completes, you mirror in the other direction, and everything is set for the next business day.

You’ll note that this is all done in such a way as to be completely transparent to the servers and end-users. Suddenly, the Solid State memory becomes a reusable resource: one Accelerator can support multiple environments, with its capacity allocated on demand to boost performance across multiple application environments. Further benefit can be attained by using a larger Solid State Accelerator, dedicating part of it to fixed data and keeping the rest available for hotspot usage.
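
As a sketch of that dedicated-space idea, you could carve a filegroup out of the Accelerator-backed volume and pin your fixed, read-heavy data to it. All database, file, and table names below are hypothetical, and the syntax assumes SQL Server 7.0 or later:

    -- Add a filegroup on the Solid State volume and place a
    -- heavily read lookup table on it.
    ALTER DATABASE Sales ADD FILEGROUP SolidState;
    ALTER DATABASE Sales
        ADD FILE (NAME = sales_ssd,
                  FILENAME = N'S:\sales\sales_ssd.ndf',
                  SIZE = 2048MB)
        TO FILEGROUP SolidState;
    CREATE TABLE dbo.CodeLookup (
        code_id   int         NOT NULL PRIMARY KEY,
        code_desc varchar(80) NOT NULL
    ) ON SolidState;

The rest of the Accelerator stays unallocated, ready to take tempdb or a batch database as the hotspot moves.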

Let’s take a look at a client of mine who had problems reaching 100GB/week throughput on SQL Server 6.5. Their batch processing made very heavy use of tempdb, and because of assorted high-volume processing needs, the SAN option would not work for them.

This client struggled for weeks to tune the system, when the solution couldn’t have been simpler. Based on measured throughput, a small 2GB Solid State Accelerator for tempdb, and a 10GB MegaCache in front of their 400GB production database could have solved 100% of their throughput problems instantly.
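
Measurement is what makes that sizing call possible. On SQL Server 6.5 you would lean on Performance Monitor disk counters; on SQL Server 2005 and later, a quick sketch of the same per-file check looks like this:

    -- Per-file I/O stall times; tempdb files dominating this list
    -- is the signature of the bottleneck described above.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;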

Conclusion

Have you ever had a problem with I/O, tried spreading it over multiple devices, tried rewriting procedures, redesigning your database, and still couldn’t get there? Or perhaps you just barely got there, and know that the problem is going to crop up again? Maybe it is tempdb (a frequent culprit). Maybe, in your environment, your code tables are getting hit so hard by your busy web application that the disk simply can’t keep up.
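
Before buying hardware, it is worth confirming that I/O really is the wall you are hitting. One sketch of such a check (SQL Server 2005 or later; earlier releases exposed similar information through DBCC and Performance Monitor):

    -- PAGEIOLATCH waits accumulate while sessions wait for data
    -- pages to arrive from disk; if these dominate total waits,
    -- the disk genuinely cannot keep up.
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGEIOLATCH%'
    ORDER BY wait_time_ms DESC;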

I’ve watched several (and heard of dozens more) situations where I/O limitations really were the problem, but folks wouldn’t admit it until the application was tuned and retuned. At that point, they bring in Solid State memory and wonder why they didn’t do it sooner. 

Solid State memory is a fast, clean, easy solution to problems that otherwise require time-consuming, expensive-to-maintain resource juggling. Sometimes the quick, easy solution really is cheaper in the long run, once you weigh in development time and ongoing maintenance costs.

There is a very specific set of problems that Solid State memory addresses, but it solves them completely. Make sure that you understand the performance problem you are trying to solve. Once you understand this, you may notice that Solid State memory moves from “interesting idea” to “essential, cost-effective tool for solving specific performance problems” very quickly.

Jeff Garbus is a database consultant, who works occasionally for Imperial Technology, the makers of MegaRam and MegaCache. For more information on Solid State memory, see the Imperial Technology website. This article was reprinted with permission.
