SQL Server Backup Performance with Imceda LiteSpeed

Introduction

For the last several weeks, I have been working with Imceda (now Quest Software) on improving the performance of LiteSpeed for SQL Server, particularly for high-end storage systems. This report represents my assessment of LiteSpeed performance characteristics. I am an independent consultant, so this report does not represent the views or claims of Quest. All of the tests were conducted using publicly available tools and can be reproduced with reasonable accuracy.

SQL Server Backups

It is not uncommon today to find 100GB-1TB databases in mid-size companies, and multi-TB databases in large IT shops. An important requirement for running large databases is fast backup and recovery capability. This leaves time for other maintenance operations and keeps the overall maintenance window reasonably short. Curiously, there is very little meaningful public information on configuring the storage system for backup performance, even though recent-generation systems are capable of very high disk I/O performance.

The key to fast backup performance is configuring for a high disk transfer rate. This is best accomplished by backing up to local disks first, rather than directly to tape or a network location. The native SQL Server backup command by itself can achieve very high backup speed. The difficulty is that this approach requires the ability to write to the backup location as fast as the data is read from the active database. Everyone should know the importance of spreading the database data files across as many disk spindles as necessary to achieve high transaction processing throughput and quick response times. So every disk allocated to the backup location is a disk taken away from the data files. To make matters worse, more disks are required to support a given write rate than the same read rate, because of the write overhead for fault tolerance (RAID 5, RAID 10, or other).
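As a concrete sketch of writing to the backup location as fast as the data is read, a native backup can be striped across several files, each on a separate backup volume, so the writes are spread over all backup spindles. The database name and paths below are hypothetical:

```sql
-- Stripe the native backup across four files on four separate
-- backup volumes; SQL Server writes to all of them in parallel.
BACKUP DATABASE BigDB
TO DISK = N'E:\backup\BigDB_1.bak',
   DISK = N'F:\backup\BigDB_2.bak',
   DISK = N'G:\backup\BigDB_3.bak',
   DISK = N'H:\backup\BigDB_4.bak'
WITH INIT, STATS = 10;
```

With four 50MB/sec backup disks, this arrangement can absorb the 200MB/sec read rate of the four-disk data configuration described below, before any fault-tolerance write overhead is considered.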

Figure 1 shows the optimal disk configuration for a native SQL Server backup (not accounting for the fault-tolerance write overhead). If each disk can support 50MB/sec, then data can be read at 200MB/sec and written at 200MB/sec. With the fault-tolerance overhead, more disks are required for the backup.

Figure 1. Optimal disk configuration for SQL Server native backup.

This is where Quest LiteSpeed is essential. By using available CPU cycles to compress data during the backup, less data needs to be written, allowing the majority of disks to be allocated to the data files, which also happens to be the best configuration for supporting database query performance.
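A LiteSpeed compressed backup is invoked through an extended stored procedure rather than the native BACKUP command. The sketch below assumes the xp_backup_database procedure and parameter names used by LiteSpeed 4.x; the database name and path are hypothetical, so treat this as an illustration rather than a definitive invocation:

```sql
-- Compressed backup via LiteSpeed's extended stored procedure.
-- @compressionlevel trades CPU cycles for a smaller, faster-to-write
-- backup file (higher values compress harder).
EXEC master.dbo.xp_backup_database
    @database = N'BigDB',
    @filename = N'E:\backup\BigDB.bkp',
    @compressionlevel = 1;
```

Because the output is compressed before it is written, the backup volume only has to sustain a fraction of the database read rate, which is the effect Figure 2 quantifies.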

Figure 2 shows the optimal disk configuration for a LiteSpeed backup when the data compression ratio is 3:1. There are still eight disks (excluding log requirements), as in the previous configuration. Six disks are allocated to the database data files, supporting a 300MB/sec read rate. Since the data compression ratio is 3:1, the backup storage only needs to support a 100MB/sec write rate.

Figure 2. Disk configuration for LiteSpeed backup.

The important point is that the majority of disks are allocated to the database data files and high backup speed is still achieved. Note that a native backup in this configuration would run at only 100MB/sec, being limited by the ability to write to the backup location.

Test Configuration

A simplified system configuration is shown in Figure 3; additional details are listed in Table 1. Each HSV110 controller has 2 FC ports to hosts (the database server) and 4 FC ports to storage enclosures. The redundant paths to each storage enclosure are not shown. A single HP EVA 5000 with 2 HSV110 controllers can connect a maximum of 240 disk drives. However, the maximum disk configuration does not provide the best sequential transfer performance on a per-disk basis. The configuration shown uses four HSV110 controllers connected to eight FC disk enclosures in total, each enclosure holding 14 disk drives.

With 8 LUNs, each spread across 14 disks in RAID 0, this configuration was able to reach a sequential read rate of 1,340MB/sec (335MB/sec per HSV110 controller). In RAID 10, the read rate was over 1,100MB/sec. The maximum write rate observed in both cases was 500MB/sec. The best overall transfer rate per disk should be achieved with two to four drive enclosures per HSV110 controller.

Windows Server 2003 Datacenter Edition was tested both with pre-SP1 hot-fixes and with SP1. SQL Server 2000 Enterprise Edition was tested with SP3 (build 818) and SP4. The SQL Server 2005 June CTP was also tested. The backup performance results did not show a measurable difference across OS or SQL Server versions. The version of LiteSpeed tested (4.0.9.7) is a maintenance build with performance enhancements for writing data with low compressibility and for writing the backup to multiple slower devices in parallel instead of a single fast device.
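The multiple-slower-devices scenario that this maintenance build targets can be sketched as follows. Repeating the @filename parameter stripes the compressed output across several backup volumes; the parameter usage follows LiteSpeed's striping convention as I understand it, and the paths and database name are hypothetical:

```sql
-- Stripe a LiteSpeed compressed backup across two slower backup
-- volumes instead of writing to a single fast device.
EXEC master.dbo.xp_backup_database
    @database = N'BigDB',
    @filename = N'E:\backup\BigDB_1.bkp',
    @filename = N'F:\backup\BigDB_2.bkp',
    @compressionlevel = 1;
```

Each stripe only has to sustain part of the compressed write rate, which suits configurations where no single backup volume is fast enough on its own.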

Figure 3. Server system configuration, simplified.

Continues…
