When classifying the nature of I/O, two main terms are used: OLTP and OLAP. An example of an OLTP (Online Transaction Processing) database is one that stores data for a point of sale application, typically consisting of a high percentage of simple, short transactions from a large number of users. Such transactions generate what’s referred to as Random I/O, where the physical disks spend a measurable percentage of time seeking data from different parts of the disk for read or write purposes.
In contrast, an OLAP (Online Analytical Processing) or DSS (Decision Support System) database is one that stores data for reporting applications, which tend to have a smaller number of users generating much larger queries. These queries typically result in Sequential I/O, where the physical disks spend most of their time scanning a range of data clustered together in the same part of the disk. Unlike OLTP databases, OLAP databases have a much higher percentage of read activity.
It’s important to note that even for classic OLTP applications such as point of sale systems, actions such as backups and database consistency checks will still generate large amounts of Sequential I/O. For the purposes of I/O workload classification, we take into consideration the main I/O pattern only.
As we’ll see a little later, the difference between sequential and random I/O has an important bearing on the storage system design.
In order to design a storage system for a database application, as well as knowing the type of workload it produces (OLTP vs. OLAP), we also need to know the volume of workload, typically measured as the number of disk reads and writes per second. The process of obtaining or deriving these figures is determined by the state of the application. If the application is an existing production system, the figures can be easily obtained using Windows Performance Monitor. Alternatively, if the system is yet to be commissioned, estimates are derived using various methods, a common one being profiling the reads and writes per transaction type and then multiplying by the expected number of transactions per second for each type.

Existing Systems
For the purposes of this section, let’s assume that disk I/O has been identified as a significant bottleneck and we need to redesign the storage system to correct it. The task, then, is to collect I/O metrics to assist in this process.
The Windows Performance Monitor tool can be used to collect, among others, the disk I/O metrics that we need. For each logical disk volume, i.e., each drive letter corresponding to a data or log drive, the average values of the following counters should be collected:
- PhysicalDisk : Disk Reads/sec
- PhysicalDisk : Disk Writes/sec
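As a rough sketch of how collected samples might be averaged, the snippet below parses a counter log exported to CSV (for example from a Performance Monitor data collector set or the typeperf utility) and computes each counter’s average. The column layout and sample values here are hypothetical, not output from any particular system:

```python
import csv
import io

# Hypothetical CSV export of the two counters above. The timestamps
# and values are illustrative only.
SAMPLE_CSV = """timestamp,Disk Reads/sec,Disk Writes/sec
09:00:00,820,410
09:00:15,760,455
09:00:30,905,390
"""

def average_counters(csv_text):
    """Return the average value of each counter column across all samples."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    counters = [name for name in rows[0] if name != "timestamp"]
    return {
        name: sum(float(row[name]) for row in rows) / len(rows)
        for name in counters
    }

averages = average_counters(SAMPLE_CSV)
print(averages)
```

In practice the collection window should span the application’s peak periods, so the averages reflect the load the storage system actually needs to sustain.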
For a system not yet in production, application I/O is estimated per transaction type in an isolated test environment. Projections are then made based on the estimated maximum number of expected transactions per second, with an appropriate adjustment for future growth. Armed with these metrics, let’s proceed to the next section, where we’ll use them to project the number of disks and controllers required to design a high-performance storage system capable of handling the application load.

Determining the Required Number of Disks & Controllers
In the previous section we covered the process of measuring, or estimating, the number of database disk reads and writes generated by an application per second. In this section we’ll cover the formulas used to estimate the number of disks and controllers required to design a storage system capable of handling the expected application I/O load. Note that the calculations presented in this section are geared toward Direct Attached Storage solutions using traditional RAID. Configuring SAN-based Virtualized RAID (V-RAID) storage is a specialist skill, and one that differs between SAN solutions and vendors; as such, the calculations presented here should be used as a rough guideline only.
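The overall arithmetic can be sketched in a few lines: project peak reads and writes per second from the per-transaction profiles, inflate writes by the RAID write penalty, and divide by a per-disk IOPS rating. In the sketch below, the transaction profiles, transaction rates, growth factor, and 125 IOPS per-disk figure are all illustrative assumptions; the RAID write penalties are standard rules of thumb, and real figures should come from your own measurements and vendor specifications:

```python
import math

# Common RAID write penalties: each logical write generates this many
# physical disk writes (standard rules of thumb).
WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2, "RAID 5": 4}

def projected_iops(profiles, peak_tps, growth=1.0):
    """Project peak reads/sec and writes/sec from per-transaction profiles.

    profiles maps transaction type -> (reads per txn, writes per txn);
    peak_tps maps transaction type -> expected peak transactions/sec;
    growth is a multiplier providing headroom for future growth.
    """
    reads = sum(profiles[t][0] * peak_tps[t] for t in peak_tps)
    writes = sum(profiles[t][1] * peak_tps[t] for t in peak_tps)
    return reads * growth, writes * growth

def disks_required(reads_sec, writes_sec, raid_level, iops_per_disk=125):
    """Estimate the number of disks needed to service the workload.

    iops_per_disk defaults to 125, a rough random-I/O figure for a
    10,000 RPM drive (an assumption; check vendor specifications).
    """
    required_iops = reads_sec + writes_sec * WRITE_PENALTY[raid_level]
    return math.ceil(required_iops / iops_per_disk)

# Hypothetical point of sale workload profiled in a test environment.
profiles = {"new_sale": (4, 6), "stock_lookup": (2, 0), "refund": (5, 7)}
peak_tps = {"new_sale": 50, "stock_lookup": 120, "refund": 5}

reads, writes = projected_iops(profiles, peak_tps, growth=1.5)
print(disks_required(reads, writes, "RAID 10"))
```

Note that controller requirements are typically assessed against throughput (MB/sec) rather than IOPS, which matters most for the sequential scans of OLAP workloads.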