System and Storage Configuration for SQL Server


Memory

The technically correct assessment of memory requirements should weigh disk load against memory size to determine the best cost and system characteristics. Except for very large systems, however, the cost of such an assessment will probably exceed the cost of the memory itself. In most cases, a reasonable memory configuration can be determined from the maximum memory supported by the operating system and SQL Server edition, and from the number of DIMM sockets in the system. The table below shows the common number of DIMM sockets by system type.


Processor   Chipset      Processor Sockets   DIMM Sockets   Max Memory
Opteron     8000         2                   8              16-32GB
Opteron     -            4                   32             32-128GB
Pentium D   E7230        1                   4              8GB
Xeon        E7520        2                   6-8            12-16GB
Xeon MP     E8500/8501   4                   16-32          32-64GB

The table below shows the price of 240-pin DDR2 PC2-3200 registered ECC memory from Crucial (www.crucial.com) as of March 2006.


Capacity   Mar-2006 Price
512MB      $100
1GB        $190
2GB        $1000 / $550

At any given point in time, the high-volume DIMM products will have an essentially linear price-capacity relation, while the low-volume, high-capacity DIMM products will have a much higher price on a per-GB basis. Approximately every two years, the high-end DIMM doubles in capacity. Until recently, the price of the 2GB ECC DIMM was more than quadruple the price of the 1GB ECC DIMM, or more than twice the cost per GB. Just recently, a new 2GB DIMM SKU has appeared at a much more favorable price relative to the 1GB DIMM; the original 2GB DIMM retains the unfavorable price. It will probably be another two years before the 4GB ECC DIMM becomes favorably priced relative to the 2GB DIMM.
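The price-capacity relation above reduces to simple cost-per-GB arithmetic. A minimal sketch using the March 2006 table prices (figures as quoted above, purely illustrative):

```python
def cost_per_gb(price_usd, capacity_gb):
    """Dollars per GB of a DIMM, for comparing SKUs of different capacities."""
    return price_usd / capacity_gb

# Mar-2006 prices from the table above
print(cost_per_gb(190, 1))    # 1GB DIMM: $190/GB
print(cost_per_gb(1000, 2))   # original 2GB SKU: $500/GB, over 2.5x the 1GB rate
print(cost_per_gb(550, 2))    # new 2GB SKU: $275/GB, much closer to parity
```

The same calculation shows why the 4GB DIMM is not yet interesting: until its per-GB figure approaches that of the 2GB SKU, filling the sockets with the smaller module is cheaper.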

In most cases it is simpler to fill the system with the largest capacity commodity DIMM, currently a 1GB or 2GB module, than to conduct a detailed analysis, except when otherwise limited by OS or SQL Server edition. If it is known that disk IO is sensitive to memory and the cost of the storage system is high, then the large capacity DIMM can make sense.

Storage System

The biggest and most frequent mistake people make with regard to the storage system is treating capacity as the primary requirement. Storage system vendors should know better, but tend to propagate this mistake. The storage system needs to be sized for both performance and capacity. Storage performance is further distinguished in terms of random IO capability and sequential transfer bandwidth. Given the capacity of individual disk drives today, it turns out that once the storage performance requirements are met, capacity is essentially a non-issue.

In most cases, one should buy the lowest sensible capacity 10K or 15K disk drives. Today, the sensible capacities are the 73GB 10K and the 36GB 15K drives. Both are typically $200 per disk when purchased as bare drives. Prices from system vendors can be 50% higher, and the cost per drive in a SAN system can exceed $2,000. When the cost of the next capacity step (146GB 10K and 73GB 15K) is less than 30% higher, those become the lowest sensible capacity drives. The reason one should not wait for price parity is that a larger drive can be short-stroked for better random IO performance at a fixed data size. In terms of cost per IOP, the 10K and 15K drives are reasonably comparable. The storage configuration with the 10K drives will have higher capacity, while the 15K version will have lower latency in random IO operations.
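The cost-per-IOP comparability of the two drive classes can be sketched with the same style of arithmetic. The per-spindle IOPS figures below are assumed, era-typical values for illustration, not numbers from the article:

```python
def dollars_per_iop(drive_price_usd, random_iops):
    """Cost per small-block random IO per second for a single spindle."""
    return drive_price_usd / random_iops

# $200 bare-drive price from above; ~125 IOPS (10K) and ~175 IOPS (15K)
# are assumed, era-typical small-block random IO rates.
print(round(dollars_per_iop(200, 125), 2))  # 10K drive: ~$1.60 per IOP
print(round(dollars_per_iop(200, 175), 2))  # 15K drive: ~$1.14 per IOP
```

At SAN pricing ($2,000+ per drive) the per-IOP cost scales up tenfold, which is why sizing by spindle count rather than capacity matters even more there.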

As mentioned in the previous section, the technically correct assessment of storage performance should consider the memory versus disk system performance characteristics. Without knowing the actual application characteristics and needs, the following can be used as a minimum baseline configuration. Most transactional databases will perform both transactions and some amount of reporting, even if replication is used to move many reports to a replicated database.

Reporting queries will frequently involve either large table scans or sizeable index range scans touching a large number of rows. If these operations need to go to disk for data, the storage system could be saturated, effectively shutting down the system's transaction processing capability. One reason is that the report query may generate a large number of disk read-ahead calls, which leads to a high queue depth in the disk system. Normal transactions may generate only a handful of disk reads, probably without read-aheads. So even if the intended priority of the transaction is higher than that of the report, in effect the storage system has been prioritized to favor the report IO requests: transaction-driven disk reads will sit behind a long line of report-driven disk reads.

Now it turns out SQL Server 2000 has limited ability to read from disk in a table scan operation. Without hints, SQL Server 2000 table scan operation in most systems will generate between 300-400MB/sec in disk reads. With the NOLOCK hint, the table scan might generate between 700-800MB/sec in disk traffic. If the configured storage system has less sequential capacity than this, a table scan can saturate the disk system, and cause a substantial degradation in transaction driven disk IO. If the storage system can handle both the sequential table scan and transaction random IO load, then there may be little to no degradation in transaction throughput, depending on the availability of CPU and other resources.

SQL Server 2005 has no such limitation in sequential disk operations. The SQL Server 2005 table scan operation on the HP Superdome, with dozens of dual-channel U320 SCSI RAID controllers and 3-5 disks on each channel, can drive over 12GB/sec in disk traffic. So it may not be possible or practical to configure the storage system to exceed the capability of a SQL Server 2005 table scan. On the positive side, it will also be possible to complete a report query that much quicker, thus minimizing the period during which transactions are disrupted. An infrequent 2-3 sec disruption should be barely noticeable. This cautions against setting arbitrary Service Level Agreements on response time without a comprehensive and detailed understanding of database and system performance characteristics, especially if the user does not perceive a meaningful impact. It is always better to specify a high percentage of responses under a reasonable time and allow for a small percentage at a longer interval, unless true absolute requirements mandate otherwise.
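The 2-3 sec disruption scale follows directly from scan bandwidth. A sketch, where the 30GB table size is an assumed example value rather than a figure from the article:

```python
def scan_seconds(table_gb, bandwidth_gb_per_sec):
    """Approximate duration of a pure sequential table scan."""
    return table_gb / bandwidth_gb_per_sec

# A hypothetical 30GB table at the ~12GB/sec cited for the Superdome setup
print(scan_seconds(30, 12.0))  # 2.5 seconds
```

The same table at SQL Server 2000's ~400MB/sec scan rate would hold the disk queues for over a minute, which is the disruption scenario described above.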

The storage performance goals are then as follows. Sequential bandwidth for the SQL Server main database data files should exceed 400-800MB/sec, depending on requirements. The recommended minimum configuration to meet this goal is described for SCSI, SAS and FC storage systems.

Direct Attach Storage on U320 SCSI Technology

  • 2-8 disk drives in the system internal drive bays on the embedded SCSI controller.
  • 1-4 external enclosures (14-15 bays) with dual-channel SCSI.
  • 4-7 disk drives in each channel of the external storage unit, for a total of 8-14 drives per unit.
  • 1 PCI-X or PCI-Express dual channel RAID controller per storage unit.

The rationale for using the internal drive bays is that the bays are already paid for, along with an embedded controller. The bare-drive cost is low compared with the fully amortized external storage cost, which covers the disk drive, the external enclosure, and the controller. Even in a clustered environment where the clustered resources must be shared, it never hurts to have extra bandwidth on a non-shared resource.

The rationale for external storage is as follows. Each U320 SCSI channel, with a nominal bandwidth of 320MB/sec, can in fact sustain approximately 240-260MB/sec in large-block sequential disk transfers. It takes three current-generation disk drives to saturate a single U320 channel. It is not always possible to use the fastest portion of each disk drive, or to achieve purely sequential transfer, and the external enclosures typically have a minimum of 7 bays per channel. For this combination of reasons, anywhere from 4-7 disks per SCSI channel is practical. External 3U SCSI storage enclosures are available with single or dual SCSI channels. Configuring more than 7 disks per channel wastes sequential bandwidth capability, so avoid single-channel enclosures unless a very high spindle count is required that would exceed the capacity of 7 disks per channel. Obsolete technologies tend to persist in storage products, so avoid the U160 enclosures.
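The disks-per-channel and controller counts follow from simple bandwidth division. A sketch, assuming an ~80MB/sec per-drive outer-track sequential rate (an era-typical figure, not stated in the article):

```python
import math

def drives_to_saturate(channel_mb_s, drive_mb_s):
    """Spindles needed before a SCSI channel becomes the bottleneck."""
    return math.ceil(channel_mb_s / drive_mb_s)

def channels_for_goal(goal_mb_s, channel_mb_s):
    """U320 channels needed to meet a sequential bandwidth target."""
    return math.ceil(goal_mb_s / channel_mb_s)

print(drives_to_saturate(240, 80))   # 3 drives per channel, as stated above
print(channels_for_goal(800, 240))   # 4 channels, i.e. 2 dual-channel controllers
```

Running the second calculation against the 400-800MB/sec goal from the previous section gives 2-4 channels, which is why the recommended configuration lists 1-4 external enclosures, each on its own dual-channel RAID controller.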
