Large Data Operations in SQL Server

The articles in the Quantitative Performance Analysis series examined the internal cost formulas used by the SQL Server query optimizer in determining the execution plan, and the actual query costs for in-memory configurations. The previous articles restricted the actual query cost analysis to in-memory cases, where all necessary data fit in memory so that performance was not limited by the disk configuration. This article provides a first look at queries involving data sets large compared to available memory, and at write operations where performance is limited by the disk configuration.

A case is presented in this article showing that the SQL Server internal cost formulas for I/O operations are based on disk access times. The I/O cost values were fixed several years ago. They do not accurately reflect current generation disk drive performance, nor do they reflect the actual cost of in-memory operations. In addition, the SQL Server execution cost model does not properly account for I/O costs on Insert, Update, and Delete (IUD) operations. For small row count operations, the discrepancy between the internal SQL Server cost model and the actual query costs does not significantly affect the efficiency of the execution plan. On large row count operations, the default plan can be substantially slower than an execution plan that accurately accounts for the number of I/O operations involved.



Execution Plan Cost Formula Review

First, a review of the SQL Server execution plan cost formulas discussed in the Quantitative Performance Analysis articles is helpful. The Index Seek operation cost formula is shown below. The I/O cost depends on the amount of physical memory.

I/O Cost = 0.006328500 + 0.000740741 per additional page (≤1GB)
         = 0.003203425 + 0.000740741 per additional page (>1GB)

CPU Cost = 0.000079600 + 0.000001100 per additional row
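Assuming these formulas hold as stated, the Index Seek cost can be sketched as a small function. The constants and the 1GB memory threshold are taken from the figures above; the function name and parameterization are my own:

```python
def index_seek_cost(pages, rows, memory_gb):
    """Estimated Index Seek plan cost per the formulas above.

    pages: pages read (first page is covered by the base I/O cost)
    rows: rows returned (first row is covered by the base CPU cost)
    memory_gb: physical memory in GB; the base I/O cost drops above 1GB
    """
    io_base = 0.006328500 if memory_gb <= 1 else 0.003203425
    io_cost = io_base + 0.000740741 * (pages - 1)
    cpu_cost = 0.000079600 + 0.000001100 * (rows - 1)
    return io_cost + cpu_cost
```

For a single-page, single-row seek on a 1GB system this yields 0.0064081, the sum of the two base costs.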

The cost formula for multiple Bookmark Lookup operations follows the formula below.

I/O Cost = multiple of 0.006250000 (≤1GB)
         = multiple of 0.003124925 (>1GB)

CPU Cost = 0.0000011 per row

The Bookmark Lookup multiple is not the exact multiple of the estimated row count, but a fraction thereof, typically >95% of the estimated rows for large row counts. For both the Index Seek and Bookmark Lookup operations, the I/O base cost depends on the system memory configuration. For memory up to and including 1GB, the Index Seek I/O base cost is 0.0063285. Above 1GB, the I/O base cost is 0.003203425. All other component operation I/O costs do not depend on system memory (unless an index seek is implicitly involved). The Table Scan cost formula below also applies to clustered index scans and nonclustered index scans.

I/O Cost = 0.0375785 + 0.000740741 per additional page

CPU Cost = 0.0000785 + 0.0000011 per row
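These formulas allow a rough estimate of where the optimizer should cross over from a nonclustered index seek with bookmark lookups to a table scan. The sketch below uses the constants from above for the >1GB case and simplifies by charging one lookup I/O per row, whereas the article notes the true multiple is typically >95% of the estimated rows; the function names are my own:

```python
def bookmark_lookup_cost(rows, memory_gb):
    # Simplification: one I/O per row (the actual plan multiple is a
    # fraction of the row count, typically >95% for large row counts)
    io_base = 0.006250000 if memory_gb <= 1 else 0.003124925
    return io_base * rows + 0.0000011 * rows

def table_scan_cost(pages, rows):
    io_cost = 0.0375785 + 0.000740741 * (pages - 1)
    cpu_cost = 0.0000785 + 0.0000011 * rows
    return io_cost + cpu_cost
```

For a hypothetical 10,000-page, 1,000,000-row table on a >1GB system, the scan cost is fixed near 8.5 cost units, so the lookup plan wins only below roughly 2,700 rows, a small fraction of the table. This illustrates why the crossover point matters for large row count operations.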

For all table modification operations, Insert, Update, and Delete (IUD), the I/O cost approximately follows the formulas below; the CPU cost is exact. The total cost of an Update or Delete operation includes the cost of either an index seek or a table scan, but that cost may not be incorporated in the IUD component's own I/O or CPU cost.

IUD I/O Cost ~ 0.01002 – 0.01010 (>100 rows)

IUD CPU Cost = 0.000001 per row

An interesting point is that the IUD I/O cost is fixed, regardless of the number of rows and pages involved. Contrast this to the Index Seek, Bookmark Lookup and Table Scan I/O costs, which depend on the number of pages involved.
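The fixed IUD I/O cost can be illustrated directly. In the sketch below (a representative value from the 0.01002 to 0.01010 range above; the function name is my own), only the small per-row CPU term grows with the row count, so the plan cost of a million-row delete barely differs from that of a hundred-row delete, even though the actual I/O differs enormously:

```python
def iud_cost(rows):
    io_cost = 0.01005          # fixed, regardless of rows and pages touched
    cpu_cost = 0.000001 * rows # the only component that scales with rows
    return io_cost + cpu_cost
```

The difference between `iud_cost(1_000_000)` and `iud_cost(100)` is purely the CPU term; the I/O component contributes nothing to the gap, which is the modeling flaw discussed above.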

There appears to be no documentation from Microsoft on the units of measure for the SQL Server execution plan cost values. The unit of measure could either be some measure of time or CPU utilization. The article Parallel Execution Plans demonstrated that the unit of measure is most probably time rather than CPU utilization. The primary argument for this deduction is that an execution plan with parallel component operations has a lower cost than the equivalent plan with non-parallel operations. A logical interpretation is that a parallel operation on two processors should complete the step in approximately one-half the time, plus some overhead for combining the results from each processor. If the unit of measure for execution plan cost value were CPU utilization, the parallel operation should show a higher cost on the assumption that splitting a task between two processors does not reduce the sum total CPU-cycles expended and some extra overhead is required to combine the results.

Following the assumption that the SQL Server execution plan cost values are in units of time, the next probable deduction is that the time unit is seconds. The cost values do not depend on the system processor type, and the cost values have not changed from SQL Server 7.0 to 2000. It can be assumed that the cost values were calibrated on some system that was available while SQL Server 7.0 was in development, sometime between 1995 and 1997. A reasonable assumption for that period is that the reference environment on which the SQL Server cost values were calibrated was a database with data size much larger than the available system memory. In that case, SQL operations would frequently involve disk I/O.

On a current generation 15K RPM disk drive, the Index Seek I/O cost of 0.0063285 seconds (6.33ms) is a reasonable random disk access time. However, it is excessively low for the 7200 RPM disk drives available in the mid-1990s. It is possible that a 50% buffer cache hit ratio was factored into the Index Seek and Bookmark Lookup I/O costs for the ≤1GB configuration, and 75% for the >1GB configuration. There is no meaningful liability in using only two discrete I/O cost values as opposed to a continuous range. The additional I/O cost of 0.0007407 per page can be interpreted as the sequential I/O time (0.74ms), implying a disk drive sequential transfer rate of 1350 IO/sec at 8KB each, or 169 IO/sec at 64KB.
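The implied transfer rates follow directly from the per-page cost. A quick check of the arithmetic, treating the cost unit as seconds and using the standard SQL Server sizes of 8KB per page and 64KB (8 pages) per extent:

```python
per_page_cost = 0.000740741             # seconds per additional page

ios_per_sec_8k = 1 / per_page_cost      # ~1350 sequential 8KB pages/sec
ios_per_sec_64k = ios_per_sec_8k / 8    # ~169 64KB extents/sec
mb_per_sec = ios_per_sec_8k * 8 / 1024  # ~10.5 MB/sec implied transfer rate
```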

However, absolute system memory is not a reliable indicator of buffer cache hit ratio. A better strategy for adjusting I/O cost might be to use the recent buffer cache hit ratio history. For large row count operations, it is also worthwhile to determine how much of the required data is already in memory.




