SQL Server Performance

Storage Performance Reference Architecture

Discussion in 'SQL Server 2005 Performance Tuning for Hardware' started by joechang, Sep 27, 2006.

  1. joechang New Member

    I will expand on the matter of file placement in more detail over time.

    A brief word first on the separation of data and logs.
    This advice is frequently followed without understanding
    the underlying reason.
    Some people take it to mean that all data files should be placed
    on one group of disks,
    and all log files on another array.
    In fact, the underlying reason for separating data and
    logs is the radically different performance characteristics
    of a disk drive handling random
    versus sequential IO.
    A disk drive may be capable of 150 random IOPS
    (for 8KB IO, this is 1.2MB/sec) at 5-8ms latency.
    Disk sequential performance can be 3000-5000 small IOPS
    with bandwidth over 70MB/sec, all at very low latency, 0.3ms or less.
    Access to data files is assumed to be random,
    which is reasonably true for transaction processing.
    Log disk activity is normally sequential writes,
    except during log backups.
    A single pair of drives in RAID 1 can handle the
    log write load for a very high volume transactional database.
    If the log is mixed with data, the log writes become random:
    3000 IOPS would then require 20 disks (at ~150 IOPS each) plus RAID overhead.
    Furthermore, latency increases from 0.3ms to over 5ms,
    so it may not even be possible to support the required
    transaction characteristics.

    The implication is that each active database log
    needs to be placed on its own dedicated physical disks
    (e.g., a RAID 1 pair), not shared with any other load;
    i.e., do not put multiple active logs on the same disks.
    Now to the definition of active.
    If the server has many databases,
    and several of these databases generate fewer than 100 transactions/sec,
    it is a better use of disk performance resources to place
    the logs of these low-activity databases on a common pool.
    For databases that generate more than 200 tx/sec (high activity),
    each log should be on dedicated disks.
    For databases in between, either choice works.
    The dividing line (or zone) is determined by the random write IO
    performance of disks.
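
    To make this concrete, a minimal sketch of the placement (the drive
    letters and sizes are hypothetical: L: a dedicated RAID 1 pair for
    the log, D: a partition on the shared data arrays):

    -- minimal sketch: log on its own RAID 1 pair, data on the shared pool
    CREATE DATABASE HighVolumeDB
    ON PRIMARY
    ( NAME = HighVolumeDB_data, FILENAME = 'D:\SQLData\HighVolumeDB.mdf', SIZE = 10240MB )
    LOG ON
    ( NAME = HighVolumeDB_log, FILENAME = 'L:\SQLLog\HighVolumeDB.ldf', SIZE = 4096MB )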

    Data files, on the other hand, can be shared.
    Since it is difficult to predict load,
    data for all filegroups and all databases can be spread
    across a common set of physical drives,
    so each can draw on the full performance
    of the entire storage system at any given instant.
    An exception is between OLTP and DW.
    OLTP desires low latency,
    so disks should be operated at a queue depth
    below 2 per physical disk, or even 1 or less.
    The goal for DW is maximum throughput,
    so pushing disk queue depth to 32 is not bad.
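
    To see which side of the latency goal a file is on, the average stall
    per IO can be computed from fn_virtualfilestats (a quick sketch that
    works on SQL 2000 & 2005; note SQL 2000 does not accept db_id()
    directly inside the function, hence the variable):

    -- average stall (ms) per IO for each file of the current database
    DECLARE @dbid int
    SELECT @dbid = db_id()
    SELECT DbId, FileId
    , NumberReads + NumberWrites AS [IO count]
    , IOStallMS/(NumberReads + NumberWrites + 1.0) AS [Avg ms/IO]
    FROM ::fn_virtualfilestats(@dbid, -1)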

    ----------------------------------------------------------------------------------------
    Suppose your storage system has internal disks for the operating system,
    and 4 racks of external storage, each with 12-15 disks.
    The OS, page and program files should always be on internal drives,
    and all can be on 1 pair of disks in RAID 1.
    A note: because of the Windows Server 2003 uninstall handling,
    the C: partition today should probably be,
    at minimum, the full 33GB of a 36GB disk.

    For a clustered system, all database files need to be on external storage.
    For nonclustered systems, database files can be on internal storage.
    I like the ProLiant ML370G5, which can hold 16 internal SFF SAS drives,
    and the Dell PowerEdge 2900,
    which can hold 10 internal 3.5in drives,
    plus 2 more in an optional bay,
    and then one more non-hot-swap drive in a tray
    that you can ask for (not on the web site).
    In any case,
    I prefer not getting the high-density rack systems for databases.
    The full tower chassis has better memory, IO and internal storage.

    Back to business.
    On each external rack,
    my strong preference is to make 2 array groups of 6-7 disks,
    or 3 array groups of 5 disks for racks that hold 15 drives.
    For a 4-rack set of external storage,
    this makes for 8 disk array groups.

    Suppose there is 1 database with 2 data filegroups,
    and that the application makes extensive use of tempdb.
    Then on each array group, create 4 partitions.
    The first partition is for the most active main database filegroup.
    The second partition is for the other main db filegroup.
    The third partition is for tempdb.
    The fourth is for backup and other uses.

    The idea for the picture below was stolen from bertcord.

    Rack 1
    Array Group    Partition 1    Partition 2    Partition 3    Partition 4
    ArrayGroup1    FG1-File1      FG2-File1      tempdb 1       backup 1
    ArrayGroup2    FG1-File2      FG2-File2      tempdb 2       backup 2

    Rack 2
    Array Group    Partition 1    Partition 2    Partition 3    Partition 4
    ArrayGroup1    FG1-File3      FG2-File3      tempdb 3       backup 3
    ArrayGroup2    FG1-File4      FG2-File4      tempdb 4       backup 4

    Rack 3
    Array Group    Partition 1    Partition 2    Partition 3    Partition 4
    ArrayGroup1    FG1-File5      FG2-File5      tempdb 5       backup 5
    ArrayGroup2    FG1-File6      FG2-File6      tempdb 6       backup 6

    Rack 4
    Array Group    Partition 1    Partition 2    Partition 3    Partition 4
    ArrayGroup1    FG1-File7      FG2-File7      tempdb 7       backup 7
    ArrayGroup2    FG1-File8      FG2-File8      tempdb 8       backup 8

    Since there are 8 array groups,
    each of the main db filegroups is split into 8 files;
    the same applies to tempdb.
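
    In T-SQL terms, this amounts to one file per array group for each
    filegroup; a sketch (drive letters E: through L: for the 8 array
    group partitions are hypothetical, and only the first two files are
    shown, as the rest follow the same pattern):

    -- minimal sketch: FG1 split into 8 files, one per array group
    ALTER DATABASE MainDB ADD FILEGROUP FG1
    GO
    ALTER DATABASE MainDB ADD FILE
    ( NAME = FG1_File1, FILENAME = 'E:\SQLData\FG1_File1.ndf', SIZE = 20480MB ),
    ( NAME = FG1_File2, FILENAME = 'F:\SQLData\FG1_File2.ndf', SIZE = 20480MB )
    -- FG1_File3 through FG1_File8 go on G: through L: the same way
    TO FILEGROUP FG1
    GO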

    If RAID 10 is elected for the main db,
    it might be desirable to use RAID 5 for the backup partition.
    Instead of creating a single array from the full space of 7 disks,
    create 1 RAID 10 array of a given size from the set of 7 disks,
    and a second array in RAID 5 on the remaining space of the same 7 disks.

    If this is a clustered system,
    some external drives need to be allocated for the logs.

    ----------------------------------------------------------------------------------------
    This started in my General Hardware Configuration post.
    Now I have made the subject of storage performance a dedicated post.

    Too often I have seen a perfectly good (or overkill)
    CPU & memory configuration matched
    with a seriously crippled storage system.
    Sometimes too little money was spent on storage.
    But just as often, someone bought a grossly overpriced
    SAN that still had horrible performance.
    A recent post had someone blaming SQL Server
    for unacceptable query performance (10min)
    on an 8-way box with 16GB & NetApp storage.
    It turns out the query did an 18GB table scan,
    and the disk system could only do 37MB/sec
    (18GB at 37MB/sec is roughly 8 minutes of pure disk time,
    which accounts for nearly all of the 10min).
    A properly configured storage system should
    support SQL 2000 table scans at over 700MB/sec,
    and SQL 2005 at 1-3GB/sec.

    For random IOPS, performance is specified at
    low queue depth, low latency (<10ms),
    and at high queue depth (latency may be >20ms).
    I am thinking of standardized ratings for storage systems;
    for data it would be something like the following:

    Class, Seq Read, Random IOPS (<10ms / >20ms latency)
    AA: 10GB/sec, 40K / 100K
    A : 3GB/sec, 12K / 30K
    B : 1000MB/sec, 4K / 10K
    C : 300MB/sec, 1200 / 3K
    D : 100MB/sec, 400 / 1,000
    E : 30MB/sec, 200 / 300
    F : <30MB/sec, <150 / <300

    Test scripts and reference results are below.

    For a while I have intended to write a document describing individual disk drive performance characteristics.
    This is a very complicated subject, which I never got around to doing.
    But I will start here and try to expand until it is ready for a standalone document.

    Disk drives are generally characterized by rotational speed,
    i.e., RPM, which mostly affects random IO performance.
    Sequential performance is also a consideration.
    Server drives today are either 10K or 15K.
    A 10K drive rotates at 10,000 revolutions per minute, or 166.7 rev/sec,
    or 1 revolution in 6 milliseconds.
    A 15K drive does 1 rev in 4ms.
    A standard measure of performance is random IO performance.
    When a random read request is issued to the disk,
    the disk must rotate between 0-360 degrees
    before the head is over the desired angular position,
    for an average of 180 degrees, or half a revolution.
    Hence the average rotational latency for 10K and 15K disk drives
    is 3ms and 2ms respectively.

    Next, the head has to move to the desired radial position.
    This can range from a single track to the full radial length of the disk.
    Hence the average seek time is approximately one-half the full stroke time,
    typically 5ms for 10K drives and 3.5ms for 15K drives.
    The total average access time for random IO comprises the command overhead,
    rotational latency, seek time and transfer time.
    For recent generation disk drives and small block IO (8KB),
    command overhead and transfer time are small.
    Average access time for 8K random IO is typically 8ms for 10K
    and 5.75ms for 15K drives.
    This corresponds to 120 and 175 random IOPS for 10K and 15K drives respectively.
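
    The arithmetic, as a quick back-of-envelope check (the ~0.2ms
    allowance for command overhead plus transfer time is my assumption):

    -- IOPS = 1000ms / (rotational latency + seek time + overhead)
    SELECT 1000.0/(3.0 + 5.0 + 0.2) AS [10K IOPS] -- ~122
    , 1000.0/(2.0 + 3.5 + 0.2) AS [15K IOPS] -- ~175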

    The above seems relatively simple, right?
    A 10K drive should do 120 small block random IOPS, and a 15K drive 175.
    Well, not necessarily.
    SCSI/FC/SAS disk drives have long had command queuing,
    where IO can be reordered.
    If IO requests are issued one at a time, serially,
    i.e., the second IO is issued only when the first completes,
    the above is reasonable.
    When multiple IO requests are issued concurrently,
    the disk drive reorders the requests,
    so the actual rotational latency and seek times are less than assumed above.
    At high queue depth, a 10K drive might do 280 IOPS
    for data randomly distributed across the entire disk.

    If that were not complicated enough,
    suppose now that the data resides on only a small portion of the disk.
    Then the average seek times are greatly reduced.
    At queue depth 1, a 10K drive can do 220 IOPS
    for data randomly distributed over 3% of the disk.
    When the command queuing and short-stroke effects are combined,
    a 10K drive can do 600 IOPS.

    For the above reasons, and given the cost structure of direct attach (DA) storage,
    the best random IOPS performance for a database application
    comes from distributing data across as many disks as reasonable,
    and over only a somewhat small portion of each disk.
    Since the amortized cost per disk in a SAN is often 4X or higher than DA,
    a DA solution that makes optimum use of load distribution and short-stroking
    can substantially outperform the SAN.

    More to follow.
  2. SQL_Guess New Member

    *subscribed* ... this should be interesting

    Panic, Chaos, Disorder ... my work here is done --unknown
  3. joechang New Member

    -- This table is used to populate other test tables
    CREATE TABLE I (I int NOT NULL , CONSTRAINT PK_I PRIMARY KEY CLUSTERED (I) )
    GO
    SET NOCOUNT ON
    DECLARE @I int SELECT @I = 1
    BEGIN TRANSACTION
    WHILE @I <= 100000 BEGIN
    INSERT I (I) VALUES ( @I)
    SET @I = @I+1 END
    COMMIT TRANSACTION
    CHECKPOINT
    GO

    -- Used to store results of "random IO" tests
    --new Expanded results table
    --uses more data from fn_virtualfilestats
    --SQL 2000 just has IOStallMS
    --SQL 2005 has separate values for read and write, but I have not implemented this yet
    CREATE TABLE Results (
    spid int, Calls int, ms int, CPUms int, CallsPerSec real
    , RowsPerCall real, RowsPerSec real
    , ReadsPerSec real , BytesReadPerSec real
    , WritesPerSec real , BytesWrittenPerSec real
    , IOStallMS real
    -- , IOStallRead real, IOStallWrite real -- for SQL Server 2005
    , dt datetime DEFAULT getdate() )

    -- test table with Clustered index,
    -- also test a table without clustered index
    CREATE TABLE C2 (
    ID1 int NOT NULL,
    ID2 int NOT NULL,
    ID3 int NOT NULL,
    ID4 int NOT NULL,
    ID5 int NOT NULL,
    ID6 int NOT NULL,
    ID7 int NOT NULL,
    ID8 int NOT NULL,
    ID9 int NOT NULL,
    ID10 int NOT NULL,
    ID11 int NOT NULL,
    bID1 bigint NOT NULL,
    bID2 bigint NOT NULL,
    rMoney money NOT NULL,
    rDate datetime NOT NULL,
    rReal real NOT NULL,
    rDecimal decimal (9,4) NOT NULL,
    sID smallint NOT NULL,
    sChar char(700) NOT NULL,
    CONSTRAINT [PK_C2] PRIMARY KEY CLUSTERED ( [ID1] ) WITH FILLFACTOR = 100 )
    GO

    -- @BStart -- batch starting set
    -- @BEnd -- batch end set
    -- @BTotal -- total batches, can have multiple concurrent sets active
    -- @BSize -- rows inserted per batch
    -- @BRow -- starting row of batch
    -- @T total rows, @I current row, @C cardinality, @D distinct values, @P rows per page
    -- rows per distinct value: ID2 10, ID3 20, ID4 30, ID5 50, ID6 100, ID7 300, ID8 1000, ID9 3000, ID10 10000
    ---------1---------2---------3---------4---------5---------6---------7---------8---------9
    SET NOCOUNT ON
    DECLARE @BStart int,@BEnd int,@BTotal int,@BSize int,@BRow int
    ,@T int,@I int,@C int,@P int,@M int
    ,@D2 int, @D3 int, @D4 int, @D5 int, @D6 int, @D7 int, @D8 int, @D9 int, @D10 int
    SELECT @M = 128 -- 128 Pages per MByte
    SELECT @P = 10 -- 10 rows per page, based on row size and fill factor
    -- @BEnd=32768, @BTotal=32768 set to create a 32,768MB table, i.e., 32GB
    SELECT @BStart=1, @BEnd=32768, @BTotal=32768, @BSize=@P*@M, @C=10
    SELECT @T=@BTotal*@BSize
    SELECT @D2=(@T-1)/@C+1, @D3=(@T-1)/20+1, @D4=(@T-1)/30+1, @D5=(@T-1)/50+1, @D6=(@T-1)/100+1
    , @D7=(@T-1)/300+1, @D8=(@T-1)/1000+1 , @D9=(@T-1)/3000+1, @D10=(@T-1)/10000+1
    WHILE @BStart <= @BEnd BEGIN
    BEGIN TRAN
    SELECT @I=(@BStart-1)*@BSize, @BRow = @BStart*@BSize
    INSERT C2(ID1,ID2,ID3,ID4,ID5,ID6,ID7,ID8,ID9,ID10,ID11,bID1,bID2,rMoney,rDate,rReal,rDecimal,sID,sChar)
    SELECT I+@I,(I+@I-1)%@D2+1,(I+@I-1)%@D3+1,(I+@I-1)%@D4+1,(I+@I-1)%@D5+1,(I+@I-1)%@D6+1
    , (I+@I-1)%@D7+1,(I+@I-1)%@D8+1,(I+@I-1)%@D9+1,(I+@I-1)%@D10+1, I+@I
    , 2*@T*RAND(CHECKSUM(NEWID())), @T*RAND(CHECKSUM(NEWID())),10000*RAND(CHECKSUM(NEWID()))
    , DATEADD(hour,150000*RAND(CHECKSUM(NEWID())),'1990-01-01')
    , 10000*RAND(CHECKSUM(NEWID())), 10000*RAND(CHECKSUM(NEWID())), (I+@I)%320+1, CONVERT(char(100),NEWID())
    FROM I WHERE I <= @BSize
    SET @BStart = @BStart+1
    COMMIT TRAN
    IF (@BStart%10 -1=0) PRINT 'C2 Step ' + CONVERT(varchar,GETDATE(),121) + ', ' + CONVERT(varchar,@BStart-1)
    + ', row ' + CONVERT(varchar,@BRow) + ', Trancount ' + CONVERT(varchar(10),@@TRANCOUNT)
    END
    PRINT 'C2 Complete ' + CONVERT(varchar,GETDATE(),121) + ', row ' + CONVERT(varchar,@BRow)
    + ', Trancount ' + CONVERT(varchar(10),@@TRANCOUNT)
    GO
    CHECKPOINT
    GO
    UPDATE STATISTICS C2(PK_C2) WITH FULLSCAN
    PRINT 'S1 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID2 ON C2 (ID2,bID1)
    PRINT 'I2 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID3 ON C2 (ID3,bID1)
    PRINT 'I3 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID4 ON C2 (ID4,bID1)
    PRINT 'I4 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID5 ON C2 (ID5,bID1)
    PRINT 'I5 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID6 ON C2 (ID6,bID1)
    PRINT 'I6 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID7 ON C2 (ID7,bID1)
    PRINT 'I7 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID8 ON C2 (ID8,bID1)
    PRINT 'I8 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID9 ON C2 (ID9,bID1)
    PRINT 'I9 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID10 ON C2 (ID10,bID1)
    PRINT 'I10 ' + CONVERT(varchar,GETDATE(),121)
    GO
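
    After the population and index builds complete, it is worth confirming
    the table actually reached the intended ~32GB before running the tests;
    a quick check (my addition):

    -- verify row count and reserved space for C2
    EXEC sp_spaceused 'C2'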
  4. joechang New Member

    The original scripts were removed on 2007-03-01.
    This is the improved Results2 table structure:

    CREATE TABLE Results2 (spid int, Calls int, ms int, CPUms int
    , CallsPerSec real, RowsPerCall real, RowsPerSec real
    , ReadsPerSec real , BytesReadPerSec real
    , DWritesPerSec real , DBytesWrittenPerSec real , DIOStallMS real
    -- , IOStallRead real, IOStallWrite real -- for SQL Server 2005
    , LWritesPerSec real , LBytesWrittenPerSec real , LIOStallMS real
    , dt datetime DEFAULT getdate() )

    Notes:
    The original script did the following:
    SELECT * FROM ::fn_virtualfilestats(db_id(), -1)

    SQL 2000 did not accept db_id() inside fn_virtualfilestats.

    Below is my new instrumentation script that should work for both SQL 2000 & 2005.
    SQL 2005 collects IOStall for read and write separately,
    which is more useful for write tests.
    I will provide a SQL 2005-only script for this when I have time.
    I made the assumption that FileId = 2 is the log and the others are data;
    if that is not your case, make adjustments.

    WAITFOR DELAY '00:00:10' SET NOCOUNT ON
    DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint , @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint , @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint , @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int, @D bigint, @Mem bigint -- memory in MB
    SELECT @dbid = db_id()
    SELECT @N = 0, @bID = 0, @Cnt=0, @Mem = 100
    -- @Mem should be less than memory (MB) for data buffers
    -- set @M (max group value) and @D (dpages of the test table) before the next line,
    -- e.g. SELECT @M = MAX(ID2) FROM C2
    --      SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @M = 128*@Mem*@M/@D -- number of groups that fit in @Mem MB of buffer

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @N = @N+1
    -- Query to be tested here
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]
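
    As a usage example, the in-memory read tests in the next post plug
    exactly this kind of query into the placeholder; the loop body for a
    random key lookup on the C2 table becomes:

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1 -- pick a random group
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID2 = @Group
    END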
  5. joechang New Member

    The original scripts were removed on 2007-03-01.

    -- new in-memory read test

    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- in memory test 2
    DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint, @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint, @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint, @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int, @D bigint, @Mem bigint -- memory in MB
    SELECT @dbid = db_id()
    SELECT @N = 0, @bID = 0, @Cnt=0, @Mem = 100
    -- @Mem should be less than memory (MB) for data buffers
    SELECT @M = MAX(ID2) FROM C2
    SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @M = 128*@Mem*@M/@D

    WHILE @N < @M BEGIN -- initial loop to load data into memory
    SELECT @N = @N+1 SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID2 = @N
    END

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    BEGIN TRAN -- used for watching Perf counter -> Databases -> Transactions/sec
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID2 = @Group
    COMMIT TRAN
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- in memory test 4

    SELECT @N = 0, @bID = 0, @Cnt=0, @Mem = 100
    -- @Mem should be less than memory (MB) for data buffers
    SELECT @M = MAX(ID4) FROM C2
    SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @M = 128*@Mem*@M/@D

    WHILE @N < @M BEGIN -- initial loop to load data into memory
    SELECT @N = @N+1 SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID4 = @N
    END

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID4 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- in memory test 6

    SELECT @N = 0, @bID = 0, @Cnt=0, @Mem = 100
    -- @Mem should be less than memory (MB) for data buffers
    SELECT @M = MAX(ID6) FROM C2
    SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @M = 128*@Mem*@M/@D

    WHILE @N < @M BEGIN -- initial loop to load data into memory
    SELECT @N = @N+1 SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID6 = @N
    END

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID6 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- in memory test 8

    SELECT @N = 0, @bID = 0, @Cnt=0, @Mem = 100
    -- @Mem should be less than memory (MB) for data buffers
    SELECT @M = MAX(ID8) FROM C2
    SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @M = 128*@Mem*@M/@D

    WHILE @N < @M BEGIN -- initial loop to load data into memory
    SELECT @N = @N+1 SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID8 = @N
    END

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID8 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    GO
    /*
    SELECT spid, Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec
    , ReadsPerSec, BytesReadPerSec/1048576.0 AS [Read MB/sec]
    , DWritesPerSec, DBytesWrittenPerSec/1048576.0 AS [D Write MB/sec]
    , (DIOStallMS*1000.0)/((ReadsPerSec+DWritesPerSec)*ms+1) AS [D Avg ms/IO]
    , LWritesPerSec --, LBytesWrittenPerSec/1048576.0 AS [L Write MB/sec]
    , (LIOStallMS*1000.0)/((LWritesPerSec)*ms+1) AS [L Avg ms/IO]
    FROM Results2 WHERE spid = @@spid
    */
  6. joechang New Member

    The original scripts were removed on 2007-03-01.
    New random IO disk test.
    See the tablescan and update tests on page 3.

    /*
     Random read tests -- assumes memory << C2 table size (set at 32GB)
    */
    --DBCC DROPCLEANBUFFERS
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 2
    DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint, @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint, @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint, @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int, @D bigint, @Mem bigint -- memory in MB
    SELECT @dbid = db_id()
    SELECT @M = MAX(ID2)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    BEGIN TRAN -- to enable tracking via PerfMon DB Tx/sec
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID2 = @Group
    COMMIT TRAN
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 3

    SELECT @M = MAX(ID3)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID3 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 4

    SELECT @M = MAX(ID4)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID4 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 5

    SELECT @M = MAX(ID5)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID5 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 6

    SELECT @M = MAX(ID6)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID6 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 7

    SELECT @M = MAX(ID7)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID7 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 8

    SELECT @M = MAX(ID8)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID8 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 9

    SELECT @M = MAX(ID9)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID9 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- disk read test 10

    SELECT @M = MAX(ID10)-1 FROM C2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH(NOLOCK) WHERE ID10 = @Group
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call], CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    GO
    /*
    SELECT spid, Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec
    , ReadsPerSec, BytesReadPerSec/1048576.0 AS [Read MB/sec]
    , DWritesPerSec, DBytesWrittenPerSec/1048576.0 AS [D Write MB/sec]
    , (DIOStallMS*1000.0)/((ReadsPerSec+DWritesPerSec)*ms+1) AS [D Avg ms/IO]
    , LWritesPerSec --, LBytesWrittenPerSec/1048576.0 AS [L Write MB/sec]
    , (LIOStallMS*1000.0)/((LWritesPerSec)*ms+1) AS [L Avg ms/IO]
    FROM Results2 WHERE spid = @@spid
    */
  7. joechang New Member

    This was originally for results,
    but I have moved it down 2 posts for the new test scripts.

    -- table scan performance test
    -- test at MAXDOP 1, 2 & 4
    -- compare the difference with and without running DBCC DROPCLEANBUFFERS first
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    DBCC DROPCLEANBUFFERS
    GO
    DECLARE @M int, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint, @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint, @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint, @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int, @D bigint
    SELECT @dbid = db_id()
    SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead)
    , @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @DT1 = getdate(), @C1 = @@CPU_BUSY

    SELECT @M = SUM(rMoney) FROM C2 WITH(NOLOCK) WHERE ID11 = 1 OPTION (MAXDOP 1)

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead)
    , @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid, 1 [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@D)/@ms [Calls/sec]
    , CONVERT(real,@D)/(128) [Rows/call], CONVERT(real,1000*@D)/(128*@ms) [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec]
    , (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec]
    , (@IOSL2-@IOSL1) [LIOStallMS]
    GO
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    DBCC DROPCLEANBUFFERS
    GO
    DECLARE @M int, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint, @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint, @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint, @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int, @D bigint
    SELECT @dbid = db_id()
    SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @DT1 = getdate(), @C1 = @@CPU_BUSY

    SELECT @M = SUM(rMoney) FROM C2 WITH(NOLOCK) WHERE ID11 = 1 OPTION (MAXDOP 2)

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid, 1 [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@D)/@ms [Calls/sec]
    , CONVERT(real,@D)/(128) [Rows/call], CONVERT(real,1000*@D)/(128*@ms) [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    GO

    /*
    SELECT spid, Calls,ms,CPUms,CallsPerSec AS [dpages/s], RowsPerCall AS [MB]
    , RowsPerSec AS [Table MB/sec]
    , ReadsPerSec, BytesReadPerSec/1048576.0 AS [Disk MB/sec]
    , (DIOStallMS*1000.0)/((ReadsPerSec+DWritesPerSec)*ms+1) AS [Avg ms/IO]
    FROM Results2 WHERE spid = @@spid
    */
  8. SQL_Guess New Member

    Hi Joe... Quite a bit to digest there.

    The requirements to run this are:
    - 32GB for the DB, and some space for log growth.

    Your reports/results come from the queries in your SQL?

    I'll try and get this tested on a few of our environments. I'm guessing the idea here is to show various baselines of configurations/hardware? If so, then any responses should include the hardware/configuration along with the results:

    results below for [xxx] GHz Dual Core Xeon [xxx]
    [xxx] Windows Server [xxx]
    [xxx] SQL Server [xxx]
    [xxx] GB mem
    [xxx] [xxx] GB [xxx] K SAS disks on [xxx] RAID controller


    Panic, Chaos, Disorder ... my work here is done --unknown
  9. joechang New Member

    The original reply here was moved down ~2 posts
    to make room for the new update test.

    One more change to my update test:
    the original turned out to be mostly a test of reads from data and writes to the log.
    The modified script below includes the time & IO around the checkpoint as well.
    You can also try increasing the run time and watching Profiler to see
    how long you can run before a checkpoint kicks in.

    -- Update test
    CHECKPOINT
    GO
    DBCC DROPCLEANBUFFERS
    GO
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- Update Test 4 in memory
    DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint , @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint , @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint , @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int, @D bigint, @Mem bigint -- memory in MB
    SELECT @dbid = db_id()
    SELECT @N = 0, @bID = 0, @Cnt=0, @Mem = 100 -- @Mem should be less than memory (MB) for data buffers
    SELECT @M = MAX(ID4) FROM C2
    SELECT @D = dpages FROM sysindexes WHERE id = OBJECT_ID('C2') AND indid < 2
    SELECT @M = 128*@Mem*@M/@D -- for in memory test

    WHILE @N < @M BEGIN -- initial loop to load data into memory
    SELECT @N = @N+1 SELECT @bID = SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WHERE ID4 = @N
    END

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    UPDATE C2 SET rMoney = 10000*RAND(CHECKSUM(NEWID())) WHERE ID4 = @Group
    SELECT @Cnt=@Cnt+@@ROWCOUNT
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]


    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    CHECKPOINT

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], 0 [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]
    GO
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- Update Test 4
    DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint , @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint , @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint , @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int , @D bigint, @Mem bigint -- memory in MB
    SELECT @dbid = db_id()
    SELECT @M = MAX(ID4) FROM C2

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    UPDATE C2 SET rMoney = 10000*RAND(CHECKSUM(NEWID())) WHERE ID4 = @Group
    SELECT @Cnt=@Cnt+@@ROWCOUNT
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    CHECKPOINT

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], 0 [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]
    GO
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- Update Test 6
    DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint , @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint , @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint , @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int , @D bigint, @Mem bigint -- memory in MB
    SELECT @dbid = db_id()
    SELECT @M = MAX(ID6) FROM C2

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    UPDATE C2 SET rMoney = 10000*RAND(CHECKSUM(NEWID())) WHERE ID6 = @Group
    SELECT @Cnt=@Cnt+@@ROWCOUNT
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    CHECKPOINT

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], 0 [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]
    GO
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9
    WAITFOR DELAY '00:00:10' SET NOCOUNT ON -- Update Test 7
    DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms int
    , @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint , @RB1 bigint, @RB2 bigint
    , @WD1 bigint, @WD2 bigint , @WBD1 bigint, @WBD2 bigint, @IOSD1 bigint, @IOSD2 bigint
    , @WL1 bigint, @WL2 bigint , @WBL1 bigint, @WBL2 bigint, @IOSL1 bigint, @IOSL2 bigint
    , @dbid int , @D bigint, @Mem bigint -- memory in MB
    SELECT @dbid = db_id()
    SELECT @M = MAX(ID7) FROM C2

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
    SELECT @Group = @M*rand()+1, @N = @N+1
    UPDATE C2 SET rMoney = 10000*RAND(CHECKSUM(NEWID())) WHERE ID7 = @Group
    SELECT @Cnt=@Cnt+@@ROWCOUNT
    END

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], CONVERT(real,@Cnt)/@N [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]

    SELECT @R1=SUM(NumberReads), @RB1=SUM(BytesRead), @WD1=SUM(NumberWrites), @WBD1=SUM(BytesWritten), @IOSD1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL1=SUM(NumberWrites), @WBL1=SUM(BytesWritten), @IOSL1=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2
    SELECT @N = 0, @bID = 0, @Cnt=0, @DT1 = getdate(), @C1 = @@CPU_BUSY

    CHECKPOINT

    SELECT @ms = DATEDIFF(ms,@DT1,getdate()), @C2 = @@CPU_BUSY
    SELECT @R2=SUM(NumberReads), @RB2=SUM(BytesRead), @WD2=SUM(NumberWrites), @WBD2=SUM(BytesWritten), @IOSD2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, -1) WHERE FileId <> 2
    SELECT @WL2=SUM(NumberWrites), @WBL2=SUM(BytesWritten), @IOSL2=SUM(IOStallMS)
    FROM ::fn_virtualfilestats(@dbid, 2) --WHERE FileId = 2

    INSERT Results2(spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,BytesReadPerSec
    ,DWritesPerSec,DBytesWrittenPerSec,DIOStallMS,LWritesPerSec,LBytesWrittenPerSec,LIOStallMS)
    SELECT @@spid,@N [Calls], @ms [ms], @C2-@C1 [CPUms], CONVERT(real,1000*@N)/@ms [Calls/sec], 0 [Rows/call] , CONVERT(real,@Cnt*1000.0)/@ms [Rows/sec]
    , ((@R2-@R1)*1000.0)/@ms [Reads/sec], ((@RB2-@RB1)*1000.0)/@ms [BytesRead/sec]
    , ((@WD2-@WD1)*1000.0)/@ms [DWrites/sec], ((@WBD2-@WBD1)*1000.0)/@ms [DBytesWritten/sec], (@IOSD2-@IOSD1) [DIOStallMS]
    , ((@WL2-@WL1)*1000.0)/@ms [LWrites/sec], ((@WBL2-@WBL1)*1000.0)/@ms [LBytesWritten/sec], (@IOSL2-@IOSL1) [LIOStallMS]
    GO
    ---------1---------2---------3---------4---------5---------6---------7---------8--------9

    /*
    SELECT spid, Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec
    , ReadsPerSec , BytesReadPerSec/1048576.0 AS [Read MB/sec]
    , DWritesPerSec , DBytesWrittenPerSec/1048576.0 AS [D Write MB/sec]
    , (DIOStallMS*1000.0)/((ReadsPerSec+DWritesPerSec)*ms+1) AS [Avg-sec/IO]
    , LWritesPerSec --, LBytesWrittenPerSec/1048576.0 AS [D Write MB/sec]
    , (LIOStallMS*1000.0)/((LWritesPerSec)*ms+1) AS [L Avg-sec/IO]
    FROM Results2 WHERE spid = @@spid
    */
  10. SQL_Guess New Member

    quote:Originally posted by joechang

all results are from SQL queries; if SQL cannot get the performance of another test tool, what use is that performance to your needs


    Sorry - I think I didn't make my question clear. What I meant was: Does the SQL you listed provide the answers you posted: i.e.



table scan performance
ms | CPU-ms | Reads/sec | MB/sec
63,313 | 1179 | 2421.8 | 519.5

random IO - bookmark lookup performance
memory
Rows/call | Rows/sec | DiskRead/s
10 | 102,343 | 0
30 | 101,344 | 0
100 | 119,563 | 0
1000 | 122,166 | 0
disk
Rows/call | Rows/sec | DiskRead/s
10 | 227.7 | 211.1667
20 | 210.5 | 211.655
30 | 1079 | 1033.967
50 | 1211 | 1134.675
100 | 1797 | 1647.587
300 | 2631.8 | 2354.8
1000 | 3036.7 | 2740.692


    Panic, Chaos, Disorder ... my work here is done --unknown
  11. joechang New Member

    I moved the original results for Sep 2006 and a reply from above down here to make room for new test scripts

    the following are test results on
    System 1
    2 x 3.2GHz Xeon with 4GB memory
    32-bit Windows Server 2003 sp2
    SQL Server 2005 sp1
    8 10K 73GB SCSI disks on dual channel PCI-e RAID controller (4 disks per channel)
3 10K 73GB SCSI disks on SCSI controller
    (9 disks total for data, 8 from RAID controller, 1 from SCSI controller)

    table scan performance
    ms | CPU-ms | Reads/sec | MB/sec
    63,313 | 1179 | 2421.8 | 519.5

    random IO - bookmark lookup performance
    memory
    Rows/call | Rows/sec | DiskRead/s
    10 | 102,343 | 0
    30 | 101,344 | 0
    100 | 119,563 | 0
    1000 | 122,166 | 0
    disk
    Rows/call | Rows/sec | DiskRead/s
    10 | 227.7 | 211.1667
    20 | 210.5 | 211.655
    30 | 1079 | 1033.967
    50 | 1211 | 1134.675
    100 | 1797 | 1647.587
    300 | 2631.8 | 2354.8
    1000 | 3036.7 | 2740.692

    System 2
    results below for 2 x 2.6GHz Dual Core Xeon 5160
    64-bit Windows Server 2003 sp2
    64-bit SQL Server 2005 sp1
    8GB mem
    1st test: 8 36GB 15K SAS disks on 1 RAID controller
    2nd test: add another 12 36GB SAS 15K disks on second SAS RAID controller
    3rd test: 10 disks on 1st SAS RAID, 14 disks on 2nd

    table scan
    ms | CPU-ms | Reads/sec | MB/sec | notes
47,890 | 718 | 3,087.6 | 686.9 | 8 disks MAXDOP 1
29,563 | 768 | 5,108.1 | 1,112.7 | 20 disks MAXDOP 1
29,250 | 760 | 4,852.9 | 1,124.6 | 24 disks MAXDOP 1
23,940 | 878 | 2,791.4 | 1,374.1 | 24 disks MAXDOP 2

    random IO - bookmark lookup performance
    memory
    Rows/call | Rows/sec | DiskRead/s
    10 | 182,233| 0
    30 | 188,593 | 0
    100 | 243,486 | 0
    1000 | 242,166 | 2.4 -- this should have been zero, not sure why

    8 disks
    Rows/call | Rows/sec | DiskRead/s
    10 | 146.3 | 232.5
    20 | 186.2 | 247.8
    30 | 1045.3 | 1,047.0
    50 | 1582.5 | 1,323.5
    100 | 1992.5 | 1,616.1
    300 | 3105.2 | 2,490.6
    1000 | 3998.2 | 3,195.3
20 disks
Rows/call | Rows/sec | DiskRead/s
    10 | 114.5 | 217.4
    20 | 130.7 | 216.5
    30 | 981.5 | 1,201.3
    50 | 1668.3 | 1,516.8
    100 | 3489.9 | 2,798.7
    300 | 5659.9 | 4,543.3
    1000 | 7750.9 | 6,194.7
24 disks
Rows/call | Rows/sec | DiskRead/s
    20 | 133.2 | 227.4
    30 | 1373 | 1,594.7
    50 | 1901.7 | 1,820.3
    100 | 3220.0 | 2,955.0
    300 | 6173.2 | 5,385.9
    1000 | 8391.5 | 7,089.3


my inclination is to size the database to match your own, with adjustments
for example, if you have a 100GB db but only 50GB gets used, then test with 50GB

the important point is to not test on too small a DB,
i think the old sqlio could only do 3GB, which fits in some SAN caches, yielding unreal results

    the above is 32GB data + approx 10GB index, there is very little log requirement for the above, but a later write test might be different

all results are from SQL queries; if SQL cannot get the performance of another test tool, what use is that performance to your needs

  12. bytehd New Member

    subscribed also

I'd like to try it on SQL Server Everywhere too.

    just kidding.
  13. eyechart New Member

    joe, thanks for the scripts. These are proving to be very useful.

I do have a question though regarding the table scan performance test. I see that the MB/sec value is derived by doing some math on the difference in BytesRead from fn_virtualfilestats. If we have a system with a large buffer cache (say 32GB or more), could that MB/sec value be inaccurate if much of the table was already in buffer?



    -ec
  14. joechang New Member

original reply deleted as it is no longer relevant with the new improved test scripts above

    here are new results for the Xeon 2x3.2GHz, 11 disks (10K)

    Memory - Random
    spid|calls|ms|cpu-ms|call/s|rows/cal|rows/s|reads/s|rd mb/s|wr/s|wr mb/s|D ms/IO|L wr/sec|L ms/IO
    54| 280705| 30000| 918| 9356.8| 10.0| 93568| 0| 0.0| 0.0| 0.0| 0.0| 0.0| 0
    54| 101894| 30000| 928| 3396.5| 30.0| 101894| 0| 0.0| 0.0| 0.0| 0.0| 0.0| 0
    54| 35052| 30000| 937| 1168.4| 100.0| 116840| 0| 0.0| 0.0| 0.0| 0.0| 0.0| 0
    54| 3539| 30000| 937| 118.0| 1000.0| 117967| 0| 0.0| 0.0| 0.0| 0.0| 0.0| 0
    Disk Read - Random
    spid|calls|ms|cpu-ms|call/s|rows/cal|rows/s|reads/s|rd mb/s|wr/s|wr mb/s|D ms/IO|L wr/sec|L ms/IO
    55| 587| 30013| 4| 19.6| 10.0| 196| 207| 1.6| 0.0| 0.0| 4.8| 0.0| 0
    55| 315| 30063| 5| 10.5| 20.0| 210| 208| 1.6| 0.0| 0.0| 4.8| 0.0| 0
    55| 1189| 30016| 24| 39.6| 30.0| 1188| 1117| 8.7| 0.0| 0.0| 8.2| 0.0| 0
    55| 782| 30030| 39| 26.0| 50.0| 1302| 1223| 9.6| 0.0| 0.0| 10.9| 0.0| 0
    55| 601| 30000| 35| 20.0| 100.0| 2003| 1825| 14.3| 0.0| 0.0| 14.7| 0.0| 0
    55| 284| 30000| 61| 9.5| 300.0| 2840| 2595| 20.3| 0.0| 0.0| 16.5| 0.0| 0
    55| 99| 30280| 87| 3.3| 1000.0| 3269| 2988| 23.4| 0.1| 0.0| 18.1| 0.0| 0
    55| 51| 30046| 107| 1.7| 2999.8| 5092| 4681| 36.7| 0.0| 0.0| 11.7| 0.0| 0
    55| 11| 33660| 74| 0.3| 9998.2| 3267| 2576| 20.2| 0.0| 0.0| 11.1| 0.0| 0
    Update Test
    spid|calls|ms|cpu-ms|call/s|rows/cal|rows/s|reads/s|rd mb/s|wr/s|wr mb/s|D ms/IO|L wr/sec|L ms/IO
    53| 44139| 30000| 769| 1471.3| 30.0| 44139| 0| 0.0| 22.5| 3.3| 13.3| 1471.9| 0.000294397
    53| 657| 30000| 62| 21.9| 30.0| 657| 1025| 64.1| 0.0| 0.0| 10.4| 21.9| 0
    53| 475| 30003| 100| 15.8| 100.0| 1583| 1607| 51.2| 0.0| 0.0| 16.4| 15.8| 0.002105259
    53| 285| 30000| 97| 9.5| 300.0| 2850| 2554| 20.0| 0.0| 0.0| 16.4| 9.5| 0.014035039
    Table scan
    spid|calls|ms|cpu-ms|call/s|rows/cal|rows/s|reads/s|rd mb/s|wr/s|wr mb/s|D ms/IO|L wr/sec|L ms/IO
    55| 1| 57893| 1216| 72.4| 32768.0| 566| 2473| 568.2| | | 8.1| |
    55| 1| 53750| 1504| 78.0| 32768.0| 610| 1244| 612.0| | | 21.5| |
  15. eyechart New Member

quote:Originally posted by joechang

DBCC DROPCLEANBUFFERS
GO
DECLARE @M int, @DT1 datetime, @ms int, @C1 bigint, @C2 bigint, @R1 bigint, @R2 bigint, @B1 bigint, @B2 bigint, @S real
SELECT @S = dpages/128 FROM sysindexes WHERE id = OBJECT_ID('CLTransaction') AND indid IN (0,1)
SELECT @R1=SUM(NumberReads), @B1=SUM(BytesRead) FROM ::fn_virtualfilestats(db_id(), -1)
SELECT @C1 = @@CPU_BUSY, @DT1 = getdate()
SELECT @M = SUM(rMoney) FROM C2 WITH(NOLOCK) WHERE ID11 = 1 OPTION (MAXDOP 1)
SELECT @C2 = @@CPU_BUSY, @ms = DATEDIFF(ms,@DT1,getdate())
SELECT @R2=SUM(NumberReads), @B2=SUM(BytesRead) FROM ::fn_virtualfilestats(db_id(), -1)
-- added comma below
SELECT @ms [ms], @C2-@C1 [CPU-ms], 1000*@S/@ms [MB/sec],
((@R2-@R1)*1000.0)/@ms [Reads/sec], (@B2-@B1)*1000.0/@ms/1048576.0 [Disk MB/sec]
GO

Joe,

What is CLTransaction? I don't have an object by that name in my environment. I changed it to C2 and got some output, but I want to make sure that I am not missing something or misunderstanding how your code works. Thanks again, and sorry for being a PITA :D

-ec
  16. bytehd New Member

    I broke out your SQL into separate scripts and will test at home.
    1) Create Script
    2) In-Memory script
    3) Pseudo-random IO script
    4) Table Scan

    the Create script alone took 1 hour and 20 minutes.
    I throttled it back to 8GB
SQL 2000 DEV SP4, 250GB ATA-100 on a single PIII-800, 1GB SDRAM, XP Pro SP2, no AV/antispyware
    will run again with SQL 2005 dev SP1
    So much for IDE huh.....

    Why run at home?
    none of my clients have 32GB free on any server.

    besides, I needed a nice script to choke the biggest iron, and TPC is too much for me now.
  17. joechang New Member

i don't think it is the ide disk;
inserts are very cpu intensive,
not sure, but i think a Xeon 3.2GHz (P4 core) could do 30K rows/sec and a Xeon 5150 (Core 2) can do 50K rows/sec

    if none of your clients have 32GB free disk space, then they probably seriously screwed up their disk system
    disks need to be configured for performance, not capacity
  18. Tom Metzie New Member

    Hi
    Just got a HP DL585 4 x 2.8 Opteron Dual Core, 8GB mem
    Internal Controller and Drives
    1 x P400 Internal Controller
    2 x 72GB Raid0+1 System Drive and what not
    6 x 146 GB Raid 5 for misc reports
    .
    External
    2 x P800 controller with 512MB cache each
    2 x MSA50 each connected to its own P800 controller
    10 x 72GB 10k rpm disks per MSA50 (for a total of 20 spindles) in Raid 0+1 config
    One MSA50 for Database (350GB)
    One MSA50 for Logfiles (350GB)
    .
    This is the base test setup

Let me know if you want me to run any tests on the server BEFORE we go live in a couple of weeks. Easy to do now (impossible later on). If you have canned SQL scripts, great; I will report results back in this forum.



    Tom
  19. joechang New Member

    the scripts above are fine
    personally, i think 10 disks are too much for logs,
    consider a 14/6 or 16/4 split
  20. Tom Metzie New Member

agree that 10 spindles is a tad much for logfiles; they do sit on separate controllers and channels though, so I might have two db partitions (10+6, split over the two channels) and then 4 spindles for logfiles

    Channel 0
    10 spindles database

    Channel 1
    6 spindles database
    4 spindles logfile



    Tom
  21. Elijah New Member

    Joechang,
    many thanks for your scripts.

    I'm now testing them on the following configuration:

    -Dell PE 2950, 8GB RAM, 2 x Intel Xeon 5160 3Ghz, 2 x 36GB internal SAS 15k for system, RAID1, connected to PERC5i
    -2 x Dell MD1000 SAS units with 15 x 15k 36GB SAS disk each, both connected to a single PERC5/E card
    -Windows 2000 Advanced Server, fresh installation
    -SQL 2000
    -no AV or other software installed

    All latest patches, drivers, firmware updates were applied to the system.

    I wanted to know what's the best system configuration for the SQL database, therefore I've tested different disk layouts:
    - RAID10, 14 disks on a single channel (+1 hot spare)
    - RAID5, 14 disks on a single channel (+1 hot spare)
    - RAID10, 7 disks on a single channel + 7 on another channel (+1 hot spare)
    - RAID5, 7 disks on a single channel + 7 on another channel (+1 hot spare)
    - RAID10 on 10 disks on a single channel + 4 RAID5 disk on the same channel (+1 hot spare)

    I'll post more detailed results later, as today I have received another PERC5/E controller, so will be able to perform more tests; however, so far I'm very surprised because:
1) it seems that peak read performance for a single PERC5/E is about 750MB/s - no matter how many disks I use and what the disk layout is, that's all we can get from the card (I have even tried RAID0 on 30 disks... no joy)... Why? In theory, PCI-e x8 should give 2GB/s, and the controller has 2 connectors x 1200 MB/s each? Am I missing something? I have checked the results with SiSoft Sandra 2007 - and they are quite the same

    2) 14 disks RAID 5 configuration on a single channel IS FASTER (even in WRITE operations) than 14 disk RAID10 on a single channel (!)

3) even more surprising - creating a virtual disk as 2 x 7 disks, one set on each channel:
a) for RAID5 - causes performance DEGRADATION (?)
b) for RAID10 - IMPROVES performance to the level of RAID5 on a single channel

4) there's no point in creating separate drives for data and transaction logs. I've tested that with 10 drives configured as RAID10 (one partition for data) and 4 as RAID5 (separate partition for txlog), and the performance was worse than any other configuration (I mean, both RAID10 and RAID5) as a single virtual disk with a single partition.

As for me, this is quite odd. I have seen several articles on the web about the not-too-brilliant RAID10 performance of PERC cards (actually, they were related to PERC4 SCSI controllers). Can anyone explain that?
  22. joechang New Member

    1st, the PE2950 has 1 x4 and 2 x8 PCIe slots, which is your PERC5/e connected to?
    second, i have never seen any documentation on what a single controller card can drive,
    it should not be assumed that 1 card can saturate a PCIe x8 bus
even though a x4 has a nominal bandwidth in one direction of 1GB/sec, in practice it is usually possible to realize 70-80% of this

    now when i first got the MD1000, i was under the impression that the 2 x4 SAS ports could be used for chaining and load distribution, that is, one set of disks on one port and the other on the second port,
    i am not sure if this mode is supported

    in any case, i strongly suggest you get a second PERC5/E, one for each MD1000
i am confident you can hit 1.5GB/sec as i do with 10 internal disks on the PERC5/i and 15 on the PERC5/e,
    i suspect that even 15 disks is saturating the PERC5 on pure sequential loads

    generally, i prefer to make 2-3 arrays from each external unit, of 5-7 disks each
    this way, you can have 3 x 7 disk arrays or 6 x 4 disk arrays for data, and the remaining disks for log

also, use diskpart or a similar tool to align partitions to 16K or larger (a sketch below)
disable write back cache
RAID 10 small block random writes should in theory be faster than RAID 5,
RAID 5 should have better large block writes than RAID 10, that is, if you are writing the entire stripe in each op
in practice, it is not clear what each controller vendor is actually able to achieve
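a minimal diskpart sketch of the alignment, assuming a diskpart version that supports the align parameter (Windows Server 2003 SP1 and later); the disk number and drive letter are hypothetical, and align is in KB:

diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> create partition primary align=64
DISKPART> assign letter=E

align=64 gives a 64KB starting offset, which satisfies the 16K-or-larger guideline and also lines up with a common 64K stripe unit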
  23. joechang New Member

    Below are comments I made in dtrounce's post,
since it is relevant to storage, i am putting a copy of it here until i can clean everything up into a single post

    --------------------------------------------------------------------------------
    1. Go with a HP ProLiant ML370 G5 or Dell PowerEdge 2900 as opposed to white box for this
    use the configurations similar to TPC-H or TPC-C reports with some substitutions
    a. use 2GB DIMMs instead of 4GB
    b. use 4-6 SAS RAID controllers (4 for Dell, 6 for HP)
    c. your choice of 73GB 15K or 146GB 10K drives
    d. 8 or so external SAS storage units of 12-15 disks each,
    should cost around $60-70K, don't quibble about the price, a SAN is much higher
white box vendors just do not have the expertise to support very heavily loaded disk configurations. 40 or so 7200RPM SATA disks seriously do not cut the mustard

do stay away from a SAN; the worst thing is that the SAN engineer or vendor may want to contribute their recommendations

    2. DW/OLAP is highly sequential, but not purely large block sequential, hence the 10K/15K SAS drives will really help

    3. RAID 5 will probably work, RAID 10 has better small block random write, but RAID 5 is actually better at large block sequential write

4. file placement: instead of one group of disks for data and a second for temp and backup,
combine the disks with 3 partitions, the 1st for data, the second for temp, the third for backup

    --------------------------------------------------------------------------------
    1. you just have to understand what each controller can do,
    do not assume that because it has a PCIe x8 slot that it can do 2GB/sec in each direction simultaneously
    there is also a system limit, i think the Intel 5000P chipset can do 3GB/sec

    I know the PERC5's can do about 800MB/sec on read, which is probably the realizable limit of a PCIe x4
I think this is why HP was smart enough to offer the default config of 6 PCIe x4 slots instead of the 2 x8 + 2 x4 that other vendors offer
    they knew enough to do actual testing

    i think the P800 is really meant to drive more disks for peak random IO, not necessarily to get more sequential, but HP never sent me any test equipment

    2. I generally like to max out the PCI-e slots, 4 for Dell PE2900,
    6 for HP ML370G5.
    I would not complain about 3GB/sec read, 1.2GB/s+ write
    few people even bother to validate capability beyond 200-300MB/sec even when they buy a very expensive san

    3. in pure large block sequential you could get 80MB/sec per disk, meaning 10 disks could max out the controller.
    but in many SQL ops, you will get 30MB/sec per disk, so go ahead and connect 24-30 disks per controller

4. test it, too many talk redundancy but never practice it, ie, they never bother to practice an online disk replacement.
when it actually happens, they goof it, so it is as if they never had redundancy even though they paid for it.
talk is cheap, do it

    5. don't fret this, with enough disks, your backup performance will be fine
    use the end of the SAS disk for fast backup, put flat files there too
    consider getting a few big mother SATA drives in the extra system disk bays to copy the fast backups

    6. consider buying the system, 1 rack of SAS, 1 rack SATA,
    then run a full set of tests
    i think you will see what i am talking about

    there was another guy who compared new 7.2K SATA to older 15K SCSI, a few tests favored new SATA, but other key tests heavily favored 15K
    he just could not understand the difference between small and large block io characteristics

    --------------------------------------------------------------------------------
1. my preference is either 2 x 7 disk LUNs per enclosure + 1 hot spare
or, since this is a DW, not OLTP, 3 x 5 disk LUNs,
    then just buy a few extra disks to keep in the cabinet,
    when a disk fails, change it
    also, be sure to PRACTICE changing the disk before release to production.

    DO NOT USE OS STRIPING
    create 3 partitions on each LUN, the data first, temp second, backup/source last.
if you can accurately size your data & temp partitions, use a raw partition
    i actually like 4 partitions, main db in 2 filegroups, each getting a partition
    + 1 for temp and 1 for backup
    the big table gets its own file group up front, all other tables in the second FG

use the SQL Server capability of splitting a filegroup into multiple files
put 1 data file on each LUN, same with tempdb, for example:
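a minimal sketch of the multi-file filegroup, assuming a database named DW, a filegroup FG1, and three LUNs mounted as E:, F:, G: (all names and sizes hypothetical):

ALTER DATABASE DW ADD FILEGROUP FG1
GO
-- one data file per LUN, so the filegroup spans all the data disks
ALTER DATABASE DW ADD FILE
(NAME = 'DW_FG1_1', FILENAME = 'E:\DW_FG1_1.ndf', SIZE = 160GB)
, (NAME = 'DW_FG1_2', FILENAME = 'F:\DW_FG1_2.ndf', SIZE = 160GB)
, (NAME = 'DW_FG1_3', FILENAME = 'G:\DW_FG1_3.ndf', SIZE = 160GB)
TO FILEGROUP FG1
GO

repeat the same pattern for the second filegroup and for tempdb, with equal file sizes so SQL Server's proportional fill stays balanced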

there is actually a trick for getting the critical data onto the fastest part of the disk
do as above, but create an additional filegroup in the backup location, the slowest part of the disk
load the data into a table on the slow filegroup
then create the clustered index on the fast filegroup, which moves the entire table
do this just for the big table, don't bother for the others (a sketch below)
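a minimal sketch of the trick, with hypothetical filegroup names SLOW_FG and FAST_FG:

-- create and load the big table on the slow filegroup
CREATE TABLE BigTable (ID int NOT NULL, Amount money NOT NULL) ON [SLOW_FG]
-- ... bulk load BigTable here ...
-- creating the clustered index on the fast filegroup moves the entire table there
CREATE UNIQUE CLUSTERED INDEX CI_BigTable ON BigTable (ID) ON [FAST_FG]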

    2. Until you get to 6 controllers + 12 racks of SAS drives (on HP ML370G5, 4 controllers 8 racks on Dell PE2900),
    stick to 146G 10k or 73G 15K

my thoughts are:
never buy a system with the intent to use it for 4+ years as a primary system.
buy for 2 years, then buy a new system for the next 2 years, rolling the previous system to development/QA

let's see: 1 rack of 15 146G drives (decimal, ie 146x10^9 = 136GB binary, 2^30)
assuming 3 x 5-disk RAID 5 LUNs
you have 3x4x136 = 1631GB on each rack, or 543GB per LUN
on each LUN i would probably allocate:
160GB for data (possibly in 2 partitions) (approx 30%, which is the really fast part)
    ~30GB for tempdb (you have to figure out what you need depending on your app)
    leave 350G+ for backup, source & extra
    be sure to use LiteSpeed for backup, later i will tell you the key tuning parameters

    this works out to 3.8TB data on 8 and 5.7TB with 12 racks

    in any case, carefully consider 2x 146G drives vs 1x 300G
    the 2x drives will have higher performance (same capacity) so it is worth an extra 20-30%

    --------------------------------------------------------------------------------
  24. joechang New Member

    below is from the donger post on disk alignment

    put the following into a text file with .vbs extension
    then execute as: cscript filename.vbs
    don't fret every disk, just the big db files


    strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")

    Wscript.Echo "Antecedent, Dependent, StartingAddress, EndingAddress "

    Set colDiskPartitions = objWMIService.ExecQuery _
    ("SELECT * FROM Win32_LogicalDiskToPartition")

    For each objPartition in colDiskPartitions
    Wscript.Echo objPartition.Antecedent _
    & ", " & objPartition.Dependent _
    & ", " & objPartition.StartingAddress _
    & ", " & objPartition.EndingAddress

    Next

  25. TRACEYSQL New Member

    Interesting thread.
I'm not sure where to start... All I know is the disk queue length on our system goes up to 70 most days.

I'm trying to figure out how many bytes of data are read and written to disk, and whether our SAN is adequate or not.

I do not have 32 gig free.

Is there something I can do with fn_virtualfilestats to run over the next few weeks?

How would I know the specs of our SAN disks, i.e. how many MB per second they can handle?
And how do I figure out when someone is doing an 18 GB table scan?
  26. joechang New Member

    just change the values below to the size in MB of the data set you want to test

    @BEnd=32768, @BTotal=32768
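in the create script @BSize works out to @P*@M = 1280 rows per batch, ie 128 pages or 1MB, so @BEnd/@BTotal are effectively the table size in MB; for example, for an 8GB test table:

SELECT @BStart=1, @BEnd=8192, @BTotal=8192, @BSize=@P*@M, @C=10 -- 8,192MB = 8GB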

One thing I really despise about SAN vendors is that they charge so much that you can barely afford the capacity,
when in fact you should really have 4-10X the capacity you would have had if you had stuck with DA disks
  27. ex-oracle New Member

    Chaps,
One thing that has troubled me recently is that when most people display their hardware configs, they show a Dell 2950 or HP DL385 with 4GB of RAM, or maybe an HP DL585 with 8GB of RAM. RAM is now pretty cheap (in 2 gig DIMM form). Why are people not 'frontloading' their systems with memory, therefore taking pressure off their disks?
Isn't money spent on RAM money well spent? (well, within reason)
    Ex-Oracle
  28. TRACEYSQL New Member

I got nothing in the Results table to produce the table scan performance output:
    table scan performance
    ms | CPU-ms | Reads/sec | MB/sec
    63,313 | 1179 | 2421.8 | 519.5


    I get results in C2

    Thanks

Is there a way to capture results on a production box with actual activity, so I can report on what is actually being performed? That would be very useful too.

    I did the following:
    -- This table is used to populate other test tables
    CREATE TABLE I (I int NOT NULL , CONSTRAINT PK_I PRIMARY KEY CLUSTERED (I) )
    GO
    SET NOCOUNT ON
    DECLARE @I int SELECT @I = 1
    BEGIN TRANSACTION
    --WHILE @I <= 100000 BEGIN
    WHILE @I <= 10 BEGIN
    INSERT I (I) VALUES ( @I)
    SET @I = @I+1 END
    COMMIT TRANSACTION
    CHECKPOINT
    GO

    -- Used to store results of "random IO" tests
    CREATE TABLE Results (
    Calls int,
    ms int,
    CallsPerSec real,
    RowsPerCall real,
    RowsPerSec real,
    ReadsPerSec real,
    dt datetime DEFAULT getdate() )

    -- test table with Clustered index,
    -- also test a table without clustered index
    CREATE TABLE C2 (
    ID1 int NOT NULL,
    ID2 int NOT NULL,
    ID3 int NOT NULL,
    ID4 int NOT NULL,
    ID5 int NOT NULL,
    ID6 int NOT NULL,
    ID7 int NOT NULL,
    ID8 int NOT NULL,
    ID9 int NOT NULL,
    ID10 int NOT NULL,
    ID11 int NOT NULL,
    bID1 bigint NOT NULL,
    bID2 bigint NOT NULL,
    rMoney money NOT NULL,
    rDate datetime NOT NULL,
    rReal real NOT NULL,
    rDecimal decimal (9,4) NOT NULL,
    sID smallint NOT NULL,
    sChar char(700) NOT NULL,
    CONSTRAINT [PK_C2] PRIMARY KEY CLUSTERED ( [ID1] ) WITH FILLFACTOR = 100 )
    GO

    -- @BStart -- batch starting set
    -- @BEnd -- batch end set
    -- @BTotal -- total batches, can have multiple concurrent sets active
    -- @BSize -- rows inserted per batch
    -- @BRow -- starting row of batch
    -- @T total rows, @I current row, @C cardinality, @D distinct values, @P rows per page
    -- @D2 10 rows, D3 40, D4 100, D5 300, D6 1000 D7 3000 D8 10000
    ---------1---------2---------3---------4---------5---------6---------7---------8---------9
    SET NOCOUNT ON
    DECLARE @BStart int,@BEnd int,@BTotal int,@BSize int,@BRow int
    ,@T int,@I int,@C int,@P int,@M int
    ,@D2 int, @D3 int, @D4 int, @D5 int, @D6 int, @D7 int, @D8 int, @D9 int, @D10 int
    SELECT @M = 128 -- 128 Pages per MByte
    SELECT @P = 10 -- 10 rows per page, based on row size and fill factor
    -- @BEnd=32768, @BTotal=32768 set to create 32,768MB table, ie, 32GB
    SELECT @BStart=1, @BEnd=500, @BTotal=500, @BSize=@P*@M, @C=10
    SELECT @T=@BTotal*@BSize
    SELECT @D2=(@T-1)/@C+1, @D3=(@T-1)/20+1, @D4=(@T-1)/30+1, @D5=(@T-1)/50+1, @D6=(@T-1)/100+1
    , @D7=(@T-1)/300+1, @D8=(@T-1)/1000+1 , @D9=(@T-1)/3000+1, @D10=(@T-1)/10000+1
    WHILE @BStart <= @BEnd BEGIN
    BEGIN TRAN
    SELECT @I=(@BStart-1)*@BSize, @BRow = @BStart*@BSize
    INSERT C2(ID1,ID2,ID3,ID4,ID5,ID6,ID7,ID8,ID9,ID10,ID11,bID1,bID2,rMoney,rDate,rReal,rDecimal,sID,sChar)
    SELECT I+@I,(I+@I-1)%@D2+1,(I+@I-1)%@D3+1,(I+@I-1)%@D4+1,(I+@I-1)%@D5+1,(I+@I-1)%@D6+1
    , (I+@I-1)%@D7+1,(I+@I-1)%@D8+1,(I+@I-1)%@D9+1,(I+@I-1)%@D10+1, I+@I
    , 2*@T*RAND(CHECKSUM(NEWID())), @T*RAND(CHECKSUM(NEWID())),10000*RAND(CHECKSUM(NEWID()))
    , DATEADD(hour,150000*RAND(CHECKSUM(NEWID())),'1990-01-01')
    , 10000*RAND(CHECKSUM(NEWID())), 10000*RAND(CHECKSUM(NEWID())), (I+@I)%320+1, CONVERT(char(100),NEWID())
    FROM I WHERE I <= @BSize
    SET @BStart = @BStart+1
    COMMIT TRAN
    IF (@BStart%10 -1=0) PRINT 'C2 Step ' + CONVERT(varchar,GETDATE(),121) + ', ' + CONVERT(varchar,@BStart-1)
    + ', row ' + CONVERT(varchar,@BRow) + ', Trancount ' + CONVERT(varchar(10),@@TRANCOUNT)
    END
    PRINT 'C2 Complete ' + CONVERT(varchar,GETDATE(),121) + ', row ' + CONVERT(varchar,@BRow)
    + ', Trancount ' + CONVERT(varchar(10),@@TRANCOUNT)
    GO
    CHECKPOINT
    GO
    UPDATE STATISTICS C2(PK_C2) WITH FULLSCAN
    PRINT 'S1 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID2 ON C2 (ID2,bID1)
    PRINT 'I2 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID3 ON C2 (ID3,bID1)
    PRINT 'I3 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID4 ON C2 (ID4,bID1)
    PRINT 'I4 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID5 ON C2 (ID5,bID1)
    PRINT 'I5 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID6 ON C2 (ID6,bID1)
    PRINT 'I6 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID7 ON C2 (ID7,bID1)
    PRINT 'I7 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID8 ON C2 (ID8,bID1)
    PRINT 'I8 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID9 ON C2 (ID9,bID1)
    PRINT 'I9 ' + CONVERT(varchar,GETDATE(),121)
    GO
    CREATE INDEX IX_ID10 ON C2 (ID10,bID1)
    PRINT 'I10 ' + CONVERT(varchar,GETDATE(),121)
    GO

From the code I do not see where the Results table is updated
  29. joechang New Member

    to TraceySQL

    I used multiple post sections for the scripts
    1. creates table and populates data - you did this
    2. runs in memory tests
    3. random io tests
    4. sequential io tests

    it is really helpful to read the scripts to get a general idea of what is happening
    the description at the top helps too
  30. joechang New Member

    to ex-oracle:
    memory is good, intelligence is better
in general add memory until the impact on disk io is no longer significant

consider that many of the oracle "rules" are so old as to be irrelevant today
so i suggest discarding any rules unless you see the underlying analysis is still valid

    in the really old days, adding memory was good all the way up to the max system memory
    today, it is really not necessary to go up to the system max (32-128GB) for many people

    furthermore, if they are not on SQL 2005 64-bit, they will need enterprise edition to use more than 2G in SQL 2000, and even SQL 2005 32-bit has the AWE overhead

    Even with all the above,
    the key is a balanced system,
    on the hardware side
    this means CPU, memory, disk system (you can include network)
    if you disregard disks
    you will find that certain unavoidable operations will cripple your database
    even if you have all the memory in the world

that's because data still must be written to disk
    unless you get a true in-memory database engine
    and even that will need excellent sequential disk io
  31. ex-oracle New Member

    Joe,
    Thanks for the reply.
I am concentrating on x64 SQL 2005 but must admit I didn't know about the 2 gig limit in SQL 2000 Standard Edition. Needing Enterprise Edition would make things very expensive.
    I spoke to a Microsoft guy on Friday and interesting stuff came up about licensing VMware for SQL Server 2005 SP2.
My beeper's gone off ... what's gone wrong now!!!
    Ex-Oracle
  32. TRACEYSQL New Member

Thanks, I'll try the other parts and see what I get.

Right now I'm getting the scripts to work on my development system.
On production we have a SAN with multiple disks for log and data, 4 different disks...
Do you recommend running this after production hours on our production box?
    Thanks
  33. joechang New Member

depending on the nature of your business,
perhaps wait until next weekend,
then announce that testing will be done between the hours of xx to yy
  34. TRACEYSQL New Member

I will run on our Development Server first,
then get the results.

Perhaps, if possible, I can post them in here so you could help me digest them a little bit first; let me know if I can do this.

    Thank you very much
  35. joechang New Member

    Just for fun
why don't you solicit bets on which performance class
your production storage system falls into,

solicit the vendor who sold you the san too,
the low latency random IO should be measured at 30 rows per query,
the high at 100 rows

    from top post
    Class, Seq Read, Random IOPS (<10ms / >20ms latency)
    AA: 10GB/sec, 40K / 100K
    A : 3GB/sec, 12K / 30K
    B : 1000MB/sec, 4K / 10K
    C : 300MB/sec, 1200 / 3K
    D : 100MB/sec, 400 / 1,000
    E : 30MB/sec, 200 / 300
    F : <30MB/sec, <150 / <300
  36. TRACEYSQL New Member

My bet is it's the SAN.

I have seen the disk queue length go up to 97 on some days...

I asked the network guys to look at the SAN and am still waiting for its configuration.
All I get is DISK M, Q, O, R... to store my data. (Now I want to get into the configuration and check it is correct, and that the data is returned as fast as possible.)

Your scripts, I believe, will help me analyze this.

So I'll keep you posted.

  37. joechang New Member

    replaced by scripts on first page
  38. joechang New Member

    replaced by scripts on first page
  39. joechang New Member

    replaced by scripts on first page
  40. joechang New Member

    replaced by scripts on first page
  41. joechang New Member

    replaced by scripts on first page
  42. bytehd New Member

    Thanks Joe.
    I will give these new versions a whirl.
  43. TRACEYSQL New Member

What is the last script, the update test, for?
  44. bertcord New Member

    “Then on each array group, create 4 partitions.
    The first partition is for the most active main database file group.
    The second partition is for the other main db filegroup.
    The third partition is for tempdb
    The fourth is for backup, and other uses.”

Why do you make 4 partitions? The only benefit I see is that it gives you logical separation. Am I missing another benefit?

    Also why do you recommend 3 array groups of 5 disks from a 15 drive rack? I understand this for SCSI as you have multi SCSI channels but how are you configuring this with SAS drives? How many SAS controllers do you have driving this rack of 15 drives?

    Thanks
    Bert
  45. joechang New Member

    in many databases, almost always a data warehouse,
    but often in transaction processing
    there is one table that is much bigger than all the others

if you enjoy staying up late to watch this table being defragged/reindexed, hoping the defrag/reindex completes before morning
    then disregard

    otherwise, if you like doing something else at night

Put the data for the big table in its own filegroup (a sketch below),
leave the indexes for the big table and the other tables in a separate FG (easier to keep these in the primary)
now the big table will rarely require a defrag.
the assumption is your disk system is adequate to handle the smaller tables and the indexes for the big table
if you went max memory with underpowered disks, you are still screwed.
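a minimal sketch, with a hypothetical big table and a dedicated filegroup BIGTABLE_FG (the data here is a heap; the nonclustered index stays in the primary filegroup):

CREATE TABLE OrderDetail (OrderID int NOT NULL, LineID int NOT NULL, Amount money NOT NULL) ON [BIGTABLE_FG]
CREATE INDEX IX_OrderDetail_OrderID ON OrderDetail (OrderID) ON [PRIMARY]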

the reason each FG is spread across all data disks is so you have max surge power on any single op
15 % 3 = 0: 3 x 5 disk arrays, if you do not need a hot spare
15 % 2 = 1: 2 x 7 disk arrays, if you want the hot spare

    1 SAS adapter typically has 2 x4 ports,
    connect the x4 port to 1 enclosure
    start with 1 adapter for each enclosure
    I would like to have both x4 ports connected to the 1 enclosure
    but most do not support this

    only when you have maxed out your PCI-e slots, do you go to 2 enclosures per adapter
    this is the reason I like the ProLiant ML370G5, and DL585G2
  46. TRACEYSQL New Member

    The command
    DBCC DROPCLEANBUFFERS

is this the same as if you did a shutdown on the server,
i.e. does it clear the memory?

  47. TRACEYSQL New Member

This is more fun, running these scripts, than watching index rebuilds...
    This is where you really find out about your system.

I'm building my scripts now and running them in test to see what I get.

    Thanks for sitting with us on this thread
  48. joechang New Member

    from the BOL description, it says: removes all clean buffers,
    then in the remarks, it says: to test .. with a cold buffer cache

    so what about dirty buffers?
    so i would think that the proper sequence to go cold is:
    1. CHECKPOINT
    2. DBCC DROPCLEANBUFFERS
3. DBCC FREEPROCCACHE (if relevant)
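as a single batch:

CHECKPOINT -- write the dirty pages to disk so they become clean
GO
DBCC DROPCLEANBUFFERS -- now this drops everything, the cache is cold
GO
DBCC FREEPROCCACHE -- clear compiled plans too, if relevant
GO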

    ----------------------------------------------------------------------------
    DBCC DROPCLEANBUFFERS
    Removes all clean buffers from the buffer pool.

    Remarks
    Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server.

    CHECKPOINT
    Forces all dirty pages for the current database to be written to disk. Dirty pages are data or log pages modified after entered into the buffer cache, but the modifications have not yet been written to disk.
  49. bertcord New Member

    Ok lets look at an example based on your strong preference of

    “On each external rack, my strong preference is to make 2 array groups of 6-7 disks”
    And

    “Suppose there is 1 database with 2 data filegroups,
    and that the application makes extensive use of tempdb
    Then on each array group, create 4 partitions.
    The first partition is for the most active main database file group.
    The second partition is for the other main db filegroup.
    The third partition is for tempdb
    The fourth is for backup, and other uses.

    Lets say we have two external arrays with 14 disks each (all RAID10)

    Each array group has 7 disks
    Rack1 - ArrayGroup1 – partition1 – FG1-File1
    Rack1 - ArrayGroup1 – partition2 – FG2-File1
Rack1 - ArrayGroup1 – partition3 – tempdb1
    Rack1 - ArrayGroup1 – partition4 – backup1
    Rack1 - ArrayGroup2 – partition1 – FG1-File2
    Rack1 - ArrayGroup2 – partition2 – FG2-File2
    Rack1 - ArrayGroup2 – partition3 – tempdb2
    Rack1 - ArrayGroup2 – partition4 – backup2
Rack2 – ArrayGroup1 – partition1 – FG1-File3
Rack2 – ArrayGroup1 – partition2 – FG2-File3
Rack2 – ArrayGroup1 – partition3 – tempdb3
Rack2 – ArrayGroup1 – partition4 – backup3
Rack2 - ArrayGroup2 – partition1 – FG1-File4
Rack2 - ArrayGroup2 – partition2 – FG2-File4
Rack2 - ArrayGroup2 – partition3 – tempdb4
Rack2 - ArrayGroup2 – partition4 – backup4

    Why not…

    Each array group has 7 disks
    Rack1 – ArrayGroup1 – Single partition with FG1-File1,FG2-File1, tempdb1
    Rack1 – ArrayGroup2 – Single partition with FG1-File2,FG2-File2, tempdb2
    Rack2 – ArrayGroup1 – Single partition with FG1-File3,FG2-File3, tempdb3
    Rack2 – ArrayGroup2 – Single partition with FG1-File4,FG2-File4, tempdb4

    Or
    Each array group has 14 disks
    Rack1 – ArrayGroup1 Single partition with FG1-File1,FG2-File1, tempdb1
    Rack2 – Array Group2 Single partition with FG1-File2,FG2-File2, tempdb2

In all three examples all the filegroups target all of the spindles, so I don't see the advantage of the partitions. Now you could argue that you could create your most accessed table on the partition you created first, because it will be created on the faster part of the disk, but in most cases I don't see this as that much of a benefit for the overhead of managing capacity for this many partitions. Also, you are very lucky if you can isolate your DB to only one very large table.
  50. TRACEYSQL New Member

    Got my results
Calls | ms | CallsPerSec | RowsPerCall | RowsPerSec | ReadsPerSec | dt
112402 | 30000 | 3746.7334 | 10.0 | 37467.332 | 0.0 | 2007-02-27 11:10:31.027
54006 | 30013 | 1799.4203 | 30.0 | 53982.605 | 0.0 | 2007-02-27 11:11:27.947
17188 | 30000 | 572.93335 | 100.0 | 57293.332 | 0.0 | 2007-02-27 11:12:36.413
1503 | 30000 | 50.099998 | 1000.0 | 50100.004 | 0.0 | 2007-02-27 11:13:44.410
415 | 30030 | 13.819514 | 10.0 | 138.19514 | 142.45755 | 2007-02-27 11:24:07.990
375 | 30060 | 12.47505 | 20.0 | 249.50099 | 242.44844 | 2007-02-27 11:24:48.067
286 | 30046 | 9.5187378 | 30.0 | 285.56213 | 265.29321 | 2007-02-27 11:25:28.143
208 | 30093 | 6.9119062 | 50.0 | 345.59531 | 303.06052 | 2007-02-27 11:26:08.267
127 | 30186 | 4.2072482 | 100.0 | 420.72485 | 350.858 | 2007-02-27 11:26:48.500
49 | 30093 | 1.6282856 | 299.97958 | 488.45245 | 400.15952 | 2007-02-27 11:27:28.623
17 | 32063 | 0.53020614 | 999.58826 | 529.98785 | 419.0188 | 2007-02-27 11:28:10.703

    I ran create table, in memory tests, random io tests, tablescan.

Can you help me digest these?

    Thanks
  51. joechang New Member

    TRACEY
are you sure? many DBAs seem to spend much time watching the index rebuild, oh wait, they are not having fun, never mind

    BERTCORD
    do you want max performance or easy?
    many people are fine with easy,
    but if you want to be the fastest gun, you have to do some work

    anyways
    what i did not mention was that i would use raw partitions
    hence the requirement for multiple partitions
the reason for raw partitions is that windows does funny things on ntfs partitions, preventing you from getting a large contiguous block on the fastest portion of the disk

the reason for 2 arrays in 1 enclosure is greater flexibility in deploying my disks. potentially you might want 1/2 an enclosure for logs,
and you want balanced arrays for data, for example if you had 3.5 racks for data

    now if you had 8+ racks, i can understand 1 array per rack
    a real dba has lots and lots of racks
  52. joechang New Member

    I do not see the table scan results
also, you are not using the latest scripts
    which collect more performance info

    use the scripts from page 3 of this post

    it looks like you are doing 50K rows/sec in memory
    i get 117K/sec on a Xeon 3.2GHz (Netburst core)
    disks
    265 disk reads/sec at 30 rows per call
    419 at 1000 rows per call

    this looks like 1-2 SCSI disks
    you will get higher numbers with more disks
also, if you use only 10% or so of the disk space
you will get a higher ratio between 1000 and 30 rows per call
    this is the short stroke effect

  53. bertcord New Member

Ok, leaving out the raw partitions is rather significant...

Anyhow, this "real dba" needs the ability to perform online DBCC check functions, so I will stay away from the raw partitions. ;)

I am curious to see performance numbers for NTFS vs. RAW. Do you have these?

If your client refused to use raw partitions, how would you configure your drives?
  54. joechang New Member

    if they had reasonable skill level, i would advocate multiple partitions
    if not, multiple files on a single partition

also remember that i said this on the assumption there is one very large table;
if not, then the 2 FG approach does not apply
    it could also apply if you had many static tables
  55. bertcord New Member

Adding this complexity gets you what, other than placing one table at the outer edge of the disk? Chances are queries need to join this large table with others, so I really don't see the benefit unless you are running a query that only looks at this one big table.

    I think your recommended configuration needs to only be implemented with extreme caution and with the certainty that your data will not change.

1. What happens if database access patterns change and one of your tables that was not so active is now very active?
2. What happens if you didn't size your partitions correctly? For example, based on historic data you decide to make your partition1 100GB, but your data ratios change. You could be running out of space on a set of partitions where other partitions have tons of free space.

    Also any benefit you get from having your FG1 at the outer edge of your drives goes out the window as soon as you start writing to tempdb in your configuration.
  56. TRACEYSQL New Member

With the new ones, I did not understand the last script, the update test?

  57. joechang New Member

    it is a loop around an update statement
    does it not work for you?

    UPDATE C2 SET rMoney = 10000*RAND(CHECKSUM(NEWID())) WHERE ID4 = @Group
  58. TRACEYSQL New Member

    So in order to do new tests

    Create table with new structure of results.
    Run In memory random read tests
    Run Random read tests
    Run Table Scan
    Run Update test


    Tracey
  59. joechang New Member

    you can run any of the test scripts you want in any order

    just use the new scripts which collect more performance info
  60. TRACEYSQL New Member

    Could not get back to this for a while..
Got a major issue with NETWORKIO waits showing crazy figures.
In one minute i have 115.43 seconds of network wait, and
at another time, over one hour, i have 5082.58.

Asked the network guys and they are asking me what the problem is....
Databases are all reindexed, stats updated etc, enough space, fine...just suddenly getting this huge wait...

    Got any pointers...sorry to post on this thread
  61. joechang New Member

    do some tests

    how long does it take to open a connection
    to get a small result set
    medium (1000 rows - 200 bytes per row)
    large

    try from different places in the network
    near the db server
where the app servers are, etc
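
a crude way to time the result-set part from each location (a sketch; the table name is a placeholder, and connection-open time is better measured client-side, e.g. Query Analyzer's client statistics):

DECLARE @t datetime
SET @t = GETDATE()
SELECT TOP 1000 * FROM dbo.SomeTable    -- medium case: ~1000 rows, ~200 bytes per row
SELECT DATEDIFF(ms, @t, GETDATE()) AS elapsed_ms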
  62. TRACEYSQL New Member

We have our ERP system where all users connect to the db. I have four other servers, so i will try from all over the place and see the results.

    Thanks very much
  63. TRACEYSQL New Member

I am running the following:
    SET STATISTICS IO ON
    SET STATISTICS TIME ON

    select top 1000 proj_id from table

I got the statistics output.

Open connection? Would this be the figure i got from
Query Analyzer by turning on client statistics:
Time Statistics
Cumulative client processing time
Cumulative wait time on server replies

  64. joechang New Member

    did you ever have any indications of problems from users

    i know MS touts the waitstat method highly,
    but everyone doing so has source code access to SQL Server

    i have never seen anyone outside MS interpret waitstats in a meaningful manner

    just want to make sure you are not chasing a phantom
  65. TRACEYSQL New Member

Nothing.....today i tested all servers, reports etc, from citrix and from outside citrix direct to the server, and all is good now.

Fun, this finding nothing....
  66. awilbourn New Member

I am waiting to hear from my Network Server team on this question, but thought I would post. We have 3 MD1000s and thought we would be able to make a RAID10 (0+1) spanning two cages, but it seems the DELL hardware only allows 16 drives in a single RAID10. We have other Compaq hardware that allows it. Does anyone know how to accomplish this with the DELL equipment? I believe we will just call DELL, but that is not my department; I just give a configuration of what we want. We recently decided to change the number of LUNs after hearing how myspace.com has their SAN configured. Granted, this is not a SAN when we use attached storage, but the concept would still hold true: get 28 disks working together for one cause versus a smaller number (spindle count is what we really want).
  67. joechang New Member

    16 disks per LUN is not unusual
    i know some vendors support more

    usually this is not an issue with databases
    the common practice is to split each database filegroup into multiple files
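
for example, adding files to a filegroup looks something like this (a sketch; the database name, paths and sizes are made up):

ALTER DATABASE MyDB
ADD FILE
    (NAME = MyDB_2, FILENAME = 'E:\MSSQL\MyDB_2.ndf', SIZE = 50GB),
    (NAME = MyDB_3, FILENAME = 'F:\MSSQL\MyDB_3.ndf', SIZE = 50GB)
TO FILEGROUP [PRIMARY]
-- new writes are proportionally filled across the files;
-- existing data is not rebalanced automatically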

    my preference is to start with 1 PERC5/E per MD1000,
    only going to 2 MD 1000 per PERC when i fill the available PCI-E slots

    a second preference is to split each MD1000 into 2 LUNs
    this is for administrative flexibility

    so you mean myspace has their data on one giant super LUN?
maybe they thought splitting the FG into multiple files was too much effort
    or the san vendor convinced them to do it to justify a grossly overpriced san
  68. bertcord New Member

As Joe mentioned, I would just create two LUNs and split your database into two files.

I would not focus too much on how myspace has their database configured since they run their databases on a "non-traditional" SAN called 3Par. When you create a LUN on 3Par it breaks the LUN into 256MB chunks and spreads it across all disks in the system. So even if they had only one LUN it would be spread across all disks in the system.

I am a big fan of the 3Par. We have one EMC CX700 and four 3Par S400s. The EMC is a piece of crap compared to the 3Par.

Not sure if you read this article about myspace but it's interesting.

    http://www.baselinemag.com/print_article2/0,1217,a=198614,00.asp
  69. joechang New Member

    hello bert
long time no post?

    do me a favor if you can
in one of the other posts, someone was concerned about the optimum number of data files,
i do not believe this has any relevance any more
    but i think it would require the 3par to prove it

    create the DB on page 1 of this post, and run the scripts on page 3
    if you can, send me the results

  70. gordy New Member

    I ran these tests on my production SQL server. The results may be erratic since I couldn't get exclusive time to do this.

This is a quad single-core opteron 850 w/16gb of memory (16x1gb), with 3 areca ARC-1160 SATA HBAs, each w/15 10k sata drives attached. This test database had two data files, each on a separate HBA on a 14-drive raid10 volume. Tempdb and log files are on separate volumes on the 3rd HBA.

    new in memory read test
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec
    116.00,"44,347.00","30,016.00",172.00,"1,477.45",10.00,"14,774.45"
    116.00,"81,515.00","30,000.00",604.00,"2,717.17",30.00,"81,515.00"
    116.00,"49,908.00","30,003.00",975.00,"1,663.43",100.00,"166,343.40"
    116.00,"4,338.00","30,003.00",985.00,144.59,"1,000.00","144,585.50"

    random read tests
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,Read MB/sec,Avg-sec/IO
    116.00,232.00,"30,000.00",82.00,7.73,10.00,77.33,155.73,1.22,6.40
    116.00,138.00,"30,050.00",55.00,4.59,20.00,91.85,166.06,1.30,6.00
    116.00,729.00,"30,000.00",103.00,24.30,30.00,729.00,"1,027.00",8.02,8.57
    116.00,760.00,"30,016.00",126.00,25.32,50.00,"1,265.99","1,353.35",10.57,9.80
    116.00,597.00,"30,016.00",104.00,19.89,100.00,"1,988.94","1,954.19",15.27,13.33
    116.00,279.00,"30,143.00",141.00,9.26,300.00,"2,776.73","2,642.80",20.65,23.59
    116.00,108.00,"30,236.00",174.00,3.57,999.94,"3,571.70","3,334.54",26.11,28.79
    116.00,30.00,"30,703.00",166.00,0.98,"2,999.90","2,931.21","2,646.91",20.73,39.20
    116.00,17.00,"30,783.00",188.00,0.55,"9,998.35","5,521.62","4,957.77",38.82,22.14

    table scan performance
    spid,Calls,ms,CPUms,dpages/s,MB,Table MB/sec,ReadsPerSec,Disk MB/sec,Avg-sec/IO
    116.00,1.00,"132,866.00","2,541.00","31,571.25","32,771.45",246.65,"1,236.43",247.56,47.10
    116.00,1.00,"76,863.00","2,041.00","54,574.32","32,771.45",426.36,869.79,428.01,53.42

    update test
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,Read MB/sec,DWritesPerSec,D Write MB/sec,Avg-sec/IO,LWritesPerSec,L Avg-sec/IO
    116.00,"51,490.00","30,003.00",686.00,"1,716.16",30.00,"51,484.85",0.07,0.00,16.73,3.34,0.93,"1,718.76",0.01
    116.00,0.00,326.00,9.00,0.00,0.00,0.00,0.00,0.00,"1,463.19",307.35,1.02,9.20,0.00
    116.00,695.00,"30,033.00",84.00,23.14,30.00,694.24,"1,105.05",8.63,0.00,0.00,8.12,23.21,0.00
    116.00,0.00,716.00,16.00,0.00,0.00,0.00,0.00,0.00,"28,998.60",227.16,0.36,4.19,0.00
    116.00,525.00,"30,003.00",132.00,17.50,100.00,"1,749.83","1,889.58",14.76,0.00,0.00,13.43,17.70,0.00
    116.00,0.00,"1,483.00",20.00,0.00,0.00,0.00,0.00,0.00,"35,002.02",275.23,0.00,2.02,0.00
    116.00,228.00,"30,156.00",166.00,7.56,300.00,"2,268.21","2,231.89",17.44,0.00,0.00,27.18,7.63,0.03
    116.00,0.00,"3,360.00",29.00,0.00,0.00,0.00,0.00,0.00,"20,020.54",158.28,3.98,0.89,0.00
  71. TRACEYSQL New Member

Just checking before i run this....did you do the DBCC DROPCLEANBUFFERS?

I have not done this on a production server before and am not sure what it does..(is it removing everything from the buffer, so that an sp no longer remains in memory after it is executed?).
All old sps etc are removed?
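
(for reference, the usual cold-cache sequence is below; DROPCLEANBUFFERS only discards clean data pages from the buffer pool, so a CHECKPOINT first makes dirty pages clean, and cached plans/procedures are a separate cache)

CHECKPOINT                -- flush dirty pages to disk so they become clean
DBCC DROPCLEANBUFFERS     -- discard clean data pages from the buffer pool
-- DBCC FREEPROCCACHE     -- separately clears cached plans, if that is the intent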

This week one of our databases grew 7 GIG (this is just data, not the logs) and there were blocks on the database all day (my day consisted of KILL commands).

So, question:
I would like to run these on our Production Server as it will pinpoint some problems with the current system, but SQL Server could barely handle the 7 GIG of growth (system processes, data growth all day long).

I am not sure it will handle the 32 GIG table.

Any advice ..


  72. joechang New Member

    On Gordy's results
    if I understand correctly, the data resides on a total of 28 disks
    RAID level is 10

notice the in-memory reads are 166K/s vs our 120K/sec
this is about right for Opt 2.4GHz vs. Xeon 3.2GHz, as this test favors the low memory latency of the Opt.

the random reads seem low and erratic, peaking at 4958/sec or 177 per disk,
but many of the other numbers are from 36-120 per disk,
i was expecting more consistency in the 120-140 range
this could be because you are on a production system, and could not get a fresh format of the partitions

the table scan results of 247 & 428 MB/sec are also much lower than what 28 SATA disks should be able to do (are you SQL 2000 or 2005?)
again, probably because this is a production system and the data space is fragmented (or if you are SQL 2000, which has a silly mistake in the table scan code)

but anyway, notice SAS/SCSI has much better high-queue random read capability than SATA, both in IOPS per disk and latency,
this is because NCQ is not fully implemented in SATA, and is in SCSI/SAS
(it is supposed to be in some SATA drives and controllers, but I don't have the latest; a recent test report on Tom's HWG shows NCQ in SATA)

    now to the write test
first, the checkpoint is not a true random test, but it's not sequential either
    notice that when i write to a small 100MB block of data, the average checkpoint write size is 140-215K
    when writing randomly to 32G (at 2000 rows/s for 30sec, we are modifying approx 60K pages) the average write size is 8K

    while we did randomly update data spread across 32G
    I am assuming SQL is smart enough to checkpoint write in sequence
so i am expecting a total access time (rotational latency + seek) close to 2-3ms for a 10K drive: with the writes ordered, the seeks are short, so it is basically just the 3ms average rotational latency (half of a 6ms revolution at 10K rpm)
or 300-400 IOPS per drive (1 sec / ~2.5-3ms)

    my SAS tests bear this out

    the SATA system shows 35K IOPS or 2.4K per disk (2 disk writes per IOP)
    there is no way a physical disk can do this

    If I had to guess
    I would say that what we are seeing is the OS write cache
    for SAS/SCSI controllers, this is disabled to meet integrity requirements for database log operation
    For the IDE/SATA controller on the motherboard, it is not
    I do not recall if SATA raid cards enable or disable this

    go to Computer Management -> Disk Management
    right click on the SATA raid disk,
    go to Policies, check the status of write cache

another thing is to just increase the run time from 30,000ms to 1,800,000ms, or 30 min
it's probably a better idea to use perfmon, recording the entire physical disk object in log mode, 5 sec sample interval
    then we can see what is really happening

  73. joechang New Member

    to Tracey

these test scripts were not meant for a production db
if you have performance problems on a prod db
just follow my steps in the 2 sticky posts in SQL 2005 Perf for DBA

or go to www.qdpma.com, read the whole of the instructions section
collect the info, and have someone analyse it
  74. awilbourn New Member

    quote:Originally posted by joechang

    16 disks per LUN is not unusual
    i know some vendors support more

    usually this is not an issue with databases
    the common practice is to split each database filegroup into multiple files

    my preference is to start with 1 PERC5/E per MD1000,
    only going to 2 MD 1000 per PERC when i fill the available PCI-E slots

    a second preference is to split each MD1000 into 2 LUNs
    this is for administrative flexibility

    so you mean myspace has their data on one giant super LUN?
maybe they thought splitting the FG into multiple files was too much effort
    or the san vendor convinced them to do it to justify a grossly overpriced san
We found the PERC5/E allows RAID50 to include up to 256 drives, so we are testing with that to have 28 drives allocated as one LUN. Just thought I would pass that along.
  75. gordy New Member

    I'm using sql2005. The OS write cache setting is showing grayed-out/unchecked. However, there is a 1GB SODIMM and BBU installed on my HBAs and the controller write cache setting is enabled.

    I will re-run the update test later tonight with a much longer duration.
  76. gordy New Member

Just finished running the modified update script - I replaced all occurrences of "30000" with "1800000"

    It ran for 2 hours and 6 minutes. I was also watching the perf mon current disk queue length while it ran - it maxed out at 255 for each volume for a while. This is still on the same production server w/background traffic going on.

    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,Read MB/sec,DWritesPerSec,D Write MB/sec,Avg-sec/IO,LWritesPerSec,L Avg-sec/IO
    74.00,"1,568,084.00","1,800,000.00","129,867.00",871.16,30.00,"26,134.73",0.01,0.00,9.38,1.78,2.36,873.05,0.29
    74.00,0.00,360.00,32.00,0.00,0.00,0.00,2.78,0.02,"1,316.67",278.58,0.99,8.33,0.00
    74.00,"45,383.00","1,800,203.00","108,790.00",25.21,30.00,756.30,713.19,5.57,559.60,4.95,53.04,25.26,0.33
    74.00,0.00,"40,750.00","1,161.00",0.00,0.00,0.00,0.02,0.00,"2,974.48",24.13,165.28,0.12,1.20
    74.00,"26,036.00","1,800,050.00","81,762.00",14.46,100.00,"1,446.40","1,258.84",9.83,977.58,8.43,47.09,14.57,0.52
    74.00,0.00,"117,453.00","1,599.00",0.00,0.00,0.00,0.01,0.00,"3,390.46",29.79,173.90,0.03,0.00
    74.00,"11,062.00","1,800,030.00","56,798.00",6.15,300.00,"1,843.63","1,546.80",12.08,"1,194.20",10.57,58.26,6.27,0.11
    74.00,0.00,"149,263.00","2,492.00",0.00,0.00,0.00,0.01,0.00,"3,314.93",29.93,163.58,0.02,0.00
  77. joechang New Member

    we did all this because Michael/surreal imagined deficiencies in his disk systems,
    did not have faith in reason, and the predictability of disks in disk arrays.

    once the controller cache is exceeded, disks behave like disks.
    SCSI/SAS also have better deep queue characteristics, I am sure this is just because NCQ is still new in SATA, and not everything is in place yet

It was also good to know that a large cache on the controller can buffer moderate-size checkpoints; I will get a big DIMM for my controllers
  78. daviobrien New Member

    Here are my results for ML370
    2x5150 (2.66 dual core xeon)
    12 Gig
    12x36 15k SAS SFF Raid 10 for data files
    2xP400 w/512 each

    Memory Random
    spid Calls ms CPUms CallsPerSec RowsPerCall RowsPerSec ReadsPerSec Read MB/sec WritesPerSec Write MB/sec Avg-sec/IO
    58 475437 30000 933 15848 10 158479 0 0 0 0 0.0
    58 179288 30000 936 5976 30 179288 0 0 0 0 0.0
    58 67555 30000 937 2252 100 225183 0 0 0 0 0.0
    58 6341 30013 938 211 1000 211275 0 0 0 0 0.0

    Disk Read Random
    spid Calls ms CPUms CallsPerSec RowsPerCall RowsPerSec ReadsPerSec Read MB/sec DWritesPerSec Write MB/sec Avg-sec/IO
    60 431 30043 11 14 10 143 212 13 0 0 4.7
    60 294 30013 7 10 20 196 252 2 0 0 4.0
    60 1074 30000 18 36 30 1074 1029 8 0 0 7.7
    60 1215 30013 40 40 50 2024 1596 12 0 0 7.5
    60 883 30016 48 29 100 2942 2218 17 0 0 10.4
    60 433 30030 78 14 300 4326 3160 25 0 0 17.7
    60 183 30140 96 6 1000 6072 4366 34 0 0 22.4
    60 70 30373 99 2 3000 6913 4905 38 0 0 19.5
    60 26 30250 126 1 9998 8594 6131 48 0 0 18.7

    Update Test
    spid Calls ms CPUms CallsPerSec RowsPerCall RowsPerSec ReadsPerSec Read MB/sec DWritesPerSec Write MB/sec Avg-sec/IO
    54 61201 30000 812 2040 30 61201 0 0 0 0 0.0
    54 0 293 3 0 0 0 7 0 1584 342 5.0
    54 636 30000 43 21 30 636 1008 63 0 0 12.2
    54 0 466 6 0 0 0 0 0 40751 319 0.0
    54 575 30030 103 19 100 1915 1848 115 0 0 18.8
    54 0 14843 15 0 0 0 0 0 3784 30 61.6
    54 296 30046 135 10 300 2955 2346 147 0 0 33.1
    54 0 24000 22 0 0 0 0 0 3616 29 55.8

    Table Scan
    spid Calls ms CPUms CallsPerSec RowsPerCall RowsPerSec ReadsPerSec Read MB/sec DWritesPerSec Write MB/sec Avg-sec/IO
    53 1 63013 858 66563 32768 520 2607 522 0 0 22.6
    53 1 55996 903 74904 32768 585 1194 587 0 0 37.1
  79. joechang New Member

    looks very reasonable
    in memory bookmark lookups at 220K/s,
    much better than the 120K for the NetBurst Xeon 3.2GHz
    and the Opteron 2.4GHz at 166K/s

    high queue/short stroke random disk io 4-6k/s
checkpoints at 3700/s
    table scan at 587MB/sec

  80. gordy New Member

results from quad dual-core 2.6GHz (Opteron 8218s), 32GB ram and dual P800 SAS arrays.
one data file per HBA, raid10 with 12 15K disks each. logs on a separate dedicated raid10 volume (P400, SAS 15K)

    Joe, something confuses me comparing these numbers with the ones from our SATA box I posted earlier. The SAS shows 15,188 rows per sec but only 2,494 reads per sec. The SATA shows 4,957 reads per sec but only 5,521 rows per sec. I'm guessing the SAS "wins" with more records per sec but can you help me understand what's really happening?


    new in memory read test
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec
    53.00,"423,835.00","30,013.00",931.00,"14,121.71",10.00,"141,217.10"
    53.00,"164,278.00","30,000.00",935.00,"5,475.93",30.00,"164,278.00"
    53.00,"57,221.00","30,000.00",938.00,"1,907.37",100.00,"190,736.70"
    53.00,"4,905.00","30,000.00",937.00,163.50,"1,000.00","163,500.00"

    random read tests
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,Read MB/sec,Avg-sec/IO
    53.00,"2,156.00","30,003.00",14.00,71.86,10.00,718.59,280.94,2.19,3.52
    53.00,"1,612.00","30,033.00",10.00,53.67,20.00,"1,073.49",283.19,2.21,3.48
    53.00,"2,207.00","30,003.00",28.00,73.56,30.00,"2,206.78",489.78,3.83,4.02
    53.00,"2,059.00","30,016.00",46.00,68.60,50.00,"3,429.84",700.59,5.47,3.56
    53.00,"1,470.00","30,013.00",39.00,48.98,100.00,"4,897.88",923.40,7.21,3.80
    53.00,873.00,"30,030.00",68.00,29.07,300.00,"8,721.28","1,579.82",12.34,4.98
    53.00,451.00,"30,220.00",139.00,14.92,999.98,"14,923.66","2,604.93",20.58,8.22
    53.00,148.00,"30,170.00",125.00,4.91,"2,999.72","14,715.21","2,500.86",19.77,8.61
    53.00,46.00,"30,280.00",128.00,1.52,"9,998.24","15,188.87","2,494.72",19.55,7.11

    table scan performance
    spid,Calls,ms,CPUms,dpages/s,MB,Table MB/sec,ReadsPerSec,Disk MB/sec,Avg-sec/IO
    53.00,1.00,"62,030.00","1,273.00","67,617.34","32,768.00",528.26,"2,649.36",530.21,30.11
    53.00,1.00,"52,810.00","1,357.00","79,422.53","32,768.00",620.49,"1,265.73",622.88,45.53

    update test
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,Read MB/sec,DWritesPerSec,D Write MB/sec,Avg-sec/IO,LWritesPerSec,L Avg-sec/IO
    53.00,"76,231.00","30,000.00",806.00,"2,541.03",30.00,"76,231.00",0.00,0.00,0.00,0.00,0.00,2542.97,0.00
    53.00,0.00,453.00,10.00,0.00,0.00,0.00,11.04,0.09,"1,022.08",221.23,3.56,6.62,0.00
    53.00,976.00,"30,000.00",34.00,32.53,30.00,976.00,"1,456.07",11.38,0.00,0.00,7.24,32.67,0.00
    53.00,0.00,860.00,14.00,0.00,0.00,0.00,0.00,0.00,"33,748.84",264.95,2.13,3.49,0.00
    53.00,971.00,"30,016.00",92.00,32.35,100.00,"3,234.94","3,298.41",25.77,0.00,0.00,7.87,32.62,0.00
    53.00,0.00,"13,576.00",40.00,0.00,0.00,0.00,0.00,0.00,"6,922.73",55.33,55.12,0.22,0.00
    53.00,550.00,"30,000.00",138.00,18.33,300.00,"5,499.93","5,262.73",41.12,0.00,0.00,11.03,18.73,0.03
    53.00,0.00,"25,983.00",77.00,0.00,0.00,0.00,0.00,0.00,"6,078.21",48.77,47.49,0.23,0.00

  81. gordy New Member

    BTW, my previous results for quad single-core opt 850 - the cpus are actually 852.
  82. gordy New Member

    Interesting to see the new Xeon outperforming the Opteron in the memory read test!
    2.66 Xeon 225k
    2.61 Opty 191k

  83. joechang New Member

    Gordy,
    on your new results
    your system has too much memory, so most of the database is in memory
    and the random read test does not hit the disks hard

    give some memory back,
    just kidding
    you could have gone with a 128G database,
or you could also restrict SQL to 8GB for this test.
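
restricting SQL Server memory for a test looks like this (a sketch; remember to revert afterwards):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 8192
RECONFIGURE
-- to revert: EXEC sp_configure 'max server memory (MB)', 2147483647, then RECONFIGURE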
  84. gordy New Member

    I'll re-run the test with limited memory as soon as I get a chance. Do you think the 16gb of ram in my SATA box is affecting its results as well?

    Also, what's your take on the memory read results between the opteron and xeon systems? Is this test comparing the performance of intel's MCH against all 4 on-chip controllers or just one? If it's just one, would it be possible to construct a scenario where the opteron system can leverage all of its memory controllers?

    I'm considering using xeons in my next SQL server but I'm concerned with memory bandwidth. I've read that the opteron architecture scales with each cpu having its own memory controller and the intel architecture tends to bottleneck the memory controller as you add cpus. I would love to see that manifested in some query results as the theoretical world and the real world often don't jive.
  85. joechang New Member

the ratio of memory to data size
will be reflected in the ratio of rows/sec to disk reads/sec
the higher the memory-to-data ratio, the more rows fetched per disk read

since we are looking at the disk reads,
so long as the rows/sec are limited by the reads/sec
the results are reasonable

only in your last test with 32G do i suspect you might not be disk limited

on the matter of opteron vs xeon
see my discussion in:
http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=19653
http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=15002

the results here are reasonable based on what is known about each processor
this will teach you how much to believe the marketing bs put out by vendors
and the nitwits that talk technology without actual measurements

my read tests only use a single thread
you might try running multiple sessions to saturate all processors
make sure the begin/commit tran is in the code
monitor SQL Server:Databases->Transactions/sec in perfmon
to estimate the number of rows/sec
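
the per-session driver shape, for reference (a sketch; the read call itself is whichever test you are running from the page 3 scripts):

DECLARE @Start datetime
SET @Start = GETDATE()
WHILE DATEDIFF(ms, @Start, GETDATE()) < 30000
BEGIN
    BEGIN TRAN
    -- one call of the read test goes here
    COMMIT TRAN
END
-- run one copy per processor from separate sessions, and watch
-- SQL Server:Databases -> Transactions/sec in perfmon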
  86. gujsa01 New Member

    Hi Joechang,

    Thanks for your scripts!!

    Our Cluster Hardware configuration is as follows:
Processor: Quad dual-core 2.4GHz Opteron 280 with 8GB RAM.
iSCSI SAN: PS300E with 2 arrays in RAID 50, 28 10K SATA disks in total.
    Operating System: Windows 2003 X64 SP1.

Local drive is 47GB. Each disk within the SAN is 466GB.
Data & logs are on separate volumes.

    These are the results:

In-Memory Read:
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec
    56,191306,30003,396,6376.229,10.0,63762.289
    56,179814,30000,888,5993.7998,30.0,179814.0
    56,54240,30000,934,1808.0,100.0,180800.0
    56,5202,30003,936,173.38266,1000.0,173382.66

    Random I/O
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,Read MB/sec,DWritesPerSec,D Write MB/sec,Avg-sec/IO
    55,185,30000,0,6.1666665,10.0,61.666668,127.26667,0.99427086,0.0,0.0,7.8163939
    55,99,30190,1,3.2792315,20.0,65.584633,126.06824,0.9849081,0.0,0.0,7.9030457
    55,74,30190,1,2.4511428,30.0,73.534286,125.57138,0.98102641,0.0,0.0,7.9380088
    55,485,30003,12,16.165051,50.0,808.2525,1071.7595,8.3916092,0.0,0.0,0.91457272
    55,422,30046,18,14.045131,100.0,1404.5131,1461.925,11.447291,0.0,0.0,0.66479224
    55,205,30203,22,6.787405,300.0,2036.2216,1993.4113,15.610256,0.0,0.0,0.48062518
    55,74,30296,29,2.4425666,999.97296,2442.5007,2358.1331,18.464949,0.0,0.0,0.40284428
    55,28,30673,27,0.91285497,2999.8215,2738.4019,2591.7256,20.296505,0.0,0.0,0.36571652
    55,10,33003,43,0.30300274,9998.2998,3029.5125,2685.7256,21.032179,0.0,0.0,0.35022619

    Table Scans
    spid,Calls,ms,CPUms,dpages/s,MB,Table MB/sec,ReadsPerSec,Disk MB/sec,Avg-sec/IO
    57,1,321670,403,13018.861,32717.008,101.70985,410.62579,102.22108,2.3332553
    57,1,346063,535,12101.198,32717.008,94.540611,381.63861,95.015488,2.5378351


    Update
    spid,Calls,ms,CPUms,CallsPerSec,RowsPerCall,RowsPerSec,ReadsPerSec,Read MB/sec,DWritesPerSec,D Write MB/sec,Avg-sec/IO,LWritesPerSec
    58,25872,30000,305,862.40002,30.0,25872.0,3.3333335E-2,2.6041668E-4,4642.4331,36.269009,1.4719189E-3,863.09998
    58,0,21470,41,0.0,0.0,0.0,0.0,0.0,10598.091,82.797585,0.0,0.13972986
    58,71,30126,2,2.3567681,30.0,70.703049,127.56423,0.99659556,0.0,0.0,7.7861028,2.3567681
    58,0,3156,1,0.0,0.0,0.0,0.0,0.0,675.85553,5.2801213,0.0,0.95057034
    58,23,30736,3,0.74830818,100.0,74.830818,127.4076,0.99664277,0.0,0.0,7.8161368,0.74830818
    58,0,2156,1,0.0,0.0,0.0,0.0,0.0,1068.1818,8.34517,0.0,1.3914657
    58,8,33656,2,0.23769908,300.0,71.309723,121.76135,0.95218909,0.0,0.0,8.1744738,0.23769908
    58,0,3453,4,0.0,0.0,0.0,0.0,0.0,695.91656,5.4368482,0.0,0.86880976


    I'm new to Hardware configuration/tweaking and would appreciate any help with the results mentioned above.

    Thanks.

  87. joechang New Member

    gujsa
    I am assuming this is the new Opteron 82xx with DDR II memory
    not the older one with DDR memory

the in-memory performance numbers correlate with the new one

assuming the data is split in 2 files, each on 14 SATA disks, and the logs elsewhere
the random io at high queue is not as good as SAS, but the latency is unusually low, not sure what is happening

    your table scan is approx 100MB/sec, which is what you can get out of GbitE,
actually pretty good, as over the network I see 90-95MB/sec

    but direct attach disks could do 800-1000MB/sec
    if you are Data Warehouse, you really need to switch to DAS
iSCSI is ok for pure transactions (small queries)
    but even a big report query will saturate the iSCSI link

    I do not really buy into RAID 50
    either RAID 10 or RAID 5 will be better

    do you really think you need RAID 50?

    I have seen multiple disk failures
but I think good environment control affects this more than RAID level
  88. gujsa01 New Member

    Appreciate your response Joechang!!!

This environment (not a DW) is a new setup for our co-lo and we are going to add one more array to the mix. I will be recommending dividing these 3 arrays into 2 for speed (RAID 10) and 1 for space (RAID 50), for data/log and backup volumes respectively.

    Also I'm running IOMeter as well with the following settings:
    Thread----------Drive-----------Access Specification
    Worker 1---------C:------------ 32K 100% Read
    Worker 2---------Data Volume---- 32K 75% Read
    Worker 3---------Log Volume----- 32K 25% Read

    Appreciate your feedback and/or suggestions on this...

    I may be stress testing this new hardware more than I should but being a newbie would like to explore and learn every possible option.

    Thanks.



  89. joechang New Member

    if you are going to buy 1 more array
i suggest direct attach SAS disks

    i do not think IOMeter results will be substantially different for similar IO pattern and queue depth
  90. ssven New Member

Here are my results for an HP ProLiant DL380 G4 with 4 GB RAM (2 x 3.6 GHz Xeon)
SQL 2005 Standard, W2k3 Server Standard
    1xMSA50
    1xP800
    Accel. ratio 25% Read / 75% Write
Stripe size 128 KB
    10x146GB SAS
    3 databases (1xdb with 5 GB and 1 GB logs, 2xdb with 36 GB data and 40 GB logs - all replicated)
    and we are using tempdb a lot

Config today: 6 logical drives, 120GB RAID 1 each:
drives 1-3 for datafiles and tempdb,
drives 4&5 for filegroups with huge tables,
drive 6 for logfiles
Tests done on drive 1 (mdf) and drive 6 (ldf)

    Memory - Random
    spid|calls|ms|cpu-ms|call/s|rows/cal|rows/s|reads/s|rd mb/s|wr/s|wr mb/s|D ms/IO|L wr/sec|L ms/IO
    55|299685|30000|928|9989,5|10|99895|0|0|0|0|0|0|0
    55|117595|30000|935|3919,8|30|117595|0|0|0|0|0|0|0
    55|39740|30000|938|1324,6|100|132467|0|0|0|0|0|0|0
    55|4083|30000|937|136,1|1000|136100|0|0|0|0|0|0|0
    Disk Read - Random
    spid|calls|ms|cpu-ms|call/s|rows/cal|rows/s|reads/s|rd mb/s|wr/s|wr mb/s|D ms/IO|L wr/sec|L ms/IO
    52|556|30000|9|18,5|10|185,3|207,3|1,6|0|0|4,8|0|0
    52|292|30110|16|9,7|20|194|204,5|1,6|0|0|4,9|0|0
    52|1137|30016|23|37,9|30|1136,4|1140,1|8,9|0|0|7,7|0|0
    52|812|30016|28|27,1|50|1352,6|1336,8|10,4|0|0|7,8|0|0
    52|489|30030|37|16,3|100|1628,4|1566,5|12,2|0|0|8|0|0
    52|188|30033|45|6,3|300|1877,9|1773,8|13,9|0|0|8,2|0|0
    52|63|30483|49|2,1|1000|2066,7|1893,8|14,8|0|0|8,4|0|0
    52|30|30220|65|1|2999,8|2978|2712,9|21,2|0|0|5,9|0|0
    52|8|33736|50|0,2|9998,5|2371|2230,3|17,5|0|0|7,2|0|0
    Update Test
    spid|calls|ms|cpu-ms|call/s|rows/cal|rows/s|reads/s|rd mb/s|wr/s|wr mb/s|D ms/IO|L wr/sec|L ms/IO
    55|50855|30000|822|1695,2|30|50855|0|0|0|0|0|1695,2|0
    55|630|30016|25|21|30|629,7|1019,8|8|0|0|8,5|21|0
    55|353|30000|46|11,8|100|1176,7|1330,8|10,4|0|0|8,7|11,8|0
    55|159|30060|65|5,3|300|1586,8|1588,6|12,4|0|0|8,9|5,4|0,02484457
    Table scan
    spid|Calls|ms|CPUms|dpages/s|MB|Table MB/sec|ReadsPerSec|Disk MB/sec|Avg-sec/IO
    52|1|119160|701|35198,9|32768|275|560,5|276,1|3,1
    52|1|127613|806|32867,4|32768|256,8|523,8|257,8|7,2

What can I improve on the current system? We also have a second MSA50 w/ P800 + 10x146GB SAS; how should I configure it for best performance if we also connect it to the server?
  91. merrillaldrich New Member

A quick forum tip y'all: set your query results to "column aligned" and then when you post, enclose the matrix in "code" tags. That will make the results legible in the forum post.
  92. eyechart New Member

    Joe, I am getting an error on the 2nd script from page 1. I wonder if something happened during the conversion to the new forum environment and possibly clipped the bottom bit of text from your code. Can you take a look at that post and make sure that it is all there? thanks again for the excellent work.
    -ec

  93. satya Moderator

I suggest emailing Joe in this case by choosing his name from the left-hand pane. :)
  94. TRACEYSQL New Member

Hi there everyone....I now finally got around to running some IO tests using the SQLIO stress tool.
Results are on http://www.stretchunlimited.com/io.xls
Could you help throw some light on IOs/sec and MB/sec: what are these telling me?
Ran:
sqlio -kW -s360 -frandom -o8 -b8 -LS -Fparam.txt (8192 test file)
TEST SERVER (F disk): IOPS 355.64, MB/sec 2.77
R (RAID 1): IOPS 766.27, MB/sec 5.98
M (RAID 5): IOPS 266.42, MB/sec 66.6
The numbers seem low. Is it that it performed 355.64 IO transactions at a rate of 2.77 MB a second?
When i view some of my results from Quest Software the IO can be 717,715.00 (1 hour period).
Trying to tie the two together and determine how long 717,715.00 IOs would take, or whether this is just too much for the SAN.
Any additional information would be appreciated.
  95. mdefehr New Member

I ran the tests on my new server, an HP DL380 G5 with 18 73GB SFF SAS drives (10 in an MSA50) - the internal 8 are on a P400 controller and the external 10 are on a P800 controller (I don't have the BBWC for the internal controller yet, so its cache isn't configurable; all tests were done on the P800). It's running Server 2003 R2 x64 and SQL 2005 32-bit on one quad-core 2.33 GHz Xeon and 4GB RAM (for now).
I messed around extensively with the cache settings and found that write caching was very helpful. With the battery-backed cache enabler it is, apparently, safe. Read caching made absolutely no difference, of course. My results are posted below - the columns are organized side by side for (hopefully) easier comparison - the values on the left are with write caching enabled, the values on the right with caching disabled (there are really only differences in the Update section, of course). This is what I would expect to see if I listened to my vendor, but Joe - you seem to have some negative opinions about caching - what am I missing?
    TestType Calls Calls ms ms CPUms CPUms RowsPerCall RowsPerCall RowsPerSec RowsPerSec ReadsPerSec ReadsPerSec MBReadPerSec MBReadPerSec DWritesPerSec DWritesPerSec DIOStallMS DIOStallMS LWritesPerSec LWritesPerSec LMBWrittenPerSec LMBWrittenPerSec LIOStallMS LIOStallMS
    MemoryRead 460577 460790 30000 30013 929 931 10 10 153525.7 153530.1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    MemoryRead 178617 178863 30000 30000 935 934 30 30 178617 178863 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    MemoryRead 65625 65637 30000 30000 936 937 100 100 218750 218790 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    MemoryRead 6781 6779 30000 30000 937 938 1000 1000 226033.3 225966.7 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    RandomIO 409 393 30000 30046 4 3 10 10 136.3333 130.7994 277.1333 270.219 2.165104 2.111086 0 0 29900 29944 0 0 0 0 0 0
    RandomIO 232 236 30033 30140 3 3 20 20 154.4967 156.6025 274.4648 276.3105 2.144256 2.158676 0 0 29979 30120 0 0 0 0 0 0
    RandomIO 982 983 30000 30030 21 25 30 30 982 982.018 1272.167 1274.459 9.938802 9.95671 0 0 271794 271964 0 0 0 0 0 0
    RandomIO 760 760 30013 30013 22 21 50 50 1266.118 1266.118 1326.025 1327.225 10.35957 10.36894 0 0 320492 317112 0 0 0 0 0 0
    RandomIO 496 494 30050 30030 21 25 100 100 1650.582 1645.022 1636.639 1637.163 12.78624 12.79033 0 0 390377 388430 0 0 0 0 0 0
    RandomIO 193 197 30140 30093 25 31 300 299.9949 1921.035 1963.879 1872.13 1914.831 14.62602 14.95961 0 0 443407 449053 0 0 0 0 0 0
    RandomIO 56 57 30080 30513 27 31 999.9821 999.9649 1861.669 1867.991 1786.602 1790.876 13.98692 14.02092 0 0 490377 503507 0 0 0 0 0 0
    RandomIO 27 25 30493 30046 29 32 2999.778 2999.84 2656.151 2496.039 2363.657 2353.225 18.50809 18.42461 0 0 508687 499023 0 0 0 0 0 0
    RandomIO 8 8 33470 33513 34 38 9998.25 9998.375 2389.782 2386.745 2217.15 2213.947 17.3635 17.33842 0 0 540233 538310 0 0 0 0 0 0
    TableScan 1 1 90156 88983 685 682 32768 32768 363.4589 368.2501 1824.781 1848.836 364.8618 369.6715 0 0 245963 243873 0 0 0 0 0 0
    TableScan 1 1 69730 69970 812 795 32768 32768 469.9268 468.315 958.6404 955.3523 471.7407 470.1226 0 0 271057 271427 0 0 0 0 0 0
    Update 74245 7419 30000 30000 843 79 30 30 74245 7419 0 0 0 0 0 0 0 0 2475.567 247.5 9.66875 0.9664062 4 29654
    Update 0 0 796 3800 1 2 0 0 0 0 5.025126 1.052632 0.03925879 0.008223685 579.1458 121.3158 14188 4471 3.768844 0.7894737 0.01594888 0.003340872 0 12
    Update 852 761 30003 30013 29 23 30 30 851.9148 760.6703 1306.003 1197.948 10.20315 9.358965 0 0 280556 258369 28.39716 25.35568 0.1109264 0.09904562 0 2662
    Update 0 0 670 23766 12 11 0 0 0 0 0 0 0 0 37943.29 951.864 434313 452272 4.477612 0.1262307 0.01894823 0.0005341796 0 12
    Update 437 436 30023 30030 34 45 100 100 1455.551 1451.881 1552.51 1563.903 12.12898 12.21799 0 0 379042 367920 14.55551 14.51881 0.1705724 0.1701424 0 1572
    Update 0 0 6373 45953 10 12 0 0 0 0 0 0 0 0 6747.842 929.1885 1949236 855498 0.4707359 0.0652841 0.001992047 0.0002762673 0 14
    Update 170 190 30056 30000 48 45 299.9824 300 1696.733 1900 1669.351 1873.133 13.0418 14.63385 0 0 427985 434996 5.722651 6.4 0.193454 0.2166178 0 733
    Update 0 0 9636 55733 14 18 0 0 0 0 0 0 0 0 5136.779 1003.337 2287815 1119522 0.3113325 0.05382807 0.001317488 0.0002277881 0 18
  96. waqar Member

    Hi Joe,
    First of all i would like to say thank you very much for sharing such valuable information.
I need your opinion on one of my new purchases (comments from others are also welcome).
I have the following databases:

DB1 = 198GB (daily update)
DB2 = 185GB (daily insert of 200~500 files - filled with pdf/zip files) used for SELECT purposes
DB3 = 57GB (almost static - filled with pdf/zip files) used for SELECT purposes
DB4 = 29GB (daily insert of 4000~6000 files - filled with pdf/html files) daily growth of 30~40MB
DB5 = 15GB (static) used for SELECT purposes

    Currently i am using
DELL PE 2650: 2 disks in RAID 1 for OS + DB, and 2 disks in RAID 1 for the DB1 log
PV 220 with 10x146GB (10krpm) - RAID 5 (all DBs + tempdb).
    Now i am planning to purchase
3x PowerVault MD1000, each with 15 x 146GB (15krpm).
1 x PE 6950 or PE 2900 (i don't have a high budget for the server though).

With my current system, i am facing a huge problem: Avg. Disk Queue hangs at max when some customers try to extract between 200k~500k records with a simple query that joins 2~3 tables.

  97. nguyenhqtuan New Member

    Hi Joe,
    This is a great thread ... Thank you very much for your time answering questions!
In my environment, I have a DL380 with a P400 controller attached to 8 internal SAS disks. Here is my configuration:
Array 0: RAID 1 (disk 1 and disk 2)
Array 1: the rest of the SAS disks, i.e. 6 disks, using the HP Array Configuration Utility:
Logical Drive 1: RAID 1+0
Logical Drive 2: RAID 5
My question is ... for Array 1 with 6 disks, did I set up software RAID on Logical Drive 1 and Logical Drive 2?
    Thanks,
    nguyenhqtuan
