Script to Test Disk IO? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Script to Test Disk IO?

I am getting ready to "fight" with our SAN administrators over whether we set up RAID-10 or RAID-5 for the database servers. You see, they have already set up a 30-drive RAID-5 and will be handing pieces of it to servers on the IBM VMware machines being set up. I will be setting up consolidated SQL Servers and want to get as much performance out of them as possible, so I can consolidate as many instances as possible into one VM to save on licensing and reduce maintenance. Right now I have about 70 SQL instances in total; I'll be splitting them into a combination of DEV, TEST, and production consolidated VMs. Anyway, I need to come up with a proposal for the production VM as to how I want the SAN set up. Obviously disk IO is my biggest concern. Everything I read says a combination of RAID-1 and RAID-10 is ideal; they will disagree. I'd like to set up a test environment to compare their RAID-5 against a logically split-out RAID-1 and RAID-10 configuration, then run a process that puts each through its paces to see which is faster. I'm hoping a script (or a few) already exists that someone would share, to help me test just how well the system performs in each scenario. Any help greatly appreciated. Thanks,
Mike
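A quick back-of-envelope calculation of why RAID-10 usually beats RAID-5 for write-heavy database workloads: a small random write costs four physical IOs on RAID-5 (read data, read parity, write both back) but only two on RAID-10 (mirror pair). A rough sketch of the effect on the 30-drive array; the ~180 IOPS per 10K RPM spindle figure and the 70/30 read/write mix are assumptions for illustration, not numbers from this thread:

```python
# Effective logical IOPS of an array under a given read/write mix.
# Assumes each spindle sustains ~180 random IOPS (an assumption, not measured here).
def effective_iops(spindles, read_fraction, write_penalty, iops_per_disk=180):
    """write_penalty: physical IOs per logical write (2 for RAID-10, 4 for RAID-5)."""
    raw = spindles * iops_per_disk
    write_fraction = 1.0 - read_fraction
    # Each logical read costs 1 physical IO; each logical write costs write_penalty.
    return raw / (read_fraction + write_fraction * write_penalty)

# The 30-drive array under an assumed 70/30 read/write mix:
raid5_iops = effective_iops(30, 0.7, 4)    # ~2,842 logical IOPS
raid10_iops = effective_iops(30, 0.7, 2)   # ~4,154 logical IOPS
```

The gap widens as the write fraction grows, which is why the usual advice is RAID-1/RAID-10 for logs and write-heavy data files.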
comments:

As DBA, you must control your own disks, and not share disks with anyone. So get off the SAN and get direct-attached storage for your DBs, and don't think about running performance-critical DBs on a VM.

Try the following. I is just a seed table used to populate the test table:

CREATE TABLE I (I int NOT NULL, CONSTRAINT PK_I PRIMARY KEY CLUSTERED (I))
GO
SET NOCOUNT ON
DECLARE @I int SELECT @I = 1
BEGIN TRANSACTION
 WHILE @I <= 100000 BEGIN
  INSERT I (I) VALUES (@I)
  SET @I = @I+1 END
COMMIT TRANSACTION
CHECKPOINT
GO

CREATE TABLE C2 (
 ID1 int NOT NULL,
 ID2 int NOT NULL,
 ID3 int NOT NULL,
 ID4 int NOT NULL,
 ID5 int NOT NULL,
 ID6 int NOT NULL,
 ID7 int NOT NULL,
 ID8 int NOT NULL,
 ID9 int NOT NULL,
 ID10 int NOT NULL,
 ID11 int NOT NULL,
 bID1 bigint NOT NULL,
 bID2 bigint NOT NULL,
 rMoney money NOT NULL,
 rDate datetime NOT NULL,
 rReal real NOT NULL,
 rDecimal decimal(9,4) NOT NULL,
 sID smallint NOT NULL,
 sChar char(700) NOT NULL,
 CONSTRAINT [PK_C2] PRIMARY KEY CLUSTERED ( [ID1] ) WITH FILLFACTOR = 100 )
GO

Below is the test table population script.
It is important that you set @BEnd and @BTotal to the estimated total size of your database in MB; if you expect 100 GB, make them 102400.

SET NOCOUNT ON
DECLARE @BStart int, @BEnd int, @BTotal int, @BSize int, @BRow int
 , @T int, @I int, @C int, @P int, @M int
 , @D2 int, @D3 int, @D4 int, @D5 int, @D6 int, @D7 int, @D8 int, @D9 int, @D10 int
SELECT @M = 128 -- 128 pages per MByte
SELECT @P = 10  -- 10 rows per page, based on row size and fill factor
SELECT @BStart=1, @BEnd=32768, @BTotal=32768, @BSize=@P*@M, @C=10
SELECT @T=@BTotal*@BSize
SELECT @D2=(@T-1)/@C+1, @D3=(@T-1)/20+1, @D4=(@T-1)/30+1, @D5=(@T-1)/50+1, @D6=(@T-1)/100+1
 , @D7=(@T-1)/300+1, @D8=(@T-1)/1000+1, @D9=(@T-1)/3000+1, @D10=(@T-1)/10000+1
WHILE @BStart <= @BEnd BEGIN
BEGIN TRAN
 SELECT @I=(@BStart-1)*@BSize, @BRow = @BStart*@BSize
 INSERT C2 (ID1,ID2,ID3,ID4,ID5,ID6,ID7,ID8,ID9,ID10,ID11,bID1,bID2,rMoney,rDate,rReal,rDecimal,sID,sChar)
 SELECT I+@I, (I+@I)%@D2+1, (I+@I)%@D3+1, (I+@I)%@D4+1, (I+@I)%@D5+1, (I+@I)%@D6+1
 , (I+@I)%@D7+1, (I+@I)%@D8+1, (I+@I)%@D9+1, (I+@I)%@D10+1, I+@I
 , 2*@T*RAND(CHECKSUM(NEWID())), @T*RAND(CHECKSUM(NEWID())), 10000*RAND(CHECKSUM(NEWID()))
 , DATEADD(hour, 150000*RAND(CHECKSUM(NEWID())), '1990-01-01')
 , 10000*RAND(CHECKSUM(NEWID())), 10000*RAND(CHECKSUM(NEWID())), (I+@I)%320+1, CONVERT(char(100), NEWID())
 FROM I WHERE I <= @BSize
 SET @BStart = @BStart+1
COMMIT TRAN
IF (@BStart%10-1=0) PRINT 'C2 Step ' + CONVERT(varchar, GETDATE(), 121) + ', ' + CONVERT(varchar, @BStart-1)
 + ', row ' + CONVERT(varchar, @BRow) + ', Trancount ' + CONVERT(varchar(10), @@TRANCOUNT)
END
GO

CREATE INDEX IX_ID4 ON C2 (ID4, bID1)
PRINT 'I4 ' + CONVERT(varchar, GETDATE(), 121)
GO
CREATE INDEX IX_ID6 ON C2 (ID6, bID1)
PRINT 'I6 ' + CONVERT(varchar, GETDATE(), 121)
GO
CREATE INDEX IX_ID8 ON C2 (ID8, bID1)
PRINT 'I8 ' + CONVERT(varchar, GETDATE(), 121)
GO

Below are the three test queries, for low, medium, and high queue depth read IO. Each is set to run for 30 seconds (the run time is in milliseconds, currently 30000).

-- disk read test (IX_ID4, ~30 rows per call)
SET NOCOUNT ON
DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms real, @CPU bigint, @IO bigint
SELECT @M=MAX(ID4), @bID=COUNT(ID4) FROM C2 WITH (INDEX(IX_ID4))
SELECT @N=0, @bID=0, @Cnt=0, @DT1=getdate(), @CPU=@@CPU_BUSY, @IO=@@IO_BUSY
WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
 SELECT @Group=@M*rand()+1, @N=@N+1 SELECT @bID=SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH (NOLOCK) WHERE ID4=@Group END
SELECT @ms=DATEDIFF(ms,@DT1,getdate()) SELECT @N [Calls], @ms [ms], CONVERT(real,@N)/@ms [Calls/sec], @Cnt/CONVERT(real,@N) [Rows/call]
 , @Cnt/@ms [K Rows/sec], (@@CPU_BUSY-@CPU)*@@TIMETICKS/1000 [CPU], (@@IO_BUSY-@IO)*@@TIMETICKS/1000 [IO]
GO

-- disk read test (IX_ID6, ~100 rows per call)
SET NOCOUNT ON
DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms real, @CPU bigint, @IO bigint
SELECT @M=MAX(ID6), @bID=COUNT(ID6) FROM C2 WITH (INDEX(IX_ID6))
SELECT @N=0, @bID=0, @Cnt=0, @DT1=getdate(), @CPU=@@CPU_BUSY, @IO=@@IO_BUSY
WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
 SELECT @Group=@M*rand()+1, @N=@N+1 SELECT @bID=SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH (NOLOCK) WHERE ID6=@Group END
SELECT @ms=DATEDIFF(ms,@DT1,getdate()) SELECT @N [Calls], @ms [ms], CONVERT(real,@N)/@ms [Calls/sec], @Cnt/CONVERT(real,@N) [Rows/call]
 , @Cnt/@ms [K Rows/sec], (@@CPU_BUSY-@CPU)*@@TIMETICKS/1000 [CPU], (@@IO_BUSY-@IO)*@@TIMETICKS/1000 [IO]
GO

-- disk read test (IX_ID8, ~1000 rows per call)
SET NOCOUNT ON
DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms real, @CPU bigint, @IO bigint
SELECT @M=MAX(ID8), @bID=COUNT(ID8) FROM C2 WITH (INDEX(IX_ID8))
SELECT @N=0, @bID=0, @Cnt=0, @DT1=getdate(), @CPU=@@CPU_BUSY, @IO=@@IO_BUSY
WHILE DATEDIFF(ms,@DT1,getdate()) < 30000 BEGIN
 SELECT @Group=@M*rand()+1, @N=@N+1 SELECT @bID=SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH (NOLOCK) WHERE ID8=@Group END
SELECT @ms=DATEDIFF(ms,@DT1,getdate()) SELECT @N [Calls], @ms [ms], CONVERT(real,@N)/@ms [Calls/sec], @Cnt/CONVERT(real,@N) [Rows/call]
 , @Cnt/@ms [K Rows/sec], (@@CPU_BUSY-@CPU)*@@TIMETICKS/1000 [CPU], (@@IO_BUSY-@IO)*@@TIMETICKS/1000 [IO]
GO

table scan test:
SET NOCOUNT ON
DECLARE @N int, @M int, @Group int, @Cnt int, @bID bigint, @DT1 datetime, @ms real, @CPU bigint, @IO bigint
SELECT @N=0, @bID=0, @Cnt=0, @DT1=getdate(), @CPU=@@CPU_BUSY, @IO=@@IO_BUSY
 SELECT @Group=@M*rand()+1, @N=@N+1 SELECT @bID=SUM(bID2), @Cnt=@Cnt+COUNT(*) FROM C2 WITH (NOLOCK) WHERE ID11=@Group -- ID11 is not indexed, so this forces a full scan
SELECT @ms=DATEDIFF(ms,@DT1,getdate()) SELECT @N [Calls], @ms [ms], CONVERT(real,@N)/@ms [Calls/sec], @Cnt/CONVERT(real,@N) [Rows/call]
 , @Cnt/@ms [K Rows/sec], (@@CPU_BUSY-@CPU)*@@TIMETICKS/1000 [CPU], (@@IO_BUSY-@IO)*@@TIMETICKS/1000 [IO]
GO

I will provide write queries after you provide some feedback.
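As a side note, the @P = 10 rows-per-page assumption in the population script above follows from C2's fixed row size. A rough check (the row-overhead byte counts are approximations, not from the thread):

```python
# Approximate C2 row size: 11 int + 2 bigint + money + datetime + real
# + decimal(9,4) + smallint + char(700), all fixed-length columns.
fixed_data = 11*4 + 2*8 + 8 + 8 + 4 + 5 + 2 + 700      # = 787 bytes
row_overhead = 7 + 3 + 2   # row header + null bitmap + slot-array entry (approximate)
page_payload = 8096        # usable bytes in an 8 KB SQL Server page
rows_per_page = page_payload // (fixed_data + row_overhead)   # -> 10
```

At FILLFACTOR = 100 the pages pack fully, so 1 MB holds roughly @P * 128 = 1280 rows, which is exactly the @BSize each loop iteration inserts.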
Joe:

Thanks a ton!

Well... unfortunately, I have no choice in the matter with regard to VM, at least unless I can prove that our VM environment can't handle it, which I don't think I'll be able to do beyond a reasonable doubt. The VM environment was basically thrust on all of us by upper management; each VM host is an 8-way dual-core IBM with 64 GB of RAM. So I have to work with it, for better or worse. The troubling thing, in my opinion, is that it will be extremely difficult for me to troubleshoot resource issues if the VM host is hit hard. Ultimately, though, if I can prove that the DB will harm other VMs on that host, I'll likely get dedicated hardware. :)

My goal in this testing is really twofold. First, I want to prove that RAID-10 is better than RAID-5. Second, I'd love for the results to absolutely hammer the IO from the entire VM host to the SAN in general. This may or may not occur; we shall see. I plan to hit it hard, though. We have some SQL systems that generate a ton of disk IO, so this testing isn't unrealistic.

In all honesty, in the back of my mind I'm working toward regaining full control of the hardware, which was the setup at my previous employer. I definitely don't want to take over all responsibilities, but I think our DBA group should have final say with regard to all configurations and monitoring, whereas now the Server group has taken control of almost everything. We'll see what happens, I guess.

Anyway, I'd definitely love to have the write queries to use as well. I'll probably have an environment to start pounding on in a week or so. I'll be putting in a requisition to set up the VMs with the drives the way I want soon, and I expect things to get interesting at that point. :) Gotta love IT politics and power plays.

Thanks, and I'll report back as I move forward.

Mike
I'm not that much into hardware configuration, but maybe this one will give you some arguments against RAID 5: http://www.baarf.com And then there are:
http://www.microsoft.com/downloads/…53-893a-4aaf-b4a6-9a8bb9669a8b&DisplayLang=en
http://support.microsoft.com/default.aspx?scid=kb;EN-US;231619
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today? http://www.insidesql.de/blogs
Frank: Excellent stuff there. I had forgotten about SQLIOStress, even though I've seen it before. The BAARF thing is awesome. Thanks!
Mike
My inclination is to stay away from the RAID 5 vs. RAID 10 argument; I prefer a set of tests. The following events should not disrupt transactions:
1. nolock table scan (700MB/sec)
2. checkpoint (a massive dump of all dirty pages into the write queue)
3. t-log backup (mixing read & writes disrupting the pure sequential op)
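If the array really sustains the 700 MB/sec quoted in item 1, the expected wall time for a full nolock scan is just size divided by rate. A quick sketch (the 100 GB figure is only an example; it also assumes the table is read from disk, not cache):

```python
# Expected wall time for a sequential nolock table scan at a given rate.
# Assumes the table is read from disk once, with no buffer-cache hits.
def scan_seconds(table_mb, mb_per_sec=700.0):
    return table_mb / mb_per_sec

scan_seconds(100 * 1024)   # a 100 GB table at 700 MB/sec: ~146 seconds
```

A scan that takes much longer than this estimate suggests the array is not delivering its rated sequential throughput to the database.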
Joe, thank you very much for providing your script. I modified it to capture test results when run repeatedly (say, 10 times against a 40 GB C2 database) to get an average.

Situation: we plan to upgrade one of our larger clients' 40 GB database servers from SQL 2000 to SQL 2005, but we want to see if the existing hardware is up to the task. I am trying to understand the significance of the captured columnar values from your script when I run it on SQL 2000 and SQL 2005 for a sort of apples-to-apples comparison.

Here are some average run values over 10 executions:

Type              ElapsedTime  Calls  RunMillisec  RowsPerCall  KBRowsPerSec  CpuTime  IOTime
C2Large Average   00:01:23        19       30,839         1000          0.62   29,735     490
C2Medium Average  00:01:27       151       30,101          100          0.50   28,833     414
C2Small Average   00:01:06       415       30,033           30          0.41   28,817     398
C2Scan Average    00:08:55         1      535,173            0          0.00   94,846  31,456

It took over 3 hours to build C2 with no load on the system (IBM server with 4 hyperthreaded Xeon 2.8 GHz CPUs, 8 GB RAM, RAID 5 10K RPM drives). I plan to run it on the same system with Windows 2003 SP1 and SQL 2005 SP1, with RAID 10 segregated into logical partitions by spindle for logs (1), data (2), and tempdb (1).

Any help you can provide would be appreciated.

Thank you,

jim

P.S.
here is the supporting table & code:

--
-- Here is the structure of the C2Stats table that captures your select statements:
--
USE [StressTest]
GO
/****** Object: Table [dbo].[C2Stats] Script Date: 10/02/2006 16:54:29 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[C2Stats](
 [C2StatsID] [int] IDENTITY(1,1) NOT NULL,
 [Type] [varchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [RunTime] [datetime] NOT NULL CONSTRAINT [DfC2StatsRunTime] DEFAULT (getdate()),
 [Calls] [int] NULL,
 [MS] [real] NULL,
 [CallsPerSec] [real] NULL,
 [RowsPerCall] [real] NULL,
 [KRowsPerSec] [real] NULL,
 [CPUTime] [bigint] NULL,
 [IOTime] [bigint] NULL,
 [GroupID] [int] NOT NULL CONSTRAINT [DF_C2Stats_Group] DEFAULT (0),
 CONSTRAINT [PkC2Stats] PRIMARY KEY CLUSTERED
(
 [C2StatsID] ASC
) WITH FILLFACTOR = 99 ON [PRIMARY]
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF

--
-- Here is the stored procedure for running the tests -- it generates 3 rows per group (start, your data, end)
--
USE [StressTest]
GO
/****** Object: StoredProcedure [dbo].[sqlrunstresstest] Script Date: 10/02/2006 16:55:50 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE procedure [dbo].[sqlrunstresstest]
as
set nocount on
--
-- Small disk read test
--
declare @groupid int
select @groupid = max( groupid ) + 1 from dbo.c2stats with (nolock)
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Small', getdate(), @groupid )
declare @n int, @m int, @group int, @cnt int, @bid bigint, @dt1 datetime, @ms real, @cpu bigint, @io bigint
select @m=max(id4), @bid=count(id4) from c2 with (index(ix_id4))
select @n=0, @bid=0, @cnt=0, @dt1=getdate(), @cpu=@@cpu_busy, @io=@@io_busy
while datediff(ms,@dt1,getdate()) < 30000
begin
 select @group=@m*rand()+1, @n=@n+1
 select @bid=sum(bid2), @cnt=@cnt+count(*)
 from stresstest.dbo.c2 with (nolock)
 where id4=@group
end
select @ms=datediff(ms,@dt1,getdate())
insert into stresstest.dbo.c2stats ( type, calls, ms, callspersec, rowspercall, krowspersec, cputime, iotime, groupid )
select 'C2Small', @n [calls], @ms [ms], convert(real,@n)/@ms [calls/sec], @cnt/convert(real,@n) [rows/call], @cnt/@ms [k rows/sec], (@@cpu_busy-@cpu)*@@timeticks/1000 [cpu], (@@io_busy-@io)*@@timeticks/1000 [io], @groupid
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Small', getdate(), @groupid )
--
-- Medium disk read test
--
select @groupid = max( groupid ) + 1 from dbo.c2stats with (nolock)
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Medium', getdate(), @groupid )
select @m=max(id6), @bid=count(id6)
from c2 with (index(ix_id6))
select @n=0, @bid=0, @cnt=0, @dt1=getdate(), @cpu=@@cpu_busy, @io=@@io_busy
while datediff(ms,@dt1,getdate()) < 30000
begin
 select @group=@m*rand()+1, @n=@n+1
 select @bid=sum(bid2), @cnt=@cnt+count(*)
 from c2 with (nolock)
 where id6=@group
end
select @ms=datediff(ms,@dt1,getdate())
insert into stresstest.dbo.c2stats ( type, calls, ms, callspersec, rowspercall, krowspersec, cputime, iotime, groupid )
select 'C2Medium', @n [calls], @ms [ms], convert(real,@n)/@ms [calls/sec], @cnt/convert(real,@n) [rows/call], @cnt/@ms [k rows/sec], (@@cpu_busy-@cpu)*@@timeticks/1000 [cpu], (@@io_busy-@io)*@@timeticks/1000 [io], @groupid
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Medium', getdate(), @groupid )
--
-- Large disk read test
--
select @groupid = max( groupid ) + 1 from dbo.c2stats with (nolock)
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Large', getdate(), @groupid )
select @m=max(id8), @bid=count(id8)
from c2 with (index(ix_id8))
select @n=0, @bid=0, @cnt=0, @dt1=getdate(), @cpu=@@cpu_busy, @io=@@io_busy
while datediff(ms,@dt1,getdate()) < 30000
begin
 select @group=@m*rand()+1, @n=@n+1
 select @bid=sum(bid2), @cnt=@cnt+count(*)
 from c2 with (nolock)
 where id8=@group
end
select @ms=datediff(ms,@dt1,getdate())
insert into stresstest.dbo.c2stats ( type, calls, ms, callspersec, rowspercall, krowspersec, cputime, iotime, groupid )
select 'C2Large', @n [calls], @ms [ms], convert(real,@n)/@ms [calls/sec], @cnt/convert(real,@n) [rows/call], @cnt/@ms [k rows/sec], (@@cpu_busy-@cpu)*@@timeticks/1000 [cpu], (@@io_busy-@io)*@@timeticks/1000 [io], @groupid
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Large', getdate(), @groupid )
--
-- table scan test
--
select @groupid = max( groupid ) + 1 from dbo.c2stats with (nolock)
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Scan', getdate(), @groupid )
select @n=0, @bid=0, @cnt=0, @dt1=getdate(), @cpu=@@cpu_busy, @io=@@io_busy
select @group=@m*rand()+1, @n=@n+1 select @bid=sum(bid2), @cnt=@cnt+count(*) from c2 with (nolock) where id11=@group
select @ms=datediff(ms,@dt1,getdate())
insert into stresstest.dbo.c2stats ( type, calls, ms, callspersec, rowspercall, krowspersec, cputime, iotime, groupid )
select 'C2Scan', @n [calls], @ms [ms], convert(real,@n)/@ms [calls/sec], @cnt/convert(real,@n) [rows/call], @cnt/@ms [k rows/sec], (@@cpu_busy-@cpu)*@@timeticks/1000 [cpu], (@@io_busy-@io)*@@timeticks/1000 [io], @groupid
insert into stresstest.dbo.c2stats ( type, runtime, groupid ) values ( 'C2Scan', getdate(), @groupid )
set nocount off

--
-- Here is the view of the results:
--
USE [StressTest]
GO
/****** Object: View [dbo].[C2StatInfo] Script Date: 10/02/2006 16:56:29 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create view [dbo].[C2StatInfo]
as
select d.groupid, d.Type, t.elapsedtime, d.runtime, d.calls, d.ms, d.callspersec, d.rowspercall, d.krowspersec, d.cputime, d.iotime
from
(select groupid, Type, runtime, calls, ms, callspersec, rowspercall, krowspersec, cputime, iotime
 from dbo.C2Stats
 where calls is not null
) as d
inner join
(select groupid
 , max( runtime ) - min( runtime ) as elapsedtime
 from dbo.C2Stats
 where calls is null
 group by groupid
) as t on d.groupid = t.groupid

--
-- The query
--
use stresstest
go
select Type, runtime as RunDate, ElapsedTime, Calls, ms as RunMillisec, RowsPerCall, krowspersec as KBRowsPerSec, CPUTime, IOTime
from dbo.C2StatInfo
order by groupid;
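For what it's worth, Jim's C2Scan average implies a sequential rate far below Joe's 700 MB/sec yardstick. A rough estimate (it assumes the ~40 GB table is read from disk once per scan, with little of it cached):

```python
# Sequential throughput implied by the C2Scan average above:
# 535,173 ms per scan of a ~40 GB table, read once from disk (assumed).
table_mb = 40 * 1024
scan_sec = 535173 / 1000.0
mb_per_sec = table_mb / scan_sec   # roughly 77 MB/sec
```

That is in the ballpark of what a busy shared RAID-5 group delivers to one host, and nowhere near the array's theoretical sequential rate.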
Try the new scripts below; they capture more important information on disk activity.
I consider 4 racks of disks (10-15 disks per rack) the minimum for a 4-processor system, regardless of capacity considerations: http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=16995
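That sizing rule works out to roughly the following spindle counts and raw random-IO capacity (the 180 IOPS per 10K RPM spindle figure is an assumption for illustration, not from the thread):

```python
# Spindle count and raw random IOPS implied by "4 racks of 10-15 disks
# for a 4-proc system" (180 IOPS per spindle is an assumed figure).
racks, procs, iops_per_disk = 4, 4, 180
low, high = racks * 10, racks * 15                        # 40 to 60 spindles
raw_iops = (low * iops_per_disk, high * iops_per_disk)    # 7,200 to 10,800 IOPS
spindles_per_proc = (low // procs, high // procs)         # 10 to 15 per processor
```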
