SQL Server Performance

Average Disk Queue Read Length High

Discussion in 'Performance Tuning for Hardware Configurations' started by Pichi63, Aug 1, 2005.

  1. Pichi63 New Member

    Hello,

    We are running Navision 3.70 on SQL 2000. At peak times we have maybe 50 users connected to the database, and the majority of the work they do is querying. Users connect via Terminal Server, and they often complain that everything is going way too slow.

    Here is the hardware setup:

    Compaq DL580 G3 with 4 x 2.7 GHz Xeon CPUs and 6 GB of RAM. The OS is on a mirrored array, the transaction logs are on a four-disk RAID 1+0 array, and the database is on a ten-disk RAID 1+0 array. The database is about 151 GB.

    Software setup:

    Windows Server 2003 Enterprise Edition running SP1, SQL Server 2000 Enterprise Edition running SP3.

    I have monitored the following with Performance Monitor:

    Memory
    Physical Disk
    Process
    Processor
    SQLServer:Access Methods
    SQLServer:Buffer Manager
    SQLServer:Cache Manager
    SQLServer:Databases
    SQLServer:General Statistics
    SQLServer:Latches
    SQLServer:Locks
    SQLServer:Memory Manager
    SQLServer:SQL Statistics
    SQLServer:SQL Settable

    Other than Average Disk Queue Length I see nothing abnormal in those counters. What I see with Average Disk Read Queue Length are sustained averages of 1-4 and maximums of 400, and this can last several minutes.

    I am not a DBA; I am a network admin who has been handed this problem to solve. The hardware setup was recommended to us by Microsoft, so I don't think we have done anything wrong there. Could this be a case of bad code?

    Thanks for your help,

    Pedro
  2. satya Moderator

    See if SQL Server is doing a lot of I/O. Tools like SQL Profiler and the blocker script output can tell you whether particular queries are generating these large I/Os, and tuning or optimizing those individual queries from a statistics and indexing point of view may help from the disk I/O perspective.

    An improper index strategy can add to a disk subsystem workload. For example, instead of doing an Index Seek, the SQL Server Optimizer may choose to do a Table/Index Scan, resulting in a lot of unnecessary I/O.

    Adding appropriate indexes reduces load on disks, thus reducing the disk I/O. If the proper indexes do exist, adding more spindles to the RAID configuration may also help.
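    A rough sketch of how to check this on SQL Server 2000 (the table name is taken from the Profiler output later in this thread; the item number in the WHERE clause is purely hypothetical):

    ```sql
    -- List the existing indexes on the table
    EXEC sp_helpindex 'Aquacity$Item Ledger Entry'
    GO

    -- Show how many logical/physical reads a suspect query generates
    SET STATISTICS IO ON
    SELECT "Entry No_", "Item No_", "Posting Date"
    FROM "Aquacity$Item Ledger Entry"
    WHERE "Item No_" = '70000'   -- hypothetical item number
    SET STATISTICS IO OFF
    GO

    -- Display the query plan: look for Table Scan / Index Scan vs Index Seek
    SET SHOWPLAN_TEXT ON
    GO
    SELECT "Entry No_" FROM "Aquacity$Item Ledger Entry" WHERE "Item No_" = '70000'
    GO
    SET SHOWPLAN_TEXT OFF
    GO
    ```

    If the plan shows a scan where a seek would do, an index on the filtered column is a candidate.
    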

    Refer to these KB articles for further investigation:
    http://support.microsoft.com/default.aspx?scid=kb;en-us;324885&Product=sql2k
    http://support.microsoft.com/?id=298475

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. joechang New Member

    The people who recommend hardware setups often don't know what they are talking about, so that is no guarantee things will work right. Of course, it is still their fault for not recommending the correct hardware configuration.

    If I understand this correctly, your system has 16 disk drives total (2 OS, 4 logs, 10 data)?
    I am guessing the 2 OS drives are in the DL580 itself, connected to either the internal SCSI channels or a RAID controller, and the other 14 are in an external SCSI chassis.
    If so, are the 14 drives connected to 1 SCSI channel or 2? If not 2, I would split them across 2.

    Why are users connecting via Terminal Server to the database server itself?
    There should be a separate server set up as a Terminal Server, which then opens connections to the database server.

    Anyway, try to track down the query that is causing the disk queue of ~400 for several minutes,
    then look at the execution plan for that query:
    does it show parallel operations or not?

    Also note Physical Disk Read Bytes/sec: is it over 200 MB/sec?

    Basically, if you must run a query that scans a large table, then you want enough disk I/O bandwidth to accommodate the scan (typically > 400 MB/sec). Failing that, disable parallel execution for this query, which will lower the disk queue; there is no point running a parallel plan if you cannot read from disk fast enough to feed multiple processors.
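    For the last point, parallelism can be disabled for a single query with a hint rather than changing the server-wide setting (the table and column here are illustrative, borrowed from the Profiler trace later in the thread):

    ```sql
    -- Force a serial plan for this one large scan; other queries keep the
    -- server's normal 'max degree of parallelism' setting
    SELECT SUM("Quantity")
    FROM "Aquacity$Item Ledger Entry"
    OPTION (MAXDOP 1)
    ```
    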
  4. Adriaan New Member

    Also check for anti-virus software monitoring the MDF and LDF files - make sure these files are excluded.
  5. Pichi63 New Member

    Hello Everyone,

    A little more information:

    The array for the OS is on its own controller. The transaction logs and DB are on an external Compaq Modular Storage Array 1000 connected via fiber to a StorageWorks Modular SAN controller.

    The counter for Average Disk Read Bytes/sec is about 260 MB/sec during peak times.

    I have started the process described in Q224587:

    http://support.microsoft.com/default.aspx?scid=kb;en-us;q224587

    But so far nothing is jumping out at me that makes the problem obvious.

    At this time there is no antivirus running on this server.

    We wanted to deploy the server with its own Terminal Server but the client did not want to pay for it.

    Thanks,

    Pedro
  6. Adriaan New Member

    Monitor the memory footprint of each Terminal Server session. It depends on the client software used, but it is not necessarily that high. Do expect issues with client apps that are actually Microsoft Access MDB files, as they are memory hogs.
  7. joechang New Member

    I would then start noting the Read Bytes/sec and Avg. Disk Read Queue Length counters for each drive separately.
    Basically, a single 2 Gbit/sec Fibre Channel connection cannot drive more than 200 MB/sec in each direction, so 260 MB/sec of reads implies at least 60 MB/sec is going to the OS drives, i.e. not through the FC adapter, so something else is hitting disk.
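    That arithmetic as a quick sanity check (treating a 2 Gbit/sec FC link as roughly 200 MB/sec usable per direction, due to 8b/10b encoding, is an assumption; 260 MB/sec is the figure reported in this thread):

    ```sql
    -- Back-of-envelope: how much of the observed read traffic cannot be
    -- coming through the single 2 Gbit/s Fibre Channel link?
    DECLARE @fc_max_mb_s int, @observed_mb_s int
    SET @fc_max_mb_s  = 2 * 1000 / 10   -- 8b/10b encoding: ~200 MB/s usable
    SET @observed_mb_s = 260            -- Avg Disk Read Bytes/sec from PerfMon
    SELECT @observed_mb_s - @fc_max_mb_s AS [MB per sec hitting non-FC disks]
    ```
    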
  8. joechang New Member

    One more thing: try to figure out how much memory the OS needs, plus the total memory needed to support Terminal Services, then reduce the maximum memory SQL Server can use by that amount.
    If you are on SQL Standard, 2 GB is probably good.
    I do not advise using AWE memory or /3GB in this configuration.
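    A sketch of capping SQL Server's memory this way (the 3584 MB value is purely illustrative; measure what the OS and the Terminal Services sessions actually consume first):

    ```sql
    -- 'max server memory' is an advanced option on SQL Server 2000
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- e.g. leave ~2.5 GB of the 6 GB for the OS and Terminal Services
    EXEC sp_configure 'max server memory', 3584   -- value in MB
    RECONFIGURE
    GO
    ```
    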
  9. Pichi63 New Member

    Hello Again,

    The memory use of this server (6 GB total) is more or less the following:

    1.8 GB SQL Server
    1.3 GB FinSQL (Navision client)
    0.7 GB miscellaneous processes

    There is about 2.26 GB of memory available at all times. The server has never had to use virtual memory.

    SQL is not configured to use a fixed amount of memory.

    I am still trying to interpret the SQL Profiler output. Any hints on what I should be looking for?

    Thanks,

    Pedro

  10. satya Moderator

    If the SQL Server edition is Standard Edition then it will not use more than 2 GB at all; you must have Enterprise Edition to take advantage of more memory.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  11. Pichi63 New Member

    Sofware setup:

    Windows Server 2003 Enterprise edition running sp1, SQL Server 2000 Enterprise Edition running sp3.

    Thanks,

    Pedro
  12. ranjitjain New Member

    Check whether any of the fixes included in SP4 are related to your problem.
  13. Pichi63 New Member

    OK, I have something, but I am not a SQL developer, I am a network admin... This particular query is off the scale compared to most: most queries show 10-30 reads and no writes.

    Using SQL Profiler, I found that the following statement generated 16,711 reads and 28 writes, with a CPU of 157 and a Duration of 157.


    INSERT INTO "BD370SQL"."dbo"."Aquacity$Item Ledger Entry" WITH (REPEATABLEREAD, ROWLOCK) ("Entry No_","Item No_","Posting Date","Entry Type","Source No_","Document No_","Description","Location Code","Quantity","Remaining Quantity","Invoiced Quantity","Applies-to Entry","Open","Global Dimension 1 Code","Global Dimension 2 Code","Positive","Source Type","Drop Shipment","Transaction Type","Transport Method","Country Code","Entry_Exit Point","Document Date","External Document No_","Area","Transaction Specification","No_ Series","Prod_ Order No_","Variant Code","Qty_ per Unit of Measure","Unit of Measure Code","Derived from Blanket Order","Cross-Reference No_","Originally Ordered No_","Originally Ordered Var_ Code","Out-of-Stock Substitution","Item Category Code","Nonstock","Purchasing Code","Product Group Code","Transfer Order No_","Completely Invoiced","Last Invoice Date","Applied Entry to Adjust","Correction","Prod_ Order Line No_","Prod_ Order Comp_ Line No_","Service Order No_","Serial No_","Lot No_","Warranty Date","Expiration Date","Return Reason Code","Shipment Method Code","Cantidad ABS","Coste por Cantidad","Subtipo","Cód_ centro","Precio calculado MACRO","Precio medio antes","Precio medio después","Precio medio antes recuento","Precio medio después recuento","Replicado","Salva_precio_coste","Stock Actual Consumo","Compactado") VALUES (@P1,@P2,@P3,@P4,@P5,@P6,@P7,@P8,@P9,@P10,@P11,@P12,@P13,@P14,@P15,@P16,@P17,@P18,@P19,@P20,@P21,@P22,@P23,@P24,@P25,@P26,@P27,@P28,@P29,@P30,@P31,@P32,@P33,@P34,@P35,@P36,@P37,@P38,@P39,@P40,@P41,@P42,@P43,@P44,@P45,@P46,@P47,@P48,@P49,@P50,@P51,@P52,@P53,@P54,@P55,@P56,@P57,@P58,@P59,@P60,@P61,@P62,@P63,@P64,@P65,@P66,@P67)


    Can anyone tell me if this is a possible reason users are saying everything is going so slow?

    Thanks a million,

    Pedro


  14. peterlemonjello New Member

    Is there a trigger on this table?

    "BD370SQL"."dbo"."Aquacity$Item Ledger Entry"
  15. Pichi63 New Member

    Sorry, the only triggers I know about are on guns.....

    Thanks anyway...
  16. satya Moderator

    LOL [:)]
    Run sp_helptrigger against the table for information.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
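    To follow up on the sp_helptrigger suggestion above (the table name comes from the slow INSERT earlier in the thread; 'MyTriggerName' is a placeholder for whatever name the first call reports):

    ```sql
    -- List any triggers defined on the table
    EXEC sp_helptrigger 'Aquacity$Item Ledger Entry'
    GO
    -- If one exists, view its definition
    EXEC sp_helptext 'MyTriggerName'   -- placeholder trigger name
    GO
    ```
    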
  17. joechang New Member

    The CPU and Duration columns in Profiler are in units of milliseconds, so 157 is really too small to be the cause of a problem that lasts several minutes, unless this insert runs repeatedly, thousands of times in a series.
    Try running Profiler with a Duration filter of greater than 10000 to catch the long-running queries.

    Also, you cut off part of this query; somewhere the @P1 ... variables should have been declared.
    I am puzzled by the clause
    WITH (REPEATABLEREAD, ROWLOCK)
    as this is usually used only on the SELECT portion of a transaction (i.e., a set of queries), not on a standalone statement.
    Still, 157 ms is a long time to insert a single row.
  18. Pichi63 New Member

    Well, obviously I am in way over my head. The advice you are all so kindly giving me is Greek to me, but I really would like to understand the problem, so I guess I need to get a few books and start reading.

    At least I was able to convince one developer here (who has no real experience with SQL) that the problem is in the design and not the hard drives. I showed him a few traces, and he has called in a few hotshot DBAs and developers who will come and try to troubleshoot this. Whatever they find, I will publish here for anyone interested.

    Thanks again for helping me. I appreciate it.

    Pedro
