NTFS Allocation Unit Size | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

NTFS Allocation Unit Size

What NTFS allocation unit size is best for OLTP databases? SQL executables, log files, data files are all located on separate channels. Was thinking that OS and SQL executables should be on the default for the disk (4KB). Data and log files on drives that have an 8KB block size (thinking that OLTP servers do a lot of random IO in 8KB blocks). Or, should we use 4KB or 64KB block sizes for data and log drives? What does SQL Server like best for the data and log drives?
This is a topic of some debate. Personally, I use the defaults. Here are some notes that I found on this subject that may be relevant to your question: According to the book "Microsoft SQL Server 2000 Performance Tuning", the authors recommend using a 64-KB stripe size for RAID arrays whenever possible. Another theory that I have heard suggests that you find the average disk transfer size on your disk subsystem and then set the allocation unit to closely match it. See this article for more details: http://www.winnetmag.com/Articles/Index.cfm?ArticleID=4826&pg=2 Also, there is a presentation by a Unisys employee at: http://216.239.53.100/search?q=cach…rver NTFS allocation unit size&hl=en&ie=UTF-8 that recommends a 64K allocation unit, claiming that their tests indicate better SQL Server I/O performance. Hopefully others will be able to offer their own personal experience in this area. ——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
For SAP R/3 I know 8k is the recommended size when tuning for SQL Server 7.0. /Argyle
Thanks for the responses. I also just found that the 64KB size is recommended in the Microsoft SQL Server 2000 Operations Guide. Because there are so many different answers to this question, depending on who you ask, we will probably do some benchmarking and determine which is best for our implementation (Microsoft Business Solutions – Great Plains Edition). Thanks, ozzie
If you get any good benchmarking results, please share them with us. Thanks. ——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
The 64K allocation unit size for OLAP has its merits. It can help mitigate the negative impact of prefetch in SQL 2000. When you read a large number of rows in a table or index, SQL 2000 will prefetch up to 2000 pages. The added IO can cause a short-term negative impact on performance; in the long term you gain because the pages are now in memory. Prefetch performs 64K IOs when reading the pages. On locally attached SCSI, the difference between doing a 64K IO with a 64K allocation unit size on the file system and a 64K IO with an 8K allocation unit size is about 7%. On Fibre Channel, the performance penalty climbs to a whopping 38%. On the other side of the coin, the penalty for 8K IOs on a 64K allocation unit is about 4% on either locally attached SCSI or Fibre Channel. In situations like OLAP, where you do a lot of reads, you could make a good argument for a 64K allocation unit size. For OLTP, where it’s mostly writes, you’ll see overall performance degrade slightly. John Fullbright
MOSMWNMTK
[email protected]
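To put some numbers on the prefetch behavior described above, here is a minimal back-of-the-envelope sketch in Python (not from the original thread), assuming the standard 8KB SQL Server page size and the 64K prefetch IO size mentioned in the post:

```python
# Back-of-the-envelope arithmetic for SQL 2000 prefetch:
# up to 2000 pages are prefetched, read from disk in 64K IOs.
PAGE_SIZE = 8 * 1024        # SQL Server page size (8KB)
PREFETCH_PAGES = 2000       # maximum pages prefetched by SQL 2000
IO_SIZE = 64 * 1024         # prefetch IO size (64K)

prefetch_bytes = PREFETCH_PAGES * PAGE_SIZE
pages_per_io = IO_SIZE // PAGE_SIZE
io_count = prefetch_bytes // IO_SIZE

print(pages_per_io)   # 8 pages per 64K IO
print(io_count)       # 250 IOs to cover a full 2000-page prefetch
```

So a full prefetch is roughly 16MB of data moved in 250 large IOs, which is why the per-IO penalty figures quoted above (7% on SCSI, 38% on Fibre Channel for mismatched allocation units) can add up during a large scan.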
Could someone explain the different unit sizes here that exist from the OS down to the physical disks? Even if the default allocation unit size is 4096 bytes in Windows NT/2k, I’ve heard that Windows internally will work with 32 kb chunks. How does this affect things? And when you stripe a RAID you also have what’s called a stripe size that you can set, for example 64 kb. How do all these relate? /Argyle
I guess the place to start is the physical disk geometry. Most disks have 64 sectors per track. At 512 bytes per sector, that’s 32K. When we stripe disks in an array, we take chunks, which should be 32K or a multiple of 32K. Any less would read part of a track off one spindle and part of a track off another. This causes an extra IO on the array. You can check the physical geometry of your drives by using diskpar.exe. The command is: diskpar -i [physical drive number]. For physical disk 2, it would be diskpar -i 2.

There are some recommendations floating out there to make the chunk size larger. This is usually an attempt to mitigate the effects of disk alignment. By default, Windows uses the first 63 sectors on a volume for hidden volume data. The partition begins at the 64th sector. This causes extra track stepping, and potentially extra IOs, on the backend. Let’s assume that the file system has been formatted with an 8K allocation unit size, and the chunk size is 32K. One out of four IOs will cross a track boundary, as well as a chunk boundary. Crossing a track boundary adds the head stepping time to that one out of four IOs. Crossing a chunk boundary creates an extra IO on the next chunk. Increasing the chunk size to 64K will not do anything for crossing track boundaries, but it will reduce the number of extra IOs caused by crossing a chunk boundary by half.

A better way to address this problem is to perform disk alignment. Using diskpar -s, we can change the offset to the start of the partition to 64 sectors. This will cause the partition to line up with the first sector on the second track. It eliminates crossing track and chunk boundaries for IO sizes up to 32K. If you are using an IO size larger than 32K, you’ll need to set both the chunk size and allocation unit size appropriately, and choose an appropriate offset when performing disk alignment. For example, if you have a 64K IO size, then you would set the chunk size to 64K or a multiple of 64K, set the allocation unit size to 64K, and use an alignment offset of 128. This will align IOs with track boundaries, and prevent any 64K IO from crossing a chunk boundary. John Fullbright
MOSMWNMTK
[email protected]
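The boundary-crossing arithmetic above can be checked with a short simulation. This is a hypothetical sketch (not from the thread) that counts how many sequential fixed-size IOs cross a chunk (stripe) boundary for a given partition start offset; it reproduces the "one out of four" figure for the default 63-sector offset:

```python
# Count what fraction of sequential fixed-size IOs cross a chunk
# (stripe) boundary, for a partition starting at a given sector offset.
SECTOR = 512  # bytes per sector

def crossing_fraction(offset_sectors, io_size, chunk_size, n_ios=4096):
    start = offset_sectors * SECTOR
    crossings = sum(
        1 for i in range(n_ios)
        if (start + i * io_size) // chunk_size
        != (start + i * io_size + io_size - 1) // chunk_size
    )
    return crossings / n_ios

# Default Windows offset (63 sectors), 8K IOs, 32K chunks:
print(crossing_fraction(63, 8 * 1024, 32 * 1024))    # 0.25 -> one in four
# Aligned at 64 sectors: no 8K IO crosses a 32K chunk boundary.
print(crossing_fraction(64, 8 * 1024, 32 * 1024))    # 0.0
# 64K IOs with 64K chunks and a 128-sector offset: also fully aligned.
print(crossing_fraction(128, 64 * 1024, 64 * 1024))  # 0.0
```

The three cases match the post: the default 63-sector offset makes every fourth 8K IO straddle a 32K chunk boundary, while the 64-sector and 128-sector alignments eliminate the crossings entirely.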