Intel IOP341, IOP348, IOP333: SAS Raid Controllers | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Intel IOP341, IOP348, IOP333: SAS Raid Controllers

Does anyone (JoeChang) know when to expect the controllers with the newer Intel IOP chips?

Here are my summary notes on the research I've culled:
- Areca ARC-1680/1680LP (667 MHz Intel IOP348) — expected July(?) or later (Q2 or Q3 2007)
- ATTO ExpressSAS R348 and R380 (800 MHz Intel IOP348) — supposedly shipping now, but I can't find a price or supplier
- Adaptec 4805SAS (IOP348) — $910, can purchase at

What am I missing?

Here are my non-Intel-IOP34x considerations:
- LSI MegaRAID SAS 8888ELP (uses a 500 MHz PowerPC chip, NOT Intel) — "Coming Soon"
- HP Smart Array P800 — 533 MHz PowerPC
- PERC5 — IOP333 (500 MHz??)

Does anyone have any guesses as to how to compare the PowerPC chips vs. the Intel IOP chips?

We need to place an order next week, so I'm trying to determine whether I should go for the Adaptec and risk a couple of weeks' delay if they end up not being able to deliver the card (per Areca's reply about reliability issues with the chip).

Bottom line: will a card based on the IOP348 or IOP341 make a significant difference over just getting a PERC5? (Where significant means greater than 15-20%.) Oh, and if you can just answer this, you can skip all the hoopla above: what is the most performant SAS RAID controller card out there that I can purchase today?
FYI: We are planning to buy (2) Dell MD1000s with (30) 146GB 15K.5 Cheetah drives.

-----------------------------

Email from Areca:

Dear Sir,

This is Kevin Wang from Areca Technology, Tech-Support Team.
regarding your question, in our schedule, the SAS controllers should available on end of Q2 or Q3.
actually, all our hardware, firmware and drivers are ready. we are just waiting for the processor become more reliable.
because the processor we used have some issues needed be fixed.

Best Regards,

Kevin Wang

Areca Technology Tech-support Division
Tel : 886-2-87974060 Ext. 223
Fax : 886-2-87975970
I would like to thank Kevin of Areca for his frank answer; it's not the kind of answer you would normally get after it has been sanitized by a marketing numnut. It's not as if I get early info on this. I know some HP storage people are using my test scripts, but they do not share info.

I would really like to know what reliability problems affect only SAS drives and not SATA. Is it something with SAS dual ports? I suppose it has to be something with a SAS feature that is not in SATA. Hopefully it's not because they feel SATA does not need reliability.

In general, for a silicon bug found after first silicon, the bug has to be corrected, a new mask made, and the part has to run through the manufacturing process again, which can take a couple of weeks with hand walking; then the validation process has to start over. This is why there is usually not a 1-2 week delay: either it's on time, or it's 3-6 months late. And sometimes, when you are really lucky and the A step actually has no significant bugs, it's 3-6 months early.

Anyway, new technology is great to play with, but there is no need to risk your job and reputation on improvements the people out in user land cannot appreciate. I suggest ordering the PERC5/E for production use, and ordering the IOP34x controller as well (on the PO for the group paying for the prod system); then use the 34x in your lab for 3-6 months, and evaluate it for the next production deployment.

Thanks, Joe, for your concise answer. Your help in these forums is saving a lot of people time and money. -Jonathan
I have been asked to clarify some language that may have been misleading with regards to the Areca support answer that I posted here in this forum. Below are two pieces of correspondence that should speak for themselves:

> -----Original Message-----
> From: Billion Wu [billion.wu _AT_ areca _DOT_ com _DOT_ tw]
> Sent: Wednesday, April 11, 2007 9:35 PM
> To: Jonathan Boarman
> Subject: Re: Ask a question
>
> Hi Jonathan,
>
> You have posted our FAE Kevin Wang in the following link.
>
> It is not corrected in the following part:
> "we are just waiting for the processor become more reliable.
> because the processor we used have some issues needed be fixed."
>
> There is no any issue on the Intel IOP processor on the IOP348. We have
> delivered more than two thousands pieces same series processor IOP341 to our
> customer. The IOP341 has implemented on our 12/16/24 port
> (ARC-1231ML/1261ML/1280ML) SATA RAID adapter. Our SAS RAID adapter is ready
> for waiting Intel latest SAS protocol firmware. We will start to delivery
> the sample for customer to verification.
> Please help us to correct the information that you had posted on the above
> link.
> Our information will confuse to users and unfair to Intel SAS
> controller (IOP348) term.
>
> Regards
> Billion Wu

> -----Original Message-----
> From: Areca Support [support _AT_ areca _DOT_ com _DOT_ tw]
> Sent: Wednesday, April 11, 2007 11:09 PM
> To: Jonathan Boarman
> Subject: Re: Ask a question
>
> Dear Sir,
>
> at first, i have to apologize for my non-clarify reply.
> we didn't released our controller is because we are waiting for the latest
> SAS firmware from Intel.
> the current firmware have some compatible issues with SATA drives now.
> it is not a silicon bug or hardware releated issue.
>
> Best Regards,
>
> Kevin Wang
>
> Areca Technology Tech-support Division
> Tel : 886-2-87974060 Ext. 223
> Fax : 886-2-87975970
Funny things happen when English technical terms are translated to Chinese and then back. The recent terms (less than 100 years old) don't really exist in old Chinese, so either the English term is used, or the translator might find the closest equivalent: I guess "firmware" might translate to "not hard work"; then imagine what that might get translated to coming back to English.

Anyway, I did some additional testing on my PERC 5. I can get 800MB/s in a table scan to 10 disks (JBOD, no RAID) with a heap-organized table. My scripts in the other post have a primary key clustered index, so I changed the script to have a clustered unique index, populated the table, then dropped the index. Also, my 10 internal drives on the PERC5/i were 15K Fujitsu MAX 3036, while my 15 drives in the MD1000 on the PERC5/E were a mix of Seagate Cheetah 15K.4 and Maxtor 15K IIs, neither as good as the Fujitsus in single-drive performance. So even though the PERC5/E was plugged into the PCI-e x8 slot, I could not get better than 800MB/sec with 15 drives, even though I could get that on the 10 Fujitsu drives in the x4 slot.
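The two identical 800MB/sec results are consistent with a single shared ceiling. A minimal back-of-the-envelope sketch; all MB/s figures here are assumptions for illustration (roughly 200 MB/s usable per PCIe 1.0 lane, an ~800 MB/s PERC5 controller ceiling, and rough per-drive sequential rates), not measurements from this thread:

```python
# Model table-scan throughput as the least of three limits:
# aggregate drive bandwidth, PCIe slot bandwidth, controller ceiling.
# All numbers are assumed, illustrative figures.

def slot_mb_s(lanes, usable_per_lane=200):
    """Approximate usable bandwidth of a PCIe 1.0 slot, in MB/s."""
    return lanes * usable_per_lane

def scan_mb_s(n_drives, per_drive_mb_s, lanes, controller_mb_s=800):
    """Throughput capped by drives, slot, and controller, whichever is least."""
    return min(n_drives * per_drive_mb_s, slot_mb_s(lanes), controller_mb_s)

# 10 Fujitsu 15K internal drives (~90 MB/s each, assumed) in the x4 slot:
print(scan_mb_s(10, 90, lanes=4))   # 800 -> slot and controller both cap it

# 15 mixed 15K.4 / 15K II drives (~70 MB/s each, assumed) in the x8 slot:
print(scan_mb_s(15, 70, lanes=8))   # 800 -> the controller, not the slot
```

Under these assumed numbers both configurations land on the same 800MB/sec plateau, which is why moving to the x8 slot did not help: the limit had already moved from the slot to the controller.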

Here's more info that may be helpful for those making purchasing decisions based on IOP chip models. I'm hoping to receive more definitive performance data in the future:

> currently i don't have a IOP348 controller performance test result, so i
> may not able to provide you more detail about it.
> for IOP341, the IOP341 have two seperate internal bus inside, the IOP333
> have one single internal bus only.
> so when the performance reach the controller bus bandwidth limitation, the
> IOP341 can provide you twice performance than IOP333.
> in our test, 8 drives should hit the IOP333 controller bottleneck, more
> drives can't provide you higher performance.
> you can find a document shows the result in our ftp site.
> for IOP341, about 16 drives will hit the controller bottleneck.
> and of course the drive amount may vary with the drive specification.
> if you have 4 drives only, IOP333 have similar performance with IOP341.
> because the bottleneck will be the drives not controller.

Joe, does this seem to mirror your findings? So, for example, let's say we were to purchase two MD1000 enclosures with 30 15K.5 drives connected to PERC5/E controllers. It would seem that we should be using two controllers per enclosure with a 7/8 array split. (I know you prefer JBOD, but that won't fly for our environment.)
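Areca's bus argument can be put in numbers. A minimal sketch, assuming an ~800 MB/s ceiling for the IOP333's single internal bus (twice that for the IOP341's two buses) and ~100 MB/s per drive; these are illustrative assumptions, not vendor specs:

```python
# Drives scale linearly until the controller's internal bus saturates.
# Ceilings and the per-drive rate are assumed figures for illustration.

IOP333_MB_S = 800        # single internal bus (assumed ceiling)
IOP341_MB_S = 2 * 800    # two internal buses, ~2x per Areca

def throughput(n_drives, ceiling_mb_s, per_drive_mb_s=100):
    """Aggregate throughput, capped by the controller's internal bus."""
    return min(n_drives * per_drive_mb_s, ceiling_mb_s)

for n in (4, 8, 16):
    print(n, throughput(n, IOP333_MB_S), throughput(n, IOP341_MB_S))
# 4 drives: both controllers are drive-bound and perform the same.
# 8 drives: the IOP333 hits its ceiling; the IOP341 keeps scaling.
# 16 drives: the IOP341 reaches its own ceiling.
```

This reproduces all three of Areca's claims: parity at 4 drives, an IOP333 bottleneck around 8, and an IOP341 bottleneck around 16 (with the exact drive counts shifting with the drives' own sequential rates).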

I do JBOD so it is easy for me to generate performance vs. number-of-disks curves. 6-8 drives per array is the normal preferred configuration. Depending on which drive you have, the max sequential rate is 80MB/sec for 10K drives, 95MB/sec for most 15K drives, and 120MB/sec for the new Seagate 15K.5, so 6-10 drives can saturate a controller for pure sequential ops. But it is hard to get true sequential disk ops on a live SQL Server; in many cases you might be getting 30-50MB/sec even in table/clustered index scan ops. That's why it is OK to fill an MD1000 for one PERC5, or even have 2 MD1000s for one PERC5. Still, since the PERC5 is relatively cheap compared to one fully populated MD1000, I prefer 1 PERC5 per MD1000 until you fill the PCI-e slots (4 for the PE2900, plus 1 for the PERC5/i); then, if you need more disks for better random IO, it is OK to go 2 MD1000s per PERC5. I do like the HP ProLiant ML370, which has 6 PCI-e x4 slots instead of a smaller mix of x8 and x4, as there are few adapters capable of saturating an x8 slot.

Does Dell let you specify the specific drive? Mine came with a mix of Fujitsu, Maxtor, and Seagate drives, which is very annoying for test purposes.
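The per-drive sequential rates above translate directly into drives-to-saturate counts. A quick sketch, assuming an ~800 MB/s controller ceiling (consistent with the PERC5 figures earlier in the thread, but an assumption on my part):

```python
import math

# Max sequential rates per drive class quoted above, in MB/s.
SEQ_MB_S = {"10K": 80, "15K": 95, "15K.5": 120}
CONTROLLER_MB_S = 800  # assumed controller ceiling

for drive, rate in SEQ_MB_S.items():
    print(drive, math.ceil(CONTROLLER_MB_S / rate), "drives to saturate")
# 10K: 10 drives, 15K: 9, 15K.5: 7 -- i.e. roughly 6-10 drives for
# pure sequential ops. At a live-SQL-Server rate of 30-50 MB/s per
# drive, a full 15-drive MD1000 is only 450-750 MB/s aggregate,
# which still fits under a single PERC5.
```

The same arithmetic shows why a second MD1000 per PERC5 is tolerable for random-IO workloads: the random-access aggregate stays well below the sequential ceiling.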

>> does Dell let you specify the specific drive?
>> mine came with the mix of Fujitsu, Maxtor and Seagate drives, which is very annoying for test purposes
That is definitely annoying. I asked for 15K.5 drives, but the quotes don't list the drive manufacturers, so I will check back on whether Dell can confirm that 15K.5 drives will be used.