Official GIGABYTE Forum

GA-880GMA-USB3 RAID 10 with 4 SSD's

GA-880GMA-USB3 RAID 10 with 4 SSD's
« on: September 12, 2015, 03:52:22 pm »
I'm seeing frequent messages in the Windows 7 Event Log similar to this:

Task b0 timeout on disk (Port Number 1,Target ID 1) at LBA 0x0c24f00 (Length 0x0)

This error can refer to any of the four ports in use.  Until last week I had used this board with a RAID 1 array and two Seagate spinners with no issues.  The problem began immediately after I replaced those drives with four SanDisk Plus 240 GB drives in a RAID 10 array.  I thought perhaps these errors were benign, but last night one of the drives dropped out of the array, and I had to add it back and rebuild in order to get back to a functional array.

May I have thoughts and suggestions on where the problem might be?  I have already made sure that the latest BIOS and drivers are loaded.
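For reference, assuming the drives report the standard 512-byte logical sector size, the LBA in the error message can be converted to a byte offset on the disk. A quick Python sketch (the helper name is mine, just for illustration):

```python
# Convert the LBA from the timeout message to a byte offset,
# assuming the usual 512-byte logical sector size (an assumption;
# check what the drive actually reports).
SECTOR_SIZE = 512  # bytes per logical sector

def lba_to_byte_offset(lba: int, sector_size: int = SECTOR_SIZE) -> int:
    """Return the byte offset on the disk for a given logical block address."""
    return lba * sector_size

# LBA from the event-log message above
lba = 0x0C24F00
offset = lba_to_byte_offset(lba)
print(f"LBA {lba:#x} -> byte offset {offset} ({offset / 2**30:.2f} GiB)")
```

That puts the reported location a few GiB into the disk, so it isn't obviously tied to the start or end of the array.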

Re: GA-880GMA-USB3 RAID 10 with 4 SSD's
« Reply #1 on: September 12, 2015, 09:25:02 pm »
In retrospect, when I defined the array I chose the default stripe size.  I've read that it should be 4K, and I believe the default is 128K.  Could that cause the problem I've described above?

shadowsports

Re: GA-880GMA-USB3 RAID 10 with 4 SSD's
« Reply #2 on: September 13, 2015, 04:53:10 pm »
This is a difficult error to troubleshoot.  I have a few suggestions.

  • Replace the data cables for each disk.
  • Check whether updated firmware is available for the drives: http://kb.sandisk.com/app/answers/detail/a_id/15108/
  • If the same physical drive keeps falling off, I'd perform a Secure Erase on it before re-adding it as a RAID member.

Stripe size on RAID 0: I think you can find an equal number of posts for and against smaller vs. larger stripe sizes.  I am a fan of a larger stripe, 64K or 128K.  This was more of an issue back in the day when we all used spinners.  Now that SSDs are pretty much the norm, a larger stripe provides the best overall read/write performance in the majority of cases, whether it's a boot drive or a data drive.

I suspect the majority of the data you are accessing is much larger than 4K.  Interleaving smaller data chunks between two or more disks adds greater overhead on the controller.  It can handle it, but it really doesn't need to work that hard.  With the high level of performance you get from SSDs, there is no need to attempt manual optimization; the controller and the SSDs will do this for you.
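To put that overhead point in rough numbers, here's a simple sketch (my own simplified striding model, not the controller's actual algorithm) counting how many stripe segments a single request gets split across at different stripe sizes:

```python
# Rough illustration of the interleaving-overhead argument: a single
# request is split into one piece per stripe segment it spans.
# Simplified model of my own, not the RAID controller's real behavior.
def segments_touched(offset: int, length: int, stripe_size: int) -> int:
    """Number of stripe-size segments the request [offset, offset+length) spans."""
    first = offset // stripe_size
    last = (offset + length - 1) // stripe_size
    return last - first + 1

request = 128 * 1024  # a 128 KB sequential read starting at offset 0
for stripe in (4 * 1024, 64 * 1024, 128 * 1024):
    n = segments_touched(0, request, stripe)
    print(f"{stripe // 1024:>3} KB stripe: request split into {n} pieces")
```

With a 4K stripe that one 128 KB read becomes 32 pieces the controller has to dispatch and reassemble; with a 128K stripe it's a single piece.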

FYI  :) You can't just change the stripe size on the fly and test read/write performance.  You would have to rebuild and restore the entire array at each iteration.  Also, please back up your data regularly.  RAID 10 does not mean you are 100% protected.   8)
« Last Edit: September 13, 2015, 06:18:47 pm by shadowsports »
Z390 AORUS PRO (F10) \850w, 9900K, 32GB GSkill TriZ RGB - 16-18-18-38, RTX 3080Ti FTW3 Ultra, 960 Pro_m.2, W11
Z370-HD3P (F5) \750w, 8350K, 8GB LPX 3200 - 16-18-18-38, GTX 970 FTW SC, Intel SSD, 2TB RAID1, W11
Z97X-UD5H \850w, 4790K, 32GB Vengeance, RTX 2080 FTW