This might be interesting for some.
We bought two PS6000s in 10/09.
One came with a bum controller card, which was replaced quickly. Then in 12/09 we lost a drive on the same PS6000. Since then, the PS6000s have not had issues (as of Jan 27, 2010).
We also bought a PS4000 at the same time.
It lost its first drive on 12/12/09,
then its second drive on 12/15/09,
then its third drive on 1/26/10.
This has been a little too frequent for my tastes…
No data has been lost and the failovers have worked flawlessly. It makes me wonder about that whole SATA versus SAS reliability thing though.
Knock on wood, but so far no failures on my PS5000. I did have an experience about 3 years ago on an old server (gray box). A 3ware controller kept reporting, in sequential order, that drive x was bad and needed to be replaced. I would swap that one out, and then it would flag the next one as bad. It turned out that just one of the drives had old firmware and wasn’t reporting to the controller correctly — the same behavior as when you put a non-RAID-compliant SATA drive in a RAID array. Not saying that is the case here, but it rang a bell from a week in my life that wasn’t very fun.
Dumb question, but is your PS4000 racked or sitting on the floor? Vibration is supposedly the major factor in drive failures…
Interestingly enough, not racked but on top of a crate for the moment. That would be an interesting study, though: two identical SANs, one racked and the other not, to see which had more drive failures. On the other hand, Ken travels with his all over the place and his drives have survived 🙂
[…] I don’t know if I dare post this, but no drives lost or hardware issues since 1/26/10 Yay! My previous post was here https://michaelellerbeck.com/2010/01/27/equallogic-maintenance-record/ […]
Have a 4000E with 250GB SATA drives — it has lost a drive every 2-3 months since install. It has always been racked 🙂
Had the same problem with one of my two PS6000s. Both were racked. No vibration issues that I could sense, though a proper analysis was not conducted. What is strange is that on the unit with failed drives, they failed in sequential order: drive 6, 7, 8, 9, 10, approximately 30 days apart. Since I moved to firmware 5.1.2, though, the drive failures have ceased to be an issue. I remember talking to the techs, and they indicated the drives were not really failing but were being detected as failed due to buggy firmware.