I have been running a PCLOS-based RAID server for several years using (6) 750GB WD drives without issue. There's a 7th drive in it, a 250GB drive, as the non-RAID boot disk at /dev/sda. It's been running like a champ on an Asus P5QLE motherboard with (3) 4-port SATA cards in the PCI slots. No problems at all. However, like most servers, I ran out of space, and since I had 6 more drive bays in my case, I dropped 6 Samsung 2TB drives into it. This is where it got interesting.
PCLOS now sees /dev/sda - /dev/sdk (11 drives): the boot drive (sda), the (6) 750GB drives that have been in it all along (sdb - sdg), and 4 of the new 2TB drives (sdh - sdk). The last two 2TB drives are nowhere to be seen. They do not appear in /dev and they do not appear in diskdrake.
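For reference, here's how I've been double-checking what the kernel itself has registered, independent of diskdrake (a generic sketch, nothing PCLOS-specific):

```shell
# Count whole-disk sdX entries the kernel has registered
# ($4 is the device name column; the regex excludes partition entries like sda1)
awk 'NR > 2 && $4 ~ /^sd[a-z]+$/ {count++} END {print count + 0}' /proc/partitions

# The kernel log shows whether a drive was ever detected at all:
#   dmesg | grep -iE 'ata[0-9]+|scsi|sd[a-z]'
```

This consistently reports 11, matching what diskdrake shows.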
I swapped the drives around to rule out controller-card problems, bad cables, bad drives, and bad drive bays. Different drives show up depending on how I shuffle the components, but never more than 11 at once. In the process I have verified that all of the SATA controller ports work, all of the drives work, and all of the drive bays are powered and connected properly. I just can't seem to get PCLOS to give me more than 11 drives.
The real kicker is that during the boot process I can see the activity lights on all 13 drives flash occasionally near the beginning, while the hardware is being polled, but as the boot progresses, two of the activity lights go quiet, which makes me think the OS has decided to ignore those drives.
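Since the controllers clearly see all 13 drives early in boot, the next thing I plan to try is asking the kernel to rescan every SCSI/SATA host after boot finishes. Here's the snippet I intend to run as root (the scan files are the standard kernel sysfs interface; whether it helps here is anyone's guess):

```shell
# Ask every SCSI/SATA host adapter to rescan its bus for devices.
# "- - -" means: all channels, all targets, all LUNs. Requires root;
# the loop is harmless if no scan files exist or they aren't writable.
for host in /sys/class/scsi_host/host*/scan; do
    [ -w "$host" ] && echo "- - -" > "$host"
done
echo "rescan requested"
```

If the missing drives then appear in /proc/partitions, that would at least confirm the hardware path is fine and point at something in the boot-time enumeration.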
Has anyone else run into this, or can anyone tell me which silver bullet to use to get all my drives to work?
Thanks a bunch!