I would install both drives first, enable RAID in the BIOS, and decide which RAID mode to use before installing the OS.
The RAID array has to be formatted, so all data on it will be erased. However, some RAID software allows the array to be created after the OS install, doing an on-the-fly conversion during the RAID build. You would need to check the user manual for your board.
JBOD stands for "Just a Bunch Of Disks" and simply concatenates several hard drives into one large volume, with no striping and no redundancy.
RAID 0 does not live up to the name "Redundant Array of Independent/Inexpensive Disks", because it offers no redundancy or data security at all. Here a so-called stripe set is set up across two or more hard drives, depending on the controller used, so that data are written and read on all hard drives in an alternating pattern. That achieves the highest possible performance, but the risk of failure multiplies with the number of drives: if one drive dies, the whole array goes down with it.
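To make the striping idea concrete, here is a toy sketch in Python (purely illustrative, nothing like real controller firmware; the 4-byte chunk size is made up, real controllers use stripe units of e.g. 64 KiB) that hands out fixed-size chunks to the drives in round-robin order:

```python
# Toy RAID 0: consecutive chunks go to alternating "drives".
CHUNK = 4  # stripe unit in bytes (hypothetical; real units are much larger)

def stripe(data: bytes, num_drives: int):
    """Distribute consecutive chunks across drives round-robin."""
    drives = [bytearray() for _ in range(num_drives)]
    for i in range(0, len(data), CHUNK):
        drives[(i // CHUNK) % num_drives] += data[i:i + CHUNK]
    return drives

def unstripe(drives, total_len: int) -> bytes:
    """Read the chunks back in the same round-robin order."""
    out = bytearray()
    offsets = [0] * len(drives)
    d = 0
    while len(out) < total_len:
        out += drives[d][offsets[d]:offsets[d] + CHUNK]
        offsets[d] += CHUNK
        d = (d + 1) % len(drives)
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"
parts = stripe(data, 2)          # drive 0: ABCDIJKL, drive 1: EFGHMNOP
assert unstripe(parts, len(data)) == data
# Lose either "drive" and every other chunk is gone: no redundancy at all.
```

Reads and writes hit both drives in parallel, which is where the speed comes from; the last comment is the whole RAID 0 risk in one line.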
RAID 0 is suitable wherever data loss is not the end of the world, for example for temporary data or as a fast scratch drive for video editing.
RAID 1, however, is the exact opposite: it offers maximum reliability for minimal hardware effort. Here the content of one hard drive is simultaneously written to a second hard drive (a technique known as mirroring), so that should one drive fail, the second carries on running. The big disadvantage of RAID 1 is that the available storage capacity is halved. Configurations with more than one mirror are also possible, but then the available capacity drops accordingly. Good RAID 1 implementations can read from both hard drives simultaneously, so at least read performance is higher than with a single drive.
RAID 1 is ideal for workstations or small servers that have to be constantly available, or for creating an area for short-term backups.
RAID 3 is almost insignificant these days. It requires at least three hard drives and dedicates one of them to parity data; a stripe set is written across the remaining drives, as with RAID 0. Should the parity drive fail, the array keeps running. Should one of the stripe drives fail, its content has to be reconstructed on the fly from the parity drive and the surviving stripe drives. The real bottleneck, though, is that the dedicated parity drive must be updated on every single write, so it determines the overall speed, which is why RAID 3 has faded into obscurity.
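The parity trick behind this is just a byte-wise XOR across the stripe drives. A toy sketch (illustrative only, with made-up 4-byte blocks) of how a dead drive's block is rebuilt from the parity and the survivors:

```python
# Toy RAID 3 parity: parity block = XOR of all data blocks, byte by byte.
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]  # contents of three stripe drives
parity = xor_blocks(data_blocks)           # stored on the dedicated parity drive

# Drive 1 dies: XOR the parity with the surviving blocks to get it back.
survivors = [data_blocks[0], data_blocks[2]]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == b"BBBB"
```

The same XOR property is why any single drive (parity or data) can fail without losing the array, and also why every write must touch the parity drive.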
In our opinion RAID 3 is only interesting for arrays with a few hard drives in systems where good read performance is imperative.
RAID 5 is another fault-tolerant RAID mode, because parity data are stored here too. Unlike RAID 3, however, the parity is distributed across all hard drives, so there is no dedicated parity drive to act as a bottleneck, and the performance of a RAID 5 array increases with every additional hard drive.
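"Distributed" just means the drive holding the parity rotates from stripe to stripe. A toy sketch of one common rotation (the placement used by the "left" RAID 5 layouts; taking that layout as an assumption for illustration):

```python
# Toy RAID 5 parity placement: parity rotates backwards through the
# drives, one position per stripe, so parity writes are spread evenly.
def parity_drive(stripe: int, num_drives: int) -> int:
    """Index of the drive that holds parity for the given stripe."""
    return (num_drives - 1 - stripe) % num_drives

for s in range(4):  # with 4 drives: parity on drive 3, 2, 1, 0, then repeat
    print(f"stripe {s}: parity on drive {parity_drive(s, 4)}")
```

Because every drive takes its turn holding parity, no single drive soaks up all the parity writes the way the dedicated drive does in RAID 3.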
Looky here for more info
Last edited by Tubby; 21-07-2005 at 09:16 AM.