Discussion
Following on from my backup thread, I've decided to beef up the resilience of the storage on my PC by getting a cheap (£50) PCI RAID card and some extra hard disks.
I've been looking at the Adaptec and Promise cards, and both seem to do the job.
I can fit 4 extra drives into my PC, but was wondering what the best RAID option was for a mixture of performance and security.
I guess my options are:
1) 2 x disks mirrored
2) 4 x disks, 2 x striped and then mirrored
3) 4 x disks striped
4) 3 x disks striped + 1 hot standby
Will there be much difference in performance between any of these? My favoured solution is option 3 as this gives me the most storage space for the number of disks, but I guess option 2 or 4 would be more resilient.
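For what it's worth, here is a rough back-of-the-envelope sketch (my own numbers, assuming four identical disks of c GB each, nothing from the card vendors) of usable space against how many outright disk failures each option survives. Note that a hot spare does nothing for a plain stripe, since RAID 0 has no redundancy to rebuild from.

```python
# Back-of-the-envelope comparison of the four options (assumed: four identical
# disks of c GB each; "survives" = disk failures tolerated without data loss).
c = 120  # hypothetical per-disk capacity in GB

options = {
    "1) 2 disks mirrored (RAID 1)":            (1 * c, 1),
    "2) 4 disks, striped then mirrored (0+1)": (2 * c, 1),
    "3) 4 disks striped (RAID 0)":             (4 * c, 0),
    "4) 3 disks striped + 1 hot standby":      (3 * c, 0),  # a spare can't rebuild a stripe
}

for name, (usable, survives) in options.items():
    print(f"{name}: {usable} GB usable, survives {survives} failure(s)")
```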
Any thoughts?
If you are mirroring, you're writing to two disks at once, so writes are a little slower, but reads are quicker because either disk can serve them; with striping the trade-off goes the other way - speed instead of redundancy.
I would seriously think long and hard about a RAID array. I didn't notice any benefit from mine, you still run the risk of an HDD failure (which is why you are getting RAID in the first place), and I found you can't read the data without the RAID card if you stripe...
I'm not sure that makes sense.
Stefan
Yes - because there are more drives, the odds of at least one failing rise significantly compared with a single drive. And when I worked at PC Magazine we tested a lot of RAIDs (say RAID, not "RAID array" - the A already stands for array), and mechanical failures were surprisingly frequent once there were more than a few spindles running.
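As a rough illustration of that (the failure rate here is invented for the example, and failures are assumed independent): if each drive has, say, a 3% chance of dying in a given year, the chance that at least one of n drives dies is 1 - (1 - p)^n.

```python
# Illustrative only: hypothetical 3% annual failure rate per drive,
# failures assumed independent.
p = 0.03
for n in (1, 2, 4):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n} drive(s): {at_least_one:.1%} chance of at least one failure per year")
```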
Remember, you may also have to beef up your PSU and that more HDDs also mean more noise...
I recently opted for a Highpoint 'Rocket Raid 404' card to mirror the data drive in my server - purely as added protection against losing thousands of MP3s, install files, development projects etc. As such I now have two 80GB drives mirrored (RAID-1).
There are numerous reviews/comparisons of IDE RAID solutions on sites like Anandtech and TomsHardware. In the early days there was little performance to be gained from IDE RAID - not sure if that's changed now.
If your primary concern is safeguarding your data, go for an IDE RAID mirror. If you're after performance, go SCSI.
I have copied and pasted the following information from another site; I thought it might help you. Personally, I have 2x120GB Maxtor SATA hard drives set up in a RAID 0 configuration (the best performance upgrade I have ever done, with a very noticeable increase in speed), plus 2x120GB Maxtor hard drives on the IDE channel for very important work files and backups, which gives me the best of both worlds and seems to work very well.
Right, the info that I have copied from another site:
RAID 0: Also Known as Striping
On this level, data is striped (dispersed) and stored across multiple HDDs. Improved performance comes from reading and writing to the HDDs in the array in parallel. On a single HDD that is not part of an array, n blocks of data are stored in Block 0 (D0) through Block n (Dn), and when retrieving the data each block is read out in order. In a RAID 0 array, by comparison, data is read out simultaneously from blocks on each HDD. Improved performance can be expected from RAID 0 with applications that read/write comparatively large amounts of data. Moreover, there is no redundant-information overhead, which makes it possible to use the full capacity of the installed HDDs. On the other hand, it is not possible to restore data after an HDD failure, so you cannot expect any improvement in data reliability or protection.
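A minimal sketch of the block layout just described (the block numbering and disk count here are mine, purely to show that block i simply lands on disk i mod N):

```python
# Minimal sketch of RAID 0 block placement: block i goes to disk (i % num_disks)
# in stripe row (i // num_disks). Purely illustrative.
num_disks = 4

def locate(block_index, disks=num_disks):
    return block_index % disks, block_index // disks

for i in range(8):
    disk, stripe = locate(i)
    print(f"Block D{i} -> disk {disk}, stripe row {stripe}")
```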
The other types of RAID are:
RAID-0 is the fastest and most efficient array type but offers no fault-tolerance.
RAID-1 is the array of choice for performance-critical, fault-tolerant environments. In addition, RAID-1 is the only choice for fault-tolerance if no more than two drives are desired.
RAID-2 is seldom used today since ECC is embedded in almost all modern disk drives.
RAID-3 can be used in data intensive or single-user environments which access long sequential records to speed up data transfer. However, RAID-3 does not allow multiple I/O operations to be overlapped and requires synchronized-spindle drives in order to avoid performance degradation with short records.
RAID-4 offers no advantages over RAID-5 and does not support multiple simultaneous write operations.
RAID-5 is the best choice in multi-user environments which are not write-performance sensitive. However, at least three, and more typically five, drives are required for RAID-5 arrays.
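Since RAID-3/4/5 all lean on parity, here is a toy sketch (byte values invented) of why losing any single disk in a RAID-5 set is recoverable: the parity block is just the XOR of the data blocks, so the missing one can be rebuilt from the survivors.

```python
# Toy RAID-5 parity illustration (invented byte values): parity is the XOR of
# the data blocks, so any single missing block can be rebuilt from the rest.
from functools import reduce
from operator import xor

data_blocks = [0b10110010, 0b01101100, 0b11000101]   # blocks on three data disks
parity = reduce(xor, data_blocks)                     # block on the parity disk

lost_index = 1                                        # pretend the second disk died
survivors = [b for i, b in enumerate(data_blocks) if i != lost_index]
rebuilt = reduce(xor, survivors + [parity])

assert rebuilt == data_blocks[lost_index]
print(f"Rebuilt block: {rebuilt:08b}")
```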
Hope that helps
Chris
If you only use one RAID controller then you still have a Single Point of Failure, i.e. if your RAID controller goes kaplooey you potentially will lose the lot. Also, if you only use one SCSI controller and it goes bent you'll lose all your disks regardless of how fault-tolerant your RAID setup is.
Accepting this, and if I read your post correctly, you have 1 disk in the machine already and room for an extra 4, i.e. 5 in total. What I would do for maximum resilience is this:
Create a RAID-1 (mirrored) set of two disks and install your OS and applications onto this. If you lose a disk your only problem will be increased read/write transaction time as the mirror rebuilds.
Create a RAID-5 (striped with parity) set with the other three disks and keep your data on this set. This gives acceptable read/write performance but maximum fault-tolerance as the parity checksums are also striped - you can lose any disk from the set and all the data will still be intact.
In theory you can lose one disk from each set and still run. If you lose two disks from either set you've had it but that's unlikely.
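To make that last point concrete, here's a small sketch (disk labels are mine) that enumerates every possible double failure in the 2-disk mirror plus 3-disk RAID-5 layout - each set tolerates one loss, so only the one-from-each-set combinations survive.

```python
# Which double failures does the 2-disk mirror + 3-disk RAID-5 layout survive?
# Disk labels are invented; each set tolerates the loss of one member.
from itertools import combinations

mirror = {"A", "B"}        # RAID-1 set (OS and applications)
raid5 = {"C", "D", "E"}    # RAID-5 set (data)

for pair in combinations(sorted(mirror | raid5), 2):
    lost = set(pair)
    survives = len(lost & mirror) <= 1 and len(lost & raid5) <= 1
    print(f"lose {pair}: {'still running' if survives else 'array lost'}")
```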
HtH
Good advice from loaf.
As an aside, here is what can go wrong after making false assumptions about a RAID device: we bought a "network attached storage device" from a major hardware supplier that rhymes with Dull. The device is a standalone file server. It has two power supplies, two network cards and four discs. It is configured for RAID 5 using three discs, with the fourth left spare in case of a disc failure. Sounds fairly robust, yes?
Nope. Turns out to be a complete POS. We had two discs simultaneously indicate they had failed. There is no way that is a simultaneous mechanical failure of both discs - the odds against that are huge - so it must have been the RAID controller that went pop. So we ring up Dull. Turns out there is no way of replacing the failed RAID controller: a single point of failure in the device. Furthermore, they couldn't remove any of the discs from the device, because they are physically attached to its motherboard. That's right! Even in normal operation, if only one disc had failed, all you could do is get it to rebuild the array using the spare disc, but then you can't replace the knackered disc and therefore you no longer have any protection against a second disc failure. In our case, as none of the discs could be removed, they couldn't be attached to another controller to see if it could read them.
Well, we're grown-ups and have a big tape backup device that does weekly complete backups and daily incrementals, so even if the RAID device is FUBAR we've still got our data? Nope. No one had noticed that the backups had been failing since JANUARY! Moral of the story: if you get people into a daily routine, you simultaneously lobotomise them and they stop paying attention to what they are doing.
Mercifully I had recently made a complete backup of the device onto a cheap and cheerful USB2 external drive. Lost some data, but not much and we survived.
Played with lots of setups in the past and probably will in the future.
Most of the techy stuff has been covered, but outside of that, one bit of advice: whatever you do, test the bastard until you can break it. Get it up and running and pull the power cord, take drives out, anything to see what happens when it fails.
Once you know how it behaves, don't forget to continue to do backups (those USB hard drives are great for that) and test your backups at least monthly.
Ta for all the advice.
I've decided to keep it simple, and have gone for an external USB/Firewire DVD writer and external 120GB Hard disk.
I've managed to find some decent backup software, and the disk (by Iomega) comes with Norton Ghost as well, so my plan is to simply back everything up to disk and DVD (file backups and Ghost images), then do incremental backups to CD, which will be stored in a fireproof safe along with the DVDs.
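Just as a sketch of the incremental half of that plan (the paths and the marker file are hypothetical, not anything Ghost or the Iomega software actually creates): pick out the files changed since the last full backup so only those need burning to CD.

```python
# Rough sketch: list files modified since the last full backup.
# BACKUP_ROOT and LAST_FULL_MARKER are hypothetical paths for illustration.
import os
import time

BACKUP_ROOT = r"C:\Data"
LAST_FULL_MARKER = r"C:\Backups\last_full.txt"   # holds the epoch time of the last full backup

with open(LAST_FULL_MARKER) as f:
    last_full = float(f.read().strip())

changed = []
for folder, _dirs, files in os.walk(BACKUP_ROOT):
    for name in files:
        path = os.path.join(folder, name)
        if os.path.getmtime(path) > last_full:
            changed.append(path)

print(f"{len(changed)} file(s) changed since {time.ctime(last_full)}")
```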
Hopefully that will do the trick!
ehasler said: I know this is a bit late, but no-one seems to have covered it - your best bet for resilience is
I guess my options are:
1) 2 x disks mirrored
2) 4 x disks, 2 x striped and then mirrored
3) 4 x disks striped
4) 3 x disks striped + 1 hot standby
5) 4 x disks, 2 x mirrored and then striped
This gives you more resilience, since if you have 1 disk die, the next disk to die has only a 1 in 3 chance of killing your system.
With 2), once one disk has died the whole stripe it belongs to is gone, so either disk in the surviving stripe will kill the array - a 2 in 3 chance for the second failure.
If possible, always mirror first, then stripe, as you can then lose one disk out of every pair and still continue (not all controllers let you set it up this way).
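A quick enumeration of that difference (the disk numbering and groupings are my assumptions about how the two layouts pair the disks): after disk 0 fails, see which second failures take each layout down.

```python
# After one disk has failed, which second failures are fatal?
# Assumed groupings: RAID 1+0 = mirror pairs {0,1} and {2,3}, striped together;
# RAID 0+1 = stripes {0,1} and {2,3}, mirrored against each other.

def raid10_dead(failed):
    # Dead only if both members of some mirror pair are gone.
    return {0, 1} <= failed or {2, 3} <= failed

def raid01_dead(failed):
    # Dead once each stripe has lost at least one member.
    return bool(failed & {0, 1}) and bool(failed & {2, 3})

for second in (1, 2, 3):
    failed = {0, second}
    print(f"disks 0 and {second} fail: "
          f"1+0 {'dead' if raid10_dead(failed) else 'survives'}, "
          f"0+1 {'dead' if raid01_dead(failed) else 'survives'}")
```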
One last point,
Don't forget that if you're stuffing 4 disks into a vanilla case with a RAID controller you are gonna load the power supply heaps more and you are also gonna generate a load more heat and noise - all fine in a server room but can cause problems at home.
I think you've gone the right way with using a backup medium instead.
Tom's Hardware Guide did some interesting stuff with RAID performance a while back (a few years ago?). I think they were comparing performance gains from using IDE RAID controllers and got some really impressive results with the right configuration.
best
Ex