This is hopefully a simple question. We are currently deploying servers that will serve as data warehouses. I know that with RAID 5 the best practice is six disks per RAID 5 array. However, our plan is to use RAID 10, both for performance and safety. We have a total of 14 disks (16, actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: doing several RAID 1s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has opinions I haven't thought of.
Please note: this system was designed around RAID 1+0, so losing half of the raw storage capacity is not an issue. Sorry I hadn't mentioned that initially. The question is whether we want one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0 arrays striped together using LVM. I know the best practice for higher RAID levels is to never use more than six disks in an array.
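For reference, the "several smaller arrays striped with LVM" option would look roughly like the sketch below. The device names, stripe count, and 256 KiB stripe size are all placeholders, not recommendations for this specific hardware:

```shell
# Hypothetical sketch: stripe one LVM logical volume across three
# hardware RAID 1+0 logical drives (device names are placeholders).
pvcreate /dev/sdb /dev/sdc /dev/sdd          # one PV per RAID 1+0 array
vgcreate dwvg /dev/sdb /dev/sdc /dev/sdd     # group them into a volume group
# -i 3: stripe across all three PVs; -I 256: 256 KiB stripe size (assumed)
lvcreate -n dwlv -i 3 -I 256 -l 100%FREE dwvg
mkfs.xfs /dev/dwvg/dwlv
```

This buys flexibility (arrays can be grown or replaced independently), at the cost of managing two striping layers instead of one.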
Take a look at this discussion detailing the disk layout for a RAID 1+0 setup on an HP ProLiant server:
A Smart Array controller configured in RAID 1+0 is a stripe across mirrored pairs. Depending on how you’ve arranged your drive cages and which controller you’re using, the disks will likely be paired across controller channels.
E.g. in a 4-disk setup:
Logical Drive: 1
   Size: 558.7 GB
   Fault Tolerance: RAID 1+0
   Logical Drive Label: AB3E858350123456789ABCDE6EEF
   Mirror Group 0:
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
   Mirror Group 1:
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 300 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 300 GB, OK)
physicaldrive 1I:1:1 pairs to physicaldrive 1I:1:3
physicaldrive 1I:1:2 pairs to physicaldrive 1I:1:4
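The stripe-over-mirrors layout above can be sketched as a toy address-mapping function. The 256 KiB stripe size and the two mirror groups are assumptions for illustration only, not the controller's actual values:

```python
# Toy model of RAID 1+0 address mapping: data is striped across
# mirror groups, and each group holds two identical copies of its stripes.
STRIPE_KB = 256  # assumed stripe size; real controllers vary

def mirror_group(offset_kb: int, num_groups: int = 2) -> int:
    """Return which mirror group (pair of disks) holds this logical offset."""
    return (offset_kb // STRIPE_KB) % num_groups

# Consecutive stripes alternate between the two mirror groups:
print(mirror_group(0))    # first 256 KiB -> group 0
print(mirror_group(256))  # next stripe   -> group 1
print(mirror_group(512))  # wraps back    -> group 0
```

This is why both sequential throughput (all groups stream in parallel) and random I/O (reads can be served by either half of a pair) improve as you add pairs.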
With that number of disks, there's no downside to leaving them in a single logical drive. You'll get the benefit of more (MOAR) spindles for sequential workloads and increased random I/O capability. I'd recommend tuning the controller cache to bias towards writes (lower latency), and making some choices at the OS level: filesystem (XFS!), I/O elevator (deadline), and block-device tuning.
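A minimal sketch of that OS-level tuning, assuming a Linux distribution and that the logical drive shows up as /dev/sdb (the device name, read-ahead value, and mount point are placeholders):

```shell
# Use the deadline I/O elevator for the array's block device
echo deadline > /sys/block/sdb/queue/scheduler
# Raise read-ahead for large sequential scans (value is an assumption)
blockdev --setra 4096 /dev/sdb
# XFS for the data-warehouse filesystem; noatime avoids metadata
# writes on every read
mkfs.xfs /dev/sdb1
mount -o noatime /dev/sdb1 /data
```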
Which operating system distribution will this be running on?