[plug] dmraid vs mdadm
Tim White
weirdit at gmail.com
Sat Mar 3 18:08:39 WST 2012
Basically, Linux will "auto assemble" RAID devices it finds, even if
they aren't for the current system. Annoyingly, it'll ignore the names
the RAID arrays say they want (/dev/md1, /dev/md2 etc.) and instead
assign high minor numbers counting down from 127 (md127, md126, ...).
This can break a NAS if you plug its RAID drives into a normal machine
and don't set the names back to the correct numbers before returning
the drives to the NAS.
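If you want to see which name an array is asking for, the 0.90
superblock stores a "Preferred Minor" field you can read off any member
device. Something like this (device names here are just examples):

    # show the minor number (and hence the mdX name) the array wants
    mdadm --examine /dev/sda1 | grep -i 'preferred minor'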
See https://bugzilla.novell.com/show_bug.cgi?id=638532#c1 for a way to
get them named correctly, and hopefully keep those names across
reboots.
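From memory, the fix boils down to stopping the wrongly-named array and
reassembling it under the name you want with --update=super-minor,
which rewrites the preferred minor stored in the 0.90 superblock. A
sketch only, untested here; substitute your own device names:

    # stop the auto-assembled array, then reassemble under the right name
    mdadm --stop /dev/md125
    mdadm --assemble /dev/md1 --update=super-minor /dev/sda1 /dev/sdb1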
What I'd do: boot off a live distro, assemble the arrays with the
correct numbers and update the preferred-minor field (see the link
above), then mount them all and chroot into the system. From there,
make sure things like /etc/mdadm.conf are correct, and maybe even
update your initramfs if needed. Roughly like the sketch below.
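A rough sketch of that procedure, guessing from the sizes below that
md3 is root and md1 is /boot (adjust mount points to your layout, and
note Debian keeps the config at /etc/mdadm/mdadm.conf):

    # from the live environment, after assembling the arrays as md1/md2/md3
    mount /dev/md3 /mnt
    mount /dev/md1 /mnt/boot
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt /bin/bash
    # inside the chroot: record the arrays, then rebuild the initramfs
    mdadm --detail --scan >> /etc/mdadm.conf
    update-initramfs -u    # Debian/Ubuntu; use dracut/mkinitrd elsewhere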
I believe if you did a brand new OS install with an installer that
supported setting up RAID as part of the install process, it would be
simple. Which OS are you using? I understand that Debian should be
fairly easy to get running on RAID.
Tim
p.s. Stick with mdadm, it's much more portable when something breaks.
As for how it works, that depends on its RAID level, and yes, it's
software RAID, so there's no hardware-accelerated RAID5 etc. I
personally don't use RAID5, preferring RAID 0, 1 and 10. Drives are
(were) cheap.
On 03/03/12 13:26, Alexander Hartner wrote:
> I am setting up a new system. After partitioning /dev/sda and
> transferring the partition table over to /dev/sdb using
>
>     sfdisk -d /dev/sda | sfdisk --force /dev/sdb
>
> and setting up the RAID arrays using:
>
>     mdadm --create --verbose /dev/md1 --assume-clean --level=1 -e 0.90
>     --raid-devices=2 /dev/sda1 /dev/sdb1
>     mdadm --create --verbose /dev/md2 --assume-clean --level=1 -e 0.90
>     --raid-devices=2 /dev/sda2 /dev/sdb2
>     mdadm --create --verbose /dev/md3 --assume-clean --level=1 -e 0.90
>     --raid-devices=2 /dev/sda3 /dev/sdb3
>
> Everything seems OK; however, after a reboot all the array names change to:
>
> cat /proc/mdstat
> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md125 : active (auto-read-only) raid1 sdb1[1] sda1[0]
> 262080 blocks [2/2] [UU]
> md126 : active (auto-read-only) raid1 sdb2[1] sda2[0]
> 4194240 blocks [2/2] [UU]
> md127 : active (auto-read-only) raid1 sdb3[1] sda3[0]
> 972305024 blocks [2/2] [UU]
> unused devices: <none>
>
> While this in itself may not be a problem, after I install my OS
> (Linux of course) I get a kernel panic. I suspect the panic is due to
> the kernel not finding the partitions I specified in grub.conf and
> fstab (i.e. /dev/md1|2|3).
>
> I have been struggling with mdadm RAID for several days now, with
> little progress to show. dmraid seems to leverage what little support
> is provided by my onboard RAID controller, which seems like a good
> thing. mdadm seems to just keep both drives in sync with each other
> without leveraging hardware. I might well be wrong there. So far I
> have only tried mdadm.
>
> Alex
>
>
> On 03/03/2012, at 09:33 , Marcos Raul Carot Collins wrote:
>
>> Are you installing the OS, or are you trying to add RAID to an extra
>> hard disk after installing?
>>
>> I have only set it up in Debian at install time (mdadm), and although
>> you need some partitioning background, it is pretty easy. Let me know
>> if that's your case and I can guide you.
>>
>> I haven't tried in other OSes...
>>
>> Cheers,
>>
>> Marcos
>>
>> On Saturday 03 March 2012 05:17:12, Tim White wrote:
>>> On 03/03/12 04:36, Alexander Hartner wrote:
>>>> Has anybody got any experience with either/both? Which one do you
>>>> suggest? I have been trying to configure mdadm for the past week
>>>> without success. Should I persist or try dmraid? Is mdadm really so
>>>> much better than dmraid?
>>>
>>> I've never used dmraid (a quick read suggests it's for the "software
>>> RAID" provided by certain BIOSes).
>>> What are you trying to achieve? I have successfully used mdadm many
>>> times in the past, both for setting up RAID and for repairing NASes.
>>>
>>> Tim