[plug] dmraid vs mdadm

Brian Owens evansaussie at gmail.com
Sun Mar 4 09:04:09 WST 2012


Hi there,

When I did this I used parted to partition my drives (I had 2TB drives, and I
found a recommendation to use a GPT partition table). This is what 'parted -l'
shows on my Debian NAS box:

Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  3000MB  3000MB               Root  raid

And this is what I typed into parted to make a raid partition:

mkpart Root 17.4kB 3000MB
toggle 1 raid true
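
For context, the whole parted session looked roughly like this (a sketch from
memory; I'm assuming the disk is /dev/sda and still needs its GPT label, and
note that mklabel wipes whatever partition table is already there):

parted /dev/sda
(parted) mklabel gpt
(parted) mkpart Root 17.4kB 3000MB
(parted) toggle 1 raid true
(parted) quit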

I have all the array definitions in /etc/mdadm/mdadm.conf.

Here is a copy of mine; see if it helps.
<------->
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=425ad056:08f06a52:adb8459e:ccfa5ba6 name=blah:0
# This file was auto-generated on Sat, 02 Jul 2011 09:05:01 +0100
# by mkconf 3.1.4-1+8efb9d1
<------->
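
If you ever need to regenerate those ARRAY lines (say, after re-creating an
array), mdadm will print them in the right format; run it as root and append
the output to the conf rather than pasting over your existing definitions:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf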

This is the output of my 'blkid' command (you need to be root).
<------->
/dev/sda1: UUID="425ad056-08f0-6a52-adb8-459eccfa5ba6" LABEL="blah:0" TYPE="linux_raid_member"
/dev/sdb1: UUID="425ad056-08f0-6a52-adb8-459eccfa5ba6" LABEL="blah:0" TYPE="linux_raid_member"
/dev/md0: UUID="90edef49-bc64-43e7-85f7-3bdb9c61b431" TYPE="ext4"
<------->

Notice that /dev/sda1 and /dev/sdb1 have the same UUID; they are members of
the same array.
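
If you want to double-check which array a partition belongs to, 'mdadm
--examine' on the member shows that same UUID (run as root; sda1 here is just
my layout, adjust to yours):

mdadm --examine /dev/sda1 | grep UUID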

And finally here's the important line in /etc/fstab
<------->
UUID=90edef49-bc64-43e7-85f7-3bdb9c61b431 / ext4 noatime,defaults 0 1
<------->

One last thing: I noticed I had to run 'update-initramfs -u' to copy the new
mdadm.conf into the initramfs, since it's the copy inside the initramfs that
the boot-time assembly actually uses.
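
You can confirm the conf actually made it into the image with lsinitramfs
(assuming the running kernel's initrd; adjust the path if you boot something
else):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf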

Good luck!

Owain

On Mar 3, 2012 9:09 PM, "Alexander Hartner" <alex at j2anywhere.com> wrote:
>
> Hi Tim
>
> Thanks for your post. I tried this several times now, but every time I
> boot off the Live CD I get md125 again. I haven't been able to boot off the
> hard drive, as I keep getting a kernel panic on boot. I suspect the panic is
> caused by the kernel also not being able to find the correct md devices. I
> tried running the commands:
>
> mdadm -S /dev/md125
> mdadm -A /dev/md1 --update=super-minor
>
> But they didn't fix the issue. Still, on every reboot from the Live CD I
> get md125 back. After my initial installation I created the file systems on
> the raid arrays using:
>
> mkfs.ext4 /dev/md1
> mkswap /dev/md2
> swapon /dev/md2
> mkfs.ext4 /dev/md3
>
> After a reboot, fdisk now reports that the partition tables for the md
> devices are not valid:
>
> Disk /dev/md127: 995.6 GB, 995640344576 bytes
> 2 heads, 4 sectors/track, 243076256 cylinders, total 1944610048 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Disk /dev/md127 doesn't contain a valid partition table
>
> Disk /dev/md126: 268 MB, 268369920 bytes
> 2 heads, 4 sectors/track, 65520 cylinders, total 524160 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Disk /dev/md126 doesn't contain a valid partition table
>
> Disk /dev/md125: 4294 MB, 4294901760 bytes
> 2 heads, 4 sectors/track, 1048560 cylinders, total 8388480 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Disk /dev/md125 doesn't contain a valid partition table
>
> I wonder why it is not picking up the partition table I created.
>
> I am using Gentoo for this as I really like it. It makes things a little
> more complicated, but generally things work; well, at least they did until
> now.
>
>
> On 03/03/2012, at 18:08, Tim White wrote:
>
> Basically, Linux will "auto assemble" raid devices it finds, even if they
> aren't for the current system. Annoyingly, it'll ignore the names the raid
> arrays say they want (/dev/md1, /dev/md2 etc.) and give them numbers
> counting down from 127. (This can break a NAS if you plug the raid drives
> into a normal machine and don't set the names back to the correct numbers
> before putting them back in the NAS.)
> See https://bugzilla.novell.com/show_bug.cgi?id=638532#c1 for a way to get
> them named correctly, and hopefully to keep those names across reboots.
>
> What I'd do: boot off a live distro, assemble the arrays with the correct
> numbers and update the preferred minor (see the link above), then mount
> them all and chroot into the system. From there, make sure things like
> /etc/mdadm.conf are correct, and update your initramfs if needed. Something
> like the sketch below.
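> Roughly (a sketch only; I'm assuming 0.90 metadata, members on sda/sdb as
> in your setup, and that your root fs is on md3; adjust to your layout):
>
> mdadm --stop /dev/md125 /dev/md126 /dev/md127
> mdadm --assemble /dev/md1 --update=super-minor /dev/sda1 /dev/sdb1
> mdadm --assemble /dev/md2 --update=super-minor /dev/sda2 /dev/sdb2
> mdadm --assemble /dev/md3 --update=super-minor /dev/sda3 /dev/sdb3
> mount /dev/md3 /mnt
> mount --bind /dev /mnt/dev
> mount --bind /proc /mnt/proc
> mount --bind /sys /mnt/sys
> chroot /mnt /bin/bash
>
> --update=super-minor rewrites the preferred minor in each 0.90 superblock,
> so the arrays remember which mdX they want to be.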
>
> I believe if you did a brand new OS install with an installer that
supported setting up RAID as part of the install process, it would be
simple. Which OS are you using? I understand that Debian should be fairly
easy to get running on RAID.
>
> Tim
> p.s. Stick with mdadm; it's much more portable when something breaks. As
> for how it works, that depends on its RAID level, and yes, it's software
> RAID, so no hardware-accelerated RAID5 etc. I personally don't use RAID5,
> preferring RAID 0, 1 and 10. Drives are (were) cheap.
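>
> For what it's worth, a RAID10 is no harder to create than a RAID1; a
> sketch, assuming four empty partitions sdb1 through sde1:
>
> mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1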
>
> On 03/03/12 13:26, Alexander Hartner wrote:
>
> I am setting up a new system. After partitioning /dev/sda, I transferred
> the partition table over to /dev/sdb using:
>
> sfdisk -d /dev/sda | sfdisk --force /dev/sdb
>
>
> and set up the raid arrays using:
>
> mdadm --create --verbose /dev/md1 --assume-clean --level=1 -e 0.90 --raid-devices=2 /dev/sda1 /dev/sdb1
> mdadm --create --verbose /dev/md2 --assume-clean --level=1 -e 0.90 --raid-devices=2 /dev/sda2 /dev/sdb2
> mdadm --create --verbose /dev/md3 --assume-clean --level=1 -e 0.90 --raid-devices=2 /dev/sda3 /dev/sdb3
>
> Everything seems OK; however, after a reboot all the device names change:
>
> cat /proc/mdstat
> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md125 : active (auto-read-only) raid1 sdb1[1] sda1[0]
>       262080 blocks [2/2] [UU]
>
> md126 : active (auto-read-only) raid1 sdb2[1] sda2[0]
>       4194240 blocks [2/2] [UU]
>
> md127 : active (auto-read-only) raid1 sdb3[1] sda3[0]
>       972305024 blocks [2/2] [UU]
>
> unused devices: <none>
>
> While this in itself may not be a problem, after I install my OS (Linux of
> course) I get a kernel panic. I suspect the panic is due to the kernel not
> finding the partitions I specified in grub.conf and fstab (i.e. /dev/md1,
> md2 and md3).
>
> I have been struggling with mdadm raid for several days now, with little
> progress to show. dmraid seems to leverage what little support is provided
> by my onboard RAID controller, which seems like a good thing. mdadm seems
> to just keep both drives in sync without leveraging the hardware. I might
> well be wrong there. So far I have only tried mdadm.
>
> Alex
>
>
> On 03/03/2012, at 09:33, Marcos Raul Carot Collins wrote:
>
> Are you installing the OS, or are you trying to add RAID on an extra hard
> disk after installing?
>
> I have only set it up in Debian at install time (mdadm), and although you
> need some partitioning background, it is pretty easy. Let me know if that's
> your case and I can guide you.
>
> I haven't tried it in other OSes...
>
> Cheers,
>
> Marcos
>
> On Saturday 03 March 2012 05:17:12 Tim White wrote:
>
> On 03/03/12 04:36, Alexander Hartner wrote:
>
> Has anybody got any experience with either / both? Which one do you
> suggest? I have been trying to configure mdadm for the past week without
> success. Should I persist or try dmraid? Is mdadm really so much better
> than dmraid?
>
>
> I've never used dmraid (and a quick read suggests it's for the "software
> raid" provided by certain BIOSes).
>
> What are you trying to achieve? I have successfully used mdadm many times
> in the past, both for setting up raid and for repairing NASes.
>
>
> Tim
>
> _______________________________________________
> PLUG discussion list: plug at plug.org.au
> http://lists.plug.org.au/mailman/listinfo/plug
> Committee e-mail: committee at plug.org.au
> PLUG Membership: http://www.plug.org.au/membership