[plug] Debian upgrade udev_0.114-2 -> udev_0.125-5 devfs error

Adam Davin byteme-its at westnet.com.au
Mon Aug 18 21:36:19 WST 2008


Hi Paul (and others), 

On Sun, 17 Aug 2008 23:05:24 +0800
Paul Dean <paul at thecave.ws> wrote:

> What a bind you've got yourself in....

Indeed.. !! 

> Hope this is not a production box; running unstable is for
> cutting-edge new hardware testing, but anyway, you are running it.

Only production for my personal use! :) I have been running unstable
for as long as I can remember. The Debian stable policy tends to mean
that the program changes I need don't filter through to stable for
quite a while.

> Do you have the box booting into the OS?

Yes, the box boots fine, currently on udev 0.114-2, which still
supports devfs "emulation(?)".

> If so, try an `apt-get dist-upgrade`; you may find it will sort out
> the version jump problems.

Thanks, I hadn't tried that. I have now; unfortunately, same problem :(
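
For reference, what I ran was just along these lines:

owl:~# apt-get update
owl:~# apt-get dist-upgrade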

> If that fails, you may need to roll back from the RAID to a non-RAID
> boot, remove udev (i.e. `apt-get purge udev`), then tidy up the udev
> leftovers and reinstall udev again, then recreate the software RAID.

The boot drives themselves are not on RAID. I have just installed some
extra drives and set up RAID on them so I could play with it. Moving
the base system across to use those as the system drives was going to
be "the next step (tm)".

I only mentioned the RAID setup to be thorough about my current
configuration. As you can see from /proc/mdstat, the drives in the
arrays all use standard notation anyway:

owl:/boot# cat /proc/mdstat
Personalities : [raid1] 
md4 : active raid1 sda6[0] sdb6[1]
      312496256 blocks [2/2] [UU]
      
md3 : active raid1 sda5[0] sdb5[1]
      141612864 blocks [2/2] [UU]
      
md1 : active raid1 sda2[0] sdb2[1]
      29302464 blocks [2/2] [UU]
      
md0 : active raid1 sda1[0] sdb1[1]
      56128 blocks [2/2] [UU]
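
The member devices can be double-checked with mdadm as well, with
something like:

owl:/boot# mdadm --detail /dev/md0 | grep sd

which only turns up /dev/sda1 and /dev/sdb1 here - no devfs-style
paths.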

I had a look to see if there was a way to turn off the devfs
referencing in the kernel, but it seems that 2.6.25 doesn't include
devfs support at all any more.
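
What I actually checked was something along these lines, assuming the
stock Debian kernel config under /boot:

owl:/boot# grep -i devfs /boot/config-$(uname -r)

which returns nothing, consistent with devfs having been dropped from
the mainline kernel back around 2.6.13.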

> Very odd though that the fstab is not referencing the UUID of the
> drives and is using /dev instead, as this is the standard now set in
> lenny (testing), soon to be stable (woop woop yeh!).

This is why I am confused about the kernel referencing the devices by
devfs-like names when fstab, and any other system files I can find (or
know to look at), use standard addressing rather than devfs.
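
If I do end up converting fstab over to UUIDs as lenny prefers, my
understanding is it's just a matter of reading them off with blkid and
substituting them in, e.g. for the root partition (the UUID below is
only a placeholder):

owl:/etc# blkid /dev/hda3
/dev/hda3: UUID="<uuid-goes-here>" TYPE="ext3"

and then in fstab:

UUID=<uuid-goes-here>  /  ext3  errors=remount-ro  0  1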

> But be warned: DON'T shut down/power off/etc. the box when you do
> this, because it won't boot from the drive again. If you do for
> whatever reason, make sure you've got the latest lenny netinst CD; it
> has a rescue option.

I don't think a purge / reinstall of udev will work, as the current
issue is that, for some reason, the kernel seems to be referencing the
file system via the devfs rule set in the udev installation; hence the
package will not upgrade, because the newer udev does not support the
devfs names that are currently in use on the system.
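
The rules files the preinst complains about are easy enough to spot
with something like:

owl:/etc# ls -l /etc/udev/rules.d/ | grep -E 'devfs|compat'

so presumably those (plus whatever is generating the /dev/ide/...
paths at boot) are what have to go before 0.125-5 will install.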

> 
> On Sun, 17 Aug 2008 22:22:33 +0800
> Adam Davin <byteme-its at westnet.com.au> wrote:
> 
> >Recently I did an apt-get update && apt-get upgrade, but the upgrade
> >stopped at the udev package with the message below: 
> >
> >Preparing to replace udev 0.114-2
> >(using .../archives/udev_0.125-5_i386.deb) ... Since release 0.124-1
> >udev does not support anymore devfs-like names. Please convert to
> >standard names before upgrading:
> >
> >rm -f /etc/udev/rules.d/devfs.rules /etc/udev/rules.d/compat-full.rules /etc/udev/rules.d/compat.rules
> >ln -s ../udev.rules /etc/udev/rules.d/
> >
> >dpkg: error processing /var/cache/apt/archives/udev_0.125-5_i386.deb (--unpack):
> > subprocess pre-installation script returned error exit status 1
> >
> >
> >My fstab file does not reference any drives by "devfs"-style names
> >and hasn't for a while. I have recently created a couple of RAID
> >partitions, but checking /proc/mdstat, all drives are listed as
> >sd[ab][1..4].
> >
> >
> >owl:/etc# cat fstab
> ># /etc/fstab: static file system information.
> >#
> ># <file system>  <mount point>  <type>  <options>                <dump>  <pass>
> >/dev/hda3        /              ext3    errors=remount-ro        0       1
> >/dev/hda2        swap           swap    sw,pri=1                 0       0
> >/dev/sda3        swap           swap    sw,pri=10                0       0
> >/dev/sdb3        swap           swap    sw,pri=10                0       0
> >proc             /proc          proc    defaults                 0       0
> >/dev/cdrom3      /media/cd      auto    defaults,ro,user,noauto  0       0
> >/dev/hda5        /usr           ext3    defaults                 0       2
> >/dev/hda6        /var           ext3    defaults                 0       2
> >/dev/hda7        /home          ext3    defaults                 0       2
> >...
> >
> >Typing mount, however, does show that the system is referencing
> >block devices using devfs names.
> >
> >Mount shows: 
> >owl:/etc# mount
> >/dev/ide/host0/bus0/target0/lun0/part3 on / type ext3
> >(rw,errors=remount-ro) 
> >tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) 
> >proc on /proc type proc (rw,noexec,nosuid,nodev)
> >sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> >procbususb on /proc/bus/usb type usbfs (rw)
> >udev on /dev type tmpfs (rw,mode=0755)
> >tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> >devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
> >fusectl on /sys/fs/fuse/connections type fusectl (rw)
> >/dev/ide/host0/bus0/target0/lun0/part5 on /usr type ext3 (rw)
> >/dev/ide/host0/bus0/target0/lun0/part6 on /var type ext3 (rw)
> >/dev/ide/host0/bus0/target0/lun0/part7 on /home type ext3 (rw)
> >/dev/ide/host0/bus0/target0/lun0/part8 on /mnt/store1 type ext3 (rw)
> >/dev/ide/host0/bus0/target0/lun0/part9 on /mnt/store2 type ext3 (rw)
> >/dev/md/1 on /mnt/newinst type ext3 (rw)
> >/dev/md/0 on /mnt/newinst/boot type ext3 (rw)
> >/dev/md/3 on /mnt/newinst/home type ext3 (rw)
> >/dev/md/4 on /mnt/newinst/mnt/store type ext3 (rw)
> >
> >I did find a page which mentioned deleting
> >/etc/udev/rules.d/compat-full.rules and /etc/udev/rules.d/devfs.rules
> >and creating /etc/udev/udev.rules. I tried this manually (renaming
> >them to *.old rather than deleting), and on rebooting the RAID
> >partitions failed to be found (invalid superblock). Once the system
> >finished booting, I got a message that getty (?) was respawning too
> >fast and would be disabled for 5 minutes. Leaving the system for 20
> >minutes, no login prompt appeared. I was also unable to Ctrl+Alt+Del
> >to reboot (Error: no super user logged in). Thankfully the power
> >button shut the box down cleanly.
> >
> >I was able to boot back into single-user mode and restore the two
> >files I had renamed, and the system then rebooted fine as before. 
> >
> >I have also changed the "mount devfs on boot" option in
> >/etc/default/devfsd to no.

Thanks again, 

Regards, 

-- 

Adam Davin
Byteme IT Services
Mob: 0422 893 898
Fax: 08 9493 4462
Email: byteme-its at westnet.com.au


