Booting Debian with root filesystem on LVM
For LVM1 / kernel 2.4.x and LVM2 / kernel 2.6.x


Note for sarge users: This procedure was originally carried out on machines running Debian woody with appropriate backports. The 2.6.x kernel machine described below, which was running woody plus backports at the time I did this, has since been dist-upgraded to sarge and continues to work. See also the note. I have not yet tried the full installation-from-scratch procedure on sarge myself, but I would expect the LVM2 information and the note to apply.

Setting up an initrd to boot a root filesystem on LVM can be a pain in the arse, especially if you've never done it before and don't really know what you're doing. Googling turns up a few helpful guides, such as this one on setting up an encrypted root filesystem, and innumerable ones on setting up a root filesystem on LVM and RAID, which are useful but not as applicable as you might think. I wasn't after RAID; I just wanted to use LVM to make a reasonable-sized filesystem out of a bunch of old, small but free hard drives. After a lot of buggering about I got it to boot, and decided to publish the procedure that Worked For Me(TM) in case anyone else wants to do it. YMMV.

I don't regard this as a "complete" solution because AFAIK you still need a non-LVM /boot partition. Google seems to think that grub can handle RAID but not LVM; one person disagrees but gives no details, and everyone else thinks it can't be done, so I don't think it's worth losing any sleep over.
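For illustration, the end result in /etc/fstab looks something like this (device names are hypothetical; on the system described below, /boot and swap lived on partitions of /dev/hda while everything else went on the LVM group):

```
/dev/vg00/lv00   /       ext3   defaults   0   1
/dev/hda1        /boot   ext3   defaults   0   2
/dev/hda2        none    swap   sw         0   0
```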

Note that this is not about how to install with a root fs on LVM from scratch. I built the machine with a group of drives intended to become the LVM root fs, and another drive as /dev/hda with three partitions: /boot, swap and an installation partition, on which I installed the Debian base system plus initrd-tools and lvm-10. I used this basic system to configure the LVM group on the other drives, installed the new kernel (I always build a custom, Debianised kernel for a system I'm building), set up the initrd to boot the LVM group, and copied the base installation from the installation partition to the LVM group. Once the system could be relied upon to boot from the LVM group, the installation partition could be wiped clean and its space added to the LVM group.
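For reference, the LVM setup just described boils down to something like the following. This is a sketch only: the device names and the size are assumptions (though vg00/lv00 match the names used throughout this page), and it must of course be run as root on the real hardware, not copied blindly.

```shell
# setup_lvm_root: create the volume group and root filesystem described
# above, then copy the base install onto it. Sketch only - adjust device
# names and sizes to suit, and run as root.
setup_lvm_root() {
    pvcreate /dev/hdb /dev/hdc /dev/hdd        # initialise each spare drive
    vgcreate vg00 /dev/hdb /dev/hdc /dev/hdd   # pool them into vg00
    lvcreate -n lv00 -L 20G vg00               # carve out the root LV
    mke2fs -j /dev/vg00/lv00                   # ext3 filesystem on it
    mount /dev/vg00/lv00 /mnt
    cp -ax /. /mnt   # copy the base install; -x stays on one filesystem,
                     # so /boot, /proc and /mnt itself are left alone
}
```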

The second example, using LVM2 and kernel 2.6.10, relates to a system that had been in use for some time for desktop duties. The principle is the same though.

Don't skip the 2.4 stuff if it's 2.6 you're interested in, as the 2.4 stuff will provide useful context.

Note: I am told, though haven't yet investigated myself (been too busy to upgrade), that with sarge going stable the relevant procedures are now included - see the bottom of this page.

I'll skip over all the crap about configuring the kernel that you usually get in guides like this. If you want to boot LVM you need to make sure the kernel supports LVM. Duh... Though I do perhaps need to point out that you also need support for tmpfs or ramfs, to enable the initialisation script to create writable directories.

Static or modules? It doesn't really matter; you need an initrd to fire up the LVM group either way, so the usual "make support for the root fs static to avoid an initrd" consideration doesn't apply.

I'm assuming you have built your kernel as a Debian package and installed it, but it need not be the same as the currently running kernel. In my first case, the running kernel was the "boot-floppies" version of 2.4.18 from the original system installation, while the LVM version of the system boots a custom-compiled, Debianised 2.4.24. In the second case, the original and final kernels were the same (a Debianised 2.6.10), but it doesn't make any difference.
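If you want to check a kernel config before building, something like this does the job. The option names here are my assumptions about the usual 2.4-era ones (CONFIG_BLK_DEV_LVM for LVM1, CONFIG_BLK_DEV_RAM and CONFIG_BLK_DEV_INITRD for the initrd, CONFIG_TMPFS for tmpfs); for 2.6/LVM2 you'd check CONFIG_BLK_DEV_DM instead of CONFIG_BLK_DEV_LVM.

```shell
# check_kernel_config: grep a kernel .config for the options this guide
# relies on. Pass the config file as an argument, e.g.
#   check_kernel_config /boot/config-2.4.24
check_kernel_config() {
    cfg="$1"
    for opt in CONFIG_BLK_DEV_LVM CONFIG_BLK_DEV_RAM \
               CONFIG_BLK_DEV_INITRD CONFIG_TMPFS; do
        if grep -q "^${opt}=[ym]" "$cfg"; then
            echo "$opt: ok"
        else
            echo "$opt: MISSING"
        fi
    done
}
```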

I don't think I've forgotten any steps... if you try it and think I have, please let me know.


OK. Here we go...


LVM1 / Kernel 2.4.x


/etc/mkinitrd/mkinitrd.conf contains some general settings for the operation of mkinitrd. I have extended some of the comments as they apply to this particular endeavour.

# /etc/mkinitrd/mkinitrd.conf:
# Configuration file for mkinitrd(8). See mkinitrd.conf(5).
#
# This file is meant to be parsed as a shell script.

# What modules to install. (Play safe...)
MODULES=all

# The length (in seconds) of the startup delay during which linuxrc may be
# interrupted. Pressing return gives you a shell. Useful for debugging.
DELAY=5

# If this is set to probe mkinitrd will try to figure out what's needed to
# mount the root file system. This is equivalent to the old PROBE=on setting.
# I don't think we need this, we're telling it explicitly what to do.
ROOT=

# This controls the permission of the resulting initrd image.
UMASK=022

# Command to generate the initrd image.
MKIMAGE='mkcramfs %s %s > /dev/null'


/etc/mkinitrd/exe is a list of all the extra executables needed on the initrd (several "basic" ones are included anyway).

/sbin/vgchange
/sbin/vgscan
/sbin/lvmiopversion
/bin/cp
/bin/mv
/bin/ln
/bin/rm
/bin/ls
/sbin/lsmod
/bin/mkdir

Not all of these are strictly required to activate the LVM, but when the system won't boot and you're trying to debug it from the initrd shell, not having things like ls is a pain.


/etc/mkinitrd/files is a list of other files that should be included... Setting MODULES=all in /etc/mkinitrd/mkinitrd.conf didn't always seem to work, for reasons unknown, so I executed
find /lib/modules/2.4.24 -type f >> /etc/mkinitrd/files
to make sure. Also, the LVM tools require the contents of
/lib/lvm-10 to function, so a
find /lib/lvm-10 -name '*' >> /etc/mkinitrd/files
(there are no subdirectories involved here) sorted that out.


/etc/mkinitrd/modules lists the modules that need to be loaded to bring up the root fs.

# /etc/mkinitrd/modules: Kernel modules to load for initrd.
#
# This file should contain the names of kernel modules and their arguments
# (if any) that are needed to mount the root file system, one per line.
# Comments begin with a `#', and everything on the line after them is ignored.
#
# You must run mkinitrd(8) to effect this change.
#
# Examples:
#
# ext2
# wd io=0x300
#
# Change these as appropriate for your system.
# I have an ext3 filesystem...
ext3
# ...a VIA 82Cxxx chipset...
via82cxxx
# ...and a bunch of IDE disks...
ide-core
ide-disk
ide-detect
# ...which I'm running under LVM.
lvm-mod


/etc/mkinitrd/scripts/lvm-init
is run during the creation of the initrd, and does the tweaky twonky things that can't be handled by making appropriate entries in the above files... which is quite a lot, really.

#!/bin/bash

# Add necessary code to the end of linuxrc, because I can't find
# any documentation on how the scripts in scripts/ on the initrd
# are supposed to be run, but empirically they seem to be run before
# linuxrc? linuxrc itself doesn't run them, I can't find anything that
# explicitly states that the kernel runs them, but if I create one
# and don't give it execute permissions I get an error message BEFORE
# (apparently) linuxrc is run... I want linuxrc to be run first, so I
# do this... clunky but it works.

# mkinitrd sets INITRDDIR to the temporary directory in which the
# initrd is being created.
cat >> $INITRDDIR/linuxrc << EOF

#
# this bit nicked from /etc/init.d/lvm with extra bits added
#
# lvm This script handles the LVM startup/shutdown
# so that LVMs are properly configured and available.
#

# try to load modules in case that hasn't been done yet
modprobe ext3
modprobe via82cxxx
modprobe ide-core
modprobe ide-disk
modprobe ide-detect
modprobe lvm-mod

# Make a writable /etc for lvmtab
mount -nt proc proc proc
[ -e /proc/lvm ] || exit 1
mkdir tmp/tmpetc
cp -a etc/* tmp/tmpetc
mount -nt tmpfs tmpfs etc || mount -nt ramfs ramfs etc
cp -a tmp/tmpetc/* etc

echo "Setting up LVM Volume Groups..."
/sbin/vgscan
/sbin/vgchange -a y

# end nicked bit

# mount LVM group
mount -n /dev/vg00/lv00 /newroot -t ext3

# copy /etc/lvm stuff to LVM group
cp /etc/lvmtab /newroot/etc
rm -rf /newroot/etc/lvmtab.d
cp -a /etc/lvmtab.d /newroot/etc

# change root
umount -n etc
umount -n proc
cp /tmp/root /newroot/tmp
umount -n /tmp
cd /newroot
# don't forget to ensure the initrd mount point exists in
# the root of the fs on the LVM group!
pivot_root . ./initrd
exec /usr/sbin/chroot . sh -c 'umount -f -n /initrd'
# that umount doesn't work, don't know why, doesn't seem to matter...
# probably because we get an 'illegal seek' message relating to /initrd
# ... no idea why we get that either. pivot_root buggers things up? It
# doesn't seem to have any subsequent adverse effects though.
# We also get a weird error message: 'Usage: init' followed by a great
# long string of apparently meaningless hex digits. You're saying that
# init requires a parameter which is a great long string of hex digits?
# New one on me.

EOF

chmod a+x $INITRDDIR/linuxrc
# end of linuxrc mods

# now make these
# 'mount -n' still gives errors... this is a futile attempt to stop it
touch $INITRDDIR/etc/mtab
# Mount point for new root fs
mkdir $INITRDDIR/newroot

# Device nodes for IDE drives (modify for SCSI and/or more than 4 partitions)
for x in hda hdb hdc hdd; do
    for y in ' ' 1 2 3 4; do
        mknod `echo -n $INITRDDIR; ls -l /dev/$x$y | sed -e 's/,//g' | awk '{print $10 " b " $5 " " $6}'`
    done
done
chmod 660 $INITRDDIR/dev/hd*
chown root:disk $INITRDDIR/dev/hd*
# LVM device nodes
mknod -m 600 $INITRDDIR/dev/lvm c 109 0
mkdir $INITRDDIR/dev/vg00
mknod -m 640 $INITRDDIR/dev/vg00/group c 109 0
mknod -m 660 $INITRDDIR/dev/vg00/lv00 b 58 0
chown root:disk $INITRDDIR/dev/vg00/*


# and sort the libraries out
# we need (probably) this symlink
ln -s /lib/lvm-10 $INITRDDIR/lib/lvm-default
# Listing executables in /etc/mkinitrd/exe is supposed to include
# any necessary shared libraries, but it doesn't work for the LVM tools
# for no apparent reason, so we do it by hand
cp -d /lib/liblvm* $INITRDDIR/lib
ldconfig -r $INITRDDIR


Having set that up, we create the initrd:
cd /boot
mkinitrd -o initrd.img-2.4.24 2.4.24
(changing the 2.4.24 appropriately for your kernel)
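Before rebooting, it's worth sanity-checking the image. The MKIMAGE setting above builds a cramfs image, so (given cramfs and loop device support in the running kernel) it can be loop-mounted and eyeballed. A rough sketch, to be run as root; inspect_initrd is just my name for it:

```shell
# inspect_initrd: loop-mount an initrd image read-only and list its
# contents. Sketch only - needs root, plus loop and cramfs support.
inspect_initrd() {
    img="$1"   # e.g. /boot/initrd.img-2.4.24
    mnt="$2"   # an empty directory to mount on
    mount -o loop,ro "$img" "$mnt"
    ls -lR "$mnt"   # expect linuxrc, newroot/, dev/vg00/, lib/liblvm*...
    umount "$mnt"
}
```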


Finally, the bootloader needs to be told how to boot the LVM system. I don't use lilo - can't stand the way a minor configuration cockup can leave you completely fucked when you find you can't boot. Grub is much nicer.
/boot/grub/menu.lst needs an appropriate entry (again, edit to suit your system):

title 2.4.24-LVM
kernel /vmlinuz-2.4.24 root=/dev/vg00/lv00 ro
initrd /initrd.img-2.4.24

(...if you use lilo I'm afraid you'll have to translate the above into liloese yourself...)


...and reboot. Good luck! :-)


A tarball of my /etc/mkinitrd is available here, and a "stripped" version of the resulting initrd image here. By "stripped" I mean that all the files in /lib, /usr, /bin and /sbin, and the device nodes in /dev (ie. the files you wouldn't be wanting to read for example purposes) have been replaced with text files containing the output of ls -l on the original entries. That way the information on directory structure and contents is still available, but my connection doesn't have to cope with transferring 1.9 megs of redundant binaries. :-)


Note: Since setting all this up I have discovered the existence of something which might have saved me a lot of trouble if I'd found it before: the script /lib/lvm-10/lvmcreate_initrd. It even has its own man page. I haven't, however, tried to use it yet, as the box with the LVM root is a server which is now live. Next time it's due some heavy maintenance I might give it a shot.



LVM2 / Kernel 2.6.x


In this case, that potentially handy script no longer exists in LVM2, so it all has to be done by hand anyway. It has to be said that setting up the earlier version was useful experience :-)

This system is again Debian woody, running kernel 2.6.10 using the appropriate backports of the associated packages from backports.org; LVM2 also comes from backports.org. Attention sarge users (but see also the note): the backports.org version of initrd-tools is not installed, as it seems to make things slightly more complicated and the woody version is adequate; this recipe requires the woody version and will not work with the backport unless modified. Also, the system is using a static /dev, not devfs or udev, as I haven't found a need to abandon the static /dev.

/etc/mkinitrd/mkinitrd.conf is set up the same as for the 2.4 version.

/etc/mkinitrd/exe is not a lot different. /sbin/vgchange and /sbin/vgscan are not included, as they are symlinks to /sbin/lvmiopversion which the script creates later on, along with many others.

/sbin/lvmiopversion
/bin/cp
/bin/mv
/bin/ln
/bin/rm
/bin/ls
/sbin/lsmod
/bin/mkdir

/etc/mkinitrd/files
is a bit shorter than in the 2.4 case. Many of the files needed are either symlinks created by the script, or libraries copied in by the script so that they are known to be in place when ldconfig is run. As in 2.4, setting MODULES=all in /etc/mkinitrd/mkinitrd.conf didn't always seem to work, for reasons unknown, but this particular kernel setup required only one module: dm-mod, the device-mapper module. For some reason I also found it necessary to include modules.dep explicitly; why this was needed for 2.6 but not for 2.4 is currently a mystery to me. We also need the LVM config file, which didn't exist in LVM1, and the main LVM executable. The file looks like this:

/lib/modules/2.6.10/kernel/drivers/md/dm-mod.ko
/lib/modules/2.6.10/modules.dep
/lib/lvm-200/lvm
/etc/lvm/lvm.conf


/etc/mkinitrd/modules
lists the modules that need to be loaded to bring up the root fs. In this case that's only the one. Enough to show the principle :-) Yours may have more, of course.

# /etc/mkinitrd/modules: Kernel modules to load for initrd.
#
# This file should contain the names of kernel modules and their arguments
# (if any) that are needed to mount the root file system, one per line.
# Comments begin with a `#', and everything on the line after them is ignored.
#
# You must run mkinitrd(8) to effect this change.
#
# Examples:
#
# ext2
# wd io=0x300
#
# This system needs only the device mapper module.
dm-mod


The scripts in /etc/mkinitrd/scripts/ are run during the creation of the initrd. In this case we have two. First, there is a script to create the devmapper nodes on non-devfs systems; somewhat bizarrely, it lives at /usr/share/doc/libdevmapper1.00/devmap_mknod.sh. It needs to be copied into /etc/mkinitrd/scripts, made executable with chmod a+x, renamed to devmap-mknod (it won't be run otherwise), and given one minor change:

#! /bin/sh

# Startup script to create the device-mapper control device
# on non-devfs systems.
# Non-zero exit status indicates failure.

# These must correspond to the definitions in device-mapper.h and dm.h
DM_DIR="mapper"
DM_NAME="device-mapper"

set -e

# This is the only change: the insertion of $INITRDDIR in the next line.
DIR="$INITRDDIR/dev/$DM_DIR"
CONTROL="$DIR/control"

# Check for devfs, procfs
if test -e /dev/.devfsd ; then
    echo "devfs detected: devmap_mknod.sh script not required."
    exit
fi

if test ! -e /proc/devices ; then
    echo "procfs not found: please create $CONTROL manually."
    exit 1
fi

# Get major, minor, and mknod
MAJOR=$(sed -n 's/^ *\([0-9]\+\) \+misc$/\1/p' /proc/devices)
MINOR=$(sed -n "s/^ *\([0-9]\+\) \+$DM_NAME\$/\1/p" /proc/misc)

if test -z "$MAJOR" -o -z "$MINOR" ; then
    echo "$DM_NAME kernel module not loaded: can't create $CONTROL."
    exit 1
fi

mkdir -p --mode=755 $DIR
test -e $CONTROL && rm -f $CONTROL

echo "Creating $CONTROL character device with major:$MAJOR minor:$MINOR."
mknod --mode=600 $CONTROL c $MAJOR $MINOR
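The copy/chmod/rename/edit steps just described can be scripted; a sketch follows. install_devmap_script is my name for it, not part of any package, and the sed -i flag needs GNU sed 4 (woody's sed 3 lacks it, so edit the file by hand there).

```shell
# install_devmap_script: copy the libdevmapper example script into the
# mkinitrd scripts directory, make it executable, rename it so mkinitrd
# will run it, and prepend $INITRDDIR to its DIR= line. Sketch only.
install_devmap_script() {
    src="$1"        # e.g. /usr/share/doc/libdevmapper1.00/devmap_mknod.sh
    scriptsdir="$2" # e.g. /etc/mkinitrd/scripts
    cp "$src" "$scriptsdir/devmap-mknod"  # must be called devmap-mknod
    chmod a+x "$scriptsdir/devmap-mknod"
    # insert $INITRDDIR literally; it is expanded when mkinitrd runs
    # the script at image-build time
    sed -i 's|^DIR="/dev/|DIR="$INITRDDIR/dev/|' "$scriptsdir/devmap-mknod"
}
```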



Then we have /etc/mkinitrd/scripts/lvm-init, as in the 2.4 case, but in the 2.6 case it has a bit more stuff in it:


#!/bin/bash

# Add necessary code to the end of linuxrc, because I can't find
# any documentation on how the scripts in scripts/ are supposed to
# be run, but empirically they seem to be run before linuxrc?
cat >> $INITRDDIR/linuxrc << EOF

#
# this bit nicked from the LVM1 /etc/init.d/lvm with extra bits added
#
# lvm This script handles the LVM startup/shutdown
# so that LVMs are properly configured and available.
#

# try to load modules in case that hasn't been done yet
modprobe dm-mod

# Make a writable /etc for lvmtab
mount -nt proc proc proc
# With 2.6/lvm2 there's no /proc/lvm so this line is disabled
# [ -e /proc/lvm ] || exit 1
mkdir tmp/tmpetc
cp -a etc/* tmp/tmpetc
mount -nt tmpfs tmpfs etc || mount -nt ramfs ramfs etc
cp -a tmp/tmpetc/* etc
# and a writable /var so vg* can use /var/lock
mount -nt tmpfs tmpfs var || mount -nt ramfs ramfs var

echo "Setting up LVM Volume Groups..."
/sbin/vgscan
/sbin/vgchange -a y

# end nicked bit

# mount LVM group
mount -n /dev/vg00/lv00 /newroot -t ext3

# With LVM1, we copied /etc/lvm stuff to LVM group here.
# We don't need to with lvm2
#cp /etc/lvmtab /newroot/etc

# change root
cp /tmp/root /newroot/tmp
# A few more things to unmount in this case
umount -n tmp
umount -n var
umount -n proc
umount -n etc
cd /newroot
pivot_root . ./initrd 1>&2

# This works on 2.4, but on 2.6 it screws up init and causes a kernel panic
#exec /usr/sbin/chroot . sh -c 'umount -f -n /initrd'

# so we don't bother trying to umount the initrd. In any case, it seems
# to be taken care of by /etc/init.d/initrd-tools.sh
exec /usr/sbin/chroot . /sbin/init

EOF

chmod a+x $INITRDDIR/linuxrc

# now make these
# 2.6/lvm2 requires a bit more stuff than 2.4/lvm1 :-)
touch $INITRDDIR/etc/mtab
mkdir $INITRDDIR/newroot
mkdir $INITRDDIR/var

# This system uses SCSI disks, not IDE
for x in sda sdb sdc sdd sde; do
    for y in ' ' 1 2 3 4; do
        mknod `echo -n $INITRDDIR; ls -l /dev/$x$y | sed -e 's/,//g' | awk '{print $10 " b " $5 " " $6}'`
    done
done
chmod 660 $INITRDDIR/dev/sd*
chown root:disk $INITRDDIR/dev/sd*
# The devmapper makes this a bit more complicated. We need to set up a few nodes
# that the devmap-mknod script on its own doesn't.
mkdir $INITRDDIR/dev/vg00
ln -s /dev/mapper/vg00-lv00 $INITRDDIR/dev/vg00/lv00
# LVM will try to create this, which will fail because the initrd /dev is on a
# read-only filesystem, so we have to make it statically:
mknod -m 600 $INITRDDIR/dev/mapper/vg00-lv00 b 254 0
mknod -m 600 $INITRDDIR/dev/lvm c 109 0
mknod -m 660 $INITRDDIR/dev/ram0 b 1 0

# With LVM1, /lib/lvm-10 contained a bunch of separate executables.
# With LVM2 all the files are now symlinks to the same executable, "lvm".
# We copy the executable into the initrd and make symlinks to it.
mkdir -p $INITRDDIR/lib/lvm-200
cp /lib/lvm-200/lvm $INITRDDIR/lib/lvm-200
( cd $INITRDDIR/lib/lvm-200
# everything in this dir, except lvm, is a symlink to lvm
    for x in `ls -1 /lib/lvm-200 | grep -v '^lvm$'`; do
        ln -s lvm $x
    done
)

# Similarly, all the LVM stuff we might need out of /sbin consists of
# symlinks to lvmiopversion. So we do the same sort of thing but in a
# slightly more complex way as there's a lot of other stuff in /sbin
# that we don't need.
mkdir -p $INITRDDIR/sbin
cp /sbin/lvmiopversion $INITRDDIR/sbin
( cd $INITRDDIR/sbin
# all the lvm commands in /sbin are symlinks to lvmiopversion
    for x in `ls -1 /sbin`; do
        if ls -l /sbin/$x | grep -- '-> lvmiopversion' > /dev/null; then
            ln -s lvmiopversion $x
        fi
    done
)

# and sort the libraries out
ln -s /lib/lvm-200 $INITRDDIR/lib/lvm-default
cp -d /lib/libdevmapper* $INITRDDIR/lib
# This library is new for LVM2
cp /lib/libdl-*.so $INITRDDIR/lib
ldconfig -r $INITRDDIR



Finally, we need a slightly different setup in /boot/grub/menu.lst:
title 2.6.10-LVM
kernel /vmlinuz-2.6.10 root=/dev/ram0 rw hdb=ide-cd init=/linuxrc initrd=/initrd.img-2.6.10
initrd /initrd.img-2.6.10



A tarball of my LVM2 /etc/mkinitrd is available here, and a "stripped" version of the resulting initrd image here.


Note: I am told that the Debian folks have got onto this and the procedures are now included as an "example" script in the sarge version of LVM2. My informant says:

> I think i have an update for your site. I stumbled across it after I
> upgraded from woody to sarge (successfully) and decided to switch to
> root-on-lvm2 (while having no LVM drives/experience)

> Instead of doing all those nifty things your page describes to make
> the initrd image, Debian folk seem to have finally caught on. mkinitrd
> is still buggy (I compiled in device-mapper support directly, and
> mkinitrd still fails to catch that), but the current lvm2 (sarge!)
> package contains a script in the documentation's "example" directory. I
> ran it, gave it the parameters it wanted (had to apt-get install
> busybox), and poof -- it made me a brilliant working initrd image that
> worked first try! The script is here after installing lvm2 with apt:

> /usr/share/doc/lvm2/examples/lvm2create_initrd.gz

> Oh, and I had forgotten to change my fstab over, did that too.

Which is jolly good news, I think you will agree... I shall investigate personally once I get some free time to do upgrades.
