LVM HOWTO AJ Lewis                     Copyright © 2002-2003 Sistina Software, Inc Copyright © 2004-2005 Red Hat, Inc Revision History Revision 0.17 2005-10-03 Revised by: ajl Added FAQ entry for max size of LVs in LVM2; Did some cleanup of "Recover physical volume metadata" section; Updated e-mail address Revision 0.16 2005-07-15 Revised by: ajl Added lvm2 boot-time scripts info; Added "Recover physical volume metadata" section - thanks to Maximilian Attems for the patch Revision 0.15 2005-06-09 Revised by: ajl Removed references to xfs_freeze - it is no longer needed; Updated snapshots subsection in Anatomy of LVM section; Added a couple entries to the LVM2 FAQ; Fixed a couple typos Revision 0.14 2004-10-06 Revised by: ajl Added reference to lvm2_createinitrd in source tree; Adjusted lvcreate example slightly; Added 'vgchange -ay' in 'Moving a volume group to another system' recipe Revision 0.13 2004-08-16 Revised by: ajl Clarify symlink farm description; Fix dm control device major number; Remove /boot from vg in small lvm setup example; Add notes about /boot and / on LVM; Remove outdated link; Revision 0.12 2004-06-07 Revised by: ajl Updated LVM2 FAQ entries Revision 0.11 2004-05-03 Revised by: ajl Updated LVM2 FAQ entries Revision 0.10 2004-04-22 Revised by: ajl removed -print0 from find command after receiving reports that it doesn't work Revision 0.9 2004-04-16 Revised by: ajl Added -print0 to find command before piping it to cpio; Changed vgimport command line for LVM 2; Added ext3 to the ext2 resize section; Updated FAQ; Updated Links section Revision 0.8 2004-02-25 Revised by: ajl Updated CVS locations and FTP links; Added section on extending a JFS filesystem; Fixed typos - Ran aspell against document Revision 0.7 2004-02-16 Revised by: ajl Updated to include LVM 2 and device mapper information; Updated email addresses; Updated copyright; Added FAQ section; Added document license; Updated to docbook 4.2 Revision 0.6 2003-12-09 Revised by: ajl Updated for 
LVM 1.0.8; fixed broken link; Clarified redhat init script section; Revision 0.5 2003-02-10 Revised by: ajl Updated Redhat initscript information for 7.0 and above; Added information on removing a partition table from a disk if pvcreate fails; Default PE size is 32MB now; Updated method for snapshotting under XFS. Revision 0.4 2002-12-16 Revised by: ajl Updated for LVM 1.0.6 Revision 0.3 2002-09-16 Revised by: ajl removed example pvmove from Command Operations section - we now just point to the more detailed recipe on pvmove that contains various warnings and such Revision 0.2 2002-09-11 Revised by: ajl Updated for LVM 1.0.5 and converted to DocBook XML 4.1.2. Revision 0.1 2002-04-28 Revised by: gf Initial conversion from Sistina's LaTeX source and import to tLDP in LinuxDoc format. This document describes how to build, install, and configure LVM for Linux. A basic description of LVM is also included. This version of the HowTo is for LVM 2 with device-mapper, as well as LVM 1.0.8. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY, either expressed or implied. While every effort has been taken to ensure the accuracy of the information documented herein, the author(s)/editor(s)/maintainer(s)/contributor(s) assumes NO RESPONSIBILITY for any errors, or for any damages, direct or consequential, as a result of the use of the information documented herein. ----------------------------------------------------------------------------- Table of Contents Introduction 1. Latest Version 2. Disclaimer 3. Contributors 1. What is LVM? 2. What is Logical Volume Management? 2.1. Why would I want it? 2.2. 
Benefits of Logical Volume Management on a Small System 2.3. Benefits of Logical Volume Management on a Large System 3. Anatomy of LVM 3.1. volume group (VG) 3.2. physical volume (PV) 3.3. logical volume (LV) 3.4. physical extent (PE) 3.5. logical extent (LE) 3.6. Tying it all together 3.7. mapping modes (linear/striped) 3.8. Snapshots 4. Frequently Asked Questions 4.1. LVM 2 FAQ 4.2. LVM 1 FAQ 5. Acquiring LVM 5.1. Download the source 5.2. Download the development source via CVS 5.3. Before You Begin 5.4. Initial Setup 5.5. Checking Out Source Code 5.6. Code Updates 5.7. Starting a Project 5.8. Hacking the Code 5.9. Conflicts 6. Building the kernel modules 6.1. Building the device-mapper module 6.2. Build the LVM 1 kernel module 7. LVM 1 Boot time scripts 7.1. Caldera 7.2. Debian 7.3. Mandrake 7.4. Redhat 7.5. Slackware 7.6. SuSE 8. LVM 2 Boot Time Scripts 9. Building LVM from the Source 9.1. Make LVM library and tools 9.2. Install LVM library and tools 9.3. Removing LVM library and tools 10. Transitioning from previous versions of LVM to LVM 1.0.8 10.1. Upgrading to LVM 1.0.8 with a non-LVM root partition 10.2. Upgrading to LVM 1.0.8 with an LVM root partition and initrd 11. Common Tasks 11.1. Initializing disks or disk partitions 11.2. Creating a volume group 11.3. Activating a volume group 11.4. Removing a volume group 11.5. Adding physical volumes to a volume group 11.6. Removing physical volumes from a volume group 11.7. Creating a logical volume 11.8. Removing a logical volume 11.9. Extending a logical volume 11.10. Reducing a logical volume 11.11. Migrating data off of a physical volume 12. Disk partitioning 12.1. Multiple partitions on the same disk 12.2. Sun disk labels 13. Recipes 13.1. Setting up LVM on three SCSI disks 13.2. Setting up LVM on three SCSI disks with striping 13.3. Add a new disk to a multi-disk SCSI system 13.4. Taking a Backup Using Snapshots 13.5. Removing an Old Disk 13.6. Moving a volume group to another system 13.7. 
Splitting a volume group 13.8. Converting a root filesystem to LVM 1 13.9. Recover physical volume metadata A. Dangerous Operations A.1. Restoring the VG UUIDs using uuid_fixer A.2. Sharing LVM volumes B. Reporting Errors and Bugs C. Contact and Links C.1. Mail lists C.2. Links D. GNU Free Documentation License D.1. PREAMBLE D.2. APPLICABILITY AND DEFINITIONS D.3. VERBATIM COPYING D.4. COPYING IN QUANTITY D.5. MODIFICATIONS D.6. COMBINING DOCUMENTS D.7. COLLECTIONS OF DOCUMENTS D.8. AGGREGATION WITH INDEPENDENT WORKS D.9. TRANSLATION D.10. TERMINATION D.11. FUTURE REVISIONS OF THIS LICENSE D.12. ADDENDUM: How to use this License for your documents

-----------------------------------------------------------------------------

Introduction

This is an attempt to collect everything you need to know to get LVM up and running. The entire process of getting, compiling, installing, and setting up LVM is covered, along with pointers to LVM configurations that have been tested.

This version of the HowTo is for LVM 2 with device-mapper, as well as LVM 1.0.8. All previous versions of LVM are considered obsolete and are kept only for historical reasons. This document makes no attempt to explain or describe the workings or use of those versions.

-----------------------------------------------------------------------------

1. Latest Version

We keep the latest version of this HowTo in CVS with the other LDP HowTos. You can get it by checking out ``LDP/howto/docbook/LVM-HOWTO.xml'' from the tLDP CVS server. You should always be able to get a human-readable version of this HowTo from http://www.tldp.org/HOWTO/LVM-HOWTO.html

-----------------------------------------------------------------------------

2. Disclaimer

This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY, either expressed or implied.
While every effort has been taken to ensure the accuracy of the information documented herein, the author(s)/editor(s)/maintainer(s)/contributor(s) assumes NO RESPONSIBILITY for any errors, or for any damages, direct or consequential, as a result of the use of the information documented herein.

-----------------------------------------------------------------------------

3. Contributors

List of everyone who has put words into this file.

  *  [mailto:alewis@redhat.com_NOSPAM] AJ Lewis
  *  [mailto:thornber@redhat.com_NOSPAM] Joe Thornber
  *  [mailto:pcaulfie@redhat.com_NOSPAM] Patrick Caulfield
  *  [mailto:agk@redhat.com_NOSPAM] Alasdair Kergon
  *  Jochen Radmacher - JFS extend information

Please notify the HowTo maintainer if you believe you should be listed above.

-----------------------------------------------------------------------------

Chapter 1. What is LVM?

LVM is a Logical Volume Manager for the Linux operating system. There are now two versions of LVM for Linux:

  *  LVM 2 - The latest and greatest version of LVM for Linux. LVM 2 is almost completely backward compatible with volumes created with LVM 1. The exception to this is snapshots (you must remove snapshot volumes before upgrading to LVM 2). LVM 2 uses the device mapper kernel driver. Device mapper support is in the 2.6 kernel tree and there are patches available for current 2.4 kernels.

  *  LVM 1 - The version that is in the 2.4 series kernel. LVM 1 is a mature product that has been considered stable for a couple of years. The kernel driver for LVM 1 is included in the 2.4 series kernels, but this does not mean that your 2.4.x kernel is up to date with the latest version of LVM. Look at the [ftp://ftp.sistina.com/pub/LVM/1.0/README] README for the latest information about which kernels have the current code in them.

-----------------------------------------------------------------------------

Chapter 2. What is Logical Volume Management?
Logical volume management provides a higher-level view of the disk storage on a computer system than the traditional view of disks and partitions. This gives the system administrator much more flexibility in allocating storage to applications and users.

Storage volumes created under the control of the logical volume manager can be resized and moved around almost at will, although this may need some upgrading of file system tools.

The logical volume manager also allows management of storage volumes in user-defined groups, allowing the system administrator to deal with sensibly named volume groups such as "development" and "sales" rather than physical disk names such as "sda" and "sdb".

-----------------------------------------------------------------------------

2.1. Why would I want it?

Logical volume management is traditionally associated with large installations containing many disks, but it is equally suited to small systems with a single disk or maybe two.

-----------------------------------------------------------------------------

2.2. Benefits of Logical Volume Management on a Small System

One of the difficult decisions facing a new user installing Linux for the first time is how to partition the disk drive. The need to estimate just how much space is likely to be needed for system files and user files makes the installation more complex than necessary, and some users simply opt to put all their data into one large partition in an attempt to avoid the issue.

Once the user has guessed how much space is needed for /home, /usr, and / (or has let the installation program do it), it is quite common for one of these partitions to fill up even if there is plenty of disk space in one of the other partitions.

With logical volume management, the whole disk would be allocated to a single volume group and logical volumes created to hold the /, /usr, and /home file systems.
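The single-volume-group layout just described can be sketched with a few commands. This is only an illustration: the device name /dev/hda2, the group name vg00, and the sizes are hypothetical, and all of these commands require root and a real disk.

```shell
# Initialize a partition as a physical volume (hypothetical device name)
pvcreate /dev/hda2

# Create a volume group containing that physical volume
vgcreate vg00 /dev/hda2

# Carve out logical volumes for /, /usr and /home, leaving the rest
# of the volume group unallocated for later growth
lvcreate -L 1G -n root vg00
lvcreate -L 2G -n usr vg00
lvcreate -L 4G -n home vg00

# Each LV is an ordinary block device, so it can hold a filesystem
mke2fs /dev/vg00/root
```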
If, for example, the /home logical volume later filled up but there was still space available on /usr, it would be possible to shrink /usr by a few megabytes and reallocate that space to /home.

Another alternative would be to allocate minimal amounts of space for each logical volume and leave some of the disk unallocated. Then, when the partitions start to fill up, they can be expanded as necessary.

As an example: Joe buys a PC with an 8.4 Gigabyte disk on it and installs Linux using the following partitioning system:

  /boot  /dev/hda1   10 Megabytes
  swap   /dev/hda2  256 Megabytes
  /      /dev/hda3    2 Gigabytes
  /home  /dev/hda4    6 Gigabytes

This, he thinks, will maximize the amount of space available for all his MP3 files. Sometime later Joe decides that he wants to install the latest office suite and desktop UI available, but realizes that the root partition isn't large enough. But, having archived all his MP3s onto a new writable DVD drive, there is plenty of space on /home. His options are not good:

1. Reformat the disk, change the partitioning scheme and reinstall.

2. Buy a new disk and figure out some new partitioning scheme that will require the minimum of data movement.

3. Set up a symlink farm on / pointing to /home and install the new software on /home.

With LVM this becomes much easier: Jane buys a similar PC but uses LVM to divide up the disk in a similar manner:

  /boot  /dev/hda1        10 Megabytes
  swap   /dev/vg00/swap  256 Megabytes
  /      /dev/vg00/root    2 Gigabytes
  /home  /dev/vg00/home    6 Gigabytes

Note /boot is not included on an LV because bootloaders don't understand LVM volumes yet. It's possible /boot on LVM will work, but you run the risk of having an unbootable system.

Warning root on LV should be used by advanced users only

root on LVM requires an initrd image that activates the root LV. If a kernel is upgraded without building the necessary initrd image, that kernel will be unbootable.
Newer distributions support lvm in their mkinitrd scripts as well as their packaged initrd images, so this becomes less of an issue over time.

When she hits a similar problem she can reduce the size of /home by a gigabyte and add that space to the root partition.

Suppose that Joe and Jane then manage to fill up the /home partition as well and decide to add a new 20 Gigabyte disk to their systems.

Joe formats the whole disk as one partition (/dev/hdb1), moves his existing /home data onto it, and uses the new disk as /home. But he has 6 gigabytes unused, or has to use symlinks to make that disk appear as an extension of /home, say /home/joe/old-mp3s.

Jane simply adds the new disk to her existing volume group and extends her /home logical volume to include the new disk. Or, in fact, she could move the data from /home on the old disk to the new disk and then extend the existing root volume to cover all of the old disk.

-----------------------------------------------------------------------------

2.3. Benefits of Logical Volume Management on a Large System

The benefits of logical volume management are more obvious on large systems with many disk drives.

Managing a large disk farm is a time-consuming job, made particularly complex if the system contains many disks of different sizes. Balancing the (often conflicting) storage requirements of various users can be a nightmare.

User groups can be allocated to volume groups and logical volumes, and these can be grown as required. It is possible for the system administrator to "hold back" disk storage until it is required. It can then be added to the volume (user) group that has the most pressing need.

When new drives are added to the system, it is no longer necessary to move users' files around to make the best use of the new storage; simply add the new disk into an existing volume group or groups and extend the logical volumes as necessary.
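Jane's approach of growing /home onto the new disk might be sketched like this. The device and volume names are hypothetical, the commands require root, and for ext2/ext3 the filesystem must be unmounted before resize2fs can grow it with the tools of this era.

```shell
# Initialize the new disk's partition as a PV (hypothetical device)
pvcreate /dev/hdb1

# Add the new PV to the existing volume group
vgextend vg00 /dev/hdb1

# Grow the /home logical volume into the new space
lvextend -L +20G /dev/vg00/home

# Grow the filesystem to fill the enlarged LV (ext2: resize offline)
umount /home
resize2fs /dev/vg00/home
mount /home
```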
It is also easy to take old drives out of service by moving the data from them onto newer drives - this can be done online, without disrupting user service.

-----------------------------------------------------------------------------

Chapter 3. Anatomy of LVM

This diagram gives an overview of the main elements in an LVM system:

    +-- Volume Group --------------------------------+
    |                                                |
    |    +----------------------------------------+  |
    | PV | PE | PE | PE | PE | PE | PE | PE | PE  |  |
    |    +----------------------------------------+  |
    |      .          .          .          .        |
    |      .          .          .          .        |
    |    +----------------------------------------+  |
    | LV | LE | LE | LE | LE | LE | LE | LE | LE  |  |
    |    +----------------------------------------+  |
    |      .          .          .          .        |
    |      .          .          .          .        |
    |    +----------------------------------------+  |
    | PV | PE | PE | PE | PE | PE | PE | PE | PE  |  |
    |    +----------------------------------------+  |
    |                                                |
    +------------------------------------------------+

Another way to look at it is this (courtesy of [mailto:erik@bagfors.nu_NOSPAM] Erik Bågfors on the linux-lvm mailing list):

       hda1   hdc1     (PV:s on partitions or whole disks)
          \   /
           \ /
         diskvg        (VG)
         /  |  \
        /   |   \
    usrlv rootlv varlv (LV:s)
      |      |     |
    ext2 reiserfs xfs  (filesystems)

-----------------------------------------------------------------------------

3.1. volume group (VG)

The Volume Group is the highest level abstraction used within the LVM. It gathers together a collection of Logical Volumes and Physical Volumes into one administrative unit.

-----------------------------------------------------------------------------

3.2. physical volume (PV)

A physical volume is typically a hard disk, though it may well just be a device that 'looks' like a hard disk (eg. a software raid device).

-----------------------------------------------------------------------------

3.3. logical volume (LV)

The equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such the LV can contain a file system (eg. /home).
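Each of the three objects just defined can be inspected from the command line; the names vg00 and /dev/vg00/home below are hypothetical.

```shell
pvdisplay /dev/hda1       # a physical volume: size, PE size, extent counts
vgdisplay vg00            # the volume group: total, allocated and free PEs
lvdisplay /dev/vg00/home  # a logical volume: size, mapping, status
```

In LVM 2, the terser pvs, vgs and lvs commands print the same information with one line per object.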
-----------------------------------------------------------------------------

3.4. physical extent (PE)

Each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents for the volume group.

-----------------------------------------------------------------------------

3.5. logical extent (LE)

Each logical volume is split into chunks of data, known as logical extents. The extent size is the same for all logical volumes in the volume group.

-----------------------------------------------------------------------------

3.6. Tying it all together

A concrete example will help: Let's suppose we have a volume group called VG1 with a physical extent size of 4MB. Into this volume group we introduce two hard disk partitions, /dev/hda1 and /dev/hdb1. These partitions will become physical volumes PV1 and PV2 (more meaningful names can be given at the administrator's discretion).

The PVs are divided up into 4MB chunks, since this is the extent size for the volume group. The disks are different sizes and we get 99 extents in PV1 and 248 extents in PV2. We can now create ourselves a logical volume; this can be any size between 1 and 347 (248 + 99) extents. When the logical volume is created, a mapping is defined between logical extents and physical extents: eg. logical extent 1 could map onto physical extent 51 of PV1, so data written to the first 4 MB of the logical volume will in fact be written to the 51st extent of PV1.

-----------------------------------------------------------------------------

3.7. mapping modes (linear/striped)

The administrator can choose between a couple of general strategies for mapping logical extents onto physical extents:

1. Linear mapping will assign a range of PEs to an area of an LV in order, eg. LE 1 - 99 map to PV1 and LE 100 - 347 map onto PV2.

2.
Striped mapping will interleave the chunks of the logical extents across a number of physical volumes, eg. 1st chunk of LE[1] -> PV1[1], 2nd chunk of LE[1] -> PV2[1], 3rd chunk of LE[1] -> PV3[1], 4th chunk of LE[1] -> PV1[2], and so on. In certain situations this strategy can improve the performance of the logical volume.

Warning LVM 1 Caveat

LVs created using striping cannot be extended past the PVs they were originally created on in LVM 1. In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2-stripe set concatenated with a linear set concatenated with a 4-stripe set. Are you confused yet?

-----------------------------------------------------------------------------

3.8. Snapshots

A wonderful facility provided by LVM is 'snapshots'. This allows the administrator to create a new block device which presents an exact copy of a logical volume, frozen at some point in time. Typically this would be used when some batch processing, a backup for instance, needs to be performed on the logical volume, but you don't want to halt a live system that is changing the data. When the snapshot device has been finished with, the system administrator can simply remove the device.

This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state - the VFS-lock patch for LVM1 makes sure that some filesystems do this automatically when a snapshot is created, and many of the filesystems in the 2.6 kernel do this automatically when a snapshot is created without patching.

LVM1 has read-only snapshots. Read-only snapshots work by creating an exception table, which is used to keep track of which blocks have been changed. If a block is to be changed on the origin, it is first copied to the snapshot, marked as copied in the exception table, and then the new data is written to the original volume.
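A typical backup run using a snapshot might be sketched as follows. The names homesnap and /mnt/snap, and the 500 MB of exception space, are hypothetical; the exception space only needs to be large enough to absorb the blocks changed on the origin while the snapshot exists.

```shell
# Create a snapshot of /home; -s marks the new LV as a snapshot
lvcreate -s -L 500M -n homesnap /dev/vg00/home

# Mount the frozen image read-only and back it up
mount -o ro /dev/vg00/homesnap /mnt/snap
tar -czf /backup/home.tar.gz -C /mnt/snap .

# Discard the snapshot once the backup is finished
umount /mnt/snap
lvremove -f /dev/vg00/homesnap
```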
In LVM2, snapshots are read/write by default. Read/write snapshots work like read-only snapshots, with the additional feature that if data is written to the snapshot, that block is marked in the exception table as used and never gets copied from the original volume.

This opens up many new possibilities that were not possible with LVM1's read-only snapshots. One example is to snapshot a volume, mount the snapshot, and try an experimental program that changes files on that volume. If you don't like what it did, you can unmount the snapshot, remove it, and mount the original filesystem in its place.

It is also useful for creating volumes for use with [http://www.cl.cam.ac.uk/Research/SRG/netos/xen/] Xen. You can create a disk image, then snapshot it and modify the snapshot for a particular domU instance. You can then create another snapshot of the original volume, and modify that one for a different domU instance. Since the only storage used by a snapshot is blocks that were changed on the origin or the snapshot, the majority of the volume is shared by the domUs.

Warning LVM 1 -> LVM 2 Upgrade Info

Make sure to remove snapshot LVs before upgrading from LVM 1 to LVM 2. (See Section 4.1)

-----------------------------------------------------------------------------

Chapter 4. Frequently Asked Questions

4.1. LVM 2 FAQ

4.1.1. I have LVM 1 installed and running on my system. How do I start using LVM 2?
4.1.2. Do I need a special lvm2 kernel module?
4.1.3. I get errors about /dev/mapper/control when I try to use the LVM 2 tools. What's going on?
4.1.4. Which commands and types of logical volumes are currently supported in LVM 2?
4.1.5. Does LVM 2 use a different format from LVM 1 for its on-disk representation of Volume Groups and Logical Volumes?
4.1.6. Does LVM 2 support VGs and LVs created with LVM 1?
4.1.7. Can I upgrade my LVM 1 based VGs and LVs to LVM 2 native format?
4.1.8. I've upgraded to LVM 2, but the tools keep failing with out of memory errors. What gives?
4.1.9. I have my root partition on an LV in LVM 1. How do I upgrade to LVM 2? And what happened to lvmcreate_initrd?
4.1.10. How resilient is LVM to a sudden renumbering of physical hard disks?
4.1.11. I'm trying to fill my vg, and vgdisplay/vgs says that I have 1.87 GB free, but when I do an lvcreate vg -L1.87G it says "insufficient free extents". What's going on?
4.1.12. How are snapshots in LVM2 different from LVM1?
4.1.13. What is the maximum size of a single LV?

4.1.1. I have LVM 1 installed and running on my system. How do I start using LVM 2?

Here are the Quick Start instructions :)

1. Start by removing any snapshot LVs on the system. These are not handled by LVM 2 and will prevent the origin from being activated when LVM 2 comes up.

2. Make sure you have some way of booting the system other than from your standard boot partition. Have the LVM 1 tools, standard system tools (mount) and an LVM 1 compatible kernel on it in case you need to get back and fix some things.

3. Grab the LVM 2 tools source and the device mapper source and compile them. You need to install the device mapper library using "make install" before compiling the LVM 2 tools. Also copy the dm/scripts/devmap_mknod.sh script into /sbin. I recommend only installing the 'lvm' binary for now so you have access to the LVM 1 tools if you need them. If you have access to packages for LVM 2 and device-mapper, you can install those instead, but beware of them overwriting your LVM 1 tool set.

4. Get a device mapper compatible kernel, either built in or as a kernel module.

5. Figure out where LVM 1 was activated in your startup scripts. Make sure the device-mapper module is loaded by that point (if you are using device mapper as a module) and add '/sbin/devmap_mknod.sh; lvm vgscan; lvm vgchange -ay' afterward.

6. Install the kernel with device mapper support in it. Reboot. If all goes well, you should be running with lvm2.

4.1.2. Do I need a special lvm2 kernel module?

No. You need device-mapper.
The lvm2 tools use device-mapper to interface with the kernel and do all their device mapping (hence the name device-mapper). As long as you have device-mapper, you should be able to use LVM2.

4.1.3. I get errors about /dev/mapper/control when I try to use the LVM 2 tools. What's going on?

The primary cause of this is not having run "dmsetup mknodes" after rebooting into a dm capable kernel. This command generates the control node for device mapper.

If you don't have "dmsetup mknodes", don't despair! (Though you should probably upgrade to the latest version of device-mapper.) It's pretty easy to create the /dev/mapper/control file on your own:

1. Make sure you have the device-mapper module loaded (if you didn't build it into your kernel).

2. Run
# cat /proc/misc | grep device-mapper | awk '{print $1}'
and note the number printed. (If you don't get any output, refer to step 1.)

3. Run
# mkdir /dev/mapper
If you get an error saying /dev/mapper already exists, make sure it's a directory and move on.

4. Run
# mknod /dev/mapper/control c 10 $number
where $number is the number printed in step 2.

You should be all set now!

4.1.4. Which commands and types of logical volumes are currently supported in LVM 2?

If you are using the stable 2.4 device mapper patch from the lvm2 tarball, all the major functionality you'd expect using lvm1 is supported with the lvm2 tools. (You still need to remove snapshots before upgrading from lvm1 to lvm2.)

If you are using the version of device mapper in the 2.6 kernel.org kernel series, the following commands and LV types are not supported:

  * pvmove
  * snapshots

The beginnings of support for these features are in the [http://people.sistina.com/~thornber/dm/] unstable device mapper patches maintained by Joe Thornber.

4.1.5. Does LVM 2 use a different format from LVM 1 for its on-disk representation of Volume Groups and Logical Volumes?

Yes. LVM 2 uses lvm 2 format metadata.
This format is much more flexible than the LVM 1 format metadata, removing or reducing most of the limitations LVM 1 had.

4.1.6. Does LVM 2 support VGs and LVs created with LVM 1?

Yes. LVM 2 will activate and operate on VGs and LVs created with LVM 1. The exception to this is snapshots created with LVM 1 - these should be removed before upgrading. Snapshots that remain after upgrading will have to be removed before their origins can be activated by LVM 2.

4.1.7. Can I upgrade my LVM 1 based VGs and LVs to LVM 2 native format?

Yes. Use vgconvert to convert your VG and all LVs contained within it to the new lvm 2 format metadata. Be warned that it's not always possible to revert to lvm 1 format metadata.

4.1.8. I've upgraded to LVM 2, but the tools keep failing with out of memory errors. What gives?

One possible cause of this is that some versions of LVM 1 (the user who originally reported this bug was using Mandrake 9.2, but it is not necessarily limited to that distribution) did not put a UUID into the PV and VG structures as they were supposed to. The most current versions of the LVM 2 tools automatically fill in UUIDs for the structures if they see they are missing, so you should grab a more current version and your problem should be solved. If not, post to the linux-lvm mailing list.

4.1.9. I have my root partition on an LV in LVM 1. How do I upgrade to LVM 2? And what happened to lvmcreate_initrd?

Upgrading to LVM 2 is a bit trickier with root on LVM, but it's not impossible. You need to queue up a kernel with device-mapper support and install the lvm2 tools (you might want to make a backup of the lvm 1 tools, or find a rescue disk with the lvm tools built in, in case you need them before you're done). Then find a mkinitrd script that has support for your distro and lvm 2.
Currently, this is the list of mkinitrd scripts that I know support lvm2, sorted by distro:

mkinitrd scripts with lvm 2 support

fedora
  The latest fedora core 2 [http://distro.ibiblio.org/pub/linux/distributions/fedora/linux/core/development/i386/Fedora/RPMS/mkinitrd-3.5.21-1.i386.rpm] mkinitrd handles lvm2, but it relies on a statically built lvm binary from the latest lvm 2 tarball. Redhat 9 users may be able to use this as well.

Debian
  There is an unofficial version [http://www.poochiereds.net/svn/lvm2/] here.

Generic
  There is a version in the lvm2 source tree under scripts/lvm2_createinitrd/. See the documentation in that directory for more details.

4.1.10. How resilient is LVM to a sudden renumbering of physical hard disks?

It's fine - LVM identifies PVs by UUID, not by device name.

Each disk (PV) is labeled with a UUID, which uniquely identifies it to the system. 'vgscan' identifies this after a new disk is added that changes your drive numbering. Most distros run vgscan in the lvm startup scripts to cope with this on reboot after a hardware addition. If you're doing a hot-add, you'll have to run this by hand, I think.

On the other hand, if your vg is activated and being used, the renumbering should not affect it at all. It's only the activation that needs the identifier, and the worst case scenario is that the activation will fail without a vgscan, with a complaint about a missing PV.

Note The failure or removal of a drive that LVM is currently using will cause problems with current use and future activations of the VG that was using it.

4.1.11. I'm trying to fill my vg, and vgdisplay/vgs says that I have 1.87 GB free, but when I do an lvcreate vg -L1.87G it says "insufficient free extents". What's going on?

The 1.87 GB figure is rounded to 2 decimal places, so it's probably 1.866 GB or something. This is a human-readable output to give you a general idea of how big the VG is.
If you want to specify an exact size, you must use extents instead of some multiple of bytes.

In the case of vgdisplay, use the Free PE count instead of the human-readable capacity:

  Free  PE / Size       478 / 1.87 GB
                        ^^^

So, this would indicate that you should run

# lvcreate vg -l478

Note that instead of an upper-case 'L', we used a lower-case 'l' to tell lvm to use extents instead of bytes.

In the case of vgs, you need to instruct it to tell you how many extents are available:

# vgs -o +vg_free_count,vg_extent_count

This tells vgs to add the free extents and the total number of extents to the end of the vgs listing. Use the free extent number the same way you would in the above vgdisplay case.

4.1.12. How are snapshots in LVM2 different from LVM1?

In LVM2 snapshots are read/write by default, whereas in LVM1 snapshots were read-only. See Section 3.8 for more details.

4.1.13. What is the maximum size of a single LV?

The answer to this question depends upon the CPU architecture of your computer and the kernel you are running:

  *  For 2.4 based kernels, the maximum LV size is 2TB. For some older kernels, however, the limit was 1TB due to signedness problems in the block layer. Red Hat Enterprise Linux 3 Update 5 has fixes to allow the full 2TB LVs. Consult your distribution for more information in this regard.

  *  For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.

  *  For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB. (Yes, that is a very large number.)

-----------------------------------------------------------------------------

4.2. LVM 1 FAQ

4.2.1. When will there be info here?

4.2.1. When will there be info here?

When people start submitting FAQ entries ;)

-----------------------------------------------------------------------------

Chapter 5. Acquiring LVM

The first thing you need to do is get a copy of LVM.

  *  Download via FTP a tarball of LVM.
  *  Download the source that is under active development via CVS

-----------------------------------------------------------------------------
5.1. Download the source

  *  [ftp://sources.redhat.com/pub/dm/] Device Mapper

  *  LVM 2 - Make sure you also grab the device mapper source

  *  LVM 1

Note The LVM 1 kernel patch must be generated using the LVM 1 source. More information regarding this can be found in Section 6.2.

-----------------------------------------------------------------------------
5.2. Download the development source via CVS

Note: the state of code in the CVS repository fluctuates wildly. It will contain bugs. Maybe ones that will crash LVM or the kernel. It may not even compile. Consider it alpha-quality code. You could lose data. You have been warned.

-----------------------------------------------------------------------------
5.3. Before You Begin

To follow the development progress of LVM, subscribe to the LVM mailing lists, linux-lvm and the appropriate commit list (see Section C.1).

To build LVM from the CVS sources, you must have several GNU tools:

  * the CVS client version 1.9 or better
  * GCC 2.95.2
  * GNU make 3.79
  * autoconf, version 2.13 or better

-----------------------------------------------------------------------------
5.4. Initial Setup

To make life easier when updating the CVS tree in the future, create the file $HOME/.cvsrc and insert the following lines. This configures useful defaults for the three most commonly used CVS commands. Do this now before proceeding any further.

diff -u -b -B
checkout -P
update -d -P

Also, if you are on a slow net link (like a dialup), you will want to add a line containing cvs -z5 in this file. This turns on a useful compression level for all CVS commands.

-----------------------------------------------------------------------------
5.5. Checking Out Source Code

  *  Device Mapper library and tools

The device mapper library is required to build LVM 2.
The first time you download from cvs, you must log in:

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/dm login cvs

The password is `cvs'. The command outputs nothing if successful and an error message if it fails. Only an initial login is required. All subsequent CVS commands read the password stored in the file $HOME/.cvspass for authentication.

Use the following to check out a copy of the code:

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/dm checkout device-mapper

This will create a new directory device-mapper in your current directory containing the latest, up-to-the-minute device mapper code.

  * LVM 2

The first time you download from cvs, you must log in:

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 login cvs

The password is `cvs'. The command outputs nothing if successful and an error message if it fails. Only an initial login is required. All subsequent CVS commands read the password stored in the file $HOME/.cvspass for authentication.

Use the following to check out a copy of the code:

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 checkout LVM2

This will create a new directory LVM2 in your current directory containing the latest, up-to-the-minute LVM 2 code.

  * LVM 1

The first time you download from cvs, you must log in:

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm login cvs

The password is `cvs'. The command outputs nothing if successful and an error message if it fails. Only an initial login is required. All subsequent CVS commands read the password stored in the file $HOME/.cvspass for authentication.

Use the following to check out a copy of the code:

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm checkout LVM

This will create a new directory LVM in your current directory containing the latest, up-to-the-minute LVM 1 code.

CVS commands work from anywhere inside the source tree, and recurse downward.
So if you happen to issue an update from inside the `tools' subdirectory, it will work fine, but only update the tools directory and its subdirectories. In the following command examples it is assumed that you are at the top of the source tree.

-----------------------------------------------------------------------------
5.6. Code Updates

Code changes are made fairly frequently in the CVS repository. Announcements of these are automatically sent to the lvm-commit list.

You can update your copy of the sources to match the master repository with the update command. It is not necessary to check out a new copy. Using update is significantly faster and simpler, as it will download only patches instead of entire files and update only those files that have changed since your last update. It will automatically merge any changes in the CVS repository with any local changes you have made as well. Just cd to the directory you'd like to update and then type the following:

# cvs update

If you did not specify a tag when you checked out the source, this will update your sources to the latest version on the main branch. If you specified a branch tag, it will update to the latest version on that branch. If you specified a version tag, it will not do anything.

-----------------------------------------------------------------------------
5.7. Starting a Project

Discuss your ideas on the developers list before you start. Someone may be working on the same thing you have in mind or they may have some good ideas about how to go about it.

-----------------------------------------------------------------------------
5.8. Hacking the Code

So, have you found a bug you want to fix? Want to implement a feature from the TODO list? Got a new feature to implement? Hacking the code couldn't be easier. Just edit your copy of the sources. No need to copy files to .orig or anything. CVS has copies of the originals.
When you have your code in a working state, and have tested it as best you can with the hardware you have, generate a patch against the current sources in the CVS repository:

# cvs update
# cvs diff > patchfile

Mail the patch to the linux-lvm or dm-devel list (Section C.1) with a description of what changes or additions you implemented.

-----------------------------------------------------------------------------
5.9. Conflicts

If someone else has been working on the same files as you have, you may find that there are conflicting modifications. You'll discover this when you try to update your sources:

# cvs update
RCS file: LVM/tools/pvcreate.c,v
retrieving revision 1.5
retrieving revision 1.6
Merging differences between 1.5 and 1.6 into pvcreate.c
rcsmerge: warning: conflicts during merge
cvs server: conflicts found in tools/pvcreate.c
C tools/pvcreate.c

Don't panic! Your working file, as it existed before the update, is saved under the filename .#pvcreate.c.1.5. You can always recover it should things go horribly wrong. The file named `pvcreate.c' now contains both the old (i.e. your) version and the new version of the lines that conflicted. You simply edit the file and resolve each conflict by deleting the unwanted version of the lines involved:

<<<<<<< pvcreate.c
j++;
=======
j--;
>>>>>>> 1.6

Don't forget to delete the lines with all the ``<'', ``='', and ``>'' symbols.

-----------------------------------------------------------------------------
Chapter 6. Building the kernel modules

6.1. Building the device-mapper module

Device mapper is in 2.6.9 and later kernels, so you just need to make sure it is enabled either as a module or built into the kernel. Look for /sys/class/misc/device-mapper, or look in /proc/devices for a device-mapper entry, to see if it is enabled. If neither is there, try modprobe dm_mod, then check again. For kernels older than 2.6.9, either you or your distro must patch the kernel to support it. Check the device mapper web page for more information.
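The checks just described can be scripted. A minimal sketch (the paths are the standard 2.6 ones from the text; have_dm is a hypothetical helper name, factored out so the /proc/devices parsing is easy to test):

```shell
# succeeds if the given /proc/devices text lists a device-mapper entry
have_dm() {
    printf '%s\n' "$1" | grep -q 'device-mapper'
}

# the two checks described above, falling back to suggesting modprobe
if [ -d /sys/class/misc/device-mapper ] || \
   have_dm "$(cat /proc/devices 2>/dev/null)"; then
    echo "device-mapper is enabled"
else
    echo "device-mapper not found; try: modprobe dm_mod, then check again"
fi
```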
-----------------------------------------------------------------------------
6.2. Build the LVM 1 kernel module

To use LVM 1 you will have to build the LVM 1 kernel module (recommended), or, if you prefer, rebuild the kernel with the LVM 1 code statically linked into it.

Your Linux system is probably based on one of the popular distributions (e.g., Red Hat, SuSE, Debian) in which case it is possible that you already have the LVM 1 module. Check the version of the tools you have on your system. You can do this by running any of the LVM command line tools with the '-h' flag. Use pvscan -h if you don't know any of the commands. If the version number listed at the top of the help listing is LVM 1.0.8, you can use your current setup and skip the rest of this section.

-----------------------------------------------------------------------------
6.2.1. Building a patch for your kernel

In order to patch the linux kernel to support LVM 1.0.8, you must do the following:

1. Unpack LVM 1.0.8

   # tar zxf lvm_1.0.8.tar.gz

2. Enter the root directory of that version.

   # cd LVM/1.0.8

3. Run configure

   # ./configure

   You will need to pass the option --with-kernel_dir to configure if your linux kernel source is not in /usr/src/linux. (Run ./configure --help to see all the options available.)

4. Enter the PATCHES directory

   # cd PATCHES

5. Run 'make'

   # make

   You should now have a patch called lvm-1.0.8-$KERNELVERSION.patch in the PATCHES directory. This is the LVM kernel patch referenced in later sections of the howto.

6. Patch the kernel

   # cd /usr/src/linux ; patch -pX < /directory/lvm-1.0.8-$KERNELVERSION.patch

-----------------------------------------------------------------------------
6.2.2. Building the LVM module for Linux 2.2.17+

The 2.2 series kernel needs to be patched before you can start building; look elsewhere for instructions on how to patch your kernel.

Patches:

1. 
rawio patch

   Stephen Tweedie's raw_io patch, which can be found at [http://www.kernel.org/pub/linux/kernel/people/sct/raw-io]

2. lvm patch

   The relevant LVM 1 patch, which should be built out of the PATCHES sub-directory of the LVM distribution. More information can be found in Section 6.2.1, Building a patch for your kernel.

Once the patches have been correctly applied, you need to make sure that the module is actually built. LVM 1 lives under the block devices section of the kernel config; you should probably request that the LVM /proc information is compiled in as well. Build the kernel modules as usual.

-----------------------------------------------------------------------------
6.2.3. Building the LVM modules for Linux 2.4

The 2.4 kernel comes with LVM 1 already included, although you should check at the Sistina web site for updates (e.g., v2.4.9 kernels and earlier must have the latest LVM 1 patch applied). When configuring your kernel, look for LVM 1 under Multi-device support (RAID and LVM). LVM 1 can be compiled into the kernel or as a module. Build your kernel and modules and install them in the usual way. If you chose to build LVM as a module, it will be called lvm-mod.o.

If you want to use snapshots with ReiserFS, make sure you apply the linux-2.4.x-VFS-lock patch (there are copies of this in the LVM/1.0.8/PATCHES directory).

-----------------------------------------------------------------------------
6.2.4. Checking the proc file system

If your kernel was compiled with the /proc file system (most are) then you can verify that LVM is present by looking for a /proc/lvm directory. If this doesn't exist then you may have to load the module with the command

# modprobe lvm-mod

If /proc/lvm still does not exist then check your kernel configuration carefully.

When LVM is active you will see entries in /proc/lvm for all your physical volumes, volume groups and logical volumes.
In addition there is a "file" called /proc/lvm/global which gives a summary of the LVM status and also shows just which version of the LVM kernel you are using.

-----------------------------------------------------------------------------
Chapter 7. LVM 1 Boot time scripts

Boot-time scripts are not provided as part of the LVM distribution; however, they are quite simple to write yourself. The startup of LVM requires just the following two commands:

# vgscan
# vgchange -ay

And the shutdown only one:

# vgchange -an

Follow the instructions below depending on the distribution of Linux you are running.

-----------------------------------------------------------------------------
7.1. Caldera

It is necessary to edit the file /etc/rc.d/rc.boot. Look for the line that says "Mounting local filesystems" and insert the vgscan and vgchange commands just before it.

You may also want to edit the file /etc/rc.d/init.d/halt to deactivate the volume groups at shutdown. Insert the vgchange -an command near the end of this file, just after the filesystems are unmounted or mounted read-only, before the comment that says "Now halt or reboot".

-----------------------------------------------------------------------------
7.2. Debian

If you download the Debian lvm tool package, an initscript should be installed for you. If you are installing LVM from source, you will still need to build your own initscript:

Create a startup script in /etc/init.d/lvm containing the following:

#!/bin/sh

case "$1" in
  start)
    /sbin/vgscan
    /sbin/vgchange -ay
    ;;
  stop)
    /sbin/vgchange -an
    ;;
  restart|force-reload)
    ;;
esac

exit 0

Then execute the commands:

# chmod 0755 /etc/init.d/lvm
# update-rc.d lvm start 26 S . stop 82 1 .

Note the dots in the last command.

-----------------------------------------------------------------------------
7.3. Mandrake

No initscript modifications should be necessary for current versions of Mandrake.

-----------------------------------------------------------------------------
7.4. 
Redhat

For Redhat 7.0 and up, you should not need to modify any initscripts to enable LVM at boot time if LVM is built into the kernel. If LVM is built as a module, it may be necessary to modify /etc/rc.d/rc.sysinit to load the LVM module by adding "modprobe lvm-mod" before the section that reads:

# LVM initialization, take 2 (it could be on top of RAID)
if [ -e /proc/lvm -a -x /sbin/vgchange -a -f /etc/lvmtab ]; then
    action $"Setting up Logical Volume Management:" /sbin/vgscan && /sbin/vgchange -a y
fi

Note This init script fragment is from Red Hat 7.3 - other versions of Redhat may look slightly different.

For versions of Redhat older than 7.0, it is necessary to edit the file /etc/rc.d/rc.sysinit. Look for the line that says "Mount all other filesystems" and insert the vgscan and vgchange commands just before it. You should be sure that your root file system is mounted read/write before you run the LVM commands.

You may also want to edit the file /etc/rc.d/init.d/halt to deactivate the volume groups at shutdown. Insert the vgchange -an command near the end of this file, just after the filesystems are mounted read-only, before the comment that says "Now halt or reboot".

-----------------------------------------------------------------------------
7.5. Slackware

Slackware 8.1 requires no updating of boot time scripts in order to make LVM work.

For versions previous to Slackware 8.1, you should apply the following patch to /etc/rc.d/rc.S

cd /etc/rc.d
cp -a rc.S rc.S.old
patch -p0 < rc.S.diff

(the cp makes a backup, just in case).

----- snip snip file: rc.S.diff---------------
--- rc.S.or     Tue Jul 17 18:11:20 2001
+++ rc.S        Tue Jul 17 17:57:36 2001
@@ -4,6 +4,7 @@
 #
 # Mostly written by:  Patrick J. Volkerding,
 #
+# Added LVM support
 
 PATH=/sbin:/usr/sbin:/bin:/usr/bin
 
@@ -28,19 +29,21 @@
   READWRITE=yes
 fi
 
+
 # Check the integrity of all filesystems
 if [ ! $READWRITE = yes ]; then
-  /sbin/fsck -A -a
+  /sbin/fsck -a /
+  # Check only the root fs first, but no others
   # If there was a failure, drop into single-user mode.
   if [ $? -gt 1 ] ; then
     echo
     echo
-    echo "*******************************************************"
-    echo "*** An error occurred during the file system check. ***"
-    echo "*** You will now be given a chance to log into the  ***"
-    echo "*** system in single-user mode to fix the problem.  ***"
-    echo "*** Running 'e2fsck -v -y ' might help.             ***"
-    echo "*******************************************************"
+    echo "************************************************************"
+    echo "*** An error occurred during the root file system check. ***"
+    echo "*** You will now be given a chance to log into the       ***"
+    echo "*** system in single-user mode to fix the problem.       ***"
+    echo "*** Running 'e2fsck -v -y ' might help.                  ***"
+    echo "************************************************************"
     echo
     echo "Once you exit the single-user shell, the system will reboot."
     echo
@@ -82,6 +85,44 @@
   echo -n "get into your machine and start looking for the problem. "
   read junk;
 fi
+
+# okay / fs is clean, and mounted as rw
+# This was an addition, limits vgscan to /proc thus
+# speeding up the scan immensely.
+/sbin/mount /proc
+
+# Initialize Logical Volume Manager
+/sbin/vgscan
+/sbin/vgchange -ay
+
+/sbin/fsck -A -a -R
+# Check all the other filesystems, including the LVMs, excluding /
+
+# If there was a failure, drop into single-user mode.
+if [ $? -gt 1 ] ; then
+  echo
+  echo
+  echo "*******************************************************"
+  echo "*** An error occurred during the file system check. ***"
+  echo "*** You will now be given a chance to log into the  ***"
+  echo "*** system in single-user mode to fix the problem.  ***"
+  echo "*** Running 'e2fsck -v -y ' might help.             ***"
+  echo "*** The root filesystem is ok and mounted readwrite ***"
+  echo "*******************************************************"
+  echo
+  echo "Once you exit the single-user shell, the system will reboot."
+  echo
+
+  PS1="(Repair filesystem) #"; export PS1
+  sulogin
+
+  echo "Unmounting file systems."
+  umount -a -r
+  mount -n -o remount,ro /
+  echo "Rebooting system."
+  sleep 2
+  reboot
+fi
+
 else
   echo "Testing filesystem status:  read-write filesystem"
   if cat /etc/fstab | grep ' / ' | grep umsdos 1> /dev/null 2> /dev/null ; then
@@ -111,14 +152,16 @@
   echo -n "Press ENTER to continue. "
   read junk;
  fi
+
 fi
+
 # remove /etc/mtab* so that mount will create it with a root entry
 /bin/rm -f /etc/mtab* /etc/nologin /etc/shutdownpid
 
 # mount file systems in fstab (and create an entry for /)
 # but not NFS or SMB because TCP/IP is not yet configured
-/sbin/mount -a -v -t nonfs,nosmbfs
+/sbin/mount -a -v -t nonfs,nosmbfs,proc
 
 # Clean up temporary files on the /var volume:
 /bin/rm -f /var/run/utmp /var/run/*.pid /var/log/setup/tmp/*
--snip snip snip end of file---------------

-----------------------------------------------------------------------------
7.6. SuSE

No changes should be necessary from 6.4 onward as LVM is included.

-----------------------------------------------------------------------------
Chapter 8. LVM 2 Boot Time Scripts

For initrds, you should have:

dmsetup mknodes
vgscan --ignorelockingfailure
vgchange -ay --ignorelockingfailure

in the linuxrc to get the root LV activated before the root volume is accessed. Most distros seem to have this setup in their mkinitrd scripts now, and they also tend to have them in rc.sysinit or equivalent, so all volumes get activated on bootup.

-----------------------------------------------------------------------------
Chapter 9. Building LVM from the Source

9.1. Make LVM library and tools

Change into the LVM directory and do a ./configure followed by make. This will make all of the libraries and programs.
If the need arises you can change some options with the configure script. Do a ./configure --help to determine which options are supported. Most of the time this will not be necessary.

There should be no errors from the build process. If there are, see Reporting Errors and Bugs on how to report this. You are welcome to fix them and send us the patches too. Patches are generally sent to the linux-lvm list.

-----------------------------------------------------------------------------
9.2. Install LVM library and tools

After the LVM source compiles properly, simply run make install to install the LVM library and tools onto your system.

-----------------------------------------------------------------------------
9.3. Removing LVM library and tools

To remove the library and tools you just installed, run make remove. You must have the original source tree you used to install LVM to use this feature.

-----------------------------------------------------------------------------
Chapter 10. Transitioning from previous versions of LVM to LVM 1.0.8

Transitioning from previous versions of LVM to LVM 1.0.8 should be fairly painless. We have come up with a method to read in PV version 1 metadata (LVM 0.9.1 Beta7 and earlier) as well as PV version 2 metadata (LVM 0.9.1 Beta8 and LVM 1.0).

Warning: New PVs initialized with LVM 1.0.8 are created with the PV version 1 on-disk structure. This means that LVM 0.9.1 Beta8 and LVM 1.0 cannot read or use PVs created with 1.0.8.

-----------------------------------------------------------------------------
10.1. Upgrading to LVM 1.0.8 with a non-LVM root partition

There are just a few simple steps to transition this setup, but it is still recommended that you back up your data before you try it. You have been warned.

1. Build LVM kernel and modules

   Follow the steps outlined in Chapter 5 and Section 6.2 for instructions on how to get and build the necessary kernel components of LVM.

2. 
Build the LVM user tools

   Follow the steps in Chapter 9 to build and install the user tools for LVM.

3. Setup your init scripts

   Make sure you have the proper init scripts setup as per Chapter 7.

4. Boot into the new kernel

   Make sure your boot-loader is setup to load the new LVM-enhanced kernel and, if you are using LVM modules, put an insmod lvm-mod into your startup script OR extend /etc/modules.conf (formerly /etc/conf.modules) by adding

   alias block-major-58 lvm-mod
   alias char-major-109 lvm-mod

   to enable modprobe to load the LVM module (don't forget to enable kmod). Reboot and enjoy.

-----------------------------------------------------------------------------
10.2. Upgrading to LVM 1.0.8 with an LVM root partition and initrd

This is relatively straightforward if you follow the steps carefully. It is recommended you have a good backup and a suitable rescue disk handy just in case.

The "normal" way of running an LVM root file system is to have a single non-LVM partition called /boot which contains the kernel and initial RAM disk needed to start the system. The system I upgraded was as follows:

# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/rootvg/root        253871     93384    147380  39% /
/dev/hda1                17534     12944      3685  78% /boot
/dev/rootvg/home       4128448      4568   3914168   0% /home
/dev/rootvg/usr        1032088    332716    646944  34% /usr
/dev/rootvg/var         253871     31760    209004  13% /var

/boot contains the old kernel and an initial RAM disk, as well as the LILO boot files, and the following entry in /etc/lilo.conf:

# ls /boot
System.map  lost+found  vmlinux-2.2.16lvm  map      module-info
boot.0300   boot.b      os2_d.b            chain.b  initrd.gz

# tail /etc/lilo.conf
image=/boot/vmlinux-2.2.16lvm
        label=lvm08
        read-only
        root=/dev/rootvg/root
        initrd=/boot/initrd.gz
        append="ramdisk_size=8192"

1. Build LVM kernel and modules

   Follow the steps outlined in Chapter 5 and Section 6.2 for instructions on how to get and build the necessary kernel components of LVM.

2. 
Build the LVM user tools

   Follow the steps in Chapter 9 to build and install the user tools for LVM. Install the new tools. Once you have done this you cannot do any LVM manipulation, as the new tools are not compatible with the kernel you are currently running.

3. Rename the existing initrd.gz

   This is so it doesn't get overwritten by the new one:

   # mv /boot/initrd.gz /boot/initrd08.gz

4. Edit /etc/lilo.conf

   Make the existing boot entry point to the renamed file. You will need to reboot using this if something goes wrong in the next reboot. The changed entry will look something like this:

   image=/boot/vmlinux-2.2.16lvm
        label=lvm08
        read-only
        root=/dev/rootvg/root
        initrd=/boot/initrd08.gz
        append="ramdisk_size=8192"

5. Run lvmcreate_initrd to create a new initial RAM disk

   # lvmcreate_initrd 2.4.9

   Don't forget to put the new kernel version in there so that it picks up the correct modules.

6. Add a new entry into /etc/lilo.conf

   This new entry is to boot the new kernel with its new initrd:

   image=/boot/vmlinux-2.4.9lvm
        label=lvm10
        read-only
        root=/dev/rootvg/root
        initrd=/boot/initrd.gz
        append="ramdisk_size=8192"

7. Re-run lilo

   This will install the new boot block:

   # /sbin/lilo

8. Reboot

   When you get the LILO prompt, select the new entry name (in this example lvm10) and your system should boot into Linux using the new LVM version.

   If the new kernel does not boot, then simply boot the old one and try to fix the problem. It may be that the new kernel does not have all the correct device drivers built into it, or that they are not available in the initrd. Remember that all device drivers (apart from LVM) needed to access the root device should be compiled into the kernel and not as modules.

   If you need to do any LVM manipulation when booted back into the old version, then simply recompile the old tools and install them with

   # make install

   If you do this, don't forget to install the new tools when you reboot into the new LVM version.
When you are happy with the new system, remember to change the ``default='' entry in your lilo.conf file so that the new kernel is the default.

-----------------------------------------------------------------------------
Chapter 11. Common Tasks

The following sections outline some common administrative tasks for an LVM system. This is no substitute for reading the man pages.

-----------------------------------------------------------------------------
11.1. Initializing disks or disk partitions

Before you can use a disk or disk partition as a physical volume, you will have to initialize it:

For entire disks:

  *  Run pvcreate on the disk:

     # pvcreate /dev/hdb

     This creates a volume group descriptor at the start of the disk.

  *  If you get an error that LVM can't initialize a disk with a partition table on it, first make sure that the disk you are operating on is the correct one. If you are very sure that it is, run the following:

     Warning DANGEROUS   The following commands will destroy the partition table on the disk being operated on. Be very sure it is the correct disk.

     # dd if=/dev/zero of=/dev/diskname bs=1k count=1
     # blockdev --rereadpt /dev/diskname

For partitions:

  *  When using LVM 1 on PCs with DOS partitions, set the partition type to 0x8e using fdisk or some other similar program. This step is unnecessary on PPC systems or when using LVM 2.

  *  Run pvcreate on the partition:

     # pvcreate /dev/hdb1

     This creates a volume group descriptor at the start of the /dev/hdb1 partition.

-----------------------------------------------------------------------------
11.2. Creating a volume group

Use the 'vgcreate' program:

# vgcreate my_volume_group /dev/hda1 /dev/hdb1

NOTE: If you are using LVM 1 with devfs it is essential to use the full devfs name of the device rather than the symlinked name in /dev. So the above would be:

# vgcreate my_volume_group /dev/ide/host0/bus0/target0/lun0/part1 \
    /dev/ide/host0/bus0/target1/lun0/part1

LVM 2 does not have this restriction.
You can also specify the extent size with the '-s' switch to this command if the default of 32MB is not suitable for you. In addition you can put some limits on the number of physical or logical volumes the volume group can have.

-----------------------------------------------------------------------------
11.3. Activating a volume group

After rebooting the system or running vgchange -an, you will not be able to access your VGs and LVs. To reactivate the volume group, run:

# vgchange -a y my_volume_group

-----------------------------------------------------------------------------
11.4. Removing a volume group

Make sure that no logical volumes are present in the volume group; see the later section on removing a logical volume for how to do this.

Deactivate the volume group:

# vgchange -a n my_volume_group

Now you actually remove the volume group:

# vgremove my_volume_group

-----------------------------------------------------------------------------
11.5. Adding physical volumes to a volume group

Use 'vgextend' to add an initialized physical volume to an existing volume group:

# vgextend my_volume_group /dev/hdc1
                           ^^^^^^^^^ new physical volume

-----------------------------------------------------------------------------
11.6. Removing physical volumes from a volume group

Make sure that the physical volume isn't used by any logical volumes by using the 'pvdisplay' command:

# pvdisplay /dev/hda1

--- Physical volume ---
PV Name               /dev/hda1
VG Name               myvg
PV Size               1.95 GB / NOT usable 4 MB [LVM: 122 KB]
PV#                   1
PV Status             available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              499
Free PE               0
Allocated PE          499
PV UUID               Sd44tK-9IRw-SrMC-MOkn-76iP-iftz-OVSen7

If the physical volume is still used, you will have to migrate the data to another physical volume using pvmove.

Then use 'vgreduce' to remove the physical volume:

# vgreduce my_volume_group /dev/hda1

-----------------------------------------------------------------------------
11.7. 
Creating a logical volume

To create a 1500MB linear LV named 'testlv' and its block device special '/dev/testvg/testlv':

# lvcreate -L1500 -ntestlv testvg

To create a logical volume of 100 logical extents with 2 stripes and a stripe size of 4 KB:

# lvcreate -i2 -I4 -l100 -nanothertestlv testvg

If you want to create an LV that uses the entire VG, use vgdisplay to find the "Total PE" size, then use that when running lvcreate:

# vgdisplay testvg | grep "Total PE"
Total PE              10230
# lvcreate -l 10230 testvg -n mylv

This will create an LV called mylv filling the testvg VG.

If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV or PVs at the end of the lvcreate command line:

# lvcreate -L 1500 -ntestlv testvg /dev/sdg

-----------------------------------------------------------------------------
11.8. Removing a logical volume

A logical volume must be closed before it can be removed:

# umount /dev/myvg/homevol
# lvremove /dev/myvg/homevol
lvremove -- do you really want to remove "/dev/myvg/homevol"? [y/n]: y
lvremove -- doing automatic backup of volume group "myvg"
lvremove -- logical volume "/dev/myvg/homevol" successfully removed

-----------------------------------------------------------------------------
11.9. Extending a logical volume

To extend a logical volume you simply tell the lvextend command how much you want to increase the size. You can specify how much to grow the volume, or how large you want it to grow to:

# lvextend -L12G /dev/myvg/homevol
lvextend -- extending logical volume "/dev/myvg/homevol" to 12 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended

will extend /dev/myvg/homevol to 12 Gigabytes.
# lvextend -L+1G /dev/myvg/homevol
lvextend -- extending logical volume "/dev/myvg/homevol" to 13 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended

will add another gigabyte to /dev/myvg/homevol.

After you have extended the logical volume it is necessary to increase the file system size to match. How you do this depends on the file system you are using.

By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume, so you don't need to worry about specifying the same size for each of the two commands.

1. ext2/ext3

   Unless you have patched your kernel with the ext2online patch, it is necessary to unmount the file system before resizing it. (It seems that the online resizing patch is rather dangerous, so use it at your own risk.)

   # umount /dev/myvg/homevol
   # resize2fs /dev/myvg/homevol
   # mount /dev/myvg/homevol /home

   If you don't have e2fsprogs 1.19 or later, you can download the ext2resize command from [http://ext2resize.sourceforge.net] and use that:

   # umount /dev/myvg/homevol
   # ext2resize /dev/myvg/homevol
   # mount /dev/myvg/homevol /home

   For ext2 there is an easier way. LVM 1 ships with a utility called e2fsadm which does the lvextend and resize2fs for you (it can also do file system shrinking; see the next section).

   Warning LVM 2 Caveat   There is currently no e2fsadm equivalent for LVM 2, and the e2fsadm that ships with LVM 1 does not work with LVM 2.

   So the single command

   # e2fsadm -L+1G /dev/myvg/homevol

   is equivalent to the two commands:

   # lvextend -L+1G /dev/myvg/homevol
   # resize2fs /dev/myvg/homevol

   Note   You will still need to unmount the file system before running e2fsadm.

2. 
reiserfs

   Reiserfs file systems can be resized when mounted or unmounted, as you
   prefer:

   +  Online:
      # resize_reiserfs -f /dev/myvg/homevol

   +  Offline:
      # umount /dev/myvg/homevol
      # resize_reiserfs /dev/myvg/homevol
      # mount -treiserfs /dev/myvg/homevol /home

3. xfs

   XFS file systems must be mounted to be resized, and the mount point is
   specified rather than the device name.

   # xfs_growfs /home

4. jfs

   Just like XFS, the JFS file system must be mounted to be resized and
   the mount point is specified rather than the device name. You need at
   least version 1.0.21 of the jfs-utils to do this.

   # mount -o remount,resize /home

   Warning Known Kernel Bug   Some kernel versions have problems with this
   syntax (2.6.0 is known to have this problem). In this case you have to
   explicitly specify the new size of the filesystem in blocks. This is
   error prone, as you must know the block size of your filesystem and
   calculate the new size in those units. Example: to resize a JFS file
   system with 4k blocks to 4 gigabytes (4194304 KB / 4 KB = 1048576
   blocks), you would write:

   # mount -o remount,resize=1048576 /home
-----------------------------------------------------------------------------
11.10. Reducing a logical volume

Logical volumes can be reduced in size as well as increased. However, it is
very important to remember to reduce the size of the file system (or
whatever else resides in the volume) before shrinking the volume itself,
otherwise you risk losing data.

1. ext2

   If you are using LVM 1 with ext2 as the file system then you can use
   the e2fsadm command mentioned earlier to take care of both the file
   system and volume resizing, as follows:

   # umount /home
   # e2fsadm -L-1G /dev/myvg/homevol
   # mount /home

   Warning LVM 2 Caveat   There is currently no e2fsadm equivalent for
   LVM 2, and the e2fsadm that ships with LVM 1 does not work with LVM 2.
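Under LVM 2 the shrink therefore has to be done as two separate steps, file
system first. The following is a minimal sketch of that ordering; the helper
name shrink_ext2_lv is made up for illustration and is not part of the LVM
tools, and the function only echoes the commands it would run (a dry run)
so the sequence is easy to inspect. A 4 KB file system block size is
assumed when converting megabytes to resize2fs blocks.

```shell
#!/bin/sh
# Sketch: the two-step ext2 shrink for LVM 2, where e2fsadm is unavailable.
# shrink_ext2_lv is a hypothetical helper name; it only PRINTS the commands
# (dry run) -- remove the echos to actually run them (as root).
shrink_ext2_lv() {
    lv=$1        # e.g. /dev/myvg/homevol
    new_mb=$2    # target size in megabytes
    echo "umount $lv"
    # The file system must shrink BEFORE the volume, or data is lost.
    # resize2fs takes the new size in file system blocks (4 KB assumed).
    echo "resize2fs $lv $((new_mb * 1024 / 4))"
    echo "lvreduce -L${new_mb}M $lv"
    echo "mount $lv"
}

shrink_ext2_lv /dev/myvg/homevol 2048
```

For a 2048 MB target this prints a resize2fs size of 524288 blocks, the
same figure used in the manual example.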
   If you prefer to do this manually you must know the new size of the
   volume in blocks and use the following commands (here the new size is
   524288 blocks; with 4 KB blocks that is 2 GB):

   # umount /home
   # resize2fs /dev/myvg/homevol 524288
   # lvreduce -L-1G /dev/myvg/homevol
   # mount /home

2. reiserfs

   Reiserfs seems to prefer to be unmounted when shrinking:

   # umount /home
   # resize_reiserfs -s-1G /dev/myvg/homevol
   # lvreduce -L-1G /dev/myvg/homevol
   # mount -treiserfs /dev/myvg/homevol /home

3. xfs

   There is no way to shrink XFS file systems.

4. jfs

   There is no way to shrink JFS file systems.
-----------------------------------------------------------------------------
11.11. Migrating data off of a physical volume

To take a disk out of service, it must first have all of its active
physical extents moved to one or more of the remaining disks in the volume
group. There must be enough free physical extents on the remaining PVs to
hold the extents to be copied from the old disk. For further detail see
Section 13.5.
-----------------------------------------------------------------------------
Chapter 12. Disk partitioning

12.1. Multiple partitions on the same disk

LVM allows you to create PVs (physical volumes) out of almost any block
device, so, for example, the following are all valid commands and will work
quite happily in an LVM environment:

# pvcreate /dev/sda1
# pvcreate /dev/sdf
# pvcreate /dev/hda8
# pvcreate /dev/hda6
# pvcreate /dev/md1

In a "normal" production system it is recommended that only one PV exists
on a single real disk, for the following reasons:

  *  Administrative convenience

     It's easier to keep track of the hardware in a system if each real
     disk only appears once. This becomes particularly true if a disk
     fails.

  *  To avoid striping performance problems

     LVM can't tell that two PVs are on the same physical disk, so if you
     create a striped LV then the stripes could be on different partitions
     on the same disk, resulting in a decrease in performance rather than
     an increase.
However, it may be desirable to do this in some situations:

  *  Migration of an existing system to LVM

     On a system with few disks it may be necessary to move data around
     partitions to do the conversion (see Section 13.8).

  *  Splitting one big disk between volume groups

     If you have a very large disk and want to have more than one volume
     group for administrative purposes, then it is necessary to partition
     the drive into more than one area.

If you do have a disk with more than one partition and both of those
partitions are in the same volume group, take care to specify which
partitions are to be included in a logical volume when creating striped
volumes.

The recommended method of partitioning a disk is to create a single
partition that covers the whole disk. This avoids any nasty accidents with
whole disk drive device nodes and prevents the kernel warning about unknown
partition types at boot-up.
-----------------------------------------------------------------------------
12.2. Sun disk labels

You need to be especially careful on SPARC systems where the disks have Sun
disk labels on them. The normal layout for a Sun disk label is for the
first partition to start at block zero of the disk, so the first partition
also covers the area containing the disk label itself. This works fine for
ext2 filesystems (and is essential for booting using SILO), but such
partitions should not be used for LVM. This is because LVM starts writing
at the very start of the device and will overwrite the disk label. If you
want to use a disk with a Sun disk label with LVM, make sure that the
partition you are going to use starts at cylinder 1 or higher.
-----------------------------------------------------------------------------
Chapter 13. Recipes

This section details several different "recipes" for setting up LVM. The
hope is that the reader will adapt these recipes to their own system and
needs.
-----------------------------------------------------------------------------
13.1.
Setting up LVM on three SCSI disks

For this recipe, the setup has three SCSI disks that will be put into a
logical volume using LVM. The disks are at /dev/sda, /dev/sdb, and
/dev/sdc.
-----------------------------------------------------------------------------
13.1.1. Preparing the disks

Before you can use a disk in a volume group you will have to prepare it:

Warning Warning!   The following will destroy any data on /dev/sda,
/dev/sdb, and /dev/sdc

Run pvcreate on the disks:

# pvcreate /dev/sda
# pvcreate /dev/sdb
# pvcreate /dev/sdc

This creates a volume group descriptor area (VGDA) at the start of the
disks.
-----------------------------------------------------------------------------
13.1.2. Setup a Volume Group

1. Create a volume group

   # vgcreate my_volume_group /dev/sda /dev/sdb /dev/sdc

2. Run vgdisplay to verify the volume group

   # vgdisplay
   --- Volume Group ---
   VG Name               my_volume_group
   VG Access             read/write
   VG Status             available/resizable
   VG #                  1
   MAX LV                256
   Cur LV                0
   Open LV               0
   MAX LV Size           255.99 GB
   Max PV                256
   Cur PV                3
   Act PV                3
   VG Size               1.45 GB
   PE Size               4 MB
   Total PE              372
   Alloc PE / Size       0 / 0
   Free  PE / Size       372 / 1.45 GB
   VG UUID               nP2PY5-5TOS-hLx0-FDu0-2a6N-f37x-0BME0Y

   The most important things to verify are that the first three items are
   correct and that the VG Size item is the proper size for the amount of
   space in all three of your disks.
-----------------------------------------------------------------------------
13.1.3. Creating the Logical Volume

If the volume group looks correct, it is time to create a logical volume on
top of the volume group. You can make the logical volume any size you like.
(It is similar to a partition on a non-LVM setup.) For this example we will
create just a single logical volume of size 1GB on the volume group. We
will not use striping because it is not currently possible to add a disk to
a stripe set after the logical volume is created.
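Before running lvcreate it can be reassuring to check the extent
arithmetic. The sketch below uses the PE Size (4 MB) and Total PE (372)
figures from the example vgdisplay output; substitute your own values.

```shell
#!/bin/sh
# Sketch: extent arithmetic for the 1 GB LV, using the example VG's
# figures (PE Size 4 MB, Total PE 372) from the vgdisplay output above.
pe_size_mb=4
total_pe=372
lv_mb=1024                        # the 1 GB logical volume we will create
lv_pe=$((lv_mb / pe_size_mb))     # extents the LV will consume
echo "LV uses $lv_pe extents; $((total_pe - lv_pe)) of $total_pe remain free"
```

For this VG the 1 GB volume consumes 256 of the 372 extents, leaving 116
free for other logical volumes.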
# lvcreate -L1G -nmy_logical_volume my_volume_group
lvcreate -- doing automatic backup of "my_volume_group"
lvcreate -- logical volume "/dev/my_volume_group/my_logical_volume" successfully created
-----------------------------------------------------------------------------
13.1.4. Create the File System

Create an ext2 file system on the logical volume:

# mke2fs /dev/my_volume_group/my_logical_volume
mke2fs 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
9 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Writing superblocks and filesystem accounting information: done
-----------------------------------------------------------------------------
13.1.5. Test the File System

Mount the logical volume and check to make sure everything looks correct:

# mount /dev/my_volume_group/my_logical_volume /mnt
# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda1              1311552    628824    616104  51% /
/dev/my_volume_group/my_logical_volume
                       1040132        20    987276   0% /mnt

If everything worked properly, you should now have a logical volume with an
ext2 file system mounted at /mnt.
-----------------------------------------------------------------------------
13.2. Setting up LVM on three SCSI disks with striping

For this recipe, the setup has three SCSI disks that will be put into a
logical volume using LVM. The disks are at /dev/sda, /dev/sdb, and
/dev/sdc.

Note Note   It is not currently possible to add a disk to a striped logical
volume in LVM 1. Use LVM 2 with the lvm2 metadata format if you wish to be
able to do so.
-----------------------------------------------------------------------------
13.2.1.
Preparing the disk partitions

Before you can use a disk in a volume group you will have to prepare it:

Warning Warning!   The following will destroy any data on /dev/sda,
/dev/sdb, and /dev/sdc

Run pvcreate on the disks:

# pvcreate /dev/sda
# pvcreate /dev/sdb
# pvcreate /dev/sdc

This creates a volume group descriptor area (VGDA) at the start of the
disks.
-----------------------------------------------------------------------------
13.2.2. Setup a Volume Group

1. Create a volume group

   # vgcreate my_volume_group /dev/sda /dev/sdb /dev/sdc

2. Run vgdisplay to verify the volume group

   # vgdisplay
   --- Volume Group ---
   VG Name               my_volume_group
   VG Access             read/write
   VG Status             available/resizable
   VG #                  1
   MAX LV                256
   Cur LV                0
   Open LV               0
   MAX LV Size           255.99 GB
   Max PV                256
   Cur PV                3
   Act PV                3
   VG Size               1.45 GB
   PE Size               4 MB
   Total PE              372
   Alloc PE / Size       0 / 0
   Free  PE / Size       372 / 1.45 GB
   VG UUID               nP2PY5-5TOS-hLx0-FDu0-2a6N-f37x-0BME0Y

   The most important things to verify are that the first three items are
   correct and that the VG Size item is the proper size for the amount of
   space in all three of your disks.
-----------------------------------------------------------------------------
13.2.3. Creating the Logical Volume

If the volume group looks correct, it is time to create a logical volume on
top of the volume group. You can make the logical volume any size you like
(up to the size of the VG you are creating it on; it is similar to a
partition on a non-LVM setup). For this example we will create just a
single logical volume of size 1GB on the volume group. The logical volume
will be a striped set using a 4k stripe size. This should increase the
performance of the logical volume.
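When an LV is striped, its size is rounded up so that its extents divide
evenly across the stripes. The sketch below shows one plausible reading of
that arithmetic for the example's figures (1 GB request, 4 MB extents, 3
stripes); it reproduces the stripe-boundary rounding message lvcreate
prints in the next step.

```shell
#!/bin/sh
# Sketch: how a striped LV's size is rounded up to a stripe boundary.
# 1 GB = 1048576 KB = 256 extents of 4096 KB; with 3 stripes the extent
# count is rounded up to the next multiple of 3 (256 -> 258).
size_kb=1048576
pe_kb=4096
stripes=3
pe=$(( (size_kb + pe_kb - 1) / pe_kb ))          # 256 extents requested
pe=$(( (pe + stripes - 1) / stripes * stripes )) # round up to stripe boundary
echo "rounded to $((pe * pe_kb)) KB / $pe PE"
```

The result, 1056768 KB spread over 258 extents, matches the figures in the
lvcreate output that follows.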
# lvcreate -i3 -I4 -L1G -nmy_logical_volume my_volume_group
lvcreate -- rounding 1048576 KB to stripe boundary size 1056768 KB / 258 PE
lvcreate -- doing automatic backup of "my_volume_group"
lvcreate -- logical volume "/dev/my_volume_group/my_logical_volume" successfully created

Note Note   If you create the logical volume with '-i2' you will only use
two of the disks in your volume group. This is useful if you want to create
two logical volumes out of the same volume group, but we will not touch on
that in this recipe.
-----------------------------------------------------------------------------
13.2.4. Create the File System

Create an ext2 file system on the logical volume:

# mke2fs /dev/my_volume_group/my_logical_volume
mke2fs 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
132192 inodes, 264192 blocks
13209 blocks (5.00%) reserved for the super user
First data block=0
9 block groups
32768 blocks per group, 32768 fragments per group
14688 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Writing superblocks and filesystem accounting information: done
-----------------------------------------------------------------------------
13.2.5. Test the File System

Mount the file system on the logical volume:

# mount /dev/my_volume_group/my_logical_volume /mnt

and check to make sure everything looks correct:

# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda1              1311552    628824    616104  51% /
/dev/my_volume_group/my_logical_volume
                       1040132        20    987276   0% /mnt

If everything worked properly, you should now have a logical volume mounted
at /mnt.
-----------------------------------------------------------------------------
13.3. Add a new disk to a multi-disk SCSI system

13.3.1.
Current situation

A data centre machine has 6 disks attached as follows:

# pvscan
pvscan -- ACTIVE   PV "/dev/sda"  of VG "dev"   [1.95 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/sdb"  of VG "sales" [1.95 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/sdc"  of VG "ops"   [1.95 GB / 44 MB free]
pvscan -- ACTIVE   PV "/dev/sdd"  of VG "dev"   [1.95 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/sde1" of VG "ops"   [996 MB / 52 MB free]
pvscan -- ACTIVE   PV "/dev/sde2" of VG "sales" [996 MB / 944 MB free]
pvscan -- ACTIVE   PV "/dev/sdf1" of VG "ops"   [996 MB / 0 free]
pvscan -- ACTIVE   PV "/dev/sdf2" of VG "dev"   [996 MB / 72 MB free]
pvscan -- total: 8 [11.72 GB] / in use: 8 [11.72 GB] / in no VG: 0 [0]

# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/dev/cvs           1342492    516468    757828  41% /mnt/dev/cvs
/dev/dev/users         2064208   2060036      4172 100% /mnt/dev/users
/dev/dev/build         1548144   1023041    525103  66% /mnt/dev/build
/dev/ops/databases     2890692   2302417    588275  79% /mnt/ops/databases
/dev/sales/users       2064208    871214   1192994  42% /mnt/sales/users
/dev/ops/batch         1032088    897122    134966  86% /mnt/ops/batch

As you can see, the "dev" and "ops" groups are getting full, so a new disk
is purchased and added to the system. It becomes /dev/sdg.
-----------------------------------------------------------------------------
13.3.2. Prepare the disk partitions

The new disk is to be shared equally between ops and dev, so it is
partitioned into two physical volumes, /dev/sdg1 and /dev/sdg2:

# fdisk /dev/sdg
Device contains neither a valid DOS partition table, nor Sun or SGI
disklabel Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous content
won't be recoverable.
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1000, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1000, default 1000): 500

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (501-1000, default 501):
Using default value 501
Last cylinder or +size or +sizeM or +sizeK (501-1000, default 1000):
Using default value 1000

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Unknown)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Unknown)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x partitions, please see
the fdisk manual page for additional information.

Next, physical volumes are created on these partitions:

# pvcreate /dev/sdg1
pvcreate -- physical volume "/dev/sdg1" successfully created
# pvcreate /dev/sdg2
pvcreate -- physical volume "/dev/sdg2" successfully created
-----------------------------------------------------------------------------
13.3.3. Add the new disks to the volume groups

The volumes are then added to the dev and ops volume groups:

# vgextend ops /dev/sdg1
vgextend -- INFO: maximum logical volume size is 255.99 Gigabyte
vgextend -- doing automatic backup of volume group "ops"
vgextend -- volume group "ops" successfully extended

# vgextend dev /dev/sdg2
vgextend -- INFO: maximum logical volume size is 255.99 Gigabyte
vgextend -- doing automatic backup of volume group "dev"
vgextend -- volume group "dev" successfully extended

# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE   PV "/dev/sda"  of VG "dev"   [1.95 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/sdb"  of VG "sales" [1.95 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/sdc"  of VG "ops"   [1.95 GB / 44 MB free]
pvscan -- ACTIVE   PV "/dev/sdd"  of VG "dev"   [1.95 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/sde1" of VG "ops"   [996 MB / 52 MB free]
pvscan -- ACTIVE   PV "/dev/sde2" of VG "sales" [996 MB / 944 MB free]
pvscan -- ACTIVE   PV "/dev/sdf1" of VG "ops"   [996 MB / 0 free]
pvscan -- ACTIVE   PV "/dev/sdf2" of VG "dev"   [996 MB / 72 MB free]
pvscan -- ACTIVE   PV "/dev/sdg1" of VG "ops"   [996 MB / 996 MB free]
pvscan -- ACTIVE   PV "/dev/sdg2" of VG "dev"   [996 MB / 996 MB free]
pvscan -- total: 10 [13.67 GB] / in use: 10 [13.67 GB] / in no VG: 0 [0]
-----------------------------------------------------------------------------
13.3.4. Extend the file systems

The next thing to do is to extend the file systems so that the users can
make use of the extra space. There are tools to allow online resizing of
ext2 file systems, but here we take the safe route and unmount the two file
systems before resizing them:

# umount /mnt/ops/batch
# umount /mnt/dev/users

We then use the e2fsadm command to resize the logical volume and the ext2
file system in one operation. We are using ext2resize instead of resize2fs
(which is the default command for e2fsadm), so we define the environment
variable E2FSADM_RESIZE_CMD to tell e2fsadm to use that command.

# export E2FSADM_RESIZE_CMD=ext2resize
# e2fsadm /dev/ops/batch -L+500M
e2fsck 1.18, 11-Nov-1999 for EXT2 FS 0.5b, 95/08/09
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/ops/batch: 11/131072 files (0.0