Let’s discuss Logical Volume Management (LVM) with practical examples using Ubuntu 22

LVM (Logical Volume Management) is a tool for managing storage devices and partitions in Linux systems. It allows you to create logical volumes, which are flexible and resizable partitions that can span across multiple physical devices. The advantage of using logical volumes is that you can adjust the size and location of your partitions according to your needs, without having to repartition your disk or lose data. You can also create snapshots of logical volumes, which are copies of the data at a certain point in time. Snapshots can be used for backup, testing, or cloning purposes. In summary, LVM provides several advantages over the traditional partition-based method, such as:

  1. You can easily resize, extend, or reduce the logical volumes without affecting the data or the file system.
  2. You can create snapshots of the logical volumes, which are point-in-time copies that can be used for backup or testing purposes (see the snapshot sketch right after this list).
  3. You can use striping or mirroring to improve the performance or reliability of the logical volumes.
  4. You can add or remove physical devices to the logical volumes without disrupting the system or the users.
  5. You can use encryption or compression to enhance the security or efficiency of the logical volumes.
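
To illustrate the snapshot feature mentioned above, here is a minimal hedged sketch, assuming a volume group “vg0” with an existing logical volume “data_lv” and enough free space in the VG (the names and the 1G snapshot size are only placeholders):

# lvcreate -s -n data_snap -L 1G /dev/vg0/data_lv
# lvs
# lvremove /dev/vg0/data_snap

A snapshot consumes space from the VG only as the origin volume changes; when it is no longer needed, it can be removed with “lvremove”, or merged back into the origin with “lvconvert --merge”.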

LVM is widely used in server disk management, as it provides more flexibility and control over storage resources. LVM can help you to optimize disk space utilization, improve system performance, and simplify backup and recovery processes. LVM can also enable you to use advanced features, such as RAID, clustering, or virtualization, on your server. To demonstrate LVM, we shall use the lvm2 package on Ubuntu 22.

Use the command below to check if you already have lvm package on your server:

# lvm version

If lvm is not installed, use the command below to install it:

# apt install lvm2

The server I am using for this exercise has 5 physical disks (sda, sdb, sdc, sde and sdd). “sda” was used during the OS installation to host the boot partition, and you will notice it already has the default Ubuntu logical volume “ubuntu-lv”.

Use the command below to list all the disks that are currently attached to your server:

# fdisk -l | grep -i /dev/sd

Use the command below to see which of the available disks above are mounted and in use by the file system:

# df -h

From the output above, we can clearly see that “/dev/sda2”, which is a partition on disk “/dev/sda”, is mounted on “/boot” and in use by the file system. In this exercise, we shall create a logical volume using two free disks, “/dev/sdb” and “/dev/sdc”, and later expand the logical volume using another free disk, “/dev/sdd”. So, let’s get into the action:

Before using any physical disk in a Logical Volume (LV), we need to first define it as a Physical Volume (PV). A physical volume (PV) can be created from a whole disk or just a partition on a disk. To create a physical volume, use the “pvcreate” command, followed by the name of the disk or partition you want to use.
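
As a side note, if you wanted to initialize just a partition rather than a whole disk, you would create the partition first and then run “pvcreate” on it. Here is a minimal hedged sketch, assuming a hypothetical empty disk “/dev/sdx” (not one of the disks used in this exercise):

# parted /dev/sdx mklabel gpt
# parted /dev/sdx mkpart primary 0% 100%
# pvcreate /dev/sdx1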

Create two physical volumes on two free disks “/dev/sdb” and “/dev/sdc”:

# pvcreate /dev/sdb
# pvcreate /dev/sdc

Use the “pvs”, “pvdisplay”, or “pvscan” commands to see a summary and details of the physical volumes you have created in the step above:

# pvs
# pvdisplay /dev/sdb
# pvdisplay /dev/sdc

Notice the newly created physical volumes “/dev/sdb” and “/dev/sdc”, each with a size of 835.75 GiB. Also notice that the newly created PVs are not yet associated with any VG. We shall get to VGs shortly!

Before creating the Logical Volume (LV), we first need to put the Physical Volumes (PVs) we created into a pool, also known as a Volume Group (VG). A Volume Group (VG) is a collection of physical volumes (PVs) that forms a pool of disk space out of which logical volumes (LVs) can be allocated. The significance of a volume group is that it enables you to create logical volumes that can span multiple physical volumes, or use only part of a physical volume. To create a Volume Group (VG), use the “vgcreate” command, followed by the name of the volume group and the physical volumes you want to include.

In this example, we are going to create a volume group named “techjunction_vg” with two physical volumes “/dev/sdb” and “/dev/sdc”:

# vgcreate techjunction_vg /dev/sdb /dev/sdc

To see the details of the volume group we created, use the “vgs” or “vgdisplay” commands (the VG we created has a size of 835.75 GiB x 2 ≈ 1.63 TiB):

# vgs
# vgdisplay techjunction_vg

At this point, we are ready to create our Logical Volume (LV). To create a logical volume, use the “lvcreate” command, followed by the name of the Volume Group (VG) and the size to allocate.

In this example, we are going to create a logical volume named “techjunction_lv” with 1.63TB of space in the volume group “techjunction_vg”:

# lvcreate -L 1.63T -n techjunction_lv techjunction_vg
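
As a side note, instead of an absolute size you can allocate by extents or percentages. For example, the following hedged alternative would give the new LV all of the free space in the volume group:

# lvcreate -l 100%FREE -n techjunction_lv techjunction_vg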

To see the details of the logical volumes, use the “lvs” or “lvdisplay” commands (Take note of the LV Path as we shall need it when formatting and mounting the LV):

# lvs
# lvdisplay /dev/techjunction_vg/techjunction_lv

To be able to use the logical volume that we have created, we need to format it, create a mount point, and mount the logical volume.

In this example, we are going to use the “ext4” file system to format the logical volume “techjunction_lv”, create a mount point “/techjunction_backups” and mount the logical volume:

# mkfs.ext4 /dev/techjunction_vg/techjunction_lv

The "ext4" is a Linux file system developed as the successor to “ext3“. It has significant advantages over its predecessor such as improved design, better performance, reliability, and new features. It can support files and file systems up to 16 terabytes in size. It also supports transparent encryption, snapshots, and data deduplication.

A mount point is simply a directory in a Linux file system; we use the “mkdir” command to create the directory and the “mount” command to mount the logical volume:

# mkdir /techjunction_backups
# mount /dev/techjunction_vg/techjunction_lv /techjunction_backups/

Use the “df -h” command to display the new file system; notice the size, and the mount point:

# df -h

At this point we have successfully created our logical volume and made it available for use in the file system. We can test this by changing directory to “/techjunction_backups” and creating a few text files that we can read and write. However, there is one small step remaining: mount points created using the “mount” command are not persistent across system reboots. That is not acceptable for a server you are preparing for a production environment, because after a reboot the logical volume will not be mounted automatically, your data will appear to be missing, and anything written to the empty mount point will land on the root file system instead.

Actual data loss can occur if a disk is not properly unmounted or synced before it is detached, for example if you remove the disk abruptly, power off the system, or suffer a system crash. When a file system is mounted, the kernel caches some writes in memory to improve performance, which means some data may not be written to the disk immediately. If the system loses power, crashes, or the disk is removed before the cache is flushed, that data can be lost or corrupted, leaving the file system inconsistent or damaged. To prevent this, always unmount the disk properly using the “umount” command, or use the “sync” command to force the system to write all cached data to the disk, and avoid removing a disk or shutting down the system while it is in use. If a file system is damaged, the “fsck” command can check and repair it.
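
As a hedged illustration of that advice (not a required step in this exercise), a clean detach-and-check sequence for our LV might look like this; note that “fsck” should only be run on an unmounted file system:

# sync
# umount /techjunction_backups
# fsck.ext4 -f /dev/techjunction_vg/techjunction_lv
# mount /dev/techjunction_vg/techjunction_lv /techjunction_backups/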

To mount our logical volume permanently, add an entry for the mount point to the “/etc/fstab” file and run the “mount -a” command:

# echo '/dev/techjunction_vg/techjunction_lv /techjunction_backups ext4 defaults 0 0' | sudo tee -a /etc/fstab
# mount -a

Note: The “tee” command is used for reading from the standard input and writing to both the standard output and a file simultaneously. The “tee -a” option means append the output to the “/etc/fstab” file instead of overwriting it.

The “mount -a” command is useful when you want to mount all the file systems that are configured in the /etc/fstab file at once, without having to specify each device or directory individually. This can save time and avoid errors when you need to access multiple file systems on your system. However, the “mount -a” command also has some limitations and risks. For example, it may fail to mount some file systems if they are not available or ready, such as network file systems or removable devices. It may also cause data loss or corruption if the file systems are not properly configured or compatible with the system. Therefore, it is recommended to use the “mount -a” command with caution, and only when you are sure that the file systems are safe and stable to mount.
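
Before trusting the new entry across a reboot, you can also sanity-check the “/etc/fstab” syntax with “findmnt --verify” (part of util-linux on Ubuntu 22):

# findmnt --verify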

At this point, we have completed the exercise of creating a logical volume and making it ready for use by the file system. We have also ensured that this mount point configuration data is persistent throughout server reboots by editing the “fstab” file.

Now that we have successfully created our logical volume (LV) “techjunction_lv” of size 1.63T, let’s test the advantage of LVM by expanding the size of this LV. But first, let’s create a test directory and a few test files on our logical volume, to make sure that our data is preserved during this expansion exercise (a quick sketch follows below). In fact, if you want to experience the true beauty of LVM, you can test this on a live application, for example an application with a database running from your newly created LV. During the resizing of the LV, your application should not experience any downtime or hiccups. That’s the true potential of LVM compared to conventional partitioning!
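
Here is a minimal hedged sketch of seeding some test data (the directory and file names are only placeholders):

# mkdir /techjunction_backups/test_dir
# echo "LVM expansion test" > /techjunction_backups/test_dir/test1.txt
# cat /techjunction_backups/test_dir/test1.txt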

To show the hard disks that are not in use, we use the “lsblk” command, which lists all the block devices in the system, such as disks, partitions, and logical volumes. This command also shows the mount points of the devices that are in use.

For example, to see all the block devices, you can type:

# lsblk

From the above output, you can see that drives “sdd” and “sde” don’t have any partitions, logical volumes, or mount points defined under them. We can proceed to use “sdd” for our expansion exercise.

Once again, before using the physical disk “sdd” in our logical volume (LV) “techjunction_lv”, we need to define it as a Physical Volume (PV) using the “pvcreate” command, followed by the name of the disk “/dev/sdd”:

# pvcreate /dev/sdd

Next, we need to add the new physical volume “/dev/sdd” to the volume group (VG) “techjunction_vg” that contains the logical volume (LV) “techjunction_lv” using the “vgextend” command:

# vgextend techjunction_vg /dev/sdd

Next, we use the “lvextend” command to extend the size of the logical volume “techjunction_lv”. When you extend a logical volume, you can indicate how much you want to extend the volume, or how large you want it to be after you extend it.
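
For instance, both of the following hedged variants are valid; the first grows the LV by a relative amount, while the second sets an absolute target size (the figures are placeholders):

# lvextend -L +500G /dev/techjunction_vg/techjunction_lv
# lvextend -L 2.5T /dev/techjunction_vg/techjunction_lv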

In this example, we are going to use all the space of the new physical volume (PV) that we just added to the volume group, i.e., 837.8G:

# lvextend -l +100%FREE /dev/techjunction_vg/techjunction_lv

We are not done resizing the logical volume yet; in fact, if you check the file system at this point, you will realize that the size change is not yet in effect!
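
You can see the mismatch for yourself by comparing what the file system reports with what LVM reports:

# df -h /techjunction_backups
# lvs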

The last step is to resize the file system on the logical volume using the “resize2fs” command:

# resize2fs /dev/techjunction_vg/techjunction_lv

As you can see from the output above, our logical volume (lv) “techjunction_lv” has been resized from 1.7T to 2.5T without having to restart the server and without losing the data that was on the existing logical volume.
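
As a hedged aside, “lvextend” also accepts the “-r” (--resizefs) option, which invokes the appropriate file-system resize tool for you, so the extend and resize steps above could have been collapsed into one command:

# lvextend -r -l +100%FREE /dev/techjunction_vg/techjunction_lv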

I hope this article has helped you to appreciate the power of using LVM as opposed to the legacy partitioning system. However, it’s important to note that LVM is not a substitute for RAID, because it does not provide any protection against disk failures. LVM and RAID are two different technologies that serve different purposes. LVM is a logical layer that allows you to create, resize, and manage partitions on your disks without being constrained by the physical layout of the disks. RAID is a physical layer that allows you to combine multiple disks into one or more arrays that provide redundancy, performance, or both. Without RAID implementation on your system, if one of the physical volumes that belongs to a logical volume fails, the logical volume will become inaccessible and the data on it will be lost. LVM does not have any mechanism to replicate or recover the data from the failed disk. RAID, on the other hand, can protect the data from disk failures by using techniques such as mirroring, striping, or parity. Depending on the RAID level, RAID can tolerate one or more disk failures without losing any data. RAID can also rebuild the data from the surviving disks onto a new disk in case of a failure.

Therefore, LVM and RAID should be used together to enhance system reliability. By using RAID, you can create a reliable and performant storage layer that can withstand disk failures. By using LVM on top of RAID, you can create flexible and manageable partitions that can span multiple RAID arrays or use only a part of a RAID array. For example, you can create a RAID 1 array with two disks to provide mirroring, and then create a logical volume on top of the RAID 1 array to store your critical data. You can then resize, move, or rename the logical volumes as you wish, without affecting the RAID arrays.
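
To make the layering concrete, here is a minimal hedged sketch of that RAID 1 example, assuming two spare disks with placeholder names; on a real system you would also persist the array in /etc/mdadm/mdadm.conf and update the initramfs:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdx /dev/sdy
# pvcreate /dev/md0
# vgcreate raid1_vg /dev/md0
# lvcreate -L 500G -n critical_lv raid1_vg
# mkfs.ext4 /dev/raid1_vg/critical_lv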

LVM vs RAID comparison:

  1. Data Availability: with LVM alone, low, as data may become unavailable if a device fails; with RAID, high, as data can remain available even if one or more devices fail, depending on the RAID level and configuration.
  2. Data Integrity: with LVM alone, low, as data may become corrupted if a device fails or encounters an error; with RAID, high, as data can be verified and corrected using checksums, parity blocks, or mirror copies, depending on the RAID level and configuration.
  3. Data Recovery: with LVM alone, difficult, as data may be lost or damaged if a device fails or encounters an error; with RAID, easier, as data can be recovered or rebuilt using the remaining devices, depending on the RAID level and configuration.
  4. Data Protection: with LVM alone, low, as data may be exposed or altered if a device is stolen or compromised; with RAID, high, as data can be encrypted or authenticated using methods such as dm-crypt or LUKS layered on the array, depending on the configuration.


About the Author

Joshua Makuru Nomwesigwa is a seasoned Telecommunications Engineer with vast experience in IP Technologies; he eats, drinks, and dreams IP packets. He is a passionate evangelist of the fourth industrial revolution (4IR), a.k.a. Industry 4.0, and all the technologies that it brings: 5G, Cloud Computing, BigData, Artificial Intelligence (AI), Machine Learning (ML), Internet of Things (IoT), Quantum Computing, etc. Basically, anything techie, because a normal life is boring.
