How to safely resize an LVM volume on Linux

I have several Oracle Cloud Infrastructure (OCI) based lab environments, which I build with Terraform and corresponding shell scripts. Unfortunately, the labs are not one size fits all. Depending on what I’m testing, I have different requirements for the available file systems. So every now and then I face the problem that I have to increase or decrease the size of the corresponding logical volumes. And just as often I end up digging through my notes for the right commands. I would say it is time for a blog post.

Caution: Make sure you have a full backup of your logical volume, as tampering with the file system or logical volume can lead to data loss if done incorrectly or something goes wrong. I assume no liability for any errors that may occur as a result of this blog post.

This article is based on examples of how I made changes to the volume groups, logical volumes, and file systems in my lab environment. It may not cover all aspects. Be careful when performing similar steps in your environment.

Check Available Space

First of all, we check our current configuration as well as the used and available disk space. Let’s verify how the situation looks on the file systems. In the following example, we restrict the query to the Oracle mount points matching the pattern /u0?.

df -kh /u0?
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgora-vol_u01   76G   12G   60G  17% /u01
/dev/mapper/vgora-vol_u02   76G   20G   52G  28% /u02
/dev/mapper/vgora-vol_u04   76G  9.5G   62G  14% /u04

With lvs we display information about our logical volumes.

sudo lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  oled    ocivolume -wi-ao----  10.00g                                                    
  root    ocivolume -wi-ao----  35.47g                                                    
  vol_u01 vgora     -wi-ao---- <76.80g                                                    
  vol_u02 vgora     -wi-ao---- <76.80g                                                    
  vol_u04 vgora     -wi-ao---- <76.80g

Using vgs shows us whether there is some space left in the volume group vgora.

sudo vgs
  VG        #PV #LV #SN Attr   VSize    VFree  
  ocivolume   1   2   0 wz--n-   45.47g      0 
  vgora       1   3   0 wz--n- <256.00g <25.61g

If there is some space left, we can go ahead and extend the logical volume vol_u01. If not, we must either expand the volume group vgora with an additional disk or shrink another logical volume in the group.
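This decision can be sketched as a small check. The numbers below are illustrative, read off the vgs and lvs output above; they are assumptions, not something the tools emit in this form:

```shell
# Sketch: decide between extending the LV directly or growing the VG first.
# Illustrative numbers taken from the vgs output above:
free_gib=25    # VFree of vgora, rounded down from <25.61g
need_gib=19    # planned growth of vol_u01 (76G -> 95G, roughly)

if [ "$free_gib" -ge "$need_gib" ]; then
  echo "enough free space in vgora: lvextend directly"
else
  echo "not enough free space: vgextend with a new disk first"
fi
```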

Extend Volume Group

List all block devices using lsblk

sudo lsblk

Create a partition using fdisk. Make sure the system id of the partition is set to “Linux LVM” (8e)

sudo fdisk /dev/sdc

Create a physical volume using pvcreate

sudo pvcreate /dev/sdc1

List the new LVM devices using lvmdiskscan

sudo lvmdiskscan -l

Finally extend the volume group vgora with the new device

sudo vgextend vgora /dev/sdc1

Extend Logical Volume

We now extend the logical volume vol_u01 from 76G to 95G. Check the man page of lvextend for a couple of other options to extend the logical volume, e.g. -r to resize the file system in the same step.

sudo lvextend -L 95G /dev/mapper/vgora-vol_u01
  Size of logical volume vgora/vol_u01 changed from <76.80 GiB (19660 extents) to 95.00 GiB (24320 extents).
  Logical volume vgora/vol_u01 successfully resized.

Resize the filesystem using resize2fs

sudo resize2fs /dev/mapper/vgora-vol_u01
resize2fs 1.46.2 (28-Feb-2021)
Filesystem at /dev/mapper/vgora-vol_u01 is mounted on /u01; on-line resizing required
old_desc_blocks = 10, new_desc_blocks = 12
The filesystem on /dev/mapper/vgora-vol_u01 is now 24903680 (4k) blocks long.
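The numbers reported by the tools can be cross-checked with a little arithmetic, assuming the defaults seen in the output above (4 MiB physical extents in LVM, 4 KiB ext4 blocks):

```shell
# Cross-check the tool output: 95 GiB expressed in LVM extents and ext4 blocks.
# Assumes 4 MiB physical extents and a 4 KiB file system block size.
size_gib=95
echo $(( size_gib * 1024 / 4 ))          # extents reported by lvextend: 24320
echo $(( size_gib * 1024 * 1024 / 4 ))   # 4k blocks reported by resize2fs: 24903680
```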

Verify the new size of the volumes using df

df -kh /u0?
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgora-vol_u01   93G   24G   66G  27% /u01
/dev/mapper/vgora-vol_u02   76G   14G   59G  19% /u02
/dev/mapper/vgora-vol_u04   76G   24G   49G  33% /u04

Shrink Logical Volume

Shrinking a logical volume essentially consists of the same steps as increasing it, only in reverse order. In addition, however, the file system must be unmounted and checked beforehand. In the following we will perform this using the example of the logical volume vol_u02.

Caution: Make sure you have a full backup of your logical volume, as downsizing can lead to data loss if you do it wrong or something goes wrong.
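When choosing the new size, a simple rule of thumb is to never go below the space currently in use and to leave some headroom. A hedged sketch with illustrative numbers (the used value comes from the df output, the headroom is an assumption you should pick yourself):

```shell
# Sketch: pick a shrink target for vol_u02. Never go below the used space.
used_gib=14        # currently used on /u02 (from df above)
headroom_gib=10    # safety margin for future growth (assumption)

target_gib=$(( used_gib + headroom_gib ))
echo "shrink to at least ${target_gib}G"   # we round up to 25G below
```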

Unmount the file system on the logical volume vol_u02

sudo umount /dev/mapper/vgora-vol_u02

Run a filesystem check using e2fsck

sudo e2fsck -f /dev/mapper/vgora-vol_u02
e2fsck 1.46.2 (28-Feb-2021)
Pass 1: Checking inodes, blocks, and sizes
Inode 4325387 extent tree (at level 2) could be narrower.  Optimize<y>? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/mapper/vgora-vol_u02: ***** FILE SYSTEM WAS MODIFIED *****
/dev/mapper/vgora-vol_u02: 65/5038080 files (24.6% non-contiguous), 3908739/20131840 blocks

We resize the filesystem using resize2fs. But be careful when you set the new size: it must not be smaller than the data currently stored on the file system. And don’t get nervous, it can take a few seconds longer… 😉

sudo resize2fs /dev/mapper/vgora-vol_u02 25G
resize2fs 1.46.2 (28-Feb-2021)
Resizing the filesystem on /dev/mapper/vgora-vol_u02 to 6553600 (4k) blocks.

The filesystem on /dev/mapper/vgora-vol_u02 is now 6553600 (4k) blocks long.
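The reported block count can again be verified by hand, assuming the 4 KiB block size seen above:

```shell
# 25 GiB expressed in 4 KiB file system blocks, matching the resize2fs output.
echo $(( 25 * 1024 * 1024 * 1024 / 4096 ))   # 6553600
```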

After reducing the filesystem size we can finally reduce the size of the logical volume using lvreduce.

sudo lvreduce -L 25G /dev/mapper/vgora-vol_u02
  WARNING: Reducing active logical volume to 25.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vgora/vol_u02? [y/n]: y
  Size of logical volume vgora/vol_u02 changed from <76.80 GiB (19660 extents) to 25.00 GiB (6400 extents).
  Logical volume vgora/vol_u02 successfully resized.

Run resize2fs again to verify that the file system already matches the new size of the logical volume.

sudo resize2fs /dev/mapper/vgora-vol_u02
resize2fs 1.46.2 (28-Feb-2021)
The filesystem is already 6553600 (4k) blocks long.  Nothing to do!

Finally mount the filesystem again and check the new space.

sudo mount /dev/mapper/vgora-vol_u02
df -kh /u0?
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgora-vol_u01   93G   24G   66G  27% /u01
/dev/mapper/vgora-vol_u02   25G   14G  9.7G  58% /u02
/dev/mapper/vgora-vol_u04   76G   24G   49G  33% /u04

Conclusion

Manipulating file systems, logical volumes or volume groups is not as complicated as it looks at first. Nevertheless, you must be aware that some things can go wrong during these steps. It is like open-heart surgery. It is always recommended to have enough disk space from the beginning and to keep manipulations to a minimum. And if you have to do something anyway, you should have appropriate backups of the affected file systems, databases, etc. In my case this is the Terraform configuration of my lab, which lets me rebuild it.

