If you over-allocated the size of an EBS volume for an EC2 instance, downsizing it may seem to be a problem, because Amazon Web Services doesn’t offer this operation directly, which is understandable.
Suppose you allocated 30 GB just in case, and it turned out that your data actually takes only about 4 GB. Taking possible growth into account, you figure that the standard 8 GB size of an AWS micro instance might actually be quite reasonable.
It’s not a big deal: you can downsize your disk and save a few bucks. Here is a step-by-step instruction.
- In the AWS console, under EC2 services, create a copy of the 30 GB volume you want to shrink, in the same availability zone as the original 30 GB volume, and attach it to the running EC2 instance you want to downsize as /dev/sdf. When attached, inside the instance it will show up as /dev/xvdf. From now on we’ll call it the “source disk”.
- Create a new empty 8 GB volume in the same availability zone as the original 30 GB volume and attach it to the running EC2 instance you want to downsize as /dev/sdg. When attached, inside the instance it will show up as /dev/xvdg. From now on we’ll call it the “target disk”.
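If you prefer the command line over the console, both volumes can be created and attached with the AWS CLI along these lines (a minimal sketch, assuming the AWS CLI is configured; the usual way to copy an EBS volume is via a snapshot, and all IDs and the availability zone below are placeholders):
# Snapshot the original 30 GB volume and create the source copy from it
aws ec2 create-snapshot --volume-id vol-1111111111111111 --description "copy of 30 GB root volume"
aws ec2 create-volume --snapshot-id snap-2222222222222222 --availability-zone us-east-1a
# Create the empty 8 GB target volume in the same availability zone
aws ec2 create-volume --size 8 --availability-zone us-east-1a
# Attach both volumes to the instance you want to downsize
aws ec2 attach-volume --volume-id vol-3333333333333333 --instance-id i-4444444444444444 --device /dev/sdf
aws ec2 attach-volume --volume-id vol-5555555555555555 --instance-id i-4444444444444444 --device /dev/sdg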
- Log in to the running EC2 instance with SSH and find both volumes under the names /dev/xvdf and /dev/xvdg.
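To double-check that the disks showed up under the expected names (the exact device names can vary with the kernel and virtualization type), something like this should list them:
# List block devices and their sizes; xvdf (30 GB) and xvdg (8 GB) should both appear
lsblk
fdisk -l /dev/xvdf /dev/xvdg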
- Check the file system of the source disk’s 1st partition and fix it if there are any errors:
root@ip-172-31-29-116:/etc# e2fsck -f /dev/xvdf1
e2fsck 1.42.9 (4-Feb-2014)
cloudimg-rootfs: recovering journal
Clearing orphaned inode 27023 (uid=0, gid=0, mode=0100644, size=0)
Clearing orphaned inode 27022 (uid=0, gid=0, mode=0100644, size=0)
Clearing orphaned inode 27021 (uid=0, gid=0, mode=0100666, size=0)
Clearing orphaned inode 270812 (uid=0, gid=0, mode=0100640, size=2790)
Clearing orphaned inode 1513 (uid=106, gid=112, mode=0100600, size=0)
Clearing orphaned inode 1511 (uid=106, gid=112, mode=0100600, size=10915)
Clearing orphaned inode 1510 (uid=106, gid=112, mode=0100600, size=0)
Clearing orphaned inode 1509 (uid=106, gid=112, mode=0100600, size=0)
Clearing orphaned inode 306 (uid=106, gid=112, mode=0100600, size=0)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (2341759, counted=6909214). Fix? yes
Free inodes count wrong (1582797, counted=1871780). Fix? yes
cloudimg-rootfs: ***** FILE SYSTEM WAS MODIFIED *****
cloudimg-rootfs: 94300/1966080 files (0.1% non-contiguous), 952595/7861809 blocks
- Shrink the file system on the source disk to 7 GB (less than 8; this is important to avoid a possible “No space left on device” error when you copy the data):
root@ip-172-31-29-116:/etc# resize2fs /dev/xvdf1 7G
resize2fs 1.42.9 (4-Feb-2014)
Resizing the filesystem on /dev/xvdf1 to 1835008 (4k) blocks.
The filesystem on /dev/xvdf1 is now 1835008 blocks long.
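If you are not sure your data will fit into 7 GB, resize2fs can estimate the minimum possible size of the file system beforehand; a quick check (not part of the original walkthrough) would be:
# Print the estimated minimum size of the file system, in file system blocks (4K here)
resize2fs -P /dev/xvdf1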
- Install the “gparted” package if you don’t have it. On Debian-based Linux, such as Ubuntu:
root@ip-172-31-29-116:/etc# apt-get install gparted
Use your package installation utility if you have RedHat or another Linux flavor. GParted is an X Window System utility for re-partitioning, and you will need an X terminal to connect to your EC2 instance. You can use the free PuTTY with the Xming X server: How to install and configure putty and XMing.
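On a RedHat-family system the equivalent would probably look like this (assuming the gparted package is available in your configured repositories, e.g. EPEL):
# RHEL/CentOS/Fedora-style installation of GParted
yum install gparted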
- Using gparted, resize the 1st partition of the source volume to 7 GB (the same size you shrank the file system to in the previous step):
- Look up or calculate how many 512-byte sectors there are in 8 GB:
root@ip-172-31-29-116:/etc# fdisk /dev/xvdg

Command (m for help): p

Disk /dev/xvdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c2ed2
8 x 1024 x 1024 x 1024 / 512 = 16777216 (512-byte sectors), or
8 x 1024 x 1024 x 1024 / 4096 = 2097152 (4K blocks)
- Copy the first 8 GB of data (16777216 sectors) from the source to the target (the bigger the block size (“bs”), the faster the copy):
root@ip-172-31-29-116:/etc# dd bs=4K if=/dev/xvdf of=/dev/xvdg count=2097152
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 1678.95 s, 5.1 MB/s
This operation is going to take some time.
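If you want to see how far the copy has progressed, dd on Linux prints its statistics when it receives SIGUSR1, so from a second SSH session you could run something like this (a convenience tip, not part of the original steps):
# Ask the running dd process to report how many bytes it has copied so far
kill -USR1 $(pidof dd)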
- Using gparted, right-click the partition on the target disk and resize it to the maximum available size (increase “New Size” to the value that makes “Free space following” zero). The new size should be close to 8 GB:
Apply all changes.
- Go to the AWS console, EC2 services, and detach the source and target disks from the running instance.
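The same can be done from the AWS CLI, roughly like this (the volume IDs are placeholders for the source and target disks):
# Detach the source (30 GB) and target (8 GB) volumes from the instance
aws ec2 detach-volume --volume-id vol-3333333333333333
aws ec2 detach-volume --volume-id vol-5555555555555555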
- Create a snapshot of the target disk.
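With the CLI this would be something along the lines of (again, the volume ID is a placeholder):
# Snapshot the resized 8 GB target volume; this snapshot will back the new AMI
aws ec2 create-snapshot --volume-id vol-5555555555555555 --description "downsized 8 GB root volume"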
- Once the snapshot is ready, use it to create an AMI, and then use the new AMI to launch another EC2 instance in the same availability zone as the one you wanted to downsize.
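A rough CLI equivalent might look like the following; the root device name, architecture, virtualization type, and instance parameters depend on your original instance, so treat every value here as a placeholder:
# Register an AMI backed by the snapshot of the 8 GB volume
aws ec2 register-image --name "downsized-8gb-image" \
    --architecture x86_64 \
    --root-device-name /dev/sda1 \
    --virtualization-type hvm \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-6666666666666666}"
# Launch the replacement instance from the new AMI
aws ec2 run-instances --image-id ami-7777777777777777 --instance-type t2.micro \
    --subnet-id subnet-8888888888888888 --key-name my-key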
Once the new instance is up and running, you can:
- Delete the target 8 GB disk (you don’t need it anymore because it has already been used to create the AMI volume your downsized instance was started from).
- Delete the source 30 GB disk.
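Both deletions can also be done with the CLI (placeholder volume IDs again):
# Remove the temporary source and target volumes once they are no longer needed
aws ec2 delete-volume --volume-id vol-3333333333333333
aws ec2 delete-volume --volume-id vol-5555555555555555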
- Terminate the EC2 instance you wanted to downsize and replace it with the new one (you may need to reassign the Elastic IP to it, reconfigure the ELB, auto-scaling group, etc., depending on which EC2 services you are using).
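For example, moving an Elastic IP over to the new instance can be done roughly like this (assumes a VPC-style Elastic IP; the allocation and instance IDs are placeholders):
# Re-point the existing Elastic IP at the replacement instance
aws ec2 associate-address --allocation-id eipalloc-9999999999999999 --instance-id i-aaaaaaaaaaaaaaaa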