{
  "id": "https://www.tunbury.org/2025/04/21/ubuntu-dm-cache",
  "title": "Ubuntu cloud-init with LVM and dm-cache",
  "link": "https://www.tunbury.org/2025/04/21/ubuntu-dm-cache/",
  "updated": "2025-04-21T00:00:00",
  "published": "2025-04-21T00:00:00",
  "summary": "dm-cache has been part of the mainline Linux kernel for over a decade, making it possible for faster SSD and NVMe drives to be used as a cache within a logical volume. This technology brief from Dell gives a good overview of dm-cache and the performance benefits. Skip to the graph on page 25, noting the logarithmic scale.",
  "content": "<p><a href=\"https://en.wikipedia.org/wiki/Dm-cache\">dm-cache</a> has been part of the mainline Linux kernel for over a decade, making it possible for faster SSD and NVMe drives to be used as a cache within a logical volume. <a href=\"https://videos.cdn.redhat.com/summit2015/presentations/17856_getting-the-most-out-of-your-nvme-ssd.pdf\">This technology brief from Dell</a> gives a good overview of <code>dm-cache</code> and the performance benefits. Skip to the graph on page 25, noting the logarithmic scale.</p>\n\n<p>Given a system with a small SATADOM module, <code>/dev/sdd</code>, an SSD drive, <code>/dev/sdc</code>, and a couple of large-capacity spinning disks, <code>/dev/sd[ab]</code>, can we use cloud-init to configure RAID1 on the capacity disks with the SSD used as a cache?</p>\n\n<p>Unfortunately, the <code>storage:</code> / <code>config:</code> nodes are not very flexible when it comes to even modest complexity. For example, given an LVM volume group consisting of multiple disk types, it isn\u2019t possible to create a logical volume on a specific disk, as <code>devices:</code> is not a parameter to <code>lvm_partition</code>. It is also not possible to specify <code>raid: raid1</code>.</p>\n\n<p>I have taken the approach of creating two volume groups, <code>vg_raid</code> and <code>vg_cache</code>, on disks <code>/dev/sd[ab]</code> and <code>/dev/sdc</code>, respectively, thereby forcing the use of the correct devices. On the <code>vg_raid</code> group, I have created a single logical volume without RAID. On <code>vg_cache</code>, I have created the two cache volumes, <code>lv-cache</code> and <code>lv-cache-meta</code>.</p>\n\n<p>The <code>lv-cache</code> and <code>lv-cache-meta</code> volumes should be sized in the ratio 1000:1.</p>\n\n<p>As the final step of the installation, I used <code>late-commands</code> to configure the system as I want it. These commands implement RAID1 for the root logical volume, deactivate the two cache volumes as a necessary step before merging <code>vg_raid</code> and <code>vg_cache</code>, create the cache pool from the cache volumes, and finally enable the cache. The cache pool can be either <em>writethrough</em> or <em>writeback</em>, the default being <em>writethrough</em>. In this mode, data is written to both the cache and the original volume, so a failure of the cache device doesn\u2019t result in any data loss. <em>Writeback</em> has better performance, as writes initially go only to the cache volume and are written to the original volume later.</p>\n\n<div><div><pre><code>lvconvert -y --type raid1 -m 1 /dev/vg_raid/lv_data\nlvchange -an vg_cache/lv_cache\nlvchange -an vg_cache/lv_cache_meta\nvgmerge vg_raid vg_cache\nlvconvert -y --type cache-pool --poolmetadata vg_raid/lv_cache_meta vg_raid/lv_cache\nlvconvert -y --type cache --cachemode writethrough --cachepool vg_raid/lv_cache vg_raid/lv_data\n</code></pre></div></div>\n\n<p>I have placed <code>/boot</code> and <code>/boot/efi</code> on the SATADOM so that the system can be booted.</p>\n\n<p>My full configuration is given below.</p>\n\n<div><div><pre><code>#cloud-config\nautoinstall:\n  version: 1\n  storage:\n    config:\n      # Define the physical disks\n      - { id: disk-sda, type: disk, ptable: gpt, path: /dev/sda, preserve: false }\n      - { id: disk-sdb, type: disk, ptable: gpt, path: /dev/sdb, preserve: false }\n      - { id: disk-sdc, type: disk, ptable: gpt, path: /dev/sdc, preserve: false }\n      - { id: disk-sdd, type: disk, ptable: gpt, path: /dev/sdd, preserve: false }\n\n      # Define the partitions\n      - { id: efi-part, type: partition, device: disk-sdd, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: true, offset: 1048576 }\n      - { id: boot-part, type: partition, device: disk-sdd, size: 1G, wipe: superblock, number: 2, preserve: false, grub_device: false }\n\n      # Create the volume groups\n      - { id: vg-raid, type: lvm_volgroup, name: vg_raid, devices: [disk-sda, disk-sdb] }\n      - { id: vg-cache, type: lvm_volgroup, name: vg_cache, devices: [disk-sdc] }\n\n      # Create the logical volume which will become RAID1\n      - { id: lv-data, type: lvm_partition, volgroup: vg-raid, name: lv_data, size: 1000G, preserve: false }\n\n      # Create the cache metadata logical volume on the SSD VG (ratio 1000:1 with the cache data)\n      - { id: lv-cache-meta, type: lvm_partition, volgroup: vg-cache, name: lv_cache_meta, size: 1G, preserve: false }\n\n      # Create the cache data logical volume on the SSD VG\n      - { id: lv-cache, type: lvm_partition, volgroup: vg-cache, name: lv_cache, size: 1000G, preserve: false }\n\n      # Format the volumes\n      - { id: root-fs, type: format, fstype: ext4, volume: lv-data, preserve: false }\n      - { id: efi-fs, type: format, fstype: fat32, volume: efi-part, preserve: false }\n      - { id: boot-fs, type: format, fstype: ext4, volume: boot-part, preserve: false }\n\n      # Mount the volumes\n      - { id: mount-1, type: mount, path: /, device: root-fs }\n      - { id: mount-2, type: mount, path: /boot, device: boot-fs }\n      - { id: mount-3, type: mount, path: /boot/efi, device: efi-fs }\n  identity:\n    hostname: unnamed-server\n    password: \"$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0\"\n    username: mte24\n  ssh:\n    install-server: yes\n    authorized-keys:\n      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7UrJmBFWR3c7jVzpoyg4dJjON9c7t9bT9acfrj6G7i mark.elvers@tunbury.org\n    allow-pw: no\n  packages:\n    - lvm2\n    - thin-provisioning-tools\n  user-data:\n    disable_root: false\n  late-commands:\n    - lvconvert -y --type raid1 -m 1 /dev/vg_raid/lv_data\n    - lvchange -an vg_cache/lv_cache\n    - lvchange -an vg_cache/lv_cache_meta\n    - vgmerge vg_raid vg_cache\n    - lvconvert -y --type cache-pool --poolmetadata vg_raid/lv_cache_meta vg_raid/lv_cache\n    - lvconvert -y --type cache --cachemode writethrough --cachepool vg_raid/lv_cache vg_raid/lv_data\n</code></pre></div></div>",
  "content_type": "html",
  "author": {
    "name": "Mark Elvers",
    "email": "mark.elvers@tunbury.org",
    "uri": null
  },
  "categories": [
    "cloud-init,dm-cache,Ubuntu",
    "tunbury.org"
  ],
  "source": "https://www.tunbury.org/atom.xml"
}