{ "id": "https://anil.recoil.org/notes/komodo-docker-compose", "title": "Using Komodo to manage Docker compose on a small cluster", "link": "https://anil.recoil.org/notes/komodo-docker-compose", "updated": "2025-05-05T00:00:00", "published": "2025-05-05T00:00:00", "summary": "

With the sunsetting of Equinix Metal I've also been migrating the Recoil machines over to new hosts at Mythic Beasts. This time around, rather than manually setting up services, I've turned to a nice new tool called Komodo which helps with deploying Docker containers across multiple servers. Unlike many other container management solutions, Komodo is refreshingly simple: it has a mode where it can take existing Docker compose files on a given host, run them, and provide a web-based monitor to keep an eye on a few machines.

\n

The Komodo interface

\n

There's an online demo of Komodo available (user/pass is demo/demo). The basic idea is that you first register servers (see below for "Periphery"), and then add "Stacks", each of which represents a service.

\n

\nThe list of Stacks running on Recoil

\n

Every stack is configured to run a docker-compose.yml service that is already\npresent on the host, and the web UI has a convenient way of pulling, deploying\nand polling the Docker Hub to check for updates.
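
\n

As a sketch (the post doesn't reproduce any of the compose files themselves), the docker-compose.yml behind a Stack can be entirely ordinary; the service name and image below are placeholders, not a real Recoil service:

\n
docker-compose.yml\nservices:\n  app:\n    image: example/app:latest\n    restart: unless-stopped\n    ports:\n      - 8080:8080\n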

\n

\nThe stack view for a Tangled.sh knot running on Recoil

\n

The autoupdate functionality is quite cool (if a touch risky), as it polls for the\nimages on the Docker Hub and updates to those automagically. While I've activated\nthis for services I'm happy autoupdating, it's also accompanied by a healthy\ndose of ZFS snapshotting so I can roll back if anything\nuntoward happens.

\n

\nThe alert view of autoupdates from polling the Hub

\n

Most important to me is that I can switch away from Komodo at any time and interact directly with the services on the host using the normal docker CLI. Komodo just coordinates the compose invocations in the lightest way possible, and doesn't wrap them in such a way that I lose access.

\n

Setting up Periphery with a Wireguard mesh and dsnet

\n

Komodo operates across multiple hosts via a "periphery" agent running on each one, to which the main host issues RPCs to carry out operations. This is obviously quite a privileged channel, so rather than expose it to the Internet I set up a Wireguard tunnel mesh across the Recoil hosts for these operations to go over.

\n

The easiest way to do this was via dsnet, which generates the configuration and keys for a wg-quick service to run on each host and connect to its peers. Following the instructions let me set up this mesh in minutes; dsnet is a much simpler solution than Tailscale precisely because it is less flexible, but all I want here is a few hosts connected by static interfaces, with no need for complex NAT punching. Once the dsnet configuration is set up, all that's needed is to activate the wg-quick service on each host, and it spins up a virtual interface.
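
\n

For reference, the per-host file that dsnet emits for wg-quick is a standard Wireguard INI config along these lines; the keys, endpoint and peer address below are placeholders rather than the real Recoil values:

\n
/etc/wireguard/wg0.conf\n[Interface]\nAddress = 10.100.0.2/24\nPrivateKey = (generated by dsnet)\nListenPort = 51820\n\n[Peer]\nPublicKey = (peer public key)\nAllowedIPs = 10.100.0.1/32\nEndpoint = peer.example.org:51820\nPersistentKeepalive = 25\n
\n

Activating it is then just a matter of systemctl enable --now wg-quick@wg0 on each host.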

\n

After this, the Periphery setup was straightforward, with one twist: I configured the agent to bind to the Wireguard IP, e.g.:

\n
/etc/komodo/periphery.config.toml\n################################\n# 🦎 KOMODO PERIPHERY CONFIG 🦎 #\n################################\n\nport = 8120\nbind_ip = "10.100.0.2"\n
\n

But then on reboot the periphery agent would fail to start up, because the Wireguard service came too late in the boot order. This was fixed by a systemd tweak (which took me longer to figure out than the rest of the entire setup put together, since I find systemd utterly inscrutable).

\n
/etc/systemd/system/periphery.service\n[Unit]\nDescription=Agent to connect with Komodo Core\nAfter=wg-quick@wg0.service\n
\n

This little tweak to the unit file, followed by umpteen daemon-reload prods and reboots to get systemd happy, did the trick.
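
\n

An equivalent way to express the same ordering without editing the shipped unit file is a systemd drop-in (e.g. created via systemctl edit periphery.service); the sketch below also adds a Wants= line so that starting the agent pulls the tunnel in if it isn't already enabled:

\n
/etc/systemd/system/periphery.service.d/override.conf\n[Unit]\nAfter=wg-quick@wg0.service\nWants=wg-quick@wg0.service\n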

\n

I'm pretty happy with Komodo, thank you to the devs! It's a system that's simple enough that I can try it out progressively and bypass easily if required, and it provides a very useful piece of the selfhosting jigsaw puzzle.

", "content": "

With the sunsetting of Equinix Metal I've also been migrating the Recoil machines over to new hosts at Mythic Beasts. This time around, rather than manually setting up services, I've turned to a nice new tool called Komodo which helps with deploying Docker containers across multiple servers. Unlike many other container management solutions, Komodo is refreshingly simple: it has a mode where it can take existing Docker compose files on a given host, run them, and provide a web-based monitor to keep an eye on a few machines.

\n

The Komodo interface

\n

There's an online demo of Komodo available (user/pass is demo/demo). The basic idea is that you first register servers (see below for "Periphery"), and then add "Stacks", each of which represents a service.

\n

\nThe list of Stacks running on Recoil

\n

Every stack is configured to run a docker-compose.yml service that is already\npresent on the host, and the web UI has a convenient way of pulling, deploying\nand polling the Docker Hub to check for updates.
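
\n

As a sketch (the post doesn't reproduce any of the compose files themselves), the docker-compose.yml behind a Stack can be entirely ordinary; the service name and image below are placeholders, not a real Recoil service:

\n
docker-compose.yml\nservices:\n  app:\n    image: example/app:latest\n    restart: unless-stopped\n    ports:\n      - 8080:8080\n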

\n

\nThe stack view for a Tangled.sh knot running on Recoil

\n

The autoupdate functionality is quite cool (if a touch risky), as it polls for the\nimages on the Docker Hub and updates to those automagically. While I've activated\nthis for services I'm happy autoupdating, it's also accompanied by a healthy\ndose of ZFS snapshotting so I can roll back if anything\nuntoward happens.

\n

\nThe alert view of autoupdates from polling the Hub

\n

Most important to me is that I can switch away from Komodo at any time and interact directly with the services on the host using the normal docker CLI. Komodo just coordinates the compose invocations in the lightest way possible, and doesn't wrap them in such a way that I lose access.

\n

Setting up Periphery with a Wireguard mesh and dsnet

\n

Komodo operates across multiple hosts via a "periphery" agent running on each one, to which the main host issues RPCs to carry out operations. This is obviously quite a privileged channel, so rather than expose it to the Internet I set up a Wireguard tunnel mesh across the Recoil hosts for these operations to go over.

\n

The easiest way to do this was via dsnet, which generates the configuration and keys for a wg-quick service to run on each host and connect to its peers. Following the instructions let me set up this mesh in minutes; dsnet is a much simpler solution than Tailscale precisely because it is less flexible, but all I want here is a few hosts connected by static interfaces, with no need for complex NAT punching. Once the dsnet configuration is set up, all that's needed is to activate the wg-quick service on each host, and it spins up a virtual interface.
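
\n

For reference, the per-host file that dsnet emits for wg-quick is a standard Wireguard INI config along these lines; the keys, endpoint and peer address below are placeholders rather than the real Recoil values:

\n
/etc/wireguard/wg0.conf\n[Interface]\nAddress = 10.100.0.2/24\nPrivateKey = (generated by dsnet)\nListenPort = 51820\n\n[Peer]\nPublicKey = (peer public key)\nAllowedIPs = 10.100.0.1/32\nEndpoint = peer.example.org:51820\nPersistentKeepalive = 25\n
\n

Activating it is then just a matter of systemctl enable --now wg-quick@wg0 on each host.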

\n

After this, the Periphery setup was straightforward, with one twist: I configured the agent to bind to the Wireguard IP, e.g.:

\n
/etc/komodo/periphery.config.toml\n################################\n# 🦎 KOMODO PERIPHERY CONFIG 🦎 #\n################################\n\nport = 8120\nbind_ip = "10.100.0.2"\n
\n

But then on reboot the periphery agent would fail to start up, because the Wireguard service came too late in the boot order. This was fixed by a systemd tweak (which took me longer to figure out than the rest of the entire setup put together, since I find systemd utterly inscrutable).

\n
/etc/systemd/system/periphery.service\n[Unit]\nDescription=Agent to connect with Komodo Core\nAfter=wg-quick@wg0.service\n
\n

This little tweak to the unit file, followed by umpteen daemon-reload prods and reboots to get systemd happy, did the trick.
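
\n

An equivalent way to express the same ordering without editing the shipped unit file is a systemd drop-in (e.g. created via systemctl edit periphery.service); the sketch below also adds a Wants= line so that starting the agent pulls the tunnel in if it isn't already enabled:

\n
/etc/systemd/system/periphery.service.d/override.conf\n[Unit]\nAfter=wg-quick@wg0.service\nWants=wg-quick@wg0.service\n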

\n

I'm pretty happy with Komodo, thank you to the devs! It's a system that's simple enough that I can try it out progressively and bypass easily if required, and it provides a very useful piece of the selfhosting jigsaw puzzle.

", "content_type": "html", "author": { "name": "Anil Madhavapeddy", "email": "anil@recoil.org", "uri": "https://anil.recoil.org" }, "categories": [], "rights": "(c) 1998-2025 Anil Madhavapeddy, all rights reserved", "source": "https://anil.recoil.org/news.xml" }