My personal website and Gemini capsule
# Yet Another Homelab Setup

I have a new homelab setup. I've been homelab-hopping for the past year or so, and with each new setup, something or other just never sat right with me. It always felt like there was something that could be made better or more efficient with the resources I had. I recently discovered the wonderful world of LXD, and I was excited over how robust the technology is, so I endeavored to build my homelab setup with it.

My homelab consists of two physical machines: a System76 Thelio Major and an ASUS mini PC.

## Specs

Thelio Major:
* OS: Ubuntu Server 22.04 LTS
* CPU: AMD Ryzen 7 7700X, 8 cores, 16 threads @ 4.5 GHz
* GPU: AMD Radeon RX 6600 XT/6600M
* RAM: 32 GB
* Internal HD: 1 TB NVMe
* External HD: two 5 TB SSDs, formatted as a ZFS mirror pool

ASUS Mini PC:
* OS: TrueNAS CORE
* CPU: AMD Ryzen 7 5700U, 8 cores, 16 threads @ 1.8 GHz
* GPU: AMD Radeon
* RAM: 16 GB
* Internal HD: 500 GB NVMe
* External HD: 5 TB SSD, used as the main storage pool

## Main "control center"

> I may rename this machine and set its hostname to "nexus.local", because it seems fitting given its purpose, and I happen to like the word "nexus". :-)

The bulk of my homelab activity resides on the Thelio Major. The web services my homelab runs are separated into LXD containers; I'm using LXD as a more resource-friendly alternative to virtual machines. The two 5 TB external SSDs make up a ZFS mirror pool, and a dataset on that mirror backs an LXD storage pool via LXD's ZFS storage driver. My LXD setup consists of the following containers:

* debian-archive
* debian-serv
* fedora-transmission
* ubuntu-mastodon
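
Before any containers exist, the storage pool itself has to be created. A minimal sketch of creating an LXD storage pool on top of an existing ZFS dataset; the dataset name tank/lxd here is a placeholder, not my actual dataset:

```
# Create an LXD storage pool backed by an existing ZFS dataset
lxc storage create lxd-pool zfs source=tank/lxd
```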

I have a Linode VPS running HAProxy on Rocky Linux 9. My domain, hyperreal.coffee, and the subdomains mastodon.hyperreal.coffee, irc.hyperreal.coffee, and rss.hyperreal.coffee all point to this VPS, and HAProxy takes care of routing their traffic to the respective backends. Tailscale is installed on the VPS as well as in my debian-serv and ubuntu-mastodon LXD containers, and the containers' Tailnet IP addresses are used as the backends that HAProxy routes requests to. It's roughly this:

* hyperreal.coffee -> debian-serv
* irc.hyperreal.coffee -> debian-serv
* rss.hyperreal.coffee -> debian-serv
* mastodon.hyperreal.coffee -> ubuntu-mastodon
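
The HAProxy side of this routing is essentially a few host-based ACLs. A sketch of what that could look like; the Tailnet IPs are placeholders, and plain-HTTP backends are shown for brevity:

```
frontend http-in
    bind *:80
    acl host_mastodon hdr(host) -i mastodon.hyperreal.coffee
    use_backend be_mastodon if host_mastodon
    default_backend be_serv

# Backends point at the containers' Tailnet IPs (placeholders here)
backend be_serv
    server debian-serv 100.64.0.2:80 check

backend be_mastodon
    server ubuntu-mastodon 100.64.0.3:80 check
```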

### Creating the containers

To create the LXD containers, I run the lxc init command and supply it with an image, the name of the container, and the storage pool I want the container to use:

```
lxc init images:debian/12/cloud debian-archive --storage lxd-pool
```

I need to use images suffixed with /cloud in order to use cloud-init to initialize the containers. With the container created, I then supply it with a cloud-init configuration as shown below:

```
lxc config set debian-archive cloud-init.user-data - <<- EOF
#cloud-config
users:
  - name: debian
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIOmibToJQ8JZpSFLH3482oxvpD56QAfu4ndoofbew5t jas@si.local
    sudo: 'ALL=(ALL) NOPASSWD: ALL'
    shell: /bin/bash
    lock_passwd: true
apt:
  sources_list: |
    deb http://deb.debian.org/debian $RELEASE main
    deb http://deb.debian.org/debian $RELEASE-updates main
    deb http://deb.debian.org/debian-security/ $RELEASE-security main
    deb http://deb.debian.org/debian $RELEASE-backports main
package_update: true
package_upgrade: true
packages:
  - curl
  - debian-keyring
  - debsig-verify
  - git
  - nodejs
  - npm
  - notmuch
  - offlineimap3
  - pass
  - python3-dev
  - python3-pip
  - ripgrep
  - ssh
  - wget
  - xauth
  - youtube-dl
rsyslog:
  configs:
    - content: "*.* @10.0.0.41:514"
      filename: 99-forward.conf
  remotes:
    moonshadow: 10.0.0.41
timezone: America/Chicago
EOF
```

After setting the cloud-init configuration, I then start the container and monitor the progress of cloud-init:

```
lxc start debian-archive
lxc exec debian-archive -- cloud-init status --wait
```

When this finishes, the container is ready to go. I have Ansible roles for setting up my homelab services. These roles can be viewed at the Codeberg repository below.

=> https://codeberg.org/hyperreal/ansible-homelab

### Snapshots

Each of my LXD containers except fedora-transmission is on a daily snapshot schedule. This is configured with the lxc command as shown below:

```
lxc config set debian-archive snapshots.schedule "0 23 * * *"
```

I can also set the snapshot naming pattern:

```
lxc config set debian-archive snapshots.pattern "{{ creation_date|date:'2006-01-02_15-04-05' }}"
```

A cool thing about these snapshots is how easy they are to fall back on. If the container instance fails for whatever reason, I can roll back to a previous working state using the lxc command:

```
lxc restore debian-archive 2023-06-08_22-59-17
```

The snapshots can also include running state information, like process memory state and TCP connections, by passing the --stateful flag when creating the snapshot:

```
lxc snapshot debian-archive snapshot0 --stateful
```

Because I'm not lacking for storage space, I set the expiry to 1 week, which keeps a week's worth of snapshots on disk:

```
lxc config set debian-archive snapshots.expiry "1w"
```

Since LXD uses the ZFS storage driver, these are ZFS snapshots under the hood. I have a task on my TrueNAS server that replicates these snapshots daily into an offsite dataset.
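
The replication task on TrueNAS handles this automatically, but the manual equivalent is an incremental ZFS send/receive. A sketch, with placeholder dataset, snapshot, and host names:

```
# Incrementally send the latest snapshot to the TrueNAS box
# (dataset, snapshot, and host names are placeholders)
zfs send -i tank/lxd@2023-06-07_23-00-00 tank/lxd@2023-06-08_23-00-00 | \
    ssh truenas.local zfs recv backup/lxd
```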

### debian-archive

A Debian container that runs my ArchiveBox instance and stores my mail offline. I have the Proton Mail Bridge running in a fake tty to keep the connection open locally, and offlineimap runs daily to download my mail from my Proton Mail account. The mail is then indexed by notmuch. This container is not accessible from the public Internet, so only I have access to it, from my workstation machine.

### debian-serv

A Debian container that runs the Caddy web server and the Molly Brown Gemini server; these serve my HTTP website, my Gemini capsule, my The Lounge IRC instance, and my FreshRSS instance. All of these are accessible from the public Internet, but FreshRSS and The Lounge are only used by me.

* hyperreal.coffee -> web and Gemini
* irc.hyperreal.coffee -> The Lounge IRC
* rss.hyperreal.coffee -> FreshRSS

Because HAProxy doesn't deal with the Gemini protocol, I have a firewalld rule on the VPS that forwards port 1965 to port 1965 in the debian-serv LXD container via the Tailnet. Currently, ~/public on my workstation acts as a sort of mirror for ~/public on debian-serv: when I edit my website or Gemini capsule, I edit the files in ~/public on my workstation and just rsync the directory to debian-serv. I have port 4444 on the LXD host mapped to port 22 (SSH) in debian-serv, so when I rsync the files I have to pass -e 'ssh -p 4444' as an rsync argument. I'm looking for a way to keep those directories constantly in sync, and lsyncd seems to be the way to go. NixOS doesn't install a systemd service for lsyncd, though, so I'd have to write my own.
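
Roughly, the two pieces described above look like this; the Tailnet IP, user, and host names are placeholders:

```
# On the VPS: forward Gemini traffic (port 1965) to debian-serv's Tailnet IP
firewall-cmd --permanent --add-masquerade
firewall-cmd --permanent \
    --add-forward-port=port=1965:proto=tcp:toport=1965:toaddr=100.64.0.2
firewall-cmd --reload

# On the workstation: push ~/public through the mapped SSH port
rsync -avz -e 'ssh -p 4444' ~/public/ debian@lxd-host:public/
```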

### fedora-transmission

A Fedora container that runs transmission-daemon. I chose Fedora because, unlike Debian and Ubuntu, it ships Transmission version 4. Not that my use case specifically relies on version 4; I just prefer to use the latest stable versions of things wherever feasible. I've since learned that Alpine Linux also has Transmission version 4 in its repositories, so I'll eventually move transmission-daemon to an Alpine LXD container. This LXD container is not accessible from the public Internet; I have port 9091 forwarded from the LXD host to port 9091 in the container, so I access the Transmission web interface from my local subnet. I also use the Transmission RPC API client for Go to manage torrents programmatically.
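
One way to do that kind of host-to-container forwarding in LXD is a proxy device; a sketch (the device name rpc-proxy is arbitrary):

```
# Forward port 9091 on the LXD host into the container
lxc config device add fedora-transmission rpc-proxy proxy \
    listen=tcp:0.0.0.0:9091 connect=tcp:127.0.0.1:9091
```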

=> https://github.com/hekmon/transmissionrpc

> Side note: I've updated hekmon/transmissionrpc to support Transmission version 4, which uses RPC v17. My pull request can be seen by following the link below. It works on my machine for the tasks I use it for, but it still needs testers so that hekmon can merge it into main.
> => https://github.com/hekmon/transmissionrpc/pull/21

### ubuntu-mastodon

An Ubuntu container that runs my Mastodon instance. I chose Ubuntu because it's easier to set up Mastodon there than on Fedora, but this was mostly done as a sort of proof of concept when I was initially learning about LXD, so I may migrate it to Fedora eventually; I prefer Fedora's package and tooling ecosystem and the security benefits of SELinux. My Mastodon instance is available from the public Internet (it has to be), so this LXD container forms part of my Tailnet and receives HTTP/S requests for mastodon.hyperreal.coffee from HAProxy upstream on the VPS.

## TrueNAS server

My TrueNAS server is used solely as a NAS. It currently has only one 5 TB external HD, which it uses as the main storage pool, but I may eventually get another one to create a ZFS mirror. It has a replication task that runs once a day and pulls LXD snapshots from the main nexus server. I also have a dataset here that receives daily snapshots from my NixOS workstation machine via znapzend. I recently ordered a new laptop, a Lenovo ThinkPad X1 Carbon Gen 10 Intel (14") with Linux pre-installed -- though, of course, I'll install my own OS when I receive it. I intend to install NixOS on a ZFS root, which I will configure to send daily snapshots to my TrueNAS server. The ZFS-on-root setup for NixOS is based on the repository below, which is geared toward setting up multiple hosts:

=> https://github.com/ne9z/dotfiles-nixos
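
For the znapzend side mentioned above, a plan is defined per source dataset with znapzendzetup. A sketch; the datasets, retention plans, and hostname are placeholders, not my actual configuration:

```
# Keep hourly snapshots locally for a week, daily copies on TrueNAS for a month
znapzendzetup create --recursive \
    SRC '7d=>1h' rpool/home \
    DST:a '30d=>1d' root@truenas.local:backup/workstation
```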

## Closing

As I mentioned above, and as anyone who's been following me here or elsewhere on the Internet can tell, I've been super indecisive when it comes to my homelab setup. I've hopped between several different setups over the past year, never feeling quite satisfied with any of them. I can't say whether I will change my setup again in the future (it's possible, and given my track record, pretty likely)... but with my current hardware, LXD containers, and TrueNAS CORE, I can honestly say that I've never been more satisfied with a homelab setup.

## END
Last updated: 2023-06-11

=> ./ Gemlog archive
=> ../ hyperreal.coffee