# NixOS Service Modules
This directory contains reusable NixOS service modules for deploying applications.
## Architecture

Each service module follows a common pattern for deploying TypeScript/Bun applications:

### Directory Structure

- **Data Directory**: `/var/lib/<service-name>/`
  - `app/` - Git repository clone
  - `data/` - Application data (databases, uploads, etc.)
### Systemd Service Pattern

1. **ExecStartPre** (runs as root with `+` prefix):
   - Creates data directories
   - Sets ownership to service user
   - Ensures proper permissions
2. **preStart** (runs as service user):
   - Clones git repository if needed
   - Pulls latest changes (if `autoUpdate` enabled)
   - Runs `bun install`
   - Initializes database if needed
3. **ExecStart** (runs as service user):
   - Starts the application with `bun start`
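A condensed sketch of how a module can wire this pattern up. The `cachet` service name, the inline paths, and the `<repository>` placeholder are illustrative; the actual modules derive these from their options:

```nix
# Illustrative sketch only; the real modules parameterize these paths
# and the repository URL via their options.
systemd.services.cachet = {
  wantedBy = [ "multi-user.target" ];
  # preStart runs as the service user, after ExecStartPre
  preStart = ''
    if [ ! -d /var/lib/cachet/app/.git ]; then
      ${pkgs.git}/bin/git clone <repository> /var/lib/cachet/app
    fi
    cd /var/lib/cachet/app
    ${pkgs.bun}/bin/bun install
  '';
  serviceConfig = {
    User = "cachet";
    Group = "cachet";
    WorkingDirectory = "/var/lib/cachet/app";
    # The "+" prefix makes systemd run this step as root
    ExecStartPre = "+${pkgs.writeShellScript "cachet-setup" ''
      mkdir -p /var/lib/cachet/app /var/lib/cachet/data
      chown -R cachet:cachet /var/lib/cachet
    ''}";
    ExecStart = "${pkgs.bun}/bin/bun start";
  };
};
```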
## Common Options

All service modules support:

```nix
atelier.services.<service-name> = {
  enable = true;                   # Enable the service
  domain = "app.example.com";      # Domain for Caddy reverse proxy
  port = 3000;                     # Port the app listens on
  dataDir = "/var/lib/<service>";  # Data storage location
  secretsFile = path;              # agenix secrets file
  repository = "https://...";      # Git repository URL
  autoUpdate = true;               # Git pull on service restart
};
```
## Secrets Management

Secrets are managed using agenix:

1. Add the secret to `secrets/secrets.nix`:

   ```nix
   "service-name.age".publicKeys = [ kierank ];
   ```

2. Create and encrypt the secret:

   ```bash
   agenix -e secrets/service-name.age
   ```

3. Add environment variables (one per line):

   ```
   DATABASE_URL=postgres://...
   API_KEY=xxxxx
   SECRET_TOKEN=yyyyy
   ```

4. Reference it in the machine config:

   ```nix
   age.secrets.service-name = {
     file = ../../secrets/service-name.age;
     owner = "service-name";
   };

   atelier.services.service-name = {
     secretsFile = config.age.secrets.service-name.path;
   };
   ```
## Reverse Proxy (Caddy)
Each service automatically configures a Caddy virtual host with:
- Cloudflare DNS challenge for TLS
- Reverse proxy to the application port
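The generated vhost is roughly equivalent to the following sketch. The domain and port are examples, and the exact ACME/DNS plumbing may differ in the real modules (the Cloudflare DNS challenge requires a Caddy build with the Cloudflare plugin and an API token, here assumed to come from an environment variable):

```nix
# Approximate shape of the generated Caddy vhost; details may differ.
services.caddy.virtualHosts."app.example.com" = {
  extraConfig = ''
    # DNS-01 challenge via Cloudflare (needs the Cloudflare Caddy plugin
    # and CF_API_TOKEN in Caddy's environment)
    tls {
      dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy localhost:3000
  '';
};
```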
## GitHub Actions Deployment

Services can be deployed via GitHub Actions using SSH over Tailscale.

### Prerequisites

1. **Tailscale OAuth Client**:
   - Create at https://login.tailscale.com/admin/settings/oauth
   - Required scope: `auth_keys` (to authenticate ephemeral nodes)
   - Add to GitHub repo secrets: `TS_OAUTH_CLIENT_ID`, `TS_OAUTH_SECRET`
2. **SSH Access**:
   - Add the service user to Tailscale SSH ACLs
   - Example in `tailscale.com/admin/acls`:

     ```json
     "ssh": [
       {
         "action": "accept",
         "src": ["tag:ci"],
         "dst": ["tag:server"],
         "users": ["cachet", "hn-alerts", "root"]
       }
     ]
     ```
### Workflow Template

Create `.github/workflows/deploy-<service>.yaml`:

```yaml
name: Deploy <Service Name>

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Tailscale
        uses: tailscale/github-action@v3
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci
          use-cache: "true"

      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh
          echo "StrictHostKeyChecking no" >> ~/.ssh/config

      - name: Deploy to server
        run: |
          ssh <service-user>@<hostname> << 'EOF'
          cd /var/lib/<service>/app
          git fetch --all
          git reset --hard origin/main
          bun install
          sudo /run/current-system/sw/bin/systemctl restart <service>.service
          EOF

      - name: Wait for service to start
        run: sleep 10

      - name: Health check
        run: |
          HEALTH_URL="https://<domain>/health"
          MAX_RETRIES=6
          RETRY_DELAY=5

          for i in $(seq 1 $MAX_RETRIES); do
            echo "Health check attempt $i/$MAX_RETRIES..."
            RESPONSE=$(curl -s -w "\n%{http_code}" "$HEALTH_URL" || echo "000")
            HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
            BODY=$(echo "$RESPONSE" | sed '$d')

            if [ "$HTTP_CODE" = "200" ]; then
              echo "✅ Service is healthy"
              echo "$BODY"
              exit 0
            fi

            echo "❌ Health check failed with HTTP $HTTP_CODE"
            echo "$BODY"

            if [ $i -lt $MAX_RETRIES ]; then
              echo "Retrying in ${RETRY_DELAY}s..."
              sleep $RETRY_DELAY
            fi
          done

          echo "❌ Health check failed after $MAX_RETRIES attempts"
          exit 1
```
### Deployment Flow

1. Push to `main` branch triggers the workflow
2. GitHub Actions runner joins the Tailscale network
3. SSH to the service user on the target server
4. Git pull latest changes
5. Install dependencies
6. Restart the systemd service
7. Verify the health check endpoint
## Creating a New Service Module

1. Copy an existing module (e.g., `cachet.nix` or `hn-alerts.nix`)
2. Update the service name, user, and group
3. Adjust environment variables as needed
4. Add database initialization if required
5. Configure secrets in `secrets/secrets.nix`
6. Import the module in the machine config
7. Create a GitHub Actions workflow (if needed)
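As a starting point, a new module's skeleton might look like this. The `myapp` name and the option defaults are placeholders; the option set mirrors the Common Options listed earlier, and the real implementation comes from the module you copied:

```nix
# Hypothetical skeleton; "myapp" and the defaults are placeholders.
{ config, lib, pkgs, ... }:

let
  cfg = config.atelier.services.myapp;
in
{
  options.atelier.services.myapp = {
    enable = lib.mkEnableOption "myapp service";
    domain = lib.mkOption { type = lib.types.str; };
    port = lib.mkOption { type = lib.types.port; default = 3000; };
    dataDir = lib.mkOption { type = lib.types.path; default = "/var/lib/myapp"; };
    secretsFile = lib.mkOption { type = lib.types.path; };
    repository = lib.mkOption { type = lib.types.str; };
    autoUpdate = lib.mkOption { type = lib.types.bool; default = true; };
  };

  config = lib.mkIf cfg.enable {
    users.users.myapp = {
      isSystemUser = true;
      group = "myapp";
    };
    users.groups.myapp = { };
    # The systemd service and Caddy vhost definitions go here,
    # following the patterns described above.
  };
}
```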
## Example Services

- **cachet** - Slack emoji/profile cache
- **hn-alerts** - Hacker News monitoring and alerts