Rebuilding the Homelab: Step 1
This is the first post in a series documenting my homelab migration to a k3s Kubernetes cluster. Everything built here is IaC — reproducible, version-controlled, and built with enterprise patterns in mind.
The Goal
I'm rebuilding my homelab from scratch with two objectives:
- Build enterprise-relevant skills — Kubernetes, GitOps, Terraform, Ansible, secrets management
- Cut idle power consumption — from ~188W to under 80W
Before touching the local hardware, I needed somewhere to host a blog to document the journey. That meant standing up a cloud VPS with proper infrastructure — not just `apt install nginx` and call it done.
This post covers what I built on a Hostinger KVM2 VPS: a fully IaC-managed stack with Traefik, HashiCorp Vault, and Ghost, with zero plaintext secrets anywhere on disk.
The Stack
```
Internet
   │
Traefik v3.6 (reverse proxy + Let's Encrypt TLS)
   ├── blog.fexel.net  → Ghost (this blog)
   └── vault.fexel.net → HashiCorp Vault (secrets management)
```
All services run in Docker Compose stacks, provisioned by Ansible, with infrastructure managed by Terraform.
Why Traefik Instead of Nginx?
The obvious choice for a reverse proxy is nginx. But on a single VPS running multiple HTTPS services, nginx creates an immediate problem: only one process can bind to port 443.
The typical workaround is a single "gateway" nginx container that proxies to per-service containers — but then you're managing nginx configs, certbot timers, and certificate paths across multiple services manually.
Traefik solves this cleanly:
- One container handles ports 80 and 443 for all services
- Docker label routing — add labels to any container and Traefik discovers it automatically
- Automatic TLS — Let's Encrypt certificates managed entirely by Traefik, renewed automatically
- Maps directly to k3s — Traefik is the default ingress controller in k3s, so this pattern transfers
```yaml
# Adding a new service is just labels on the container
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.ghost.rule=Host(`blog.fexel.net`)"
  - "traefik.http.routers.ghost.tls.certresolver=letsencrypt"
```
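For context, the other half of this pattern is a single Traefik gateway container publishing both ports and watching the Docker socket. A sketch of what that service definition might look like — the file names and volume paths here are my assumptions, not the repo's actual layout:

```yaml
services:
  traefik:
    image: traefik:v3.6
    ports:
      - "80:80"    # HTTP (ACME + redirects)
      - "443:443"  # TLS for every routed service
    volumes:
      # read-only Docker socket so Traefik can discover labeled containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml:ro  # static config (assumed path)
      - ./acme.json:/acme.json                     # Let's Encrypt cert storage
```

Every other service on the box then joins Traefik's network and declares its routing via labels, as above.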
The DNS-01 Challenge
One thing I ran into: Hostinger's infrastructure blocks inbound connections from Let's Encrypt's servers, which breaks the standard HTTP-01 and TLS-ALPN-01 challenge methods.
The fix is DNS-01 — instead of Let's Encrypt connecting to your server, it verifies domain ownership via a DNS TXT record. Traefik handles this automatically using the Hostinger DNS API. No inbound connectivity required.
```yaml
certificatesResolvers:
  letsencrypt:
    acme:
      dnsChallenge:
        provider: hostinger
```
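For the curious, what Traefik automates here is the DNS-01 computation from RFC 8555: the ACME server issues a token, and the client publishes a TXT record at `_acme-challenge.<domain>` derived from that token and its account key. A minimal sketch of the TXT value computation (the inputs are illustrative, not real challenge data):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """RFC 8555 §8.4: the TXT record value is the base64url-encoded
    SHA-256 digest of "<token>.<account key thumbprint>"."""
    key_authorization = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    # base64url without padding, per the ACME spec
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Traefik publishes this value at _acme-challenge.blog.fexel.net;
# Let's Encrypt queries DNS for it, so no inbound connection is needed.
txt = dns01_txt_value("example-token", "example-thumbprint")
```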
HashiCorp Vault for Secrets Management
This is the part I'm most proud of. The goal was zero plaintext secrets anywhere — not on disk, not in environment variables, not in git.
The Problem with Ansible Vault Alone
The standard approach is ansible-vault: encrypt a YAML file containing your secrets, commit the encrypted file, and pass the vault password when running playbooks. But this just moves the problem — now you have a plaintext vault password file sitting on your machine.
The Solution: Vault All the Way Down
```
HashiCorp Vault (vault.fexel.net)
├── secret/ansible    → ansible-vault password
├── secret/ghost      → DB passwords (randomly generated, never seen)
└── secret/terraform  → Hostinger API token
```
Ansible fetches the vault password from HashiCorp Vault at runtime using AppRole authentication:
```bash
#!/usr/bin/env bash
# ansible/scripts/vault-password.sh
set -euo pipefail

VAULT_TOKEN=$(curl -sf --request POST \
  --data "{\"role_id\":\"${VAULT_ROLE_ID}\",\"secret_id\":\"${VAULT_SECRET_ID}\"}" \
  "${VAULT_ADDR}/v1/auth/approle/login" \
  | python3 -c "import json,sys; print(json.load(sys.stdin)['auth']['client_token'])")

curl -sf -H "X-Vault-Token: ${VAULT_TOKEN}" \
  "${VAULT_ADDR}/v1/secret/data/ansible" \
  | python3 -c "import json,sys; print(json.load(sys.stdin)['data']['data']['vault_password'])"
```
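One gotcha worth calling out: the double `['data']['data']` in that script is not a typo. Vault's KV v2 API wraps every read in an envelope — the outer `data` is the API response body, and the inner `data` holds the actual key/value pairs:

```python
import json

# Shape of a KV v2 read response (values are illustrative)
raw = """{
  "data": {
    "data": {"vault_password": "example"},
    "metadata": {"version": 1, "destroyed": false}
  }
}"""

response = json.loads(raw)
# outer "data" = API envelope, inner "data" = the secret itself
vault_password = response["data"]["data"]["vault_password"]
```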
Running a playbook looks like this:
```bash
export VAULT_ADDR=https://vault.fexel.net
export VAULT_ROLE_ID=<role_id>
export VAULT_SECRET_ID=<secret_id>

ansible-playbook playbooks/deploy-ghost.yml \
  --vault-password-file scripts/vault-password.sh
```
No password files. No plaintext secrets. The AppRole credentials are the only thing you need, and those live in your password manager.
Terraform Gets the Same Treatment
Terraform providers are initialized before data sources run, so you can't use a Vault data source to configure another provider directly. The enterprise solution is a wrapper script:
```bash
# tf.sh — fetches secrets from Vault, injects as TF_VAR_* env vars
# (VAULT_TOKEN obtained via AppRole login, same as vault-password.sh)
SECRETS=$(curl -sf -H "X-Vault-Token: ${VAULT_TOKEN}" \
  "${VAULT_ADDR}/v1/secret/data/terraform")

export TF_VAR_hostinger_api_token=$(echo "$SECRETS" | \
  python3 -c "import json,sys; print(json.load(sys.stdin)['data']['data']['hostinger_api_token'])")

terraform "$@"
```
Usage: `./tf.sh plan`, `./tf.sh apply`. This mirrors how CI/CD pipelines typically handle Vault + Terraform.
Human Access with TOTP MFA
AppRole is for machines. For human UI access to Vault, the setup is:
- `userpass` auth method with a username and password
- TOTP MFA enforced on login (Google Authenticator / Authy)
- Scoped `homelab-admin` policy
The entire MFA setup is codified in an Ansible playbook (`setup-vault-mfa.yml`) that generates the TOTP secret and outputs the `otpauth://` URL to scan.
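What that playbook does under the hood is straightforward. A hedged sketch of the secret generation and `otpauth://` URL construction — the account and issuer names here are made up, not the playbook's actual values:

```python
import base64
import secrets
import urllib.parse

def make_totp_enrollment(account: str, issuer: str) -> tuple[str, str]:
    """Generate a random base32 TOTP secret and the otpauth:// URL that
    Google Authenticator / Authy can scan as a QR code."""
    # 20 random bytes -> 32 base32 chars, the conventional TOTP secret size
    secret = base64.b32encode(secrets.token_bytes(20)).decode("ascii")
    params = urllib.parse.urlencode({
        "secret": secret,
        "issuer": issuer,
        "algorithm": "SHA1",  # TOTP defaults per RFC 6238
        "digits": 6,
        "period": 30,
    })
    label = urllib.parse.quote(f"{issuer}:{account}")
    return secret, f"otpauth://totp/{label}?{params}"

secret, url = make_totp_enrollment("homelab-admin", "Vault")
```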
Root Token Hygiene
Vault generates a root token on initialization. The enterprise practice:
- Use it only for initial configuration
- Revoke it immediately after
- When elevated access is needed later, generate a temporary root token from the unseal key via `vault operator generate-root`, use it, revoke it
This is codified in `fix-vault-policies.yml` — a playbook that uses `vars_prompt` to accept the unseal key, generates a root token, applies policy changes, and revokes the token, all in one run.
Ghost Blog
With Traefik handling TLS and Vault handling secrets, deploying Ghost is anticlimactic:
```yaml
# ghost-compose.yml.j2
services:
  ghost:
    image: ghost:5-alpine
    environment:
      url: "https://blog.fexel.net"
      database__connection__password: "{{ ghost_db_password }}"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ghost.rule=Host(`blog.fexel.net`)"
      - "traefik.http.routers.ghost.tls.certresolver=letsencrypt"
```
The `ghost_db_password` is fetched from Vault at deploy time — randomly generated and never known to anyone.
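Generating a credential that no human ever sees amounts to very little code. A sketch of what the deploy-time password generation could look like — the length and alphabet are my assumptions:

```python
import secrets
import string

def random_password(length: int = 32) -> str:
    """A password generated at deploy time, written straight into Vault
    and the container environment -- never displayed, never typed."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

db_password = random_password()
```

Ansible can also do this natively via its built-in `password` lookup plugin, which generates and caches a random password in one step.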
Infrastructure as Code
Everything is reproducible from a fresh VPS:
```bash
# 1. Base OS provisioning
ansible-playbook playbooks/logos-setup.yml

# 2. Reverse proxy
ansible-playbook playbooks/deploy-traefik.yml --vault-password-file scripts/vault-password.sh

# 3. Secrets manager
ansible-playbook playbooks/deploy-vault.yml

# 4. Seed secrets (one-time bootstrap)
ansible-playbook playbooks/seed-vault-secrets.yml --vault-password-file vault_pass

# 5. MFA setup
ansible-playbook playbooks/setup-vault-mfa.yml

# 6. Blog
ansible-playbook playbooks/deploy-ghost.yml --vault-password-file scripts/vault-password.sh
```
DNS records are managed by Terraform:
```hcl
resource "hostinger_dns_record" "blog" {
  zone  = "fexel.net"
  name  = "blog"
  type  = "A"
  value = hostinger_vps.logos.ipv4_address
}
```
What's Next
This VPS is the foundation. The real project is migrating my local homelab to a 3-node k3s cluster:
- Control plane — spare Intel box (16GB)
- Worker 1 — Z170 Intel (32GB) + 6x HDD for NAS duties
- Worker 2 — Ryzen 3700X (64GB) + RTX 2080 Ti for GPU workloads (ComfyUI, ML inference)
Services migrating to the cluster: Plex, Pi-hole, LubeLogger, Home Assistant (VM with USB passthrough), BitTorrent, and ComfyUI with GPU scheduling.
Storage will be two-tier: Longhorn (NVMe, replicated block storage) + democratic-csi (TrueNAS NFS for bulk HDD storage). GitOps via Flux CD.
The goal is a cluster that mirrors cloud-native patterns closely enough to count as hands-on experience for AWS/GCP certifications.
Next post: setting up the k3s cluster with Ansible.
All IaC for this project is in my GitHub repo.