Kubernetes
Overview
A multi-node Kubernetes cluster will be deployed with kubeadm across four Proxmox nodes. This is the next major build phase following the single-node Proxmox baseline.
⚠ Planned Phase
This section is forward-looking documentation. Build begins after the multi-node Proxmox cluster is established. Docs will be updated as each component is deployed.
On this page
- [Cluster Architecture](#cluster-architecture)
- [Node Assignments](#node-assignments)
- [Build Order](#build-order)
- [Identity Stack](#identity-stack--freeipa--keycloak)
- [Observability](#observability)
- [GitOps — ArgoCD](#gitops--argocd)
Cluster Architecture
```
┌─────────────────────────────────────────────────┐
│  M90Q Gen 3 (192.168.x.10)                      │
│  ┌─────────────────┐  ┌──────────────────────┐  │
│  │  k8s-control    │  │  k8s-worker-01       │  │
│  │  Control Plane  │  │  Worker Node         │  │
│  └─────────────────┘  └──────────────────────┘  │
│  ┌───────────────────────────────────────────┐  │
│  │ Keycloak · Prometheus · Grafana · Ingress │  │
│  └───────────────────────────────────────────┘  │
└─────────────────────────────────────────────────┘

┌──────────────────────┐   ┌──────────────────────┐
│  M720Q #1            │   │  M720Q #2            │
│  k8s-worker-02       │   │  k8s-worker-03       │
│  6GB RAM assigned    │   │  6GB RAM assigned    │
└──────────────────────┘   └──────────────────────┘

┌──────────────────────┐
│  M710Q #3 (7th gen)  │
│  FreeIPA node        │
│  4GB RAM             │
│  Kerberos · LDAP     │
└──────────────────────┘
```
Node Assignments
| VM | Host | Role | RAM |
|---|---|---|---|
| k8s-control | M90Q | Control plane | 4GB |
| k8s-worker-01 | M90Q | Worker | 8GB |
| k8s-worker-02 | M720Q #1 | Worker | 6GB |
| k8s-worker-03 | M720Q #2 | Worker | 6GB |
| freeipa-01 | M710Q #3 | FreeIPA / Kerberos | 3.5GB |
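The assignments above translate directly into Proxmox VM definitions. A minimal sketch using the `qm` CLI on the M90Q host; the VM IDs, core counts, disk sizes, storage pool, and bridge name are illustrative assumptions, not part of the plan:

```shell
# Control-plane VM on the M90Q (IDs, cores, disk, storage pool, bridge are placeholders)
qm create 101 --name k8s-control --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --ostype l26

# Worker VM on the same host, with the 8GB from the table
qm create 102 --name k8s-worker-01 --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --ostype l26
```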
Build Order
Week 1 — Proxmox cluster
- Multi-node cluster created with `pvecm`
- Private network bridge `vmbr1` on `10.10.0.0/24`
- Shared storage (optional NFS)
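A sketch of those Week 1 steps as commands, assuming the 192.168.x.10 node seeds the cluster; the cluster name and the bridge's host address are placeholders:

```shell
# On the first node — create the cluster ("homelab" is a placeholder name)
pvecm create homelab

# On each additional node — join via the first node's IP, then verify quorum
pvecm add 192.168.x.10
pvecm status

# Per-node /etc/network/interfaces fragment for the private bridge;
# the host address must be unique per node within 10.10.0.0/24
cat >> /etc/network/interfaces <<'EOF'
auto vmbr1
iface vmbr1 inet static
    address 10.10.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
EOF
```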
Week 2 — FreeIPA
- Rocky Linux / AlmaLinux VM on M710Q #3
- FreeIPA install (Kerberos KDC, LDAP, integrated DNS)
- User/group provisioning
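A sketch of the Week 2 install on the Rocky/AlmaLinux VM; the realm/domain (HOME.LAB / home.lab) and the user and group names are placeholders, and FreeIPA expects a static IP and resolvable FQDN before the installer runs:

```shell
# On freeipa-01
dnf install -y ipa-server ipa-server-dns

# Kerberos KDC + LDAP + integrated DNS (placeholder realm/domain)
ipa-server-install --realm HOME.LAB --domain home.lab --setup-dns

# Provision a group and a user (placeholder names)
kinit admin
ipa group-add homelab-admins
ipa user-add jdoe --first=Jane --last=Doe
ipa group-add-member homelab-admins --users=jdoe
```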
Week 3 — Keycloak
- Keycloak deployed in Docker (later migrated to k8s)
- LDAP federation to FreeIPA
- OIDC realm configured
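The interim Docker deployment might look like the following; the image tag and admin credentials are placeholders (recent Keycloak images read the KEYCLOAK_ADMIN variables, and `start-dev` is for lab use only):

```shell
# Dev-mode Keycloak in Docker; swap start-dev for a hardened `start` config later
docker run -d --name keycloak -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD='change-me' \
  quay.io/keycloak/keycloak:24.0 start-dev
```

LDAP federation to FreeIPA and the OIDC realm are then configured in the Keycloak admin console.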
Week 4 — Kubernetes
```shell
# On all nodes — pre-flight
swapoff -a                                    # also comment out swap in /etc/fstab to persist
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.ipv4.ip_forward=1

# Install containerd, kubeadm, kubelet, kubectl
# (kubeadm/kubelet/kubectl come from the Kubernetes apt repo at pkgs.k8s.io, not Debian's)
apt install -y containerd kubeadm kubelet kubectl
apt-mark hold kubeadm kubelet kubectl

# On control plane
kubeadm init --pod-network-cidr=10.244.0.0/16

# CNI — Calico (pin a released version rather than master for reproducible installs)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml

# Join worker nodes (kubeadm init prints this exact command on success)
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
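One gotcha worth planning for on Debian-family nodes: the stock containerd config uses the cgroupfs cgroup driver while kubelet defaults to systemd, which can leave nodes flapping NotReady after join. A common fix, sketched:

```shell
# Regenerate containerd's default config and switch runc to the systemd
# cgroup driver so it matches kubelet's default (cgroupDriver: systemd)
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
```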
Week 5 — Integration & GitOps
- Kerberos/OIDC integration with the Kubernetes API server (`--oidc-issuer-url`)
- ArgoCD install and GitHub repo connection
- Nginx ingress controller
- TLS via cert-manager + Let’s Encrypt
- GitHub portfolio documentation
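On a kubeadm cluster, the OIDC wiring in the list above is a set of kube-apiserver flags added to the static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml; kubelet restarts the pod when the file changes. A sketch, with a placeholder issuer URL and client ID:

```
# Flags to add under the kube-apiserver command in the static pod manifest
# (issuer URL and client ID are placeholders for the Keycloak realm)
--oidc-issuer-url=https://keycloak.home.lab/realms/homelab
--oidc-client-id=kubernetes
--oidc-username-claim=preferred_username
--oidc-groups-claim=groups
```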
Identity Stack — FreeIPA + Keycloak
The identity architecture follows an enterprise-like pattern:
```
FreeIPA (Kerberos KDC + LDAP)
  └── Keycloak (LDAP federation)
        └── OIDC provider
              ├── Kubernetes API (--oidc-issuer-url)
              └── ArgoCD SSO
```
This mirrors patterns used in real enterprise environments — centralized identity, federated via an IDP, consumed by platform tooling.
Observability
Stack deployed into the cluster via Helm or manifests:
| Tool | Purpose |
|---|---|
| Prometheus | Metrics scraping |
| Grafana | Dashboards |
| Alertmanager | Alert routing |
| node_exporter | Per-node OS metrics |
| kube-state-metrics | Kubernetes object metrics |
| cAdvisor | Container metrics |
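Most of that table ships in a single Helm chart, kube-prometheus-stack, which bundles Prometheus, Grafana, Alertmanager, node_exporter, and kube-state-metrics (cAdvisor metrics come from the kubelet itself). A sketch of the Helm route; the release and namespace names are choices, not from the plan:

```shell
# Add the community chart repo and install the full stack into its own namespace
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```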
GitOps — ArgoCD
ArgoCD will manage all cluster workloads from a GitHub repository. Target pattern:
```
github.com/ryn/homelab-gitops
├── apps/
│   ├── argocd/
│   ├── keycloak/
│   ├── prometheus-stack/
│   └── ingress-nginx/
└── clusters/
    └── hoshinotama/
```
All changes are pushed to Git → ArgoCD syncs them to the cluster. No manual `kubectl apply` in the production state.
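Bootstrapping that pattern might look like the following; the repo URL comes from the tree above, and the namespace and login step follow the stock install path:

```shell
# Install ArgoCD into its own namespace
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# After `argocd login`, register the GitOps repo so Applications can sync from it
argocd repo add https://github.com/ryn/homelab-gitops
```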