Hello!
I don’t like writing blogs, so naturally I started a blog! To make things interesting, I’ve over-engineered it, which makes it a great starting point for explaining all the intricate components behind my setup. So what better first post than one that explains this over-the-top infrastructure? I won’t go into every detail, to keep this from getting too long. Instead, I’ll explain the main components and how they work together to make this blog possible.
Infrastructure
This blog is built with Hugo, hosted in a private Gitea instance, packaged into a container image by WoodpeckerCI, kept up to date by Renovate, deployed into my home Kubernetes cluster by ArgoCD, and served through Cloudflare.
Hugo
Hugo is a static site generator written in Go. It converts Markdown files into HTML, producing this entire blog! Since I’m running it in a container, I build it with a Dockerfile, and mine looks like this:
# Build stage: check out the theme submodule and render the site with Hugo
FROM floryn90/hugo:debian AS hugo
USER root
RUN apt-get update
RUN apt-get install -y git
COPY . /src
RUN chown -R hugo:hugo /src
USER hugo
WORKDIR /src
RUN ls -la
RUN git config --global --add safe.directory /src
RUN git version
RUN git submodule init
RUN git submodule update
RUN hugo

# Serve stage: copy the generated site into a stock nginx image
FROM nginx
COPY --from=hugo /src/public /usr/share/nginx/html
EXPOSE 80
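The Dockerfile assumes a normal Hugo site layout in the repo: Markdown content, a theme (presumably what the git submodule steps above pull in), and a small site configuration. I haven’t shown my real config, but a minimal hugo.yaml is only a handful of lines, roughly like this (the title and theme names are placeholders):

baseURL: https://blog.duckdefense.cc/   # matches the ingress host further down
languageCode: en-us
title: My Blog                          # placeholder
theme: my-theme                         # placeholder; the theme lives in the git submodule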
Gitea
Gitea is an open-source Git service. It’s used to host and version-control source code, much like GitHub. I’m using it to host the repo this blog is built from.
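For anyone who hasn’t run it before, Gitea is light enough to stand up as a single container. This isn’t my actual deployment, just a minimal compose-style sketch using the upstream defaults:

services:
  gitea:
    image: gitea/gitea:latest
    environment:
      - USER_UID=1000
      - USER_GID=1000
    volumes:
      - ./gitea-data:/data        # repositories, config, and registry data
    ports:
      - "3000:3000"               # web UI and HTTP git access
      - "2222:22"                 # SSH git access
    restart: always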
WoodpeckerCI
WoodpeckerCI is a CI/CD engine compatible with Gitea. While CI/CD engines can be used for a nearly infinite range of tasks, they’re most often used to build container images, and that’s what I’m using it for here! WoodpeckerCI automatically builds the container from my Dockerfile and pushes it to my Gitea registry every time a new commit lands in my Hugo blog repo. Here’s a look at my Woodpecker workflow, with redactions:
steps:
  build:
    image: woodpeckerci/plugin-docker-buildx
    privileged: true
    backend_options:
      kubernetes:
        securityContext:
          privileged: true
    settings:
      dockerfile: Dockerfile
      dry_run: false
      registry: gitea.domain.com
      repo: gitea.domain.com/duck/duck-hugo-blog
      auto_tag: false
      username:
        from_secret: GITEA_USERNAME
      password:
        from_secret: GITEA_PASSWORD
      tags:
        - latest
    when:
      branch: master
      event:
        - push
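Every image Woodpecker pushes lands in the Gitea container registry, which means the cluster needs credentials to pull it back out. The ArgoCD values further down reference an imagePullSecret called gitea-registry; it’s a standard Docker registry Secret along these lines (a sketch, the actual credentials are created out of band):

apiVersion: v1
kind: Secret
metadata:
  name: gitea-registry
  namespace: hugo
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config containing credentials for gitea.domain.com
  .dockerconfigjson: BASE64_ENCODED_DOCKER_CONFIG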
Renovate
Renovate is a bot that keeps dependencies up to date. Since my Kubernetes cluster, including this blog, is defined entirely within a Gitea repo, I use Renovate to check for newer builds of the blog’s container image. And yes, the bot itself also runs within the cluster. When Renovate runs (on a cron), it reads the ArgoCD Helm manifest that defines the blog’s deployment. When a new image is available, Renovate creates and automerges a pull request to update the blog. For simplicity, I’ll only include the chunk of my Renovate JSON config that watches for the blog image:
"labels": [
"Kind/Dependency"
],
"packageRules": [
{
"matchDatasources": [
"docker"
],
"matchPackageNames": [
"renovate/renovate",
"gitea.domain.com/duck/duck-hugo-blog"
],
"versioning": "semver",
"automerge": true,
"ignoreTests": true
}
]
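Conceptually, running the bot on a cron inside the cluster comes down to a CronJob that launches the Renovate container against the Gitea repos. The sketch below is illustrative rather than my exact manifest; the schedule, namespace, and the secret holding the Gitea token are placeholders:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: renovate
  namespace: renovate
spec:
  schedule: "0 * * * *"              # hourly; placeholder
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: renovate
              image: renovate/renovate:latest
              envFrom:
                - secretRef:
                    name: renovate-env   # RENOVATE_TOKEN, platform/endpoint settings, etc.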
ArgoCD
ArgoCD deploys manifests from a given Git repo into a Kubernetes cluster. I wrote a very simple manifest to deploy my blog with bjw-s’ common app-template Helm chart. The setup uses the nginx ingress controller to expose the site and cert-manager to issue and renew its TLS certificates. These are the manifests I use:
Chart:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hugo
  namespace: argocd
spec:
  project: default
  source:
    chart: app-template
    # https://github.com/bjw-s/helm-charts/tree/main/charts/library/common
    repoURL: https://bjw-s.github.io/helm-charts
    targetRevision: 3.5.1
    helm:
      values: |
        defaultPodOptions:
          imagePullSecrets:
            - name: gitea-registry
        controllers:
          main:
            containers:
              hugo:
                image:
                  repository: gitea.domain.com/duck/duck-hugo-blog
                  tag: latest@sha256:2e7fa2c0aa8abe4bfb2ddfbd2afc23112790becd580df85faf6889c8562bfad6
        service:
          main:
            controller: main
            ports:
              http:
                port: 80
  destination:
    server: "https://kubernetes.default.svc"
    namespace: hugo
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
Ingress and Cert:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: duck-hugo-blog-tls-secret
  namespace: hugo
spec:
  secretName: duck-hugo-blog-tls-secret
  issuerRef:
    name: cloudflare-issuer
    kind: ClusterIssuer
  dnsNames:
    - &host blog.duckdefense.cc
    - www.blog.duckdefense.cc
  commonName: *host
---
# Ingresses always listen on ports 80 & 443
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: &app duck-hugo-blog
  namespace: hugo
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    hajimari.io/enable: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "3600"
  labels:
    app.kubernetes.io/name: *app
    app.kubernetes.io/instance: *app
spec:
  ingressClassName: nginx
  rules:
    - host: &host blog.duckdefense.cc
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hugo
                port:
                  number: 80
  tls:
    - hosts:
        - *host
      secretName: duck-hugo-blog-tls-secret
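The Certificate above points at a ClusterIssuer called cloudflare-issuer, which I set up separately and haven’t shown. Assuming a Let’s Encrypt ACME issuer with a Cloudflare DNS-01 solver (which is what the name suggests), it looks roughly like this; the email and secret names are placeholders:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-issuer
spec:
  acme:
    email: admin@example.com                           # placeholder
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cloudflare-issuer-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token               # Secret with a DNS-edit API token
              key: api-token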
Cloudflare
With everything set up, I have a running blog! However, since it’s public-facing, I wanted a little extra layer of security, and that’s where Cloudflare comes in. It masks my real public IP and provides basic WAF and DDoS protection. I use timothymiller’s cloudflare-ddns tool to keep my dynamic IP updated in Cloudflare. This is what my config for this domain looks like:
{
  "cloudflare": [
    {
      "authentication": {
        "api_token": "super-real-and-secret-api-key"
      },
      "zone_id": "zone-id",
      "subdomains": [
        {
          "name": "blog",
          "proxied": true
        }
      ]
    }
  ],
  "a": true,
  "aaaa": false,
  "purgeUnknownRecords": false,
  "ttl": 300
}
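To actually run the tool, that JSON just needs to be mounted where the container expects it. I won’t show my real manifest, and the image name and /config.json mount path below are my assumptions based on the upstream project (check its README), but a Deployment sketch looks something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflare-ddns
  namespace: cloudflare-ddns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudflare-ddns
  template:
    metadata:
      labels:
        app: cloudflare-ddns
    spec:
      containers:
        - name: cloudflare-ddns
          image: timothyjmiller/cloudflare-ddns:latest   # assumed upstream image name
          volumeMounts:
            - name: config
              mountPath: /config.json                    # assumed config path
              subPath: config.json
      volumes:
        - name: config
          secret:
            secretName: cloudflare-ddns-config           # Secret holding the JSON above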
And that’s it!
Simple, right? I know this is absolutely unnecessary for a static blog of this size, but I already use most of this infrastructure for my other homelab projects, and I love doing stuff like this.