Manually Issuing a Let’s Encrypt Certificate for a Kubernetes Ingress
Sometimes I do not want to install cert-manager.
Not because cert-manager is bad. It is the right answer for production. But for a sandbox cluster, a demo environment, or a short-lived test setup, installing cert-manager, configuring an Issuer, checking RBAC, and debugging the ACME flow can be more work than the cluster is worth.
In that case, a manual Let's Encrypt certificate is a perfectly reasonable shortcut.
The goal is simple:
- run Certbot manually;
- answer the HTTP-01 challenge from inside the Kubernetes cluster;
- let Certbot write the certificate locally;
- package the certificate as a Kubernetes TLS Secret;
- reference that Secret from the real Ingress.
This is not elegant. It is not automated. It is absolutely not how I would run production.
But for a development cluster that will exist for a few weeks, it is a useful trick.
The setup
Assume the domain is:
sub.example.com
and it already points to the load balancer or ingress controller in front of the cluster.
That DNS part matters. Let's Encrypt needs to reach:
http://sub.example.com/.well-known/acme-challenge/<token>
over plain HTTP.
No HTTPS redirect. No authentication. No VPN. No "works from my laptop." It has to be reachable by Let's Encrypt from the public internet.
That is the whole point of the HTTP-01 challenge.
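Before even starting Certbot, it is worth a thirty-second sanity check that DNS and plain HTTP behave. A quick sketch, using the same placeholder domain as above:

```shell
# Does the name resolve to the load balancer's public address?
dig +short sub.example.com

# Does plain HTTP reach the cluster at all? A 404 from the ingress
# controller is fine at this stage -- we only care that the request
# arrives, not what it returns yet.
curl -sI http://sub.example.com/ | head -n 1
```

If the curl hangs, or the first line is a 301 to HTTPS, fix that before involving Let's Encrypt.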
Starting Certbot
Run Certbot manually:
sudo certbot certonly \
  --manual \
  --preferred-challenges http \
  -d sub.example.com
Certbot will print something like this:
Create a file containing just this data:
<token-value>
And make it available on your web server at:
http://sub.example.com/.well-known/acme-challenge/<token-name>
Do not press Enter yet.
Certbot is waiting. We now need Kubernetes to serve that token.
Temporary ACME responder
For the challenge, create a tiny temporary responder.
The exact implementation does not matter. It can be Nginx, BusyBox, a tiny static file server, or any container that can return the token at the expected path.
For example, use a ConfigMap with the challenge token:
apiVersion: v1
kind: ConfigMap
metadata:
  name: acme-challenge
data:
  <token-name>: |
    <token-value>
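If writing YAML for a single key feels heavy, the same ConfigMap can be created straight from the command line. A sketch, using the placeholder token name and value Certbot printed:

```shell
# One literal entry: the key becomes the filename Nginx serves,
# the value becomes the file content.
kubectl -n my-namespace create configmap acme-challenge \
  --from-literal='<token-name>=<token-value>'
```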
Then mount it into a small Nginx container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acme-responder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: acme-responder
  template:
    metadata:
      labels:
        app: acme-responder
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          volumeMounts:
            - name: acme-challenge
              mountPath: /usr/share/nginx/html/.well-known/acme-challenge
              readOnly: true
      volumes:
        - name: acme-challenge
          configMap:
            name: acme-challenge
Expose it with a Service:
apiVersion: v1
kind: Service
metadata:
  name: acme-responder
spec:
  selector:
    app: acme-responder
  ports:
    - port: 80
      targetPort: 80
Then add a temporary Ingress rule for the ACME path:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: acme-responder
spec:
  rules:
    - host: sub.example.com
      http:
        paths:
          - path: /.well-known/acme-challenge/
            pathType: Prefix
            backend:
              service:
                name: acme-responder
                port:
                  number: 80
Apply it:
kubectl -n my-namespace apply -f acme-responder.yaml
Then check the challenge URL yourself:
curl -i http://sub.example.com/.well-known/acme-challenge/<token-name>
You want a plain HTTP 200 OK response containing exactly the token value Certbot gave you.
No redirect.
No HTML error page.
No ingress-controller default backend.
No corporate proxy being "helpful."
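If the public URL misbehaves, it helps to cut DNS and the load balancer out of the picture and talk to the responder Service directly. A debugging sketch, using the names from the manifests above:

```shell
# Forward a local port to the Service and fetch the token directly,
# bypassing DNS, the load balancer, and the Ingress rule.
kubectl -n my-namespace port-forward svc/acme-responder 8080:80 &
PF_PID=$!
sleep 2
curl -s 'http://localhost:8080/.well-known/acme-challenge/<token-name>'
kill "$PF_PID"
```

If this returns the token but the public URL does not, the problem is DNS, the load balancer, or the Ingress rule, not the responder itself.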
Once that works, go back to the Certbot prompt and press Enter.
A few seconds later, Certbot should write the certificate files locally:
/etc/letsencrypt/live/sub.example.com/fullchain.pem
/etc/letsencrypt/live/sub.example.com/privkey.pem
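Before going any further, it does not hurt to confirm what Certbot actually issued:

```shell
# Print the subject and validity window of the new certificate.
sudo openssl x509 -in /etc/letsencrypt/live/sub.example.com/fullchain.pem \
  -noout -subject -dates
```

The notAfter date is the one that belongs in a calendar.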
Packaging the certificate as a Kubernetes Secret
The certificate now exists on the machine where Certbot ran.
The cluster needs it as a TLS Secret.
First copy the files somewhere your user can read them:
sudo cp /etc/letsencrypt/live/sub.example.com/fullchain.pem ./fullchain.pem
sudo cp /etc/letsencrypt/live/sub.example.com/privkey.pem ./privkey.pem
sudo chown "$USER":"$USER" ./fullchain.pem ./privkey.pem
Then create or update the TLS Secret:
kubectl -n my-namespace create secret tls sub-example-com-tls \
  --cert=./fullchain.pem \
  --key=./privkey.pem \
  --dry-run=client -o yaml | kubectl apply -f -
This creates a Secret of type:
kubernetes.io/tls
with the standard keys:
tls.crt
tls.key
The Secret must be in the same namespace as the Ingress that references it. Ingress resources cannot reference TLS Secrets from another namespace.
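You can also read the certificate back out of the Secret to confirm the right material actually landed in the cluster. A sketch, using the Secret name from above:

```shell
# Decode tls.crt from the Secret and print its expiry date.
kubectl -n my-namespace get secret sub-example-com-tls \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -enddate
```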
After the Secret is created, remove the local copies:
rm ./fullchain.pem ./privkey.pem
The private key has done enough travelling for one day.
Referencing the Secret from the real Ingress
Now the real Ingress can use the Secret:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  tls:
    - hosts:
        - sub.example.com
      secretName: sub-example-com-tls
  rules:
    - host: sub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 8080
Apply it:
kubectl -n my-namespace apply -f my-app-ingress.yaml
Then test HTTPS:
curl -Iv https://sub.example.com/
At this point the browser should see a valid Let's Encrypt certificate.
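To see exactly which certificate the cluster is serving, rather than whatever the browser has cached, ask it directly:

```shell
# Fetch the served certificate and print its issuer and expiry.
# -servername sends SNI, so the right certificate comes back even
# when the ingress hosts several domains.
openssl s_client -connect sub.example.com:443 \
  -servername sub.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -enddate
```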
The load balancer in front of the cluster keeps doing its thing. The Ingress terminates TLS using the Secret. The application stays unchanged.
Cleaning up the responder
Once the real Ingress is working, delete the temporary ACME responder:
kubectl -n my-namespace delete ingress acme-responder
kubectl -n my-namespace delete service acme-responder
kubectl -n my-namespace delete deployment acme-responder
kubectl -n my-namespace delete configmap acme-challenge
The responder was only needed for the HTTP-01 challenge. It is not part of the application.
Use staging when debugging
If this is your first time wiring the flow, use Let's Encrypt staging first:
sudo certbot certonly \
  --manual \
  --preferred-challenges http \
  --server https://acme-staging-v02.api.letsencrypt.org/directory \
  -d sub.example.com
The staging certificate will not be trusted by browsers, but it lets you test the DNS, Ingress, and challenge-response flow without burning production attempts.
Once the staging flow works, repeat with the production Let's Encrypt endpoint.
This is boring advice. It is also the advice that keeps you away from rate-limit-shaped sadness.
When not to do this
Worth saying explicitly: this is for development, demos, and short-lived environments.
Do not put this near production.
There are several reasons.
It expires
Let's Encrypt certificates are short-lived.
Today, the default lifetime is 90 days, and Let's Encrypt recommends automated renewal well before expiration. Certificate lifetimes are also scheduled to get shorter over the next few years.
With the manual approach, there is no renewal automation.
The warning email will land in a forgotten inbox, the certificate will expire on a Saturday, and everyone will pretend to be surprised.
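If you do end up running this longer than planned, at least script the expiry check instead of trusting the email. One sketch: openssl's -checkend flag exits non-zero when a certificate expires within a given window, which makes it easy to drop into cron:

```shell
# Warn when the served certificate has fewer than 30 days left.
openssl s_client -connect sub.example.com:443 \
  -servername sub.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -checkend $((30*24*3600)) \
  && echo "certificate valid for at least 30 more days" \
  || echo "renew now"
```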
The challenge responder is a one-off
The responder exists only for the challenge.
If you delete the cluster, change ingress controllers, move DNS, or rebuild the environment before renewal, you will be doing the whole thing again.
By hand.
With a calendar open.
Like it is 2009.
There is no rotation story
You do not get automatic renewal.
You do not get automatic Secret updates.
You do not get monitoring.
You do not get alerting.
You do not get a clean audit trail of certificate lifecycle events.
For real traffic, use cert-manager with an Issuer or ClusterIssuer, and let it manage the certificate and Secret lifecycle inside Kubernetes.
That is the right tool for production.
Why I still use this sometimes
The reason I keep coming back to this approach is honesty about scope.
For a sandbox cluster that exists for a few weeks, a real Let's Encrypt certificate with five minutes of manual work can be the right trade-off.
It is better than a self-signed certificate, which annoys everyone else.
It is faster than installing cert-manager when all I need is a temporary demo endpoint.
And it keeps the cluster simple.
Use the right size of tool for the size of the problem.
For production, automate it.
For a short-lived demo, this is fine.