Nginx Ingress Expires in March 2026 — Migrate to Gateway API with Envoy Gateway
Full setup with TLS (cert-manager), wildcard domains, automatic DNS (ExternalDNS), and real-world debugging
Kubernetes Ingress has served us well, but it's being replaced by something better: the Gateway API. This new standard offers more power, better extensibility, and a cleaner separation of concerns.
In this guide, I'll walk you through a complete production-ready setup using:
- Envoy Gateway as the Gateway API controller
- Cert-manager for automated TLS certificates (Let's Encrypt)
- Wildcard and normal domain support
- ExternalDNS for automatic Route53 DNS records
- Real-world debugging tips from actual production issues
This guide is based on EKS, but the concepts apply to any Kubernetes cluster.
Architecture Overview
Before diving in, let's understand how traffic flows:
TLS and DNS are handled automatically:
- TLS: cert-manager → Let's Encrypt → Secret → Gateway
- DNS: ExternalDNS → Route53
Step 1: Install Gateway API CRDs
Gateway API Custom Resource Definitions must be installed before Envoy Gateway.
```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
```
Verify the installation:
```bash
kubectl get crd | grep gateway
```
You should see CRDs like **gateways.gateway.networking.k8s.io** and **httproutes.gateway.networking.k8s.io**.
Step 2: Install Envoy Gateway
Create the namespace:
```bash
kubectl create namespace envoy-gateway-system
```
Install via Helm:
```bash
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system
```
Verify the installation:
```bash
kubectl -n envoy-gateway-system get pods
kubectl get gatewayclass
```
You should see a **GatewayClass** named **envoy** — this is your controller.
Step 3: Create a Gateway
The Gateway is your entry point, replacing the Ingress controller. Think of it as the "listener" configuration.
Option A: Per-Application Gateway
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: myapp-gateway
  namespace: myapp
spec:
  gatewayClassName: envoy
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Same
  - name: https
    protocol: HTTPS
    port: 443
    hostname: myapp.example.com
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: myapp-tls
    allowedRoutes:
      namespaces:
        from: Same
```
Option B: Shared Gateway with Wildcard Certificate (Recommended)
For multiple applications sharing one load balancer:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge-wildcard
  namespace: envoy-gateway-system
spec:
  gatewayClassName: envoy
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
  - name: https
    protocol: HTTPS
    port: 443
    hostname: "*.example.com"
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: All
```
Check the Gateway status:
```bash
kubectl get gateway -A
kubectl describe gateway edge-wildcard -n envoy-gateway-system
```
Look for *Programmed: True* in the status.
Step 4: Install cert-manager
Install cert-manager:
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  -n cert-manager --create-namespace \
  --set crds.enabled=true
```
Enable Gateway API support:
```bash
helm upgrade cert-manager jetstack/cert-manager \
  -n cert-manager \
  --reuse-values \
  --set config.enableGatewayAPI=true
```
Restart cert-manager to apply the change:
```bash
kubectl -n cert-manager rollout restart deploy/cert-manager
```
Step 5: Create Let's Encrypt ClusterIssuers
Staging Issuer (Test First!)
Always test with staging to avoid rate limits:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: admin@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-key
    solvers:
    - dns01:
        route53:
          region: eu-central-1
          hostedZoneID: ZXXXXXXXXXX
```
Production Issuer:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: admin@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - dns01:
        route53:
          region: eu-central-1
          hostedZoneID: ZXXXXXXXXXX
```
Note: For Route53 DNS-01 challenges, cert-manager needs IAM permissions to write TXT records in your hosted zone. On EKS, use IRSA (IAM Roles for Service Accounts) to grant this access without static credentials.
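As a rough sketch, the IRSA wiring usually comes down to one annotation on the cert-manager ServiceAccount. The account ID and role name below are placeholders; create the role with your own trust policy for the cluster's OIDC provider:

```yaml
# Hypothetical ServiceAccount annotation binding cert-manager to an IAM role.
# The role's policy must allow route53:GetChange, route53:ChangeResourceRecordSets,
# and route53:ListResourceRecordSets on your hosted zone.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-manager
  namespace: cert-manager
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/cert-manager-route53
```

With the Helm chart you can set the same annotation through `serviceAccount.annotations` instead of editing the ServiceAccount by hand.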
Step 6: Request Certificates
Normal Domain Certificate:
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-cert
  namespace: myapp
spec:
  secretName: myapp-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  dnsNames:
  - myapp.example.com
```
Wildcard Domain Certificate:
For a shared gateway serving multiple subdomains:
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-cert
  namespace: envoy-gateway-system
spec:
  secretName: wildcard-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  dnsNames:
  - "*.example.com"
```
Check certificate status:
```bash
kubectl get certificate -A
kubectl describe certificate wildcard-cert -n envoy-gateway-system
```
Wait for **Ready: True**.
Step 7: Create HTTPRoutes (Ingress Replacement)
HTTPRoute is the direct replacement for Ingress rules.
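For orientation, here is what a comparable legacy Ingress might have looked like for the same app. This is shown purely for comparison; the nginx class is an assumption, not part of this setup:

```yaml
# Hypothetical pre-migration Ingress, for comparison only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
```

The host, path, backend, and TLS sections map onto HTTPRoute `hostnames`, `rules.matches`, `backendRefs`, and the Gateway's TLS listener respectively.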
Basic HTTPRoute:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
  namespace: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
    external-dns.alpha.kubernetes.io/target: your-lb.elb.amazonaws.com
spec:
  parentRefs:
  - name: edge-wildcard
    namespace: envoy-gateway-system
    sectionName: https
  hostnames:
  - myapp.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: myapp-service
      port: 80
```
Key Points:
- **sectionName: https** — attach only to the HTTPS listener (not HTTP)
- **namespace** in parentRefs — required for cross-namespace gateway references
- **external-dns** annotations — enable automatic DNS record creation
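One related detail: the Gateway's `allowedRoutes: from: All` is what permits routes from other namespaces to attach. Cross-namespace **backendRefs** (an HTTPRoute pointing at a Service in a different namespace) additionally require a ReferenceGrant in the target namespace. A sketch with hypothetical namespace names:

```yaml
# Hypothetical ReferenceGrant: allows HTTPRoutes in "myapp" to reference
# Services in "shared-backend". Not needed for the single-namespace example above.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-myapp-routes
  namespace: shared-backend
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: myapp
  to:
  - group: ""
    kind: Service
```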
Step 8: HTTP → HTTPS Redirect
Create a global redirect for all HTTP traffic:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-to-https-redirect
  namespace: envoy-gateway-system
spec:
  parentRefs:
  - name: edge-wildcard
    sectionName: http
  hostnames:
  - "*.example.com"
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
```
Step 9: Configure ExternalDNS
Update your ExternalDNS deployment to support Gateway API:
```yaml
args:
- --source=service
- --source=ingress
- --source=gateway-httproute
- --domain-filter=example.com
- --provider=aws
- --policy=sync
- --registry=txt
```
Annotate HTTPRoutes for DNS:
```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
    external-dns.alpha.kubernetes.io/target: your-lb.elb.amazonaws.com
```
Verify DNS is working:
```bash
kubectl -n external-dns logs deploy/external-dns
dig +short myapp.example.com
```
Step 10: Real-World Debugging Guide
Here's where theory meets practice. These are actual issues I encountered and how to solve them.
Issue 1: HTTPS Connection Reset
Symptom
```bash
curl -I https://your-lb.elb.amazonaws.com -H 'Host: myapp.example.com'
# curl: (35) Recv failure: Connection reset by peer
```
Cause: SNI (Server Name Indication) mismatch. When you use `-H 'Host:'`, the TLS SNI is still set to the load balancer hostname, not your application hostname. The Gateway listener expects an SNI matching `*.example.com`.
Solution — Test correctly:
```bash
# Option 1: Use --resolve
curl -Ik https://myapp.example.com \
  --resolve myapp.example.com:443:10.0.0.1

# Option 2: Use actual DNS (once ExternalDNS has updated)
curl -Ik https://myapp.example.com
```
Verify TLS is working:
```bash
openssl s_client -connect your-lb.elb.amazonaws.com:443 \
  -servername myapp.example.com
```
Issue 2: Getting 307 Instead of 301 Redirect
Symptom:
```bash
curl -I http://myapp.example.com
# HTTP/1.1 307 Temporary Redirect (not 301!)
```
Cause: Your HTTPRoute is attached to both the HTTP and HTTPS listeners because you didn't specify `sectionName`. Your app route takes precedence over the redirect route.
Solution — Add sectionName to your app's HTTPRoute:
```yaml
spec:
  parentRefs:
  - name: edge-wildcard
    namespace: envoy-gateway-system
    sectionName: https
```
Issue 3: Backend Application Redirects (ArgoCD, SonarQube, etc.)
Symptom: You get 307 Temporary Redirect from the application itself, even when routing looks correct.
Cause: Many applications (ArgoCD, Grafana, etc.) have built-in TLS redirect. When Envoy terminates TLS and sends plain HTTP to the backend, the app sees HTTP and redirects.
Solution for ArgoCD:
First, route to the HTTP port (80), not HTTPS (443):
```yaml
backendRefs:
- name: argocd-server
  port: 80
```
Then, disable ArgoCD's internal TLS redirect:
```bash
kubectl patch cm argocd-cmd-params-cm -n argocd \
  --type merge -p '{"data":{"server.insecure":"true"}}'
kubectl rollout restart deployment argocd-server -n argocd
```
Traffic Flow Explained:
```
Client ──HTTPS──▶ Envoy Gateway ──HTTP──▶ ArgoCD:80
                  (TLS terminated)        (server.insecure=true)
```
Issue 4: Gateway Shows Programmed=False
Symptom:
```bash
kubectl get gateway
# NAME   CLASS   ADDRESS   PROGRAMMED
# edge   envoy             False
```
Cause: The Envoy data plane was not created, or a referenced TLS secret was not found.
Debug steps:
```bash
# Check Envoy Gateway controller logs
kubectl -n envoy-gateway-system logs deploy/envoy-gateway

# Check if envoy proxy pods exist
kubectl -n envoy-gateway-system get pods -l app.kubernetes.io/name=envoy

# Check if the TLS secret exists
kubectl get secret wildcard-tls -n envoy-gateway-system
```
Issue 5: Certificate Not Ready
Symptom:
```bash
kubectl get certificate
# NAME      READY   SECRET   AGE
# my-cert   False            5m
```
Debug steps:
```bash
# Check certificate status
kubectl describe certificate my-cert

# Check cert-manager logs
kubectl -n cert-manager logs deploy/cert-manager

# Check the certificate request
kubectl get certificaterequest
kubectl describe certificaterequest my-cert-xxxxx
```
Common causes:
- DNS challenge failing (check Route53 permissions)
- Rate limited by Let's Encrypt (use staging first!)
- Wrong hosted zone ID
Issue 6: ExternalDNS Not Creating Records
Symptom: DNS records not appearing in Route53.
Debug:
```bash
kubectl -n external-dns logs deploy/external-dns
```
Common causes:
- Missing `--source=gateway-httproute` in the ExternalDNS args
- Missing annotations on the HTTPRoute
- IAM permissions for Route53
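If IAM is the suspect, compare your role against the Route53 policy described in the ExternalDNS AWS documentation. It looks roughly like this (the zone ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/ZXXXXXXXXXX"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
```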
Useful Commands Cheat Sheet
```bash
# Gateway status
kubectl get gateway -A
kubectl describe gateway <name> -n <namespace>

# HTTPRoute status
kubectl get httproute -A
kubectl describe httproute <name> -n <namespace>

# Certificate status
kubectl get certificate -A
kubectl get secret -A | grep tls

# Envoy Gateway logs
kubectl -n envoy-gateway-system logs deploy/envoy-gateway

# Envoy Proxy logs (per gateway)
kubectl -n envoy-gateway-system logs -l gateway.envoyproxy.io/owning-gateway-name=<gateway-name>

# Test TLS
openssl s_client -connect <lb-hostname>:443 -servername <app-hostname>

# Test with correct SNI
curl -Ik https://<app-hostname> --resolve <app-hostname>:443:<lb-ip>
```
Architecture Diagram: Shared Gateway
```
                    ┌─────────────────────────────────────────────┐
                    │  envoy-gateway-system namespace             │
                    │                                             │
                    │  Gateway: edge-wildcard                     │
Internet ──────────▶│  ├── HTTP  :80  → redirect to HTTPS         │
                    │  └── HTTPS :443 (*.example.com)             │
                    │                                             │
                    │  Secret: wildcard-tls (Let's Encrypt)       │
                    └─────────────────────────────────────────────┘
                                        │
            ┌───────────────────────────┼───────────────────────────┐
            ▼                           ▼                           ▼
  ┌─────────────────┐        ┌─────────────────┐        ┌─────────────────┐
  │ argocd namespace│        │  sonarqube ns   │        │   grafana ns    │
  │                 │        │                 │        │                 │
  │  HTTPRoute      │        │  HTTPRoute      │        │  HTTPRoute      │
  │  → argocd-server│        │  → sonarqube    │        │  → grafana      │
  └─────────────────┘        └─────────────────┘        └─────────────────┘
```
Conclusion
You now have a production-ready edge stack:
✅ Envoy Gateway — Future-proof Gateway API implementation
✅ Wildcard TLS — One certificate for all subdomains
✅ Shared Load Balancer — Cost-effective multi-app setup
✅ Automatic DNS — ExternalDNS manages Route53
✅ HTTP→HTTPS Redirect — Secure by default
✅ Real debugging skills — Because production is never smooth
Gateway API + Envoy Gateway is the future of Kubernetes networking. It's cleaner, more powerful, and more extensible than Ingress. Combined with cert-manager and ExternalDNS, you have a fully automated, production-ready edge stack.
Found this helpful? Follow me for more Kubernetes and DevOps content!