Custom Error Pages for Internal NGINX Ingress Controller on AKS

by Maciej, October 2024


Managing HTTP errors in production applications is crucial for enhancing the user experience. When using the NGINX Ingress Controller on Azure Kubernetes Service (AKS), you can implement custom error pages for specific HTTP responses, such as 404 or 503 errors. In this article, we will walk through setting up an internal NGINX Ingress Controller with custom error pages.

The Ingress Controller is responsible for routing external traffic to services within a Kubernetes cluster. The internal version of the Ingress Controller communicates using private IP addresses, limiting accessibility to internal networks only, thereby enhancing security for your applications.
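Everything about the internal load balancer is driven by annotations on the controller's Service. As a hedged sketch (the subnet name is an assumption, not part of this setup), you can also place the private frontend IP in a specific subnet of the cluster's virtual network:

```yaml
# Fragment of the controller Service configuration (sketch).
# "snet-ingress" is a hypothetical subnet name in the AKS VNet.
controller:
  service:
    annotations:
      # Request a private (internal) Azure Load Balancer
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # Optionally pin the frontend IP to a specific subnet
      service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "snet-ingress"
```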

To install the internal NGINX Ingress Controller on AKS, we use Helm along with specific configurations. Key parameters in this installation include the Ingress class, private IPs via Azure Load Balancer, and controller replication settings.

Helm Installation Command:

helm upgrade --install ingress-nginx-internal ingress-nginx/ingress-nginx \
  --namespace ingress-nginx-internal \
  --set controller.ingressClassResource.name=internal-ingress-nginx \
  --set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx-internal" \
  --set controller.ingressClassResource.enabled=true \
  --set controller.ingressClassByName=true \
  --set controller.replicaCount=1 \
  --set controller.watchNamespace=ingress-nginx-internal \
  --set controller.service.annotations."service.beta.kubernetes.io/azure-load-balancer-internal"="true" \
  -f values.yaml

This command applies the settings from the values.yaml file, which contains key configurations such as the replica count, the Azure Load Balancer annotations, and the custom error page settings. Note that the --set flags take precedence over any overlapping keys in the file.

The values.yaml file defines the key settings for the NGINX Ingress Controller, including the custom error pages. Below is its full content as used in this setup:

controller:
  config:
    custom-http-errors: "404,503" # HTTP error codes intercepted by the controller (404, 503)
  ingressClass: internal-ingress-nginx # Specifies the Ingress class
  extraArgs:
    ingress-class: internal-ingress-nginx
  ingressClassResource:
    name: internal-ingress-nginx
    controllerValue: "k8s.io/ingress-nginx-internal"
    enabled: true
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true" # Internal load balancer setup
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
      service.beta.kubernetes.io/azure-pls-create: "true"
      service.beta.kubernetes.io/azure-pls-name: internal-ingress-nginx
  replicaCount: 1 # Sets the number of replicas to 1
  watchNamespace: ingress-nginx-internal # Limits NGINX to watching the internal namespace only
  nodeSelector:
    "kubernetes.io/os": linux # Ensures that the pods run on Linux nodes
  admissionWebhooks:
    patch:
      nodeSelector:
        "kubernetes.io/os": linux
  # metrics:
  #   enabled: true
  #   podAnnotations:
  #     "prometheus.io/scrape": "true"
  #     "prometheus.io/port": "10254"
defaultBackend:
  enabled: true # Enables the default backend that serves the custom error pages
  image:
    registry: registry.k8s.io
    image: ingress-nginx/custom-error-pages
    tag: v1.0.1@sha256:d8ab7de384cf41bdaa696354e19f1d0efbb0c9ac69f8682ffc0cc008a252eb76
  extraVolumes:
    - name: custom-error-pages # Volume backed by the ConfigMap defined below
      configMap:
        name: custom-error-pages
        items:
          - key: "404"
            path: "404.html"
          - key: "503"
            path: "503.html"
  extraVolumeMounts:
    - name: custom-error-pages
      mountPath: /www # The custom-error-pages image reads the error pages from /www
  nodeSelector:
    "kubernetes.io/os": linux # Runs on Linux nodes
  labels:
    app.kubernetes.io/part-of: ingress-nginx-internal
  • Custom HTTP Errors:
    This setting tells the controller to intercept responses with HTTP status codes 404 and 503 and hand them to the default backend, which serves the custom pages.
custom-http-errors: "404,503"
  • Ingress Class & Resource:
    The Ingress class is set to internal-ingress-nginx and assigned the controller value k8s.io/ingress-nginx-internal. This ensures that only Ingress resources referencing this class are handled by this controller instance.
ingressClass: internal-ingress-nginx
ingressClassResource:
  name: internal-ingress-nginx
  controllerValue: "k8s.io/ingress-nginx-internal"
  • Azure Load Balancer Configuration:
    These annotations provision an internal Azure load balancer with a private IP. The load balancer exposes the controller only inside the virtual network, adding a layer of security by not exposing it publicly.
service:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
  • Custom Error Pages (defaultBackend):
    The defaultBackend section runs the custom error page image, with the HTML pages mounted from a ConfigMap into the default backend pod.
defaultBackend:
  extraVolumes:
    - name: custom-error-pages
      configMap:
        name: custom-error-pages
        items:
          - key: "404"
            path: "404.html"
          - key: "503"
            path: "503.html"
  extraVolumeMounts:
    - name: custom-error-pages
      mountPath: /www

The above configuration ensures that your NGINX Ingress Controller can serve custom HTML pages in case of 404 (Not Found) or 503 (Service Unavailable) errors.
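For the custom error pages to take effect on real traffic, an application's Ingress must reference the internal class. A minimal sketch, where the hostname, Service name, and port are assumptions rather than part of this setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app # hypothetical application Ingress
  annotations:
    # Optional: override the intercepted error codes for this Ingress only
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
spec:
  ingressClassName: internal-ingress-nginx # must match ingressClassResource.name
  rules:
    - host: app.internal.example.com # assumed internal hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app # hypothetical backend Service
                port:
                  number: 80
```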

Once the values.yaml file is defined, create a ConfigMap that serves as the data source for the custom HTML error pages. It must live in the same namespace as the controller release (here, ingress-nginx-internal) so the default backend pod can mount it.

Example ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-error-pages
  namespace: ingress-nginx-internal
data:
  "404": | # keys must be quoted so they are parsed as strings, matching the volume items
    <!DOCTYPE html>
    <html>
      <head><title>Page Not Found</title></head>
      <body>
        <h1>Page Not Found</h1>
        <p>The page you're looking for doesn't exist.</p>
        <p><a href="/">Return to home.</a></p>
      </body>
    </html>
  "503": |
    <!DOCTYPE html>
    <html>
      <head><title>Service Unavailable</title></head>
      <body>
        <h1>Service Unavailable</h1>
        <p>We're currently experiencing technical issues.</p>
        <p>Please try again later.</p>
      </body>
    </html>


These custom HTML pages for 404 and 503 errors are more user-friendly and provide useful information to the end-user, such as a link back to the home page or a notice about ongoing technical issues.

By following these steps, you can deploy custom error pages with the NGINX Ingress Controller, improving the user experience and adding a professional touch to your application. Communicating clearly during service downtime or page errors is a crucial aspect of any production-level deployment.
