Kubernetes networking is undergoing one of its biggest shifts since Ingress was introduced. The traditional Ingress API is slowly becoming insufficient for modern traffic-management needs, and the community has chosen Gateway API as its future.
Gateway API isn’t just a “new version of Ingress.”
It is a complete redesign of how traffic enters, flows through, and is controlled inside Kubernetes clusters.
In this guide, we will:
- Understand the real reasons Ingress is being replaced
- Explore the Gateway API architecture in depth
- Deploy the full stack with NGINX Gateway Fabric
- Route real traffic across two services
- Understand Route policies, extensibility, and production best practices
- Highlight the differences you must know before migrating in 2026
- Provide GitHub repo + video tutorial for hands-on experience
This blog covers more than the video, including architecture insights, diagrams, production recommendations, and an explanation of NGF's internal behaviour.
Watch the Full Video Tutorial
Why Gateway API Exists (Beyond What Most People Know)
When developers hear “Ingress is deprecated,” the natural question is:
“Why replace something that works?”
The answer is deeper than what the documentation usually states.
Here are the real reasons the Kubernetes community decided Ingress had to evolve:
1. Ingress Could Never Support Multiple Traffic Protocols
Ingress only supports HTTP/HTTPS.
But modern applications need more:
- gRPC
- TCP
- UDP
- TLS passthrough
- mTLS
- WebSockets
- HTTP/3
Vendors hacked these into annotations, which led to massive fragmentation.
Gateway API solves this with separate route types:
- HTTPRoute
- GRPCRoute
- TCPRoute
- UDPRoute
- TLSRoute

Each is designed with first-class protocol support.
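For example, a gRPC service gets its own first-class route type instead of an annotation hack. A minimal sketch (the Gateway, service, and port names here are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: payments-grpc
spec:
  parentRefs:
    - name: my-gateway           # hypothetical Gateway name
  rules:
    - matches:
        - method:
            service: payments.v1.PaymentService   # match on the gRPC service name
      backendRefs:
        - name: payments-svc
          port: 9000
```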
2. Ingress Was Controller-Owned, Not Kubernetes-Owned
This might surprise beginners:
Ingress behavior depended entirely on the vendor’s implementation.
This meant:
- Same YAML, different behavior
- Impossible portability
- No shared standard
- No smooth multi-cloud migration
Gateway API is owned by the Kubernetes SIG-NETWORK group, with vendors collaborating instead of inventing extensions.
3. Ingress Couldn’t Support Multi-Tenancy
Today’s clusters have:
- Multiple teams
- Multiple ingresses
- Shared load balancers
- Platform teams vs application teams
Ingress forced everyone to touch the same resource.
Gateway API introduces a small but powerful line:
```yaml
allowedRoutes:
  namespaces:
    from: Same | Selector | All
```
This finally gives:
- Team separation
- Scoped permissions
- Clean multi-tenant architectures
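For example, a platform team can expose a listener only to namespaces carrying a specific label. A sketch (the label key/value and listener details are illustrative):

```yaml
listeners:
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            team: payments      # only namespaces with this label may attach Routes
```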
4. Ingress Was Too Rigid for Modern Traffic Engineering
Modern systems need:
- Canary rollouts
- Header-based routing
- A/B testing
- Path rewrites
- Traffic splitting
- Circuit breaking
- Retries + timeouts
- Advanced load balancing
With Ingress, everything was vendor-specific annotation hacks wrapped in YAML.
Gateway API gives rich routing features inside actual structured fields.
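As an illustration, header-based routing is expressed in plain structured fields of an HTTPRoute rather than in annotations (the names and header below are illustrative):

```yaml
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /api
        headers:
          - name: region        # route requests carrying region: asia
            value: asia
    backendRefs:
      - name: api-asia          # hypothetical regional backend
        port: 80
```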
Deep Dive: Gateway API Architecture
Gateway API is designed as a 4-layer architecture, each with a clear purpose.

1. GatewayClass: What Engine Will Drive This Gateway?
Defines which controller implementation you want:
- NGINX
- Traefik
- Kong
- Istio
- HAProxy
- AWS ALB
- And more…
Think of it as the controller + data plane provider.
2. Gateway: Where Does External Traffic Enter?
This is the actual load balancer or NodePort that receives traffic.
A Gateway defines:
- Listener ports
- Protocols
- TLS settings
- Which namespaces can attach Routes
Multiple Gateways can use the same class.
One cluster can have:
- A public gateway
- An internal gateway
- A staging gateway
- A team-isolated gateway
This was impossible with Ingress.
3. Routes: How Should Traffic Flow?
Routes define rules like:
- Path matches /green, /blue
- Header contains region: asia
- Method = POST
- gRPC service name matches
- Weighted traffic (canary)
Each route type supports its own protocol.
4. Backends: Where Should the Traffic Go?
Usually:
- Kubernetes Services
- ServiceImport (multi-cluster)
- gRPC backends
- Mesh sidecar endpoints
Gateway API never talks to Pods directly; traffic always flows via Services.
Deploying Gateway API + NGINX Gateway Fabric
Now let’s deploy everything with explanations you won’t find in normal tutorials.
In this lab we will set up:
✔ Gateway API CRDs
✔ NGINX Gateway Fabric (controller + data plane)
✔ Two sample applications (green + blue)
✔ A Gateway
✔ An HTTPRoute for /green and /blue
Step 1: Install Gateway API
Gateway API is not installed by default in Kubernetes, so we need to install the official CRDs (Custom Resource Definitions).
Run this command:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
What this command does
This installs new API types into your cluster, such as:
- GatewayClass → defines which controller will manage Gateways
- Gateway → the actual entry point
- HTTPRoute → routing rules
- GRPCRoute → for gRPC apps
- BackendTLSPolicy → extra TLS controls
- ReferenceGrant → allows cross-namespace references
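For instance, ReferenceGrant is what lets an HTTPRoute in one namespace point at a Service in another. A minimal sketch (the namespace names are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-from-team-a
  namespace: backend            # namespace that owns the Service
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: team-a         # namespace that owns the Route
  to:
    - group: ""                 # core API group (Services)
      kind: Service
```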
Step 2: Install NGINX Gateway Fabric Using Helm
Gateway API needs a controller: something that reads Gateways and Routes and configures the data plane.
In this demo, we will use NGINX Gateway Fabric, one of the most stable Gateway API implementations.
Create a values.yaml file
This file tells NGF how to expose itself.
```yaml
nginxGateway:
  name: nginx-gateway
nginx:
  service:
    type: NodePort
```
What these lines mean
- nginxGateway.name — this is just a name for the controller.
- service.type: NodePort — NGF will be reachable through a NodePort on your nodes.
By default, NGINX Gateway Fabric uses LoadBalancer as the Service type.
Install NGF using Helm
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric \
--create-namespace -n nginx-gateway -f values.yaml
After installation, check the resources:
kubectl get all -n nginx-gateway
kubectl get gatewayclass
You should see a GatewayClass named nginx. This is important because our Gateway will use this class.
Step 3: Deploy the Green & Blue Sample Applications
To test routing, we’ll deploy two simple apps: one green, one blue.
Each app has:
- A Deployment
- A Service
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: green-svc
spec:
  selector:
    app: green-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-deployment
spec:
  selector:
    matchLabels:
      app: green-app
  replicas: 2
  template:
    metadata:
      labels:
        app: green-app
    spec:
      containers:
        - name: app
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
          env:
            - name: GREETING
              value: "Hello from the Green App!"
---
apiVersion: v1
kind: Service
metadata:
  name: blue-svc
spec:
  selector:
    app: blue-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-deployment
spec:
  selector:
    matchLabels:
      app: blue-app
  replicas: 2
  template:
    metadata:
      labels:
        app: blue-app
    spec:
      containers:
        - name: app
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
          env:
            - name: GREETING
              value: "Hello from the Blue App!"
```
Now, run the commands to deploy the applications.
kubectl apply -f apps.yaml
kubectl get pods,svc
You should see:
- green-svc
- blue-svc
Step 4: Create the Gateway (Entry Point)
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: colors-gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Same
```
When NGF sees a Gateway with gatewayClassName: nginx, it automatically updates the Gateway status with the address of the NGINX Service.
If your NGINX Service is:
- LoadBalancer → your Gateway gets the LB IP
- NodePort → your Gateway gets a node IP
- No Service → NGF assigns pod IPs
This dynamic linking is unique to NGF and extremely useful for automation.
kubectl apply -f gateway.yaml
kubectl get gateway
You will see the Gateway's details along with its IP address.
Step 5: Create HTTPRoute for /green and /blue
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: colors-route
spec:
  parentRefs:
    - name: colors-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /green
      backendRefs:
        - name: green-svc
          port: 80
    - matches:
        - path:
            type: PathPrefix
            value: /blue
      backendRefs:
        - name: blue-svc
          port: 80
```
What happens internally:
- Route attaches to Gateway
- NGF validates the route
- NGF generates NGINX config snippet
- The data plane picks up the new config dynamically (zero downtime)
This is a major improvement over classic Ingress.
kubectl apply -f colors-route.yaml
kubectl get httproute
You will see the HTTPRoute created with both rules.
Step 6: Test the Routing
kubectl get gateway
curl http://<IP>/green
curl http://<IP>/blue
Traffic flows as expected.
Gateway API Policies: The Real Power
This is where Gateway API truly outshines Ingress.
Policies let you configure traffic behavior without touching:
❌ Applications
❌ Services
❌ Deployments
Instead, everything is controlled at the Gateway/Route level.
Examples:
BackendTrafficPolicy
- Timeouts
- Retries
- Circuit breaking
- Load-balancing algorithm
- Max connections
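Note that BackendTrafficPolicy is implementation-specific rather than a core Gateway API type. As a rough sketch of the pattern, Envoy Gateway's variant attaches a policy to a Route by reference; the field names below follow its v1alpha1 API and may differ in other implementations:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1   # Envoy Gateway's CRD, not core Gateway API
kind: BackendTrafficPolicy
metadata:
  name: colors-traffic
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: colors-route         # the Route this policy applies to
  loadBalancer:
    type: RoundRobin
  retry:
    numRetries: 3
```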
BackendTLSPolicy
- mTLS
- TLS verification mode
HTTPRoute Filters
- Header add/remove/modify
- URL rewrite
- Redirect
- Traffic mirror
- Traffic split
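These filters live directly on HTTPRoute rules. For example, adding a request header and rewriting a path prefix in one rule (the header name and paths are illustrative):

```yaml
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /old
    filters:
      - type: RequestHeaderModifier
        requestHeaderModifier:
          add:
            - name: x-env        # hypothetical header
              value: prod
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplacePrefixMatch
            replacePrefixMatch: /new   # /old/foo → /new/foo
    backendRefs:
      - name: app-svc
        port: 80
```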
Example: Canary Deployment
```yaml
backendRefs:
  - name: app-v1
    port: 80
    weight: 90
  - name: app-v2
    port: 80
    weight: 10
```
No need for custom annotations, no vendor hacks.
Production Best Practices for Gateway API
Here are insights used in real production clusters:
✔ Use a dedicated Gateway per environment (dev/stage/prod)
✔ Separate public and internal Gateways
✔ Avoid wildcard “allowedRoutes: All” in large orgs
Use namespace selectors instead.
✔ Avoid path-based routing for microservices with overlapping prefixes
Use hostname-based routing instead.
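For hostname-based routing, set hostnames on the HTTPRoute instead of relying on overlapping path prefixes (the hostname and service name are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: orders-route
spec:
  parentRefs:
    - name: colors-gateway
  hostnames:
    - orders.example.com        # each service gets its own host
  rules:
    - backendRefs:
        - name: orders-svc
          port: 80
```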
✔ Enable mutual TLS between Gateway and backend services for sensitive workloads.
✔ Monitor Gateway API objects through:
- NGF metrics
- gateway_api_* Prometheus metrics
- NGINX data plane metrics
✔ Use Gateway API for:
- API Gateways
- North-south traffic
- Ingress replacement
- Multi-tenant clusters
✔ Avoid Gateway API for:
- East-west service-to-service mesh (use Linkerd/Istio)
- Pure TCP/UDP load balancing without discovery
GitHub Repository: kubernetes-basics-to-advanced
Final Thoughts
Gateway API isn’t just the future, it is already production-ready.
With NGINX Gateway Fabric, you get:
- Dynamic configuration updates
- Zero reload routing
- High performance
- Full HTTPRoute feature support
- Clean multi-team separation
- A scalable and modern API
If you start now, your 2026 migration will be smooth and well-architected.



