Using the Gateway API on OpenShift
With the retirement of Ingress NGINX, interest in the Gateway API has grown noticeably, and many people have asked me how to use it on OpenShift. In this blog post, I want to show how to get started with the Gateway API on OpenShift by deploying a Gateway and then exposing an application via an HTTPRoute.
About the Gateway API in OpenShift
Starting with OpenShift 4.19, the Gateway API CustomResourceDefinitions (CRDs) are shipped with the default OpenShift installation. This means that CRDs such as gatewayclasses.gateway.networking.k8s.io are available in OpenShift by default. Different versions of OpenShift implement different versions of the Gateway API. To figure out which version of the Gateway API is available, have a look at the annotation of the CRD:
# Output from a 4.21 cluster
$ oc describe crd gatewayclasses.gateway.networking.k8s.io | grep bundle-version
gateway.networking.k8s.io/bundle-version: v1.3.0
In this case, Red Hat ships Gateway API 1.3 in OpenShift 4.21. There is also a Solution article about this including some more information: https://access.redhat.com/solutions/7135887
Deploying the Envoy Gateway
To actually use the Gateway API, we need to deploy an implementation. This can either be a third-party implementation or, what most Red Hat customers want to use, the Red Hat implementation. The supported way to use the Gateway API on OpenShift is to use the OpenShift Service Mesh Operator to deploy Envoy. Note that we do not actually need to use Service Mesh for anything else; the Operator is only used to deploy the Gateway.
So the first step is to install the OpenShift Service Mesh Operator 3.0 or later using either the CLI or the Web Console. In this blog post, I am deploying OpenShift Service Mesh 3.3, as it is the current version.
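For the CLI route, the Operator can be installed by creating a Subscription. Below is a minimal sketch; the package name, channel, and catalog source shown here are assumptions, so verify them against OperatorHub on your cluster (for example via "oc get packagemanifests -n openshift-marketplace | grep -i servicemesh") before applying:

```yaml
# Hypothetical Subscription for the OpenShift Service Mesh 3 Operator.
# Package name, channel and catalog source are assumptions; verify them
# on your cluster before applying.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator3
  namespace: openshift-operators
spec:
  channel: stable
  name: servicemeshoperator3
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```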
Once that is installed, deploy the following GatewayClass:
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: openshift-default
spec:
  controllerName: openshift.io/gateway-controller/v1
Alongside the Router Pods you can then see the istiod-openshift-gateway Pod being started:
$ oc get po -n openshift-ingress
NAME                                        READY   STATUS    RESTARTS      AGE
istiod-openshift-gateway-55cd94f4c4-gf5js   1/1     Running   0             8s
router-default-68c8886bb5-99sb8             1/1     Running   3 (15m ago)   28m
router-default-68c8886bb5-tszk7             1/1     Running   3 (15m ago)   28m
To then create the actual Gateway (think of it as the equivalent of the Router Pods / HAProxy in the regular OpenShift ingress), we need to specify a wildcard certificate to use. In this case, I'll use a self-signed certificate; in a production system, you would likely use cert-manager or similar to obtain that certificate:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout gateway.key -out gateway.crt -subj "/CN=*.gwapi.apps.krenger.ch" -addext "subjectAltName=DNS:*.gwapi.apps.krenger.ch"
$ oc -n openshift-ingress create secret tls gwapi-wildcard --cert=gateway.crt --key=gateway.key
secret/gwapi-wildcard created
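To double-check that the self-signed certificate actually carries the wildcard SAN (Envoy will present it for all matching hosts), you can print the extension back out. A small sketch using a throwaway key pair under /tmp:

```shell
# Generate a throwaway self-signed wildcard certificate (same flags as above)
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout /tmp/gw.key -out /tmp/gw.crt \
  -subj "/CN=*.gwapi.apps.krenger.ch" \
  -addext "subjectAltName=DNS:*.gwapi.apps.krenger.ch"

# Print the subjectAltName extension to confirm the wildcard entry is present
openssl x509 -in /tmp/gw.crt -noout -ext subjectAltName
```

The second command should show a "DNS:*.gwapi.apps.krenger.ch" entry; if it is missing, Envoy would serve a certificate that does not match the listener hostname.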
Once the certificate is ready, we’ll create a Gateway object using the wildcard hostname:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
  namespace: openshift-ingress
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: https
    hostname: "*.gwapi.apps.krenger.ch"
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: gwapi-wildcard
    allowedRoutes:
      namespaces:
        from: All
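Before creating any routes, it is worth verifying that the controller has accepted and programmed the Gateway. The Gateway API defines a "Programmed" status condition for this, so we can either inspect the object or block until the condition is met:

```
$ oc -n openshift-ingress get gateway example-gateway
$ oc -n openshift-ingress wait gateway/example-gateway --for=condition=Programmed --timeout=120s
```

Once "Programmed" is "True", the data path (the Envoy Pod and its Service) is in place.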
Once applied, this will in turn deploy the Envoy Gateway Pod next to the Ingress Pods:
$ oc get pods -n openshift-ingress
NAME                                                 READY   STATUS    RESTARTS   AGE
example-gateway-openshift-default-6df6877bb4-fv9fc   1/1     Running   0          62s
istiod-openshift-gateway-55cd94f4c4-gf5js            1/1     Running   0          7m21s
[..]
We can also see that a Service of "type: LoadBalancer" has been created for the new Gateway API ingress:
$ oc get service -n openshift-ingress example-gateway-openshift-default
NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP                                                                 PORT(S)                         AGE
example-gateway-openshift-default   LoadBalancer   172.30.243.82   ae2e6699bd38f402e88ee6e31b1194a4-822201684.eu-central-1.elb.amazonaws.com   15021:30193/TCP,443:30201/TCP   106s
If you look into the “example-gateway-openshift-default” Pod, you’ll notice that there are two processes, “pilot-agent” and “envoy”. To my current understanding, “pilot-agent” acts as a lifecycle manager for the Envoy configuration, while the “envoy” process is the actual reverse proxy.
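As a quick sanity check of that split, “pilot-agent” can proxy requests to Envoy’s admin interface. For example, Envoy’s /server_info admin endpoint reports the Envoy version and its current state (the Pod name below is from my cluster and will differ on yours):

```
$ oc exec example-gateway-openshift-default-6df6877bb4-fv9fc -n openshift-ingress -- pilot-agent request GET /server_info
```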
Actually using the Gateway API
Once everything is set up, we can deploy an application and expose it using the Gateway API via an HTTPRoute:
$ oc new-project echoenv
$ oc new-app --name=echoenv --image=quay.io/simonkrenger/echoenv:latest
The Gateway API defines different route types, such as HTTPRoute or GRPCRoute, depending on the traffic we want to route to our application. In this example, we’ll use a simple HTTPRoute, which refers to the Gateway we deployed above and references the application’s Service:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echoenv-route
  namespace: echoenv
spec:
  parentRefs:
  - name: example-gateway
    namespace: openshift-ingress
  hostnames: ["echoenv.gwapi.apps.krenger.ch"]
  rules:
  - backendRefs:
    - name: echoenv
      port: 8080
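Whether the route was actually accepted by the Gateway can be read from the HTTPRoute status: for each parent Gateway, the controller records conditions such as “Accepted” and “ResolvedRefs”. A quick way to inspect them:

```
$ oc -n echoenv get httproute echoenv-route -o jsonpath='{.status.parents[0].conditions}'
```

If “Accepted” is not “True” here, the curl below will fail even though the HTTPRoute object exists.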
After applying that, we’ll see the HTTPRoute deployed, and we can reach our application via the Gateway:
$ oc get httproute
NAME            HOSTNAMES                           AGE
echoenv-route   ["echoenv.gwapi.apps.krenger.ch"]   13m
$ curl -k https://echoenv.gwapi.apps.krenger.ch
{"clientIP":"100.64.0.2","env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","TERM=xterm","HOSTNAME=echoenv-6b88574b56-jxvzz","NSS_SDB_USE_CACHE=no","ECHOENV_PORT_8080_TCP_ADDR=172.30.170.204","ECHOENV_PORT_8080_TCP_PROTO=tcp","KUBERNETES_PORT=tcp://172.30.0.1:443","KUBERNETES_PORT_443_TCP_PORT=443","ECHOENV_SERVICE_PORT=8080","KUBERNETES_SERVICE_PORT=443","KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1","ECHOENV_SERVICE_HOST=172.30.170.204","ECHOENV_SERVICE_PORT_8080_TCP=8080","ECHOENV_PORT=tcp://172.30.170.204:8080","ECHOENV_PORT_8080_TCP_PORT=8080","KUBERNETES_SERVICE_HOST=172.30.0.1","KUBERNETES_SERVICE_PORT_HTTPS=443","KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443","KUBERNETES_PORT_443_TCP_PROTO=tcp","ECHOENV_PORT_8080_TCP=tcp://172.30.170.204:8080","GIN_MODE=release","PORT=8080","HOME=/"],"hostname":"echoenv-6b88574b56-jxvzz","process":{"gid":0,"pid":1,"uid":1000740000},"request":{"header":{"Accept":["*/*"],"User-Agent":["curl/8.15.0"],"X-Envoy-Attempt-Count":["1"],"X-Envoy-Decorator-Operation":["echoenv.echoenv.svc.cluster.local:8080/*"],"X-Envoy-External-Address":["100.64.0.2"],"X-Envoy-Peer-Metadata":["..."],"X-Envoy-Peer-Metadata-Id":["router~10.129.2.12~example-gateway-openshift-default-6df6877bb4-fv9fc.openshift-ingress~openshift-ingress.svc.cluster.local"],"X-Forwarded-For":["100.64.0.2"],"X-Forwarded-Proto":["https"],"X-Request-Id":["62a8c879-9f75-4ddd-a414-48462581157d"]},"host":"echoenv.gwapi.apps.krenger.ch","method":"GET","protocol":"HTTP/1.1","requestURI":"/","url":{"Scheme":"","Opaque":"","User":null,"Host":"","Path":"/","RawPath":"","OmitHost":false,"ForceQuery":false,"RawQuery":"","Fragment":"","RawFragment":""}}}
If you want to dive deeper into the Envoy config that has now been deployed, you can do that via the following pilot-agent request command:
$ oc exec example-gateway-openshift-default-6df6877bb4-fv9fc -n openshift-ingress -- pilot-agent request GET /config_dump
{
"configs": [
{
"@type": "type.googleapis.com/envoy.admin.v3.BootstrapConfigDump",
"bootstrap": {
"node": {
"id": "router~10.129.2.12~example-gateway-openshift-default-6df6877bb4-fv9fc.openshift-ingress~openshift-ingress.svc.cluster.local",
"cluster": "example-gateway-openshift-default.openshift-ingress",
"metadata": {
"OWNER": "kubernetes://apis/apps/v1/namespaces/openshift-ingress/deployments/example-gateway-openshift-default",
"NAMESPACE": "openshift-ingress",
"INTERCEPTION_MODE": "REDIRECT",
"INSTANCE_IPS": "10.129.2.12",
"ISTIO_PROXY_SHA": "92066c0a3b603c894f1528a0222157dac42a5293",
"PROXY_CONFIG": {
"serviceCluster": "istio-proxy",
"statusPort": 15020,
"proxyHeaders": {
[..]