In OpenShift Container Platform (OCP) 4, most of the functionality is controlled by Operators. To see the currently installed Operators and their status, use the following command:
$ oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.4     True        False         False      12m
cloud-credential                           4.6.4     True        False         False      38m
cluster-autoscaler                         4.6.4     True        False         False      32m
config-operator                            4.6.4     True        False         False      33m
console                                    4.6.4     True        False         False      21m
csi-snapshot-controller                    4.6.4     True        False         False      27m
dns                                        4.6.4     True        False         False      31m
etcd                                       4.6.4     True        False         False      32m
image-registry                             4.6.4     True        False         False      25m
ingress                                    4.6.4     True        False         False      24m
insights                                   4.6.4     True        False         False      33m
kube-apiserver                             4.6.4     True        False         False      30m
kube-controller-manager                    4.6.4     True        False         False      31m
kube-scheduler                             4.6.4     True        False         False      31m
kube-storage-version-migrator              4.6.4     True        False         False      24m
machine-api                                4.6.4     True        False         False      27m
machine-approver                           4.6.4     True        False         False      32m
machine-config                             4.6.4     True        False         False      32m
marketplace                                4.6.4     True        False         False      32m
monitoring                                 4.6.4     True        False         False      23m
network                                    4.6.4     True        False         False      33m
node-tuning                                4.6.4     True        False         False      33m
openshift-apiserver                        4.6.4     True        False         False      27m
openshift-controller-manager               4.6.4     True        False         False      24m
openshift-samples                          4.6.4     True        False         False      26m
operator-lifecycle-manager                 4.6.4     True        False         False      32m
operator-lifecycle-manager-catalog         4.6.4     True        False         False      32m
operator-lifecycle-manager-packageserver   4.6.4     True        False         False      27m
service-ca                                 4.6.4     True        False         False      33m
storage                                    4.6.4     True        False         False      32m
You can find the description of the default Operators in the documentation.
This will only list the Red Hat Operators that are installed as part of the cluster. These are all controlled by the ClusterVersionOperator, the "master" Operator of the cluster that controls all the others.
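If one of these Operators reports Degraded or Progressing for a longer time, its status conditions usually tell you why. For example (a sketch, assuming jq is installed; "authentication" is just an example name):
$ oc get clusteroperator authentication -o json | jq '.status.conditions'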
If you want to list all Operators that were installed via the Operator Lifecycle Manager (OLM), you can use the following command:
$ oc get subscriptions --all-namespaces
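To also see the exact versions and the install status of these Operators, you can additionally list the corresponding ClusterServiceVersions, for example:
$ oc get clusterserviceversions --all-namespaces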
Getting training and exams done in 2020 has been challenging. After reaching my RHCE mid-February, I am now proud to say that I achieved my Red Hat Certified Architect in Infrastructure certification less than 9 months later.
To reach my RHCA, I took the following Red Hat exams. As you can see, it is OpenShift and Ansible all the way down:
- EX180 Red Hat Certified Specialist in Containers and Kubernetes
- EX280 Red Hat Certified Specialist in OpenShift Administration
- EX288 Red Hat Certified Specialist in OpenShift Application Development
- EX407 Red Hat Certified Specialist in Ansible Automation
- EX447 Red Hat Certified Specialist in Ansible Best Practices
Of course, the journey does not end here as there are quite a few interesting topics still to learn!
With OpenShift 4, Red Hat introduced Red Hat Enterprise Linux CoreOS. It is a very minimalist operating system, focused on running container workloads.
This new minimalism comes with some challenges: there are no more RPM packages, and most of the tools we know and love are missing! Luckily, there is the Red Hat supplied toolbox container, which contains all the necessary tools and is nicely integrated.
So to start the toolbox, first use oc debug node/<nodename>. This will start a privileged container on the node you specify, mount the host file system on /host and drop you into a shell. From there, chroot into the host and start the toolbox:
$ oc debug node/worker-0.lab.openshift.krenger.ch
Starting pod/worker-0labopenshiftkrengerch-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.2# toolbox
Container started successfully. To exit, type 'exit'.
Now we are running in the toolbox container on our CoreOS host with all the tools we know at our disposal. For example, sosreport will generate a sosreport in /host/var/tmp/, which means it will be accessible in /var/tmp/ on the CoreOS host itself.
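To get the generated sosreport off the node, one option is scp with the core user (a sketch; this assumes you have SSH access to the node, and the actual file name will differ):
$ scp core@worker-0.lab.openshift.krenger.ch:/var/tmp/sosreport-*.tar.xz .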
For OpenShift 4, the upgrade paths are kept in the cincinnati-graph-data repository as YAML files and then exposed via an API.
There is a Red Hat Solution describing how this data can be queried via api.openshift.com and how you can use this data in your automation:
$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' | jq .
While this data is quite helpful for automation (the Solution also describes helpful queries), the raw data is not very nice to look at. If you are looking for a graphical representation of that data, you should check out this wonderful website, maintained by a Red Hat colleague and regenerated hourly: www.ocp-upgrade.net
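For example, to list all versions you can upgrade to from a given version, you can resolve the index of that version in the nodes list and then follow the edges of the graph (a sketch; channel and version are just examples):
$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' \
    | jq -r --arg v "4.2.0" '. as $g
        | ([$g.nodes[].version] | index($v)) as $idx
        | $g.edges[] | select(.[0] == $idx)
        | $g.nodes[.[1]].version'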
So here is another one from the trenches.
More than once, one of our OpenShift Container Platform customers approached us and said something along the lines of: "Help, I cannot see the X-Forwarded-For header in my application, our OpenShift Router is probably configured incorrectly!"
In such cases, it is often a good idea to check what is really being forwarded to the Pods in the cluster. For this, I typically use my simonkrenger/echoenv container to print the headers received by the application. In many cases, it turns out that the affected application is a Spring Boot application and that the header is passed correctly to the Pod itself, but the Spring Boot application still does not see it.
We have observed a behaviour of Spring Boot that leads to the X-Forwarded-For header not being passed to the application, as it is consumed by Spring Boot itself. In the application.properties of a Spring Boot application, the following setting controls this (shown here as a sketch for Spring Boot 2.2 and later, where the property is called server.forward-headers-strategy; older versions used server.use-forward-headers):
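# application.properties: with the NATIVE strategy, the servlet container
# itself evaluates and strips the X-Forwarded-* headers
server.forward-headers-strategy=NATIVE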
This configuration leads to the header being consumed by Spring Boot and thus no longer being available in the application. See also the relevant sections in the Spring documentation. Good to know.
Kubernetes uses etcd as the persistent store for API data. As etcd is a distributed key-value store, we can also use command line tools to query this store. The examples in this post are for OpenShift 3.x.
Apart from just using get, there is also the possibility to perform the following actions on certain keys:
- put to write to a key – unless you know what you are doing, don't touch the Kubernetes data in etcd, as this will manifest in very strange Kubernetes behaviour.
- del to delete a key – this may also break your Kubernetes cluster by introducing inconsistencies.
- watch to keep a watch on an object. This is very helpful to track changes on a certain object.
The get action is probably the most helpful functionality for in-depth API debugging directly within etcd.
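As a sketch, such a get query on an OpenShift 3.x master could look like this (the endpoint and certificate paths are typical defaults and may differ in your environment):
$ export ETCDCTL_API=3
$ etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/peer.crt --key=/etc/etcd/peer.key \
    get /kubernetes.io/namespaces/default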
For editing YAML, be it for OpenShift / Kubernetes or Ansible, having your editor set up right can help to avoid common mistakes. So here is the minimalistic config in my ~/.vimrc to make working with YAML files easier. I am sure there are even more plugins or settings available, but this minimal set of commands works fine for me:
filetype indent plugin on
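For YAML specifically, a small addition like the following is useful (a sketch; the exact values are a matter of taste and an assumption on my part, not necessarily the complete config):
" Two-space indentation with spaces instead of tabs for YAML files
autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab
" Highlight the current column to make indentation levels easier to follow
autocmd FileType yaml setlocal cursorcolumn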
Some time ago, I had a curious case of very slow DNS resolution in a container on OpenShift. The symptoms were as follows:
- In the PHP application in the container, DNS resolution was very slow, with a 5 second delay before the lookup was resolved
- In the container itself, DNS resolution for curl was very slow, with a 5 second timeout before the lookup was resolved
- However, using dig in the container itself, DNS resolution was instant
- Also, on the worker node, DNS resolution was instant (using both curl and dig)
TL;DR: Since glibc 2.10, glibc performs the IPv4 and IPv6 lookups in parallel. When the IPv6 lookup fails, in many cases there is a 5 second timeout before the result is returned. The workaround is to make glibc send the lookups sequentially by setting "options single-request" in "resolv.conf", or to disable the IPv6 stack completely.
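As a sketch, the resolv.conf workaround looks like this (see resolv.conf(5) for details):
# /etc/resolv.conf: send the A and AAAA queries sequentially instead of in parallel
options single-request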
In the past few months, on all my machines I have replaced Docker with Podman and mostly the transition has been quite smooth. There are still some rough edges here and there, but the overall experience of using Podman has been great!
However, when trying to start a very simple container, one often runs into the following issue:
$ podman run -p80:80 nginx:latest
Error: error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]
The error message looks very cryptic, but the issue is quite simple: As a regular user, one is typically not allowed to bind ports < 1024. So by trying to bind port 80, you will get the error above.
The fix is trivial: just use a port number of 1024 or above:
$ podman run -p8080:80 -d nginx:latest
$ curl localhost:8080
Welcome to nginx!
If you really need to use a port number lower than 1024, there are multiple ways to configure that:
- Set net.ipv4.ip_unprivileged_port_start=80 or similar in your sysctl configuration
- Add the CAP_NET_BIND_SERVICE capability to your process or user
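As a sketch, the sysctl variant looks like this (the value 80 is just an example; persist the setting in /etc/sysctl.d/ if it should survive a reboot):
# Allow unprivileged processes to bind to ports from 80 upwards
$ sudo sysctl net.ipv4.ip_unprivileged_port_start=80
$ podman run -p80:80 -d nginx:latest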