In OpenShift Container Platform (OCP) 4, most of the functionality is controlled by Operators. To see the currently installed Operators and also their status, use the following command:
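A minimal sketch of that command (assuming you are logged in with sufficient privileges; "co" also works as a short name):

$ oc get clusteroperators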
This will only list the Red Hat Operators that are installed as part of the cluster. These are all controlled by the Cluster Version Operator (CVO), the "Master-Operator" of the cluster that controls all the others.
If you want to list all Operators that were installed via the Operator Lifecycle Manager (OLM), you can use the following command:
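One way to do this is to list the ClusterServiceVersions and Subscriptions across all namespaces (a sketch, assuming the Operators in question were installed through OLM and therefore have a CSV):

$ oc get csv --all-namespaces            # installed ClusterServiceVersions
$ oc get subscriptions --all-namespaces  # the corresponding Subscriptions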
Getting training and exams done in 2020 has been challenging. After reaching my RHCE mid-February, I am now proud to say that I achieved my Red Hat Certified Architect in Infrastructure certification less than 9 months later.
To reach my RHCA, I took the following Red Hat exams. As you can see, it is OpenShift and Ansible all the way down:
EX180 Red Hat Certified Specialist in Containers and Kubernetes
EX280 Red Hat Certified Specialist in OpenShift Administration
EX288 Red Hat Certified Specialist in OpenShift Application Development
EX407 Red Hat Certified Specialist in Ansible Automation
EX447 Red Hat Certified Specialist in Ansible Best Practices
Of course, the journey does not end here as there are quite a few interesting topics still to learn!
While this data is quite helpful for automation (the Solution article also describes some helpful queries), the raw data is not very nice to look at. If you are looking for a graphical presentation of that data, you should check out this wonderful website that is maintained by a Red Hat colleague, with data that is regenerated hourly: www.ocp-upgrade.net
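If you want to query that data yourself, here is a rough sketch (assuming the data in question is the OpenShift update graph served by api.openshift.com; the channel name is just an example):

$ curl -sH 'Accept: application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.6'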
More than once, one of our OpenShift Container Platform customers has approached us and said something along the lines of: "Help, I cannot see the X-Forwarded-For header in my application, our OpenShift Router is probably configured incorrectly!"
In such cases, it is often a good idea to check what is really being forwarded to the Pods in the cluster. For this, I typically use my simonkrenger/echoenv container to print the headers received by the application. In many cases, it turns out that the affected application is a Spring Boot application and that the header is passed correctly to the Pod itself, but the Spring Boot application does not show the header anyway.
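As a rough sketch of such a check (project and route hostname are just examples), you can deploy the container, expose it and send a request through the Router:

$ oc new-app simonkrenger/echoenv
$ oc expose service echoenv
$ curl http://echoenv-my-project.apps.example.com/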
We have observed a behaviour of Spring Boot that leads to the X-Forwarded-For header not being passed to the application, as it is consumed by Spring Boot. In the application.properties of a Spring Boot application, the following setting controls this:
server.use-forward-headers: true
With this configuration, the header is consumed by Spring Boot and is no longer available to the application itself. See also the relevant sections in the Spring documentation. Good to know.
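To see the behaviour for yourself, you can send a request with a crafted header through the Router (the route hostname is just a placeholder):

$ curl -H 'X-Forwarded-For: 203.0.113.10' http://my-app-my-project.apps.example.com/

With server.use-forward-headers set to true, the value typically ends up in the request metadata (for example the remote address) instead of being visible as a raw header in the application.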
Kubernetes uses etcd as the persistent store for API data. As etcd is a distributed key-value store, we can also use command line tools to query this store. The examples in this post are for OpenShift 3.x.
Apart from just using get, it is also possible to perform the following actions on certain keys:
put to write to a key – unless you know what you are doing, don’t touch the Kubernetes data in etcd, as this will manifest in very strange Kubernetes behaviour.
del to delete a key – also, this may break your Kubernetes cluster by introducing inconsistencies.
watch to keep a watch on an object. This is very helpful to track changes on a certain object.
The get action is probably the most helpful functionality for in-depth API debugging directly within etcd.
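As a sketch of what such a query can look like on an OpenShift 3.x master (the endpoint, certificate paths and key prefix are assumptions and may differ in your environment):

$ export ETCDCTL_API=3
$ etcdctl --endpoints=https://master.krenger.local:2379 \
    --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/peer.crt --key=/etc/etcd/peer.key \
    get /kubernetes.io/pods/my-project/ --prefix --keys-only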
Some time ago, I had a curious case of very slow DNS resolution in a container on OpenShift. The symptoms were as follows:
In the PHP application in the container, DNS resolution was very slow with a 5 second delay before the lookup was resolved
In the container itself, DNS resolution for curl was very slow, with a 5 second timeout before the lookup was resolved
However, using dig in the container itself, DNS resolution was instant
Also, on the worker node, the DNS resolution was instant (using both dig and curl)
TL;DR: Since glibc 2.10, glibc performs the IPv4 and IPv6 lookups in parallel. When the IPv6 lookup fails, there is in many cases a 5 second timeout before the result is returned. To work around this, set the "single-request" option in "resolv.conf" (so glibc performs the lookups sequentially instead of in parallel) or disable the IPv6 stack completely.
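The resolv.conf part of the workaround is a single line (in a Pod, this can also be injected via the Pod's dnsConfig, depending on your setup):

options single-request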
On the 28th of November, my colleagues from SBB and I had the honor of speaking at the Open Source Workshop at Deutsche Bahn in Frankfurt.
Deutsche Bahn (the German counterpart of SBB, where I currently work) is looking to invest more in Open Source technology and also in container platforms. This is why they are holding a yearly Open Source Workshop. My colleagues from SBB and I are big supporters of Open Source software (SBB has lots of stuff on GitHub) and we also participate in the OpenShift Container Platform Community Switzerland (also on GitHub).
So in our presentation, we mainly talked about operating OpenShift at scale, our Open Source tools and why we participate in Open Source software. You can find more information on Twitter. We had a lot of fun and are looking forward to joining Deutsche Bahn again next year – if we are invited ;).
So when using NodeSelectors in OpenShift, you’ll also have to set labels on your nodes. You can find more information on labeling nodes in the OpenShift documentation. Here is how you can add or remove a label from a node or pod:
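A short sketch of the corresponding commands (node, pod and label names are just examples):

$ oc label node node3.krenger.local region=primary    # add or update a label on a node
$ oc label node node3.krenger.local region-           # remove the "region" label again
$ oc label pod my-pod-43-d9mo6 environment=dev        # the same syntax works for pods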
So in any larger container orchestrator installation, be it Kubernetes or OpenShift, you will encounter pods that crash regularly and enter the “CrashLoopBackOff” status.
$ oc get pod --all-namespaces
NAMESPACE      NAME                   READY   STATUS             RESTARTS   AGE
[..]
my-project-1   helloworld-11-9w3ud    1/1     Running            0          7h
my-project-2   myapp-simon-43-7macd   0/1     CrashLoopBackOff   3774       9h
Note the pod with status "CrashLoopBackOff" and 3774 restarts.
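To find out why the pod keeps crashing, the logs of the previous (crashed) container instance and the events shown by oc describe are usually the first places to look. A sketch, using the pod from the example above:

$ oc logs myapp-simon-43-7macd -n my-project-2 --previous
$ oc describe pod myapp-simon-43-7macd -n my-project-2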
I recently started working with OpenShift and needed to get a list of all pods on the cluster. I quickly glanced at the documentation but could not find what I wanted. My colleagues quickly pointed me in the right direction:
oc get pod --all-namespaces -o wide
Here is the command with some example output of what to expect:
# oc get pod --all-namespaces -o wide
NAMESPACE             NAME                   READY   STATUS    RESTARTS   AGE   IP              NODE
my-project            my-pod-43-d9mo6        1/1     Running   0          1d    192.168.0.183   node3.krenger.local
yet-another-project   another-pod-43-7g3r0   1/1     Running   0          2d    192.168.0.184   node4.krenger.local
[..]
If you just want to know which pods are on a certain node, use oc adm manage-node:
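For example, using one of the nodes from the output above:

$ oc adm manage-node node3.krenger.local --list-pods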
My name is Simon Krenger and I am a Technical Account Manager (TAM) at Red Hat. I advise our customers on using Kubernetes, Containers, Linux and Open Source.