jq: Delete an element from an array

When working with JSON data, I typically use jq to mangle the data. I keep this post as a reference for myself on how to remove an element from a JSON list or array using jq.

Given we have the following array:

$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq
{
  "hello": "world",
  "myarray": [
    "a",
    "b",
    "c"
  ]
}

To remove an element from the array, use the del function with the select function to select a single element:

jq 'del(.myarray[] | select(. == "b"))'

So when applying this to the above array, we can remove “b” from the array like so:

$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq 'del(.myarray[] | select(. == "b"))'
{
  "hello": "world",
  "myarray": [
    "a",
    "c"
  ]
}
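
If you know the position of the element rather than its value, del also accepts an index path directly (a small variation of my own, not covered above); this produces the same result:

$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq 'del(.myarray[1])'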

Docker Desktop for Mac: SSH into the Docker VM

As you may know, Docker Desktop on macOS runs a Linux VM in the background to host containers (since containers are a Linux concept). However, that VM is well hidden from view and you typically only interact with it when you start Docker Desktop or when you need to clean up images in the VM itself.

Sometimes you’ll want a shell inside that VM, but that turns out to be more complicated than I initially expected. There is, however, an easily accessible debug shell.

  • First, open a terminal and use socat to connect to the VM’s debug shell socket with the following command:
$ socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer
  • socat will print a line like “PTY is /dev/ttys010”, which you can then connect to using screen in another terminal window:
$ screen /dev/ttys0xx

So that will look something like this:

$ socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer
2021/01/02 21:28:43 socat[23508] N opening connection to LEN=73 AF=1 "/Users/simon/Library/Containers/com.docker.docker/Data/debug-shell.sock"
2021/01/02 21:28:43 socat[23508] N successfully connected from local address LEN=16 AF=1 ""
2021/01/02 21:28:43 socat[23508] N successfully connected via
2021/01/02 21:28:43 socat[23508] N PTY is /dev/ttys010
2021/01/02 21:28:43 socat[23508] N starting data transfer loop with FDs [5,5] and [6,6]

$ screen /dev/ttys010
/ #
/ # uname -a
Linux docker-desktop 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 Linux

The VM is a very stripped down Alpine image with no package manager available, so you’ll have to make do with what is available.

Quit with CTRL-D, which will also close the socat socket. Thanks to Tatsushi for figuring it out in this GitHub Gist.
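
As an aside, another frequently mentioned way to get a shell in the VM (not from the original post, and not verified here) is to run a privileged container that joins the VM’s PID namespace and enters PID 1:

$ docker run -it --rm --privileged --pid=host justincormack/nsenter1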

OpenShift 4 – List installed Operators

In OpenShift Container Platform (OCP) 4, most of the functionality is controlled by Operators. To see the currently installed Operators and their status, use the following command:

$ oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.4     True        False         False      12m
cloud-credential                           4.6.4     True        False         False      38m
cluster-autoscaler                         4.6.4     True        False         False      32m
config-operator                            4.6.4     True        False         False      33m
console                                    4.6.4     True        False         False      21m
csi-snapshot-controller                    4.6.4     True        False         False      27m
dns                                        4.6.4     True        False         False      31m
etcd                                       4.6.4     True        False         False      32m
image-registry                             4.6.4     True        False         False      25m
ingress                                    4.6.4     True        False         False      24m
insights                                   4.6.4     True        False         False      33m
kube-apiserver                             4.6.4     True        False         False      30m
kube-controller-manager                    4.6.4     True        False         False      31m
kube-scheduler                             4.6.4     True        False         False      31m
kube-storage-version-migrator              4.6.4     True        False         False      24m
machine-api                                4.6.4     True        False         False      27m
machine-approver                           4.6.4     True        False         False      32m
machine-config                             4.6.4     True        False         False      32m
marketplace                                4.6.4     True        False         False      32m
monitoring                                 4.6.4     True        False         False      23m
network                                    4.6.4     True        False         False      33m
node-tuning                                4.6.4     True        False         False      33m
openshift-apiserver                        4.6.4     True        False         False      27m
openshift-controller-manager               4.6.4     True        False         False      24m
openshift-samples                          4.6.4     True        False         False      26m
operator-lifecycle-manager                 4.6.4     True        False         False      32m
operator-lifecycle-manager-catalog         4.6.4     True        False         False      32m
operator-lifecycle-manager-packageserver   4.6.4     True        False         False      27m
service-ca                                 4.6.4     True        False         False      33m
storage                                    4.6.4     True        False         False      32m

You can find the description of the default Operators in the documentation.

This will only list the Red Hat Operators that are installed as part of the cluster. These are all controlled by the Cluster Version Operator (CVO), which is the “Master-Operator” of the cluster that controls all the others.
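
For reference, the ClusterVersion resource that the CVO reconciles can be shown with the following standard command (not part of the original post):

$ oc get clusterversion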

If you want to list all Operators that were installed via the Operator Lifecycle Manager (OLM), you can use the following command:

$ oc get subscriptions --all-namespaces
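
The Subscriptions show what was requested from OLM; to also see the Operator versions that OLM actually installed (the ClusterServiceVersions), the following standard command works as well (again, not part of the original post):

$ oc get clusterserviceversions --all-namespaces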

Red Hat Certified Architect

Getting training and exams done in 2020 has been challenging. After earning my RHCE in mid-February, I am now proud to say that I achieved my Red Hat Certified Architect in Infrastructure certification less than nine months later.

To reach my RHCA, I took the following Red Hat exams. As you can see, it is OpenShift and Ansible all the way down:

  • EX180 Red Hat Certified Specialist in Containers and Kubernetes
  • EX280 Red Hat Certified Specialist in OpenShift Administration
  • EX288 Red Hat Certified Specialist in OpenShift Application Development
  • EX407 Red Hat Certified Specialist in Ansible Automation
  • EX447 Red Hat Certified Specialist in Ansible Best Practices

Of course, the journey does not end here as there are quite a few interesting topics still to learn!

fedora-minimal: Broken tzdata

For my own container images, I like to use the Fedora Container Images as the base. This means I typically build on the “fedora:32” or “fedora-minimal:32” image.

Yesterday, while playing around with an image based on “fedora-minimal” that runs nginx and php-fpm, I came across this curious error:

Invalid date.timezone value 'UTC', we selected the timezone 'UTC' for now
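
A quick way to check whether the zoneinfo files are actually present in the base image (my hedged guess at where to start looking, not part of the original post) is:

$ podman run --rm registry.fedoraproject.org/fedora-minimal:32 ls -l /usr/share/zoneinfo/UTC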

Dell U3818DW and Fedora 32

Due to COVID-19, like many others I am currently working from home, and as a result I took the chance to update my home office. Working with a small laptop screen for months is not optimal, so I went the ultra-wide route and got myself a Dell U3818DW monitor.

Since I did not find a lot of information about running this monitor with Linux, here is a quick overview. To summarize, everything works out-of-the-box.


Creating a sosreport on CoreOS

With OpenShift 4, Red Hat introduced Red Hat Enterprise Linux CoreOS. It is a very minimalist operating system, focused on running container workloads.

This new minimalism comes with some challenges. There are no more RPM packages and most of the tools we know and love are missing! Luckily, there is the Red Hat-supplied toolbox container, which contains all the necessary tools and is nicely integrated.

So to get to the toolbox, first use oc debug node/<nodename>. This will start a privileged container on the node you specify, mount the host file system at /host and drop you into a shell:

$ oc debug node/worker-0.lab.openshift.krenger.ch
Starting pod/worker-0labopenshiftkrengerch-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# toolbox
Container started successfully. To exit, type 'exit'.
sh-4.2#

Now we are running in the toolbox container on our CoreOS host with all the tools we know at our disposal, for example sosreport:

sh-4.2# sosreport

Running sosreport will generate a sosreport in /host/var/tmp/, which means it will be accessible in /var/tmp/ on the CoreOS host itself.
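
To copy the archive off the node, one option (a sketch of my own, not from the original post) is to keep the debug Pod running and use oc cp against it from a second terminal, since the debug Pod mounts the host file system under /host:

$ oc cp <debug-pod-name>:/host/var/tmp/<sosreport-archive> ./<sosreport-archive>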

OpenShift 4 Upgrade Paths

For OpenShift 4, the upgrade paths are kept in the cincinnati-graph-data repository as YAML files and then exposed via an API.

There is a Red Hat Solution describing how this data can be queried via api.openshift.com and how you can use this data in your automation:

$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' | jq .
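
The raw response is a Cincinnati graph with “nodes” (the available releases) and “edges” (the possible update paths between them). As a small example of my own, this lists just the release versions in the channel:

$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' | jq '.nodes[].version'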

While this data is quite helpful for automation (the Solution also describes helpful queries), the raw data is not very nice to look at. If you are looking for a graphical presentation of that data, check out this wonderful website maintained by a Red Hat colleague, with hourly generated data: www.ocp-upgrade.net

Missing X-Forwarded-For header in Spring Boot application

So here is another one from the trenches.

More than once, an OpenShift Container Platform customer has approached us and said something along the lines of: “Help, I cannot see the X-Forwarded-For header in my application, our OpenShift Router is probably configured incorrectly!”

In such cases, it is often a good idea to check what is really being forwarded to the Pods in the cluster. For this, I typically use my simonkrenger/echoenv container to print the headers received by the application. In many cases, it turns out that the affected application is a Spring Boot application and that the header is passed correctly to the Pod itself. The Spring Boot application simply does not show the header.
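
As an illustration of the echoenv check mentioned above (these exact commands are my own and assume the image exposes a port so that oc new-app creates a Service named echoenv), deploying the container and curling its Route shows exactly which headers arrive at the Pod:

$ oc new-app simonkrenger/echoenv
$ oc expose service/echoenv
$ curl http://$(oc get route echoenv -o jsonpath='{.spec.host}')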

We have observed a Spring Boot behaviour where the X-Forwarded-For header is not passed through to the application because Spring Boot itself consumes it. In the application.properties of a Spring Boot application, the following setting controls this:

server.use-forward-headers: true

With this setting enabled, the header is consumed by Spring Boot and is no longer available to the application. See also the relevant sections in the Spring documentation. Good to know.

Exploring the OpenShift etcd with etcdctl

Kubernetes uses etcd as the persistent store for API data. As etcd is a distributed key-value store, we can also use command line tools to query this store. The examples in this post are for OpenShift 3.x.

Apart from just using get, it is also possible to perform the following actions on certain keys:

  • put to write to a key – unless you know what you are doing, don’t touch the Kubernetes data in etcd, as this will manifest in very strange Kubernetes behaviour.
  • del to delete a key – also, this may break your Kubernetes cluster by introducing inconsistencies.
  • watch to keep a watch on an object. This is very helpful to track changes on a certain object.

The get action is probably the most helpful functionality for in-depth API debugging directly within etcd.
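
As a minimal sketch of such a query on an OpenShift 3.x master (certificate paths and the endpoint are assumptions and will vary with the cluster setup), listing the first few Kubernetes keys looks like this:

$ ETCDCTL_API=3 etcdctl \
    --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key --cacert /etc/etcd/ca.crt \
    --endpoints https://127.0.0.1:2379 \
    get /kubernetes.io --prefix --keys-only | head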
