I recently bought an NVIDIA Jetson Nano Developer Kit to fiddle around with things like MicroShift or TensorFlow. The board is typically used with L4T (Linux for Tegra), which is based on Ubuntu 18.04. Fedora can also be installed, although not all drivers (for example for the GPU) are available yet. So after updating the system to the latest packages, I got the following error when starting a container using the nvidia runtime:
docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.6.1-py3
[..]
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall clone3: permission denied: unknown.
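As a quick test of my own (not a fix, since it disables seccomp confinement entirely and should only be used for debugging), the container can be started without a seccomp profile to confirm that the profile is indeed the culprit:
docker run -it --rm --runtime nvidia --network host --security-opt seccomp=unconfined nvcr.io/nvidia/l4t-ml:r32.6.1-py3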
When working with JSON data, I typically use jq to mangle the data. I keep this post as a reference for myself on how to remove an element from a JSON list or array using jq.
Given we have the following array:
$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq
{
"hello": "world",
"myarray": [
"a",
"b",
"c"
]
}
To remove an element from the array, use the del function with the select function to select a single element:
jq 'del(.myarray[] | select(. == "b"))'
So when applying this to the above array, we can remove “b” from the array like so:
$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq 'del(.myarray[] | select(. == "b"))'
{
"hello": "world",
"myarray": [
"a",
"c"
]
}
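As an aside, if you know the position of the element instead of its value, del also accepts an index directly. This sketch removes the second element (index 1), here with the -c flag for compact output:
$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq -c 'del(.myarray[1])'
{"hello":"world","myarray":["a","c"]}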
As you may know, Docker Desktop on macOS runs a Linux VM in the background to run containers, since containers are a Linux concept. However, that VM is well hidden from view and you typically only interact with it when you start Docker Desktop or when you need to clean up images in the VM itself.
Sometimes you’ll want to have a shell into that VM, but that turns out to be more complicated than I initially expected. There is however an easily accessible debug shell available.
- First, open a terminal and use socat to open the debug shell socket to the VM using the following command:
$ socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer
- socat will print a line like “PTY is /dev/ttys010”, to which you can then connect using screen in another terminal window (replace ttys0xx with the PTY that socat printed):
$ screen /dev/ttys0xx
So that will look something like this:
$ socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer
2021/01/02 21:28:43 socat[23508] N opening connection to LEN=73 AF=1 "/Users/simon/Library/Containers/com.docker.docker/Data/debug-shell.sock"
2021/01/02 21:28:43 socat[23508] N successfully connected from local address LEN=16 AF=1 ""
2021/01/02 21:28:43 socat[23508] N successfully connected via
2021/01/02 21:28:43 socat[23508] N PTY is /dev/ttys010
2021/01/02 21:28:43 socat[23508] N starting data transfer loop with FDs [5,5] and [6,6]
$ screen /dev/ttys010
/ #
/ # uname -a
Linux docker-desktop 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 Linux
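To avoid copying the PTY name by hand, the two commands can also be combined. The following is a small sketch of my own: the link= option of socat creates a symlink to the PTY under a stable name (/tmp/docker-debug-pty is just a name I picked), so screen can be pointed at that once socat is running:
$ socat "UNIX-CONNECT:$HOME/Library/Containers/com.docker.docker/Data/debug-shell.sock" pty,rawer,link=/tmp/docker-debug-pty &
$ sleep 1 && screen /tmp/docker-debug-pty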
The VM is a very stripped down Alpine image with no package manager available, so you’ll have to make do with what is available.
Quit with CTRL-D, which will also close the socat socket. Thanks to Tatsushi for figuring it out in this GitHub Gist.
In OpenShift Container Platform (OCP) 4, most of the functionality is controlled by Operators. To see the currently installed Operators and also their status, use the following command:
$ oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.4     True        False         False      12m
cloud-credential                           4.6.4     True        False         False      38m
cluster-autoscaler                         4.6.4     True        False         False      32m
config-operator                            4.6.4     True        False         False      33m
console                                    4.6.4     True        False         False      21m
csi-snapshot-controller                    4.6.4     True        False         False      27m
dns                                        4.6.4     True        False         False      31m
etcd                                       4.6.4     True        False         False      32m
image-registry                             4.6.4     True        False         False      25m
ingress                                    4.6.4     True        False         False      24m
insights                                   4.6.4     True        False         False      33m
kube-apiserver                             4.6.4     True        False         False      30m
kube-controller-manager                    4.6.4     True        False         False      31m
kube-scheduler                             4.6.4     True        False         False      31m
kube-storage-version-migrator              4.6.4     True        False         False      24m
machine-api                                4.6.4     True        False         False      27m
machine-approver                           4.6.4     True        False         False      32m
machine-config                             4.6.4     True        False         False      32m
marketplace                                4.6.4     True        False         False      32m
monitoring                                 4.6.4     True        False         False      23m
network                                    4.6.4     True        False         False      33m
node-tuning                                4.6.4     True        False         False      33m
openshift-apiserver                        4.6.4     True        False         False      27m
openshift-controller-manager               4.6.4     True        False         False      24m
openshift-samples                          4.6.4     True        False         False      26m
operator-lifecycle-manager                 4.6.4     True        False         False      32m
operator-lifecycle-manager-catalog         4.6.4     True        False         False      32m
operator-lifecycle-manager-packageserver   4.6.4     True        False         False      27m
service-ca                                 4.6.4     True        False         False      33m
storage                                    4.6.4     True        False         False      32m
You can find the description of the default Operators in the documentation.
This will only list the Red Hat Operators that are installed as part of the cluster. These are all controlled by the ClusterVersionOperator, the “Master Operator” of the cluster that controls all the others.
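Since the output above can get long, here is a filter I sometimes use to only show Operators that are currently degraded (a sketch combining oc and jq, assuming the standard Degraded status condition):
$ oc get clusteroperators -o json | jq -r '.items[] | select(.status.conditions[] | select(.type == "Degraded" and .status == "True")) | .metadata.name'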
If you want to list all Operators that were installed via the Operator Lifecycle Manager (OLM), you can use the following command:
$ oc get subscriptions --all-namespaces
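As a related tip, the installed ClusterServiceVersions show the exact version and status of each OLM-installed Operator:
$ oc get csv --all-namespaces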
Getting training and exams done in 2020 has been challenging. After reaching my RHCE mid-February, I am now proud to say that I achieved my Red Hat Certified Architect in Infrastructure certification less than 9 months later.
To reach my RHCA, I took the following Red Hat exams. As you can see, it is OpenShift and Ansible all the way down:
- EX180 Red Hat Certified Specialist in Containers and Kubernetes
- EX280 Red Hat Certified Specialist in OpenShift Administration
- EX288 Red Hat Certified Specialist in OpenShift Application Development
- EX407 Red Hat Certified Specialist in Ansible Automation
- EX447 Red Hat Certified Specialist in Ansible Best Practices
Of course, the journey does not end here as there are quite a few interesting topics still to learn!
For my own container images, I often like to use the Fedora Container Images as the base image. This means I often use the “fedora:32” or “fedora-minimal:32” image when building my own images.
Yesterday, while playing around with an image based on “fedora-minimal” that runs nginx and php-fpm, I came across this curious error:
Invalid date.timezone value 'UTC', we selected the timezone 'UTC' for now
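A likely cause with the “fedora-minimal” base image is the missing tzdata package (the image keeps its size down by leaving it out), so PHP cannot resolve even the 'UTC' timezone name. This is a guess on my part; adding the package to the image would look like this:
FROM registry.fedoraproject.org/fedora-minimal:32
# Assumption: the missing tzdata package causes the PHP timezone error
RUN microdnf install -y tzdata && microdnf clean all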
Due to COVID-19, like many others I am currently working from home and as a result I took the chance to update my home office. Working with a small laptop screen for months is not optimal, so I went the ultra-wide route and got myself a Dell U3818DW monitor.
Since I did not find a lot of information about running this monitor with Linux, here is a quick overview. To summarize, everything works out-of-the-box.
With OpenShift 4, Red Hat introduced Red Hat Enterprise Linux CoreOS. It is a very minimalist operating system, focused on running container workloads.
This new minimalism comes with some challenges: there are no more RPM packages and most of the tools we know and love are missing! Luckily, there is the Red Hat supplied toolbox container that contains all the necessary tools and is nicely integrated.
So to start the toolbox, use oc debug node/<nodename>. This will start a privileged container on the node you specify, mount the host file system at /host and drop you into a shell:
$ oc debug node/worker-0.lab.openshift.krenger.ch
Starting pod/worker-0labopenshiftkrengerch-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# toolbox
Container started successfully. To exit, type 'exit'.
sh-4.2#
Now we are running in the toolbox container on our CoreOS host, with all the tools we know at our disposal, for example sosreport:
sh-4.2# sosreport
Running sosreport will generate a sosreport in /host/var/tmp/, which means it will be accessible in /var/tmp/ on the CoreOS host itself.
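Since the report lands in /var/tmp/ on the node, you can also list it from your workstation with a one-off oc debug command (a sketch; adjust the node name to yours):
$ oc debug node/worker-0.lab.openshift.krenger.ch -- chroot /host ls /var/tmp/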
For OpenShift 4, the upgrade paths are kept in the cincinnati-graph-data repository as YAML files and then exposed via an API.
There is a Red Hat Solution describing how this data can be queried via api.openshift.com and how you can use this data in your automation:
$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' | jq .
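The returned graph consists of "nodes" (the releases) and "edges" (the valid upgrade paths between them). Assuming that structure, a quick way to list all releases in the channel is:
$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' | jq -r '.nodes[].version'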
While this data is quite helpful for automation (the Solution also describes some helpful queries), the raw data is not very nice to look at. If you are looking for a graphical representation, check out this wonderful website maintained by a Red Hat colleague, with data regenerated hourly: www.ocp-upgrade.net