Cilium Hubble CLI – Configure Auto Completion

One of the little nits I have with the terminal is that when I start typing a command and press the Tab key, autocomplete doesn’t work. It feels like it should be the default out of the box.

You can configure this for the Hubble CLI for Cilium, but it’s not yet documented in the docs.cilium.io pages, so I thought I’d throw up a quick post covering it here!

This command writes the auto-completion script into my Homebrew-managed zsh site-functions directory on macOS.

hubble completion zsh > $(brew --prefix)/share/zsh/site-functions/_hubble

For other platforms, you can see the examples provided for the Cilium Agent and apply the logic to your own environment.
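
For example, on a Linux host with bash-completion installed, something like the below should work (a quick sketch; the target directory is an assumption and varies by distribution):

hubble completion bash > /etc/bash_completion.d/hubble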

Regards

Dean Lewis

Cilium Hubble CLI – Using a local configuration file

Did you know that the Cilium Hubble CLI supports using a configuration file?

Below is an example command from an environment where Isovalent Enterprise for Cilium is deployed and Hubble RBAC is configured, meaning I must provide additional details such as the server location and certificates to authenticate using the CLI. The steps in this blog post also work with Cilium OSS, and are especially handy when setting allow and deny lists to prune the information returned.

Providing these flags with every command you want to run quickly becomes cumbersome.

❯ hubble observe \
--server tls://localhost:4245 \
--tls-ca-cert-files ca-cert.pem \
--tls-server-name 'cli.hubble-relay.cilium.io' \
--namespace kube-system
Mar 20 13:09:38.061: tenant-jobs/resumes-58c6678bc8-5nkcg:36459 (ID:88195) -> kube-system/coredns-77fcb74c4c-wfw4f:53 (ID:102385) policy-verdict:L3-L4 EGRESS ALLOWED (UDP)
Mar 20 13:09:38.061: tenant-jobs/resumes-58c6678bc8-5nkcg:36459 (ID:88195) -> kube-system/coredns-77fcb74c4c-wfw4f:53 (ID:102385) to-proxy FORWARDED (UDP)
Mar 20 13:09:38.062: tenant-jobs/resumes-58c6678bc8-5nkcg (ID:88195) <> kube-system/coredns-77fcb74c4c-wfw4f:53 (ID:102385) post-xlate-fwd TRANSLATED (UDP)
Mar 20 13:09:38.062: tenant-jobs/resumes-58c6678bc8-5nkcg:36459 (ID:88195) -> kube-system/coredns-77fcb74c4c-wfw4f:53 (ID:102385) dns-request proxy FORWARDED (DNS Query coreapi.tenant-jobs.svc.cluster.local. A)

Below you can see the various configuration options that the Hubble CLI supports; the example above passes everything as flags on the command line.

hubble config -h
Config allows to modify or view the hubble configuration. Global hubble options
can be set via flags, environment variables or a configuration file. The
following precedence order is used:

1. Flag
2. Environment variable
3. Configuration file
4. Default value

The "config view" subcommand provides a merged view of the configuration. The
"config set" and "config reset" subcommand modify values in the configuration
file.

Environment variable names start with HUBBLE_ followed by the flag name
capitalized where eventual dashes ('-') are replaced by underscores ('_').
For example, the environment variable that corresponds to the "--server" flag
is HUBBLE_SERVER. The environment variable for "--tls-allow-insecure" is
HUBBLE_TLS_ALLOW_INSECURE and so on.

Usage:
  hubble config [flags]
  hubble config [command]

Available Commands:
  get         Get an individual value in the hubble config file
  reset       Reset all or an individual value in the hubble config file
  set         Set an individual value in the hubble config file
  view        Display merged configuration settings
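
For example, based on that naming rule, the connection flags from the earlier example could instead be exported as environment variables:

export HUBBLE_SERVER=tls://localhost:4245
export HUBBLE_TLS_CA_CERT_FILES=ca-cert.pem
export HUBBLE_TLS_SERVER_NAME=cli.hubble-relay.cilium.io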

Alternatively, using the commands below, I can persist these values in the configuration file. Note that "config set" takes the plain flag name as its key; the HUBBLE_ prefix applies only to environment variables, as my first failed attempt shows.

❯ hubble config set HUBBLE_SERVER tls://localhost:4245
unknown key: HUBBLE_SERVER
❯ hubble config set server tls://localhost:4245
❯ hubble config set tls-ca-cert-files ca-cert.pem
❯ hubble config set tls-server-name 'cli.hubble-relay.cilium.io'

Now we can use the Hubble CLI without the additional flags.

❯ hubble observe -n tenant-jobs
Mar 20 13:13:30.004: tenant-jobs/coreapi-6748664db6-rmr2j:42935 (ID:111121) <- kube-system/coredns-77fcb74c4c-wfw4f:53 (ID:102385) to-endpoint FORWARDED (UDP)
Mar 20 13:13:30.496: tenant-jobs/crawler-6dbf4f8b5d-vr7gr:47804 (ID:71705) -> tenant-jobs/loader-68544b8b87-zrxwt:50051 (ID:115137) http-request FORWARDED (HTTP/2 POST http://loader:50051/loader.Loader/LoadCv)
Mar 20 13:13:30.505: tenant-jobs/crawler-6dbf4f8b5d-vr7gr:47804 (ID:71705) <- tenant-jobs/loader-68544b8b87-zrxwt:50051 (ID:115137) http-response FORWARDED (HTTP/2 200 9ms (POST http://loader:50051/loader.Loader/LoadCv))

We can validate the configuration in use by running the command below, which also shows the location of the config file itself, should you wish to edit it directly.

❯ hubble config view
allowlist: []
client-id: ""
client-secret: ""
config: /Users/veducate/Library/Application Support/hubble/config.yaml
debug: false
denylist: []
grant-type: auto
issuer: ""
issuer-ca: ""
refresh: false
scopes: []
server: tls://localhost:4245
timeout: 5s
tls: false
tls-allow-insecure: false
tls-ca-cert-files:
- ca-cert.pem
tls-client-cert-file: ""
tls-client-key-file: ""
tls-server-name: cli.hubble-relay.cilium.io
token-file: ""
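
The "config get" and "config reset" subcommands work on individual keys in the same way, for example:

❯ hubble config get server
❯ hubble config reset tls-server-name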

Regards

Dean Lewis

How to migrate from Red Hat OpenShiftSDN/OVN-Kubernetes to Cilium

Recently, I’m seeing more and more queries about migrating to Cilium within an existing Red Hat OpenShift cluster, due to Cilium’s advanced networking capabilities, robust security features, and enhanced observability out of the box. This increase in interest is also boosted by the fact that Cilium became the first Kubernetes CNI to graduate within the CNCF.

In this blog post, we’ll cover the step-by-step process of migrating from the traditional OpenShiftSDN (default CNI pre-4.12) or OVN-Kubernetes (default CNI from 4.12) to Cilium, exploring the advantages and considerations along the way.

If you need to understand more about the default CNI options in Red Hat OpenShift first, then I highly recommend this blog post, as pre-reading before going through this walkthrough.

Cilium Overview

For those of you who have not heard of Cilium, or have maybe just heard the name and know there’s a buzz about it: in short, Cilium is a cloud native networking solution that provides security, networking and observability at a software level.

The reason the buzz is so huge is that Cilium is implemented using eBPF, a new way of programming and interacting with the kernel layer of the OS. This implementation opens up a whole new world of options.

I’ll leave you with these two short videos from Thomas Graf, co-founder of Isovalent, the creators of Cilium.

Does Red Hat support this migration?

Cilium has achieved the Red Hat OpenShift Container Network Interface (CNI) certification by completing the operator certification and passing end-to-end testing. Red Hat will support Cilium installed and running in a Red Hat OpenShift cluster, and collaborate as needed with the ecosystem partner to troubleshoot any issues, as per their third-party software support statements. This would be a great reason to look at Isovalent Enterprise for Cilium, rather than using Cilium OSS, to get support from both vendors.

However, when it comes to performing a CNI migration for an active existing OpenShift cluster, Red Hat provides no guidance, unless it’s migrating from OpenShiftSDN to OVN-Kubernetes.

This means migration to a third-party CNI in an existing, running Red Hat OpenShift cluster is a grey area.

I’d recommend speaking to your Red Hat account team before performing any migration like this in your production environments. I have known large customers to take on this work/burden/supportability themselves and be successful.

Follow along with this video!

If you prefer watching a video or seeing things live and following along, like I do at times, then I’ve got you covered with the below video that covers the content from this blog post.

Pre-requisites and OpenShift Cluster configuration

As per the above, make sure you understand this process in detail; if you follow it, you do so at your own risk.

For this walkthrough, I’ve deployed an OpenShift 4.13 cluster with OVN-Kubernetes and a sample application (see below). You can see these posts I’ve written for deployments of OpenShift, or follow the official documentation.

My install-config.yaml file was generated using the openshift-install create install-config wizard; I then ran the openshift-install create cluster command to deploy the cluster.
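
For illustration, a minimal install-config.yaml looks something like the below sketch; all values are placeholders (assuming an AWS IPI install) rather than my actual environment:

apiVersion: v1
baseDomain: example.com            # placeholder base domain
metadata:
  name: ocp-cilium-demo            # hypothetical cluster name
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes       # the default CNI we will migrate away from
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: eu-west-2              # adjust for your platform/provider
pullSecret: '<redacted>'
sshKey: '<redacted>'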

vSphere data loss bug returns – CBT issues in vSphere ESXi 8.0 Update 2

The Issue

I keep saying, there are no new ideas in technology, just re-hashes of old ones. That is also true for VMware and their data loss issues.

The vSphere-based change block tracking (CBT) bug is back! I think I wrote 5 articles on this back in 2014/2015 with explanations and fixes!

Veeam reported this at the start of week commencing 11th December 2023, with VMware confirming the issue by the end of the same week.

The Cause

Change block tracking is the feature used to see which blocks of data have changed since a known point in time, to enable backup software to capture only the incremental changes.

If this feature fails, you could lose data in your backups, as the backup software doesn’t know which blocks to protect.

As per VMware:

“CBT’s QueryChangedDiskAreas may lose some data changed on the disk after disk is hot-extended. It only happens on ESXi 8.0u2.”

The Fix/Workaround

Directly from VMware’s newly published KB, which took them only a few days to confirm this behaviour after Veeam noticed at the start of the week!

  • Resolution
    • Unfortunately, there is no fix available for this bug at this time. However, you can use the following workaround until a fix is released.
  • Workaround
    1. Reset CBT after the disk is hot-extended, then take a full backup immediately. This does not fix existing backups, but it ensures the new ones are good.
    2. Alternatively, extend the disk while the virtual machine is offline.

You cannot fix your existing incremental backups if they have been affected; if they missed the correct data to back up, it’s been missed! But you can run an Active Full backup to capture everything. That’s certainly the case for Veeam; for other backup vendors, you’ll need to double-check!

How do I reset Change Block Tracking?

If you are using Veeam, you can just perform an Active Full backup, and ensure the reset CBT option is configured. This is enabled by default.

If you aren’t using Veeam, then the following will be your next steps.

To reset Change Block Tracking, follow the steps below, taken from this older VMware KB article from the last time this was an issue. VMware may update that article or produce another one now that this recent bug has been found.

  • Find your VM in the vCenter Client and power it off.
  • Click the Options tab, select the Advanced section, and then click Configuration Parameters.
  • Disable CBT for the virtual machine by setting the ctkEnabled value to false.
  • If you need to do this for specific virtual disks attached to your virtual machine, disable CBT by setting the scsix:x.ctkEnabled value to false for each attached virtual disk. (scsix:x is the SCSI controller and SCSI device ID of your virtual disk.)
  • Ensure there are no snapshot files (.delta.vmdk) present in the virtual machine’s working directory. For more information, see Determining if there are leftover delta files or snapshots that VMware vSphere or Infrastructure Client cannot detect (1005049).
  • Delete any -ctk.vmdk files within the virtual machine’s working directory.

Now power on your virtual machine.
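
If you would rather script these steps than click through the UI, something like the below should work with the govc CLI; a rough sketch, assuming govc is installed and authenticated against your vCenter, and a VM named my-vm:

# Power off the VM before touching CBT settings
govc vm.power -off my-vm

# Disable CBT via the VM's advanced configuration
govc vm.change -vm my-vm -e "ctkEnabled=false"

# Power the VM back on; set ctkEnabled=true again later when re-enabling
govc vm.power -on my-vm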

Depending on your backup software vendor, you may need to manually re-enable Change Block Tracking; you can find a full list of steps and considerations in this VMware KB article. It’s essentially: power down the VM, set the ctkEnabled value back to true in Configuration Parameters, and power the VM back on.

Summary

Let’s hope VMware produces a fix for this quickly. I remember they had this issue in vSphere 5.5 and 6.0, and some fixes didn’t resolve it; it was a pain being a consultant having to install fixes at customers’ sites.

It’s good that VMware have taken only a short amount of time to validate this bug and publish something officially about it!

Regards

Dean Lewis

Grafana – unable to login “User already exists”

The Issue

When trying to log into the Grafana Web UI using an OIDC provider (in my case, Dex), the login would fail after some time with the error “User already exists”. This happened for any user given access via the OIDC provider.

The Cause

This looks to happen due to a CVE fix implemented in Grafana, as documented in the two comments linked as Source and Source 2 at the end of this post.

The Fix

To resolve this issue for Grafana 10.0.x and 9.5.6, either set the environment variable GF_AUTH_OAUTH_ALLOW_INSECURE_EMAIL_LOOKUP, or set the config key oauth_allow_insecure_email_lookup to true under the auth section:

[auth]
oauth_allow_insecure_email_lookup=true
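
If you run Grafana as a container, the environment variable route may be easier than editing the config file. A minimal sketch, assuming a Docker-based deployment (the image tag is illustrative):

docker run -d -p 3000:3000 \
  -e GF_AUTH_OAUTH_ALLOW_INSECURE_EMAIL_LOOKUP=true \
  grafana/grafana:10.0.3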

Source + Source 2

Hope this helps anyone stuck out there!

Regards

Dean Lewis