
Kubernetes Metrics Server – cannot validate certificate because it doesn’t contain any IP SANs

The Issue

Whilst trying to install the Metrics Server:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

so I could use kubectl top node for its metrics on node resource usage, I found the pods were not loading, and upon inspection found the following:

> kubectl logs -n kube-system metrics-server-6f6cdbf67d-v6sbf 

I0717 12:19:32.132722 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0717 12:19:39.159422 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.49.2:10250/metrics/resource\": x509: cannot validate certificate for 192.168.49.2 because it doesn't contain any IP SANs" node="minikube"

The Cause

The issue here was due to my installation of Cert-Manager and setting up some TLS configuration within the CNI with self-signed certificates; the Metrics Server was not able to validate the self-signed certificate presented by the kubelet on each node, as shown in the scrape error above.

The Fix

As this is communication within the cluster, I could simply fix this by telling the Metrics Server container to trust the insecure (self-signed) certificates presented by the kubelets, using the below kubectl patch command:

kubectl patch deployment metrics-server -n kube-system --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
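
If you want to confirm the change has taken effect, you can watch the deployment roll out and then check that metrics are being served (a quick sanity check; the deployment name and namespace below match the default manifest):

kubectl rollout status deployment/metrics-server -n kube-system
kubectl top node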

 

Regards

Dean Lewis


Quick Tip: Supercharge Kubernetes Resource Retrieval with ‘kubectl get -f’

Did you know you can use the -f argument with kubectl get? Yep, me neither.

It’s pretty handy actually, as it will provide the status for all your Kubernetes resources deployed using that file, or even a file referenced by a hyperlink!

Below is a screenshot example using a file.

kubectl get -f
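
For reference, the command itself is simply the manifest file passed to -f (deployment.yaml here is just a placeholder file name):

kubectl get -f deployment.yaml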

You can also specify multiple files by adding -f {file} for each file you want to check (this works when deploying resources too!).

kubectl get -f multiple files
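
For reference, the multiple-file form looks like this (the file names are placeholders):

kubectl get -f deployment.yaml -f service.yaml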

And another example, using a hyperlink as the file location.

kubectl get -f from url
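
For reference, the URL form looks like the below; here I’m reusing the Metrics Server manifest from earlier purely as an example of a manifest served over HTTPS:

kubectl get -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml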

Hope this little tip helps someone!

Just for fun, here’s the ChatGPT write-up!

The command “kubectl get -f” is used in Kubernetes to retrieve information about Kubernetes resources defined in a file or multiple files. Let’s break down the command and its components:

  • “kubectl”: It is the command-line interface (CLI) tool for interacting with Kubernetes clusters. It allows you to manage and control Kubernetes resources.
  • “get”: It is the action or subcommand used to retrieve information about Kubernetes resources.
  • “-f”: It is a flag that specifies that the input will be provided from a file or multiple files rather than directly on the command line. It is followed by the path to the file(s) containing the Kubernetes resource definitions.

When you use “kubectl get -f <file>”, Kubernetes reads the file(s) provided and retrieves the information about the resources defined within those files. The information can include the names, statuses, and other details of the resources.

For example, if you have a file named “deployment.yaml” that defines a Kubernetes Deployment resource, you can use the command “kubectl get -f deployment.yaml” to retrieve information about that specific Deployment resource.

You can also provide multiple files by separating them with commas or specifying a directory containing multiple resource files. For instance, “kubectl get -f file1.yaml,file2.yaml” or “kubectl get -f /path/to/files” (where /path/to/files is the directory path).

By using this command, you can quickly retrieve information about Kubernetes resources defined in files without needing to manually create or modify resources using the command line.

Regards

Dean Lewis


Red Hat OpenShift – Sorry, your reply was invalid: IP expected to be in one of the machine networks

The Issue

When running the command:

openshift-install create cluster

And you provide an API IP address which is not in the CIDR range 10.0.0.0/16, you receive the below error.

INFO Defaulting to only available network: VM Network 
X Sorry, your reply was invalid: IP expected to be in one of the machine networks: 10.0.0.0/16
? The VIP to be used for the OpenShift API.
OpenShift-Install create cluster - Sorry, your reply was invalid- IP expected to be in one of the machine networks- 10.0.0.0-16
The Cause

This is a known bug in the openshift-install tool (GitHub PR, Red Hat Article), whereby the installer is hardcoded to only accept addresses in the 10.0.0.0/16 range.

The Fix

The current workaround for this is to run openshift-install create install-config, provide IP addresses in the 10.0.0.0/16 range, and then alter the install-config.yaml file manually before running openshift-install create cluster, which will read the existing install-config.yaml file and create the cluster (rather than presenting you with another wizard).
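
As a rough outline, the sequence looks like this (the --dir flag is optional and simply keeps the generated assets in one place; ./mycluster is a placeholder directory name):

openshift-install create install-config --dir=./mycluster
# edit ./mycluster/install-config.yaml as described below
openshift-install create cluster --dir=./mycluster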

In the wizard (below screenshot), I’ve provided IPs in the 10.0.0.0/16 range from above, and set my base domain and cluster name as well. The final piece is to paste in my Pull Secret from the Red Hat Cloud console.

OpenShift-install create install-config

Now if I run ls in my current directory, I’ll see the install-config.yaml file. It is recommended to save a copy of this file now, before you run the create cluster command, as the installer removes it during cluster creation and it contains plain-text passwords.
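
A simple copy is enough (the backup file name is just a suggestion):

cp install-config.yaml install-config.yaml.backup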

I’ve highlighted in the below image the lines we need to alter.

OpenShift install install config.yaml file

For the section:

machineNetwork:
- cidr: 10.0.0.0/16

This needs to be changed to the network subnet the nodes will run on (see the amended example after the platform section below). And for the platform section, you need to map the right IP addresses from your DNS records.

platform:
  vsphere:
    apiVIP: 192.168.200.192 <<<<<<< This is your api.{cluster_name}.{base_domain} DNS record
    cluster: Cluster-1
    folder: /vEducate-DC/vm/OpenShift/
    datacenter: vEducate-DC
    defaultDatastore: Datastore01
    ingressVIP: 192.168.200.193 <<<<<<< This is your *.apps.{cluster_name}.{base_domain} DNS record
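
For completeness, the amended machineNetwork section would then look something like the below, matching the 192.168.200.x addresses used above (the exact CIDR is an assumption; substitute the subnet your nodes actually sit on):

machineNetwork:
- cidr: 192.168.200.0/24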

Now that we have our correctly configured install-config.yaml file, we can proceed with the installation of the cluster; after running the openshift-install create cluster command, it is hands-off from this point forward. The installer will output logging to the console for you, which you can adjust using the --log-level= argument at the end of the command.
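
For example, to increase the console output verbosity during the install (debug is one of the accepted levels):

openshift-install create cluster --log-level=debug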

Regards

Dean Lewis


How to delete Kubernetes namespaces or pods with a specific pattern or name

I had a need to delete a number of Namespaces all at once that were created as part of some automated platform testing.

Each namespace had a common naming convention starting with “e2e”. The below command gets all namespaces from kubectl without the initial header line, uses awk to match anything containing the pattern “e2e” and print the first column ($1, the namespace name), and then xargs passes each of those names as arguments to “kubectl delete ns”.

kubectl get ns --no-headers=true | awk '/e2e/{print $1}'| xargs  kubectl delete ns
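
If you want to sanity-check which namespaces the pattern will match before deleting anything, you can run just the first half of the pipeline on its own:

kubectl get ns --no-headers=true | awk '/e2e/{print $1}'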

You can also do the same for deleting pods. The below command would delete any pods with “veducate” in their name; you will need to input the necessary namespace.

kubectl get pods -n {namespace} --no-headers=true | awk '/veducate/{print $1}'| xargs  kubectl delete -n {namespace} pod

Quick link to this Stack Overflow post, which pointed me in the right direction; I just had to modify it from pods to namespaces for my use case.

Regards

Dean Lewis


Collect VM Notes in (Aria) vRealize Operations: A Step-by-Step Guide

One of the most common questions I’ve come across in previous years is: how do I get the VM Notes held in vCenter into vRealize (Aria) Operations?

Great news: in vRealize Operations 8.10 and later, you can now collect this property for your virtual machines simply by enabling it to be collected in your policy.

Enable the Notes property on your Policy
  • Click on Policies under Configure in the left-hand navigation pane
  • Select your active policy that you want to alter
    • You may need to change multiple policies due to inheritance settings

vROPs - VM Notes - Edit Policy

  • Select Edit Policy on the far right-hand side

vROPs - VM Notes - Edit Policy 2

  • Set the object type as “Virtual Machine”
  • Search “note” to filter the list down to just the property we are interested in
  • Expand Properties > System
  • Highlight Notes and click on “Deactivated” and change to “Activated”
  • Click Save

vROPs - VM Notes - Edit Policy - Metrics and Properties - Virtual Machine - Enable System Notes

vROPs - VM Notes - Edit Policy - Metrics and Properties - Virtual Machine - System Notes - Activated

Viewing the VM notes and adding them to a view and reports

Now it’s a case of waiting for the next collection cycle of your vSphere environment. Below you can see an example of a virtual machine which is configured with a note; any note changes will also be captured.

vROPs - VM Notes - Virtual Machine Property - System - Notes

Now let’s look at adding this property to an existing report as well.

In the below, I’m going to edit the view “Virtual Machine Inventory”, which is used to power the out-of-the-box report “Inventory Report – Virtual Machines”.

  • Under Visualize on the left-hand navigation, click on Views
  • Click Manage Views, find your view and click to edit
  • Go to Step 2 – Data
  • The Selected Subject will already be Virtual Machine (red box)
  • Search for Note (1)
  • Drag the note property to the data column (2)
  • Set a vanity name for the property (3)
  • Set a preview source (green box)
    • Ensure that the VM note displays as expected (4)
  • Click Update

vROPs - VM Notes - Edit View

Now let’s check that this updated view is reflected in our report:

  • Under Visualize on the left-hand navigation, click on Reports
  • Click to edit your chosen report
  • Expand the Views and Dashboards section in the report
  • In the red boxes you can see the matching name of the view I edited in the above screenshots, and the VM Notes column is present

vROPs - VM Notes - Edit Report

Finally, when I run this report, I can see the additional VM note data added to the report.

vROPs - VM Notes - Run Report

Hopefully this simple but much-asked-for feature will help in the ongoing management of your environments.

Regards

Dean Lewis