- Enable Authentication Using SSL/TLS Certificates
Ensure that Kubelet client certificate authentication using SSL/TLS is enabled by setting the "clientCAFile" parameter, so that the Kubelet accepts only those requests presenting a client certificate signed by a trusted authority, such as the cluster's root Certificate Authority (CA). This feature is important because the Kubelet uses this mechanism to verify the identity of clients such as the Kubernetes API server when handling sensitive commands like kubectl logs or kubectl exec.
Connections from the API server to the Kubelet, such as those used for kubectl logs and kubectl exec, terminate at the Kubelet's HTTPS endpoint. Without certificate-based client authentication, the Kubelet cannot confirm the identity of the callers it serves, leaving these connections vulnerable to Man-In-The-Middle (MITM) attacks. Enabling client certificate authentication ensures that the Kubelet authenticates the API server before serving its requests, maintaining the confidentiality and integrity of sensitive communications, even over untrusted networks.
Audit
To determine if Kubelet authentication using SSL/TLS certificates is enabled, perform the following operations:
Using OCI Console
- Sign in to your Oracle Cloud Infrastructure (OCI) account.
- Navigate to Kubernetes Clusters (OKE) console available at https://cloud.oracle.com/containers/clusters.
- For Applied filters, choose an OCI compartment from the Compartment dropdown menu, to list the OCI Kubernetes Engine (OKE) clusters provisioned in the selected compartment.
- Click on the name (link) of the OCI Kubernetes Engine (OKE) cluster that you want to examine, listed in the Name column.
- Select the Node pools tab and click on the name (link) of the node pool that you want to examine.
- Select the Nodes tab and click on the name (link) of the node (instance) that you want to examine.
- Select the Details tab and choose Copy next to Public IP address, in the Instance access section, to get the public IP address of your OKE cluster node.
- Use your preferred method to open an SSH connection to the selected cluster node. For the public IP address, use the IP address copied in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.
- Once connected to your OKE cluster worker node, run the commands listed below to determine if the client certificate authentication is enabled for the Kubelet service:
- Run the following command to determine if the Kubelet service is running:
sudo systemctl status kubelet
- The output should return Active: active (running).
- Run the following command to find the kubelet-config.json file for your node:
sudo find / -name kubelet-config.json 2>/dev/null
- The command output should return the location of the kubelet-config.json file, such as /etc/kubernetes/kubelet/kubelet-config.json.
- Run the following command to describe the contents of the kubelet-config.json file:
sudo more /etc/kubernetes/kubelet/kubelet-config.json
- The command output should return the Kubelet config file contents:
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "address": "0.0.0.0",
  "authentication": {
    "anonymous": { "enabled": true },
    "webhook": { "cacheTTL": "2m0s", "enabled": true },
    "x509": { "clientCAFile": "" }
  },
  "authorization": { "mode": "AlwaysAllow" },
  "clusterDomain": "cluster.local",
  "hairpinMode": "hairpin-veth",
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "cgroupRoot": "/",
  "featureGates": { "RotateKubeletServerCertificate": true },
  "protectKernelDefaults": true,
  "serializeImagePulls": false,
  "serverTLSBootstrap": true,
  "tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
}
- Check the "x509" property configured within the "authentication" object to determine if the "clientCAFile" parameter is present and properly configured. If the "x509" block does not contain the "clientCAFile" parameter, or if the parameter is present but empty (i.e., missing the path to the CA certificate), as shown in the example above, Kubelet client authentication using SSL/TLS certificates is not enabled.
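The manual inspection above can also be scripted. The sketch below defines a hypothetical helper (it is not part of any OCI or Kubernetes tooling, and assumes python3 is available on the node) and demonstrates it against a sample non-compliant config; on a real worker node, pass the kubelet-config.json path found with the find command instead:

```shell
#!/bin/sh
# Hypothetical helper: print the configured clientCAFile path from a
# kubelet-config.json file, or NOT_SET when the parameter is absent/empty.
kubelet_client_ca() {
  python3 -c '
import json, sys
cfg = json.load(open(sys.argv[1]))
ca = cfg.get("authentication", {}).get("x509", {}).get("clientCAFile", "")
print(ca if ca else "NOT_SET")
' "$1"
}

# Demonstration against a sample non-compliant config file. On a worker
# node, use the path returned by the find command in the previous step.
SAMPLE="$(mktemp)"
printf '{ "authentication": { "x509": { "clientCAFile": "" } } }' > "$SAMPLE"
kubelet_client_ca "$SAMPLE"   # prints NOT_SET for this sample
rm -f "$SAMPLE"
```

Any output other than a valid CA certificate path indicates the check fails for that node.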
- Repeat steps no. 6 - 9 for each worker node running within the selected node pool.
- Repeat steps no. 5 - 10 for each node pool created for the selected OKE cluster.
Using OCI CLI
- Run iam compartment list command (Windows/macOS/Linux) with output query filters to list the ID of each compartment available in your Oracle Cloud Infrastructure (OCI) account:
oci iam compartment list --all --include-root --query 'data[]."id"'
- The command output should return the requested OCI compartment identifiers (OCIDs):
[ "ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd", "ocid1.compartment.oc1..abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd" ]
- Run ce cluster list command (Windows/macOS/Linux) with the ID of the OCI compartment that you want to examine as the identifier parameter, to list the ID of each OCI Kubernetes Engine (OKE) cluster available in the selected OCI compartment:
oci ce cluster list --compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' --all --query 'data[]."id"'
- The command output should return the requested OKE cluster IDs:
[ "ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd", "ocid1.cluster.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd" ]
- Run ce node-pool list command (Windows/macOS/Linux) with the ID of the OKE cluster that you want to examine as the identifier parameter, to list the ID of each node pool created for your OKE cluster:
oci ce node-pool list --compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' --cluster-id 'ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' --query 'data[]."id"'
- The command output should return the OKE node pool IDs:
[ "ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd", "ocid1.nodepool.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd" ]
- Run ce node-pool get command (Windows/macOS/Linux) to describe the public IP address of each worker node running within the selected OKE node pool:
oci ce node-pool get --node-pool-id 'ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd' --query 'data.nodes[]."public-ip"'
- The command output should return the public IP address of each OKE cluster worker node (instance):
[ "<public-ip-node-1>", "<public-ip-node-2>", "<public-ip-node-3>" ]
- Use your preferred method to open an SSH connection to your OKE cluster worker node. For the public IP address, use the IP address returned in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.
- Once connected to your OKE cluster worker node, run the commands listed below to determine if the client certificate authentication is enabled for the Kubelet service:
- Run the following command to determine if the Kubelet service is running:
sudo systemctl status kubelet
- The output should return Active: active (running).
- Run the following command to find the kubelet-config.json file for your node:
sudo find / -name kubelet-config.json 2>/dev/null
- The command output should return the location of the kubelet-config.json file, such as /etc/kubernetes/kubelet/kubelet-config.json.
- Run the following command to describe the contents of the kubelet-config.json file:
sudo more /etc/kubernetes/kubelet/kubelet-config.json
- The command output should return the Kubelet config file contents:
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "address": "0.0.0.0",
  "authentication": {
    "anonymous": { "enabled": true },
    "webhook": { "cacheTTL": "2m0s", "enabled": true },
    "x509": { "clientCAFile": "" }
  },
  "authorization": { "mode": "AlwaysAllow" },
  "clusterDomain": "cluster.local",
  "hairpinMode": "hairpin-veth",
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "cgroupRoot": "/",
  "featureGates": { "RotateKubeletServerCertificate": true },
  "protectKernelDefaults": true,
  "serializeImagePulls": false,
  "serverTLSBootstrap": true,
  "tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
}
- Check the "x509" property configured within the "authentication" object to determine if the "clientCAFile" parameter is present and properly configured. If the "x509" block does not contain the "clientCAFile" parameter, or if the parameter is present but empty (i.e., missing the path to the CA certificate), as shown in the example above, Kubelet client authentication using SSL/TLS certificates is not enabled.
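As a complementary check from outside the node, an unauthenticated request to the Kubelet's HTTPS endpoint (default port 10250) should be rejected once client certificate authentication is enforced and anonymous authentication is disabled. The helper below is a hypothetical sketch (the function name is invented here, and it assumes the Kubelet port is reachable from where you run it):

```shell
#!/bin/sh
# Hypothetical helper: probe a worker node's Kubelet endpoint without
# credentials and print the HTTP status code. With clientCAFile set and
# anonymous authentication disabled, the expected response is 401
# (Unauthorized), not 200.
kubelet_auth_probe() {
  node_ip="$1"
  curl -sk -o /dev/null -w '%{http_code}' "https://${node_ip}:10250/pods"
}
# Usage: kubelet_auth_probe <public-ip-node-1>
```

A 200 response to this probe indicates the Kubelet is serving sensitive data to anonymous callers.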
- Repeat steps no. 9 and 10 for each worker node running within the selected node pool.
- Repeat steps no. 7 - 11 for each node pool created for the selected OKE cluster.
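The CLI enumeration in the steps above can be combined into a single loop. This is a sketch only: it assumes a configured OCI CLI and python3, and the `json_ids` helper is introduced here just to flatten the CLI's JSON array output (it is not an OCI CLI feature):

```shell
#!/bin/sh
# Flatten a JSON array from stdin to one value per line, skipping null
# entries (e.g. worker nodes without a public IP address).
json_ids() { python3 -c 'import json,sys; [print(x) for x in json.load(sys.stdin) if x]'; }

# Print the public IP of every worker node in every node pool of a cluster.
list_worker_node_ips() {
  compartment_id="$1"; cluster_id="$2"
  oci ce node-pool list --compartment-id "$compartment_id" \
      --cluster-id "$cluster_id" --query 'data[]."id"' |
  json_ids |
  while read -r pool_id; do
    oci ce node-pool get --node-pool-id "$pool_id" \
        --query 'data.nodes[]."public-ip"' | json_ids
  done
}
# Usage: list_worker_node_ips '<compartment-ocid>' '<cluster-ocid>'
```

Each printed IP address can then be used for the SSH-based Kubelet check described above.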
Remediation / Resolution
To enable Kubelet authentication using SSL/TLS certificates, perform the following operations:
Using OCI Console
- Sign in to your Oracle Cloud Infrastructure (OCI) account.
- Navigate to Kubernetes Clusters (OKE) console available at https://cloud.oracle.com/containers/clusters.
- For Applied filters, choose an OCI compartment from the Compartment dropdown menu, to list the OCI Kubernetes Engine (OKE) clusters provisioned in the selected compartment.
- Click on the name (link) of the OCI Kubernetes Engine (OKE) cluster that you want to configure, listed in the Name column.
- Select the Node pools tab and click on the name (link) of the node pool that you want to access.
- Select the Nodes tab and click on the name (link) of the node (instance) that you want to configure.
- Select the Details tab and choose Copy next to Public IP address, in the Instance access section, to get the public IP address of your OKE cluster node.
- Use your preferred method to open an SSH connection to the selected cluster node. For the public IP address, use the IP address copied in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.
- Once connected to your OKE cluster worker node, edit the kubelet-config.json file for your Kubelet server to add and configure the "clientCAFile" parameter, as shown in the example below, i.e., "clientCAFile": "/etc/kubernetes/pki/ca.crt", where "/etc/kubernetes/pki/ca.crt" is the path to the Certificate Authority (CA) certificate. This enables Kubelet authentication using SSL/TLS certificates:
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "address": "0.0.0.0",
  "authentication": {
    "anonymous": { "enabled": false },
    "webhook": { "cacheTTL": "2m0s", "enabled": true },
    "x509": { "clientCAFile": "/etc/kubernetes/pki/ca.crt" }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": { "cacheAuthorizedTTL": "5m0s", "cacheUnauthorizedTTL": "30s" }
  },
  "clusterDomain": "cluster.local",
  "hairpinMode": "hairpin-veth",
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "cgroupRoot": "/",
  "featureGates": { "RotateKubeletServerCertificate": true },
  "protectKernelDefaults": true,
  "serializeImagePulls": false,
  "serverTLSBootstrap": true,
  "tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
}
- Run the following commands to reload systemd and restart the Kubelet service, forcing it to read the updated configuration file:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
- After the restart, run the following command to check the Kubelet service status in order to ensure it came back up successfully and loaded the new configuration:
sudo systemctl status kubelet
- Repeat steps no. 6 - 11 for each worker node running within the selected node pool.
- Repeat steps no. 5 - 12 for each node pool deployed for the selected OKE cluster.
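The reload-restart-verify sequence above can be wrapped in a small helper. This is a hypothetical sketch; it assumes the systemd unit is named kubelet and that the caller has root privileges:

```shell
#!/bin/sh
# Hypothetical helper: reload systemd unit files, restart the Kubelet, and
# print "active" if the service came back up with the new configuration.
restart_and_verify_kubelet() {
  systemctl daemon-reload
  systemctl restart kubelet
  systemctl is-active kubelet
}
# Usage (as root on a worker node): restart_and_verify_kubelet
```

If the final command prints anything other than "active", inspect the service logs (e.g. journalctl -u kubelet) before moving on to the next node.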
Using OCI CLI
- Run iam compartment list command (Windows/macOS/Linux) with output query filters to list the ID of each compartment available in your Oracle Cloud Infrastructure (OCI) account:
oci iam compartment list --all --include-root --query 'data[]."id"'
- The command output should return the requested OCI compartment identifiers (OCIDs):
[ "ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd", "ocid1.compartment.oc1..abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd" ]
- Run ce cluster list command (Windows/macOS/Linux) with the ID of the OCI compartment that you want to examine as the identifier parameter, to list the ID of each OCI Kubernetes Engine (OKE) cluster available in the selected OCI compartment:
oci ce cluster list --compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' --all --query 'data[]."id"'
- The command output should return the requested OKE cluster IDs:
[ "ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd", "ocid1.cluster.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd" ]
- Run ce node-pool list command (Windows/macOS/Linux) with the ID of the OKE cluster that you want to examine as the identifier parameter, to list the ID of each node pool created for your OKE cluster:
oci ce node-pool list --compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' --cluster-id 'ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' --query 'data[]."id"'
- The command output should return the OKE node pool IDs:
[ "ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd", "ocid1.nodepool.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd" ]
- Run ce node-pool get command (Windows/macOS/Linux) to describe the public IP address of each worker node running within the selected OKE node pool:
oci ce node-pool get --node-pool-id 'ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd' --query 'data.nodes[]."public-ip"'
- The command output should return the public IP address of each OKE cluster worker node (instance):
[ "<public-ip-node-1>", "<public-ip-node-2>", "<public-ip-node-3>" ]
- Use your preferred method to open an SSH connection to your OKE cluster worker node. For the public IP address, use the IP address returned in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.
- Once connected to your OKE cluster worker node, edit the kubelet-config.json file for your Kubelet server to add and configure the "clientCAFile" parameter, as shown in the example below, i.e., "clientCAFile": "/etc/kubernetes/pki/ca.crt", where "/etc/kubernetes/pki/ca.crt" is the path to the Certificate Authority (CA) certificate. This enables Kubelet authentication using SSL/TLS certificates:
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "address": "0.0.0.0",
  "authentication": {
    "anonymous": { "enabled": false },
    "webhook": { "cacheTTL": "2m0s", "enabled": true },
    "x509": { "clientCAFile": "/etc/kubernetes/pki/ca.crt" }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": { "cacheAuthorizedTTL": "5m0s", "cacheUnauthorizedTTL": "30s" }
  },
  "clusterDomain": "cluster.local",
  "hairpinMode": "hairpin-veth",
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "cgroupRoot": "/",
  "featureGates": { "RotateKubeletServerCertificate": true },
  "protectKernelDefaults": true,
  "serializeImagePulls": false,
  "serverTLSBootstrap": true,
  "tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
}
- Run the following commands to reload systemd and restart the Kubelet service, forcing it to read the updated configuration file:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
- After the restart, run the following command to check the Kubelet service status in order to ensure it came back up successfully and loaded the new configuration:
sudo systemctl status kubelet
- Repeat steps no. 9 - 12 for each worker node running within the selected node pool.
- Repeat steps no. 7 - 13 for each node pool deployed for the selected OKE cluster.
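The config-file edit described above can also be applied non-interactively. The sketch below is a hypothetical helper (python3 is assumed to be available on the node, and the CA path matches the example in this guide; verify the correct path for your environment):

```shell
#!/bin/sh
# Hypothetical helper: set authentication.x509.clientCAFile and disable
# anonymous authentication in a kubelet-config.json file, in place.
enable_kubelet_client_ca() {
  python3 -c '
import json, sys
path, ca_path = sys.argv[1], sys.argv[2]
with open(path) as f:
    cfg = json.load(f)
auth = cfg.setdefault("authentication", {})
# Point the Kubelet at the trusted CA bundle and reject anonymous requests.
auth.setdefault("x509", {})["clientCAFile"] = ca_path
auth.setdefault("anonymous", {})["enabled"] = False
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
' "$1" "$2"
}
# Usage (as root, on a worker node):
#   enable_kubelet_client_ca /etc/kubernetes/kubelet/kubelet-config.json /etc/kubernetes/pki/ca.crt
```

After running it, restart the Kubelet as described in the steps above so the updated configuration takes effect.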
References
- Oracle Cloud Infrastructure Documentation
- Overview of Kubernetes Engine (OKE)
- Managing Kubernetes Clusters
- Connecting to an Instance
- Oracle Cloud Infrastructure CLI Documentation
- compartment list
- cluster list
- node-pool list
- node-pool get
- Kubernetes Documentation
- kubelet
- Set Kubelet Parameters Via A Configuration File