End of Life Notice: For Trend Cloud One™ - Conformity customers, Conformity will reach its End of Sale on July 31st, 2025, and its End of Life on July 31st, 2026. The same capabilities, and much more, are available in Trend Vision One™ Cloud Risk Management. For details, please refer to Upgrade to Trend Vision One™

Enable Explicit Authorization

Trend Vision One™ provides continuous assurance that gives peace of mind for your cloud infrastructure, delivering over 1,400 automated best practice checks.

Risk Level: Medium (should be achieved)

Ensure that the --authorization-mode parameter is not set to "AlwaysAllow", so that explicit authorization checks are enforced for all requests to the Kubelet API, preventing unauthorized access and ensuring that only explicitly authorized requests are fulfilled.

Security

Setting the Kubelet authorization mode to "Webhook" (or any restrictive mode other than the default "AlwaysAllow") represents an important security measure because the Kubelet, by default, permits all authenticated requests, including potentially anonymous ones, without requiring explicit authorization from the API server. Enabling explicit authorization ensures that the Kubelet denies unauthorized requests and only processes those that have been explicitly validated, significantly reducing the attack surface and improving the security posture of the worker node.
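
Concretely, the setting in question is the "authorization" object in the Kubelet configuration file. A hardened configuration uses "Webhook" mode; the cache TTL values shown below are illustrative and match the ones used in the remediation example on this page:

```json
{
	"authorization": {
		"mode": "Webhook",
		"webhook": {
			"cacheAuthorizedTTL": "5m0s",
			"cacheUnauthorizedTTL": "30s"
		}
	}
}
```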


Audit

To determine if explicit authorization is enabled for your Kubelet servers, perform the following operations:

Using OCI Console

  1. Sign in to your Oracle Cloud Infrastructure (OCI) account.

  2. Navigate to Kubernetes Clusters (OKE) console available at https://cloud.oracle.com/containers/clusters.

  3. For Applied filters, choose an OCI compartment from the Compartment dropdown menu, to list the OCI Kubernetes Engine (OKE) clusters provisioned in the selected compartment.

  4. Click on the name (link) of the OCI Kubernetes Engine (OKE) cluster that you want to examine, listed in the Name column.

  5. Select the Node pools tab and click on the name (link) of the node pool that you want to examine.

  6. Select the Nodes tab and click on the name (link) of the node (instance) that you want to examine.

  7. Select the Details tab and choose Copy next to Public IP address, in the Instance access section, to get the public IP address of your OKE cluster node.

  8. Use your preferred method to open an SSH connection to the selected cluster node. For the public IP address, use the IP address copied in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  9. Once connected to your OKE cluster worker node, run the commands listed below to determine the authorization mode configured for your Kubelet server:

    1. Run the following command to determine if the Kubelet service is running:
      	sudo systemctl status kubelet
      	
    2. The output should return Active: active (running).
    3. Run the following command to find the kubelet-config.json file for your node:
      	sudo find / -name kubelet-config.json 2>/dev/null
      	
    4. The command output should return the location of the kubelet-config.json file, such as /etc/kubernetes/kubelet/kubelet-config.json.
    5. Run the following command to describe the contents of the kubelet-config.json file:
      	sudo more /etc/kubernetes/kubelet/kubelet-config.json
      	
    6. The command output should return the Kubelet config file contents:
      	{
      		"kind": "KubeletConfiguration",
      		"apiVersion": "kubelet.config.k8s.io/v1beta1",
      		"address": "0.0.0.0",
      		"authentication": {
      			"anonymous": {
      				"enabled": true
      			},
      			"webhook": {
      				"cacheTTL": "2m0s",
      				"enabled": true
      			},
      			"x509": {
      				"clientCAFile": "/etc/kubernetes/pki/ca.crt"
      			}
      		},
      		"authorization": {
      			"mode": "AlwaysAllow"
      		},
      		"clusterDomain": "cluster.local",
      		"hairpinMode": "hairpin-veth",
      		"readOnlyPort": 0,
      		"cgroupDriver": "cgroupfs",
      		"cgroupRoot": "/",
      		"featureGates": {
      			"RotateKubeletServerCertificate": true
      		},
      		"protectKernelDefaults": true,
      		"serializeImagePulls": false,
      		"serverTLSBootstrap": true,
      		"tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
      	}
      	

      Check the "mode" property value configured for the "authorization" object to determine the authorization mode configured for your Kubelet server. If "mode" is set to "AlwaysAllow", as shown in the example above, all requests are allowed and explicit authorization is not enabled for the Kubelet server.

  10. Repeat steps no. 6 - 9 for each worker node running within the selected node pool.

  11. Repeat steps no. 5 - 10 for each node pool created for the selected OKE cluster.
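
Paging through each node's configuration file by hand scales poorly. The inspection in step 9 can be scripted; the sketch below (Python, assuming the kubelet-config.json contents have been copied to the machine running the audit) flags a configuration whose authorization mode is "AlwaysAllow":

```python
import json

def explicit_authorization_enabled(config_text: str) -> bool:
    """Return True when the Kubelet config enforces explicit authorization,
    i.e. the authorization mode is anything other than "AlwaysAllow".

    When the "authorization" object is missing, the Kubelet default of
    "AlwaysAllow" is assumed."""
    config = json.loads(config_text)
    mode = config.get("authorization", {}).get("mode", "AlwaysAllow")
    return mode != "AlwaysAllow"

# Example: a config fragment like the one shown in step 9 above.
sample = '{"authorization": {"mode": "AlwaysAllow"}}'
print(explicit_authorization_enabled(sample))  # → False
```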

Using OCI CLI

  1. Run the iam compartment list command (Windows/macOS/Linux) with output query filters to list the ID of each compartment available in your Oracle Cloud Infrastructure (OCI) account:

    oci iam compartment list \
    	--all \
    	--include-root \
    	--query 'data[]."id"'
    
  2. The command output should return the requested OCI compartment identifiers (OCIDs):

    [
    	"ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.compartment.oc1..abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  3. Run the ce cluster list command (Windows/macOS/Linux) with the ID of the OCI compartment that you want to examine as the identifier parameter, to list the ID of each OCI Kubernetes Engine (OKE) cluster available in the selected OCI compartment:

    oci ce cluster list \
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--all \
    	--query 'data[]."id"'
    
  4. The command output should return the requested OKE cluster IDs:

    [
    	"ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.cluster.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  5. Run the ce node-pool list command (Windows/macOS/Linux) with the ID of the OKE cluster that you want to examine as the identifier parameter, to list the ID of each node pool created for your OKE cluster:

    oci ce node-pool list \
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--cluster-id 'ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--query 'data[]."id"'
    
  6. The command output should return the OKE node pool IDs:

    [
    	"ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.nodepool.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  7. Run the ce node-pool get command (Windows/macOS/Linux) to describe the public IP address of each worker node running within the selected OKE node pool:

    oci ce node-pool get \
    	--node-pool-id 'ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--query 'data.nodes[]."public-ip"'
    
  8. The command output should return the public IP address of each OKE cluster worker node (instance):

    [
    	"<public-ip-node-1>",
    	"<public-ip-node-2>",
    	"<public-ip-node-3>"
    ]
    
  9. Use your preferred method to open an SSH connection to your OKE cluster worker node. For the public IP address, use the IP address returned in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  10. Once connected to your OKE cluster worker node, run the commands listed below to determine the authorization mode configured for your Kubelet server:

    1. Run the following command to determine if the Kubelet service is running:
      	sudo systemctl status kubelet
      	
    2. The output should return Active: active (running).
    3. Run the following command to find the kubelet-config.json file for your node:
      	sudo find / -name kubelet-config.json 2>/dev/null
      	
    4. The command output should return the location of the kubelet-config.json file, such as /etc/kubernetes/kubelet/kubelet-config.json.
    5. Run the following command to describe the contents of the kubelet-config.json file:
      	sudo more /etc/kubernetes/kubelet/kubelet-config.json
      	
    6. The command output should return the Kubelet config file contents:
      	{
      		"kind": "KubeletConfiguration",
      		"apiVersion": "kubelet.config.k8s.io/v1beta1",
      		"address": "0.0.0.0",
      		"authentication": {
      			"anonymous": {
      				"enabled": true
      			},
      			"webhook": {
      				"cacheTTL": "2m0s",
      				"enabled": true
      			},
      			"x509": {
      				"clientCAFile": "/etc/kubernetes/pki/ca.crt"
      			}
      		},
      		"authorization": {
      			"mode": "AlwaysAllow"
      		},
      		"clusterDomain": "cluster.local",
      		"hairpinMode": "hairpin-veth",
      		"readOnlyPort": 0,
      		"cgroupDriver": "cgroupfs",
      		"cgroupRoot": "/",
      		"featureGates": {
      			"RotateKubeletServerCertificate": true
      		},
      		"protectKernelDefaults": true,
      		"serializeImagePulls": false,
      		"serverTLSBootstrap": true,
      		"tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
      	}
      	

      Check the "mode" property value configured for the "authorization" object to determine the authorization mode configured for your Kubelet server. If "mode" is set to "AlwaysAllow", as shown in the example above, all requests are allowed and explicit authorization is not enabled for the Kubelet server.

  11. Repeat steps no. 9 and 10 for each worker node running within the selected node pool.

  12. Repeat steps no. 7 - 11 for each node pool created for the selected OKE cluster.
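
To audit an entire node pool in one pass, the same check can be run over the configuration files gathered from each node. The sketch below is illustrative: the node names and config texts are assumed to have been collected beforehand, for example over SSH:

```python
import json

def noncompliant_nodes(configs: dict) -> list:
    """Given a mapping of node name -> kubelet-config.json text, return the
    names of nodes whose Kubelet authorization mode is still "AlwaysAllow"."""
    flagged = []
    for node, text in configs.items():
        mode = json.loads(text).get("authorization", {}).get("mode", "AlwaysAllow")
        if mode == "AlwaysAllow":
            flagged.append(node)
    return flagged

# Hypothetical node names; config texts as retrieved from each worker node.
configs = {
    "node-1": '{"authorization": {"mode": "Webhook"}}',
    "node-2": '{"authorization": {"mode": "AlwaysAllow"}}',
}
print(noncompliant_nodes(configs))  # → ['node-2']
```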

Remediation / Resolution

To ensure that explicit authorization is enabled for your Kubelet servers, perform the following operations:

Using OCI Console

  1. Sign in to your Oracle Cloud Infrastructure (OCI) account.

  2. Navigate to Kubernetes Clusters (OKE) console available at https://cloud.oracle.com/containers/clusters.

  3. For Applied filters, choose an OCI compartment from the Compartment dropdown menu, to list the OCI Kubernetes Engine (OKE) clusters provisioned in the selected compartment.

  4. Click on the name (link) of the OCI Kubernetes Engine (OKE) cluster that you want to configure, listed in the Name column.

  5. Select the Node pools tab and click on the name (link) of the node pool that you want to access.

  6. Select the Nodes tab and click on the name (link) of the node (instance) that you want to configure.

  7. Select the Details tab and choose Copy next to Public IP address, in the Instance access section, to get the public IP address of your OKE cluster node.

  8. Use your preferred method to open an SSH connection to the selected cluster node. For the public IP address, use the IP address copied in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  9. Once connected to your OKE cluster worker node, edit the kubelet-config.json file for your Kubelet server and set the "authorization" object as shown in the example below, i.e. "authorization": {"mode": "Webhook", "webhook": {"cacheAuthorizedTTL": "5m0s", "cacheUnauthorizedTTL": "30s"}}. This enables explicit authorization for your Kubelet server:

    {
    	"kind": "KubeletConfiguration",
    	"apiVersion": "kubelet.config.k8s.io/v1beta1",
    	"address": "0.0.0.0",
    	"authentication": {
    		"anonymous": {
    			"enabled": false
    		},
    		"webhook": {
    			"cacheTTL": "2m0s",
    			"enabled": true
    		},
    		"x509": {
    			"clientCAFile": "/etc/kubernetes/pki/ca.crt"
    		}
    	},
    	"authorization": {
    		"mode": "Webhook",
    		"webhook": {
    			"cacheAuthorizedTTL": "5m0s",
    			"cacheUnauthorizedTTL": "30s"
    		}
    	},
    	"clusterDomain": "cluster.local",
    	"hairpinMode": "hairpin-veth",
    	"readOnlyPort": 0,
    	"cgroupDriver": "cgroupfs",
    	"cgroupRoot": "/",
    	"featureGates": {
    		"RotateKubeletServerCertificate": true
    	},
    	"protectKernelDefaults": true,
    	"serializeImagePulls": false,
    	"serverTLSBootstrap": true,
    	"tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
    }
    
  10. Run the following commands to reload systemd and restart the Kubelet service, forcing it to read the updated configuration file:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    
  11. After the restart, run the following command to check the Kubelet service status in order to ensure it came back up successfully and loaded the new configuration:

    sudo systemctl status kubelet
    
  12. Repeat steps no. 6 - 11 for each worker node running within the selected node pool.

  13. Repeat steps no. 5 - 12 for each node pool deployed for the selected OKE cluster.
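
If you would rather script the edit from step 9 than make it by hand, the change can be applied programmatically. The sketch below (Python; the file path and TTL values are the illustrative ones from the example above, so treat this as a starting point and back up the file before running it with root privileges) rewrites the "authorization" object and disables anonymous authentication:

```python
import json

def harden_kubelet_config(path: str) -> None:
    """Set the Kubelet authorization mode to "Webhook" and disable anonymous
    authentication in the given kubelet-config.json file."""
    with open(path) as f:
        config = json.load(f)
    config["authorization"] = {
        "mode": "Webhook",
        "webhook": {"cacheAuthorizedTTL": "5m0s", "cacheUnauthorizedTTL": "30s"},
    }
    config.setdefault("authentication", {}).setdefault("anonymous", {})["enabled"] = False
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

# Example (run as root on the worker node; path as found in the audit steps):
# harden_kubelet_config("/etc/kubernetes/kubelet/kubelet-config.json")
```

As in steps 10 and 11, reload systemd and restart the Kubelet service afterwards so it picks up the rewritten file.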

Using OCI CLI

  1. Run the iam compartment list command (Windows/macOS/Linux) with output query filters to list the ID of each compartment available in your Oracle Cloud Infrastructure (OCI) account:

    oci iam compartment list \
    	--all \
    	--include-root \
    	--query 'data[]."id"'
    
  2. The command output should return the requested OCI compartment identifiers (OCIDs):

    [
    	"ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.compartment.oc1..abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  3. Run the ce cluster list command (Windows/macOS/Linux) with the ID of the OCI compartment that you want to examine as the identifier parameter, to list the ID of each OCI Kubernetes Engine (OKE) cluster available in the selected OCI compartment:

    oci ce cluster list \
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--all \
    	--query 'data[]."id"'
    
  4. The command output should return the requested OKE cluster IDs:

    [
    	"ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.cluster.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  5. Run the ce node-pool list command (Windows/macOS/Linux) with the ID of the OKE cluster that you want to examine as the identifier parameter, to list the ID of each node pool created for your OKE cluster:

    oci ce node-pool list \
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--cluster-id 'ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--query 'data[]."id"'
    
  6. The command output should return the OKE node pool IDs:

    [
    	"ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.nodepool.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  7. Run the ce node-pool get command (Windows/macOS/Linux) to describe the public IP address of each worker node running within the selected OKE node pool:

    oci ce node-pool get \
    	--node-pool-id 'ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd' \
    	--query 'data.nodes[]."public-ip"'
    
  8. The command output should return the public IP address of each OKE cluster worker node (instance):

    [
    	"<public-ip-node-1>",
    	"<public-ip-node-2>",
    	"<public-ip-node-3>"
    ]
    
  9. Use your preferred method to open an SSH connection to your OKE cluster worker node. For the public IP address, use the IP address returned in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  10. Once connected to your OKE cluster worker node, edit the kubelet-config.json file for your Kubelet server and set the "authorization" object as shown in the example below, i.e. "authorization": {"mode": "Webhook", "webhook": {"cacheAuthorizedTTL": "5m0s", "cacheUnauthorizedTTL": "30s"}}. This enables explicit authorization for your Kubelet server:

    {
    	"kind": "KubeletConfiguration",
    	"apiVersion": "kubelet.config.k8s.io/v1beta1",
    	"address": "0.0.0.0",
    	"authentication": {
    		"anonymous": {
    			"enabled": false
    		},
    		"webhook": {
    			"cacheTTL": "2m0s",
    			"enabled": true
    		},
    		"x509": {
    			"clientCAFile": "/etc/kubernetes/pki/ca.crt"
    		}
    	},
    	"authorization": {
    		"mode": "Webhook",
    		"webhook": {
    			"cacheAuthorizedTTL": "5m0s",
    			"cacheUnauthorizedTTL": "30s"
    		}
    	},
    	"clusterDomain": "cluster.local",
    	"hairpinMode": "hairpin-veth",
    	"readOnlyPort": 0,
    	"cgroupDriver": "cgroupfs",
    	"cgroupRoot": "/",
    	"featureGates": {
    		"RotateKubeletServerCertificate": true
    	},
    	"protectKernelDefaults": true,
    	"serializeImagePulls": false,
    	"serverTLSBootstrap": true,
    	"tlsCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"]
    }
    
  11. Run the following commands to reload systemd and restart the Kubelet service, forcing it to read the updated configuration file:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    
  12. After the restart, run the following command to check the Kubelet service status in order to ensure it came back up successfully and loaded the new configuration:

    sudo systemctl status kubelet
    
  13. Repeat steps no. 9 - 12 for each worker node running within the selected node pool.

  14. Repeat steps no. 7 - 13 for each node pool deployed for the selected OKE cluster.

Publication date Dec 1, 2025