Worker node configuration
This feature is only available when using private worker nodes (privateNodes enabled).
The following section provides security recommendations for components running on Kubernetes worker nodes. The assessment focuses on Kubelet configuration, security settings, and file system permissions.
For vCluster, the assessment verifies that each private node meets these requirements.
The control numbers used throughout this guide (4.1.1, 4.2.1, etc.) correspond directly to the official CIS Kubernetes Benchmark control numbers. This allows you to cross-reference the official CIS documentation and maintain consistency with standard security frameworks.
4.1 Worker node configuration files
4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive
Result: PASS
Run the audit command to verify the permissions on the service file. If they do not match the expected result, run the following command to set the appropriate permissions:
chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Audit:
Run the following command against each node:
stat -c permissions=%a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Verify that the permissions are 600 or more restrictive.
Expected results:
permissions has value 600, expected 600 or more restrictive
Returned value:
permissions=600
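If the audit fails on a node, the following is a minimal remediation sketch, assuming bash, GNU coreutils stat, and the kubeadm drop-in path shown above:
FILE=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
PERMS=$(stat -c %a "$FILE")
# "600 or more restrictive": owner has at most rw (6), and no group or other bits are set.
if [ "${PERMS: -2}" != "00" ] || [ "${PERMS%??}" -gt 6 ]; then
  chmod 600 "$FILE"
fi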
4.1.2 Ensure that the kubelet service file ownership is set to root:root
Result: PASS
Audit:
Run the following command against each node:
stat -c %U:%G /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Verify that the ownership is set to root:root.
Expected results:
root:root
Returned value:
root:root
4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive
Result: PASS
Connect to the vCluster and run the following command to extract the kube-proxy pod name.
KUBE_PROXY_POD=$(kubectl get pods -n kube-system -l k8s-app=kube-proxy -o jsonpath='{.items[0].metadata.name}')
Save the following file as custom.json for use with the debug container:
{
  "volumeMounts": [
    {
      "name": "kube-proxy",
      "mountPath": "/var/lib/kube-proxy"
    }
  ]
}
Audit:
When both serverTLSBootstrap is true and the RotateKubeletServerCertificate feature gate is enabled, the Kubelet requests its serving certificate from the certificates.k8s.io API instead of self-signing one. Approve the pending certificate signing requests (CSRs) with kubectl certificate approve <csr-name> so that the following debug command can function.
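If several kubelet serving CSRs are pending, they can be approved in one pass. A minimal sketch, assuming cluster-admin access to the vCluster; review the requests before approving them in bulk:
# List CSRs for the kubelet serving signer, then approve each one by name.
for csr in $(kubectl get csr \
    --field-selector spec.signerName=kubernetes.io/kubelet-serving \
    -o jsonpath='{.items[*].metadata.name}'); do
  kubectl certificate approve "$csr"
done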
Run the following command against the kube-proxy pod:
kubectl debug --custom custom.json -it $KUBE_PROXY_POD --image=busybox --target=kube-proxy --namespace kube-system --profile=general -q -- stat -L -c permissions=%a /var/lib/kube-proxy/kubeconfig.conf
Verify that the permissions are 600 or more restrictive.
Expected results:
permissions has value 600, expected 600 or more restrictive
Returned value:
permissions=600
4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root
Result: PASS
Connect to the vCluster and run the following command to extract the kube-proxy pod name.
KUBE_PROXY_POD=$(kubectl get pods -n kube-system -l k8s-app=kube-proxy -o jsonpath='{.items[0].metadata.name}')
Save the following file as custom.json for use with the debug container:
{
  "volumeMounts": [
    {
      "name": "kube-proxy",
      "mountPath": "/var/lib/kube-proxy"
    }
  ]
}
Audit:
When both serverTLSBootstrap is true and the RotateKubeletServerCertificate feature gate is enabled, the Kubelet requests its serving certificate from the certificates.k8s.io API instead of self-signing one. Approve the pending certificate signing requests (CSRs) with kubectl certificate approve <csr-name> so that the following debug command can function.
Run the following command against the kube-proxy pod:
kubectl debug --custom custom.json -it $KUBE_PROXY_POD --image=busybox --target=kube-proxy --namespace kube-system --profile=general -q -- stat -c %U:%G /var/lib/kube-proxy/kubeconfig.conf
Verify that the ownership is set to root:root.
Expected results:
root:root
Returned value:
root:root
4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive
Result: PASS
Audit:
Run the following command against each node:
stat -c permissions=%a /etc/kubernetes/kubelet.conf
Verify that the permissions are 600 or more restrictive.
Expected results:
permissions has value 600, expected 600 or more restrictive
Returned value:
permissions=600
4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root
Result: PASS
Audit:
Run the following command against each node:
stat -c %U:%G /etc/kubernetes/kubelet.conf
Verify that the ownership is set to root:root.
Expected results:
root:root
Returned value:
root:root
4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive
Result: PASS
Run the audit command to verify the permissions on the certificate authorities file. If they do not match the expected result, run the following command to set the appropriate permissions:
chmod 600 /etc/kubernetes/pki/ca.crt
Audit:
Run the following command against each node:
stat -c permissions=%a /etc/kubernetes/pki/ca.crt
Verify that the permissions are 600 or more restrictive.
Expected results:
permissions has value 600, expected 600 or more restrictive
Returned value:
permissions=600
4.1.8 Ensure that the client certificate authorities file ownership is set to root:root
Result: PASS
Audit:
Run the following command against each node:
stat -c %U:%G /etc/kubernetes/pki/ca.crt
Verify that the ownership is set to root:root.
Expected results:
root:root
Returned value:
root:root
4.1.9 Ensure that the kubelet --config configuration file has permissions set to 600 or more restrictive
Result: PASS
Run the audit command to verify the permissions on the kubelet configuration file. If they do not match the expected result, run the following command to set the appropriate permissions:
chmod 600 /var/lib/kubelet/config.yaml
Audit:
Run the following command against each node:
stat -c permissions=%a /var/lib/kubelet/config.yaml
Verify that the permissions are 600 or more restrictive.
Expected results:
permissions has value 600, expected 600 or more restrictive
Returned value:
permissions=600
4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root
Result: PASS
Audit:
Run the following command against each node:
stat -c %U:%G /var/lib/kubelet/config.yaml
Verify that the ownership is set to root:root.
Expected results:
root:root
Returned value:
root:root
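To spot-check every file covered in controls 4.1.1 through 4.1.10 on a single node in one pass, the following is a minimal sketch (GNU stat assumed; the kube-proxy kubeconfig is omitted because it lives inside the kube-proxy pod and is audited separately above):
# Print permissions and ownership for each audited worker node file that exists.
for f in \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
  /etc/kubernetes/kubelet.conf \
  /etc/kubernetes/pki/ca.crt \
  /var/lib/kubelet/config.yaml; do
  [ -e "$f" ] && stat -c "$f permissions=%a owner=%U:%G" "$f"
done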
4.2 Kubelet
4.2.1 Ensure that the --anonymous-auth argument is set to false
Result: PASS
Audit:
If using a Kubelet configuration file, check that there is an entry that sets authentication: anonymous: enabled to false.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that the corresponding entry is set to false in the Kubelet config file.
Expected results:
...
authentication:
anonymous:
enabled: false
...
Returned value:
...
authentication:
anonymous:
enabled: false
...
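If yq is installed on the node, the specific field can be extracted instead of reading the whole file. A sketch assuming yq v4 syntax; the same pattern applies to the remaining kubelet configuration checks below:
# Should print "false" when anonymous authentication is disabled.
yq '.authentication.anonymous.enabled' /var/lib/kubelet/config.yaml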
4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow
Result: PASS
Audit:
If using a Kubelet configuration file, check that there is an entry that sets authorization: mode to something other than AlwaysAllow.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that the corresponding entry is set to Webhook in the Kubelet config file.
Expected results:
...
authorization:
mode: Webhook
...
Returned value:
...
authorization:
mode: Webhook
...
4.2.3 Ensure that the --client-ca-file argument is set as appropriate
Result: PASS
Audit:
If using a Kubelet configuration file, check that there is an entry that sets authentication: x509: clientCAFile to the location of the client certificate authority file.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that the corresponding entry is set to the appropriate file in the Kubelet config file.
Expected results:
...
authentication:
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
...
Returned value:
...
authentication:
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
...
4.2.4 Ensure that the --read-only-port argument is set to 0
Result: PASS
Audit:
If using a Kubelet configuration file, check that it does not set readOnlyPort to any value other than 0.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that readOnlyPort either does not exist or is set to 0.
Expected results:
'readOnlyPort' does not exist in the config
Returned value:
'readOnlyPort' does not exist in the config
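A quick grep confirms whether the field is present at all; a sketch assuming the config file path used above (the KubeletConfiguration default for readOnlyPort is 0, i.e. disabled):
# Prints the line if readOnlyPort is set; otherwise reports that the default of 0 applies.
grep -E '^readOnlyPort:' /var/lib/kubelet/config.yaml \
  || echo "readOnlyPort not set; default of 0 (disabled) applies"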
4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0
Result: PASS
Audit:
If using a Kubelet configuration file, check that it does not set streamingConnectionIdleTimeout to 0.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that streamingConnectionIdleTimeout is not set to 0.
Expected results:
...
streamingConnectionIdleTimeout: 4h0m0s
...
Returned value:
...
streamingConnectionIdleTimeout: 4h0m0s
...
4.2.6 Ensure that the --make-iptables-util-chains argument is set to true
Result: PASS
Audit:
If using a Kubelet configuration file, check that it does not set makeIPTablesUtilChains to false.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that makeIPTablesUtilChains is not set to false.
Expected results:
...
makeIPTablesUtilChains: true
...
Returned value:
...
makeIPTablesUtilChains: true
...
4.2.7 Ensure that the --hostname-override argument is not set
Result: PASS
Audit:
Run the following command against each node:
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Verify that --hostname-override argument does not exist.
Expected results:
--hostname-override argument does not exist.
Returned value:
--hostname-override argument does not exist.
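The drop-in file only shows flags written by kubeadm; to confirm the running kubelet process itself was not started with the flag, a hedged sketch (assumes pgrep is available and /proc is readable):
# Prints nothing and exits non-zero when --hostname-override is absent from the kubelet command line.
tr '\0' ' ' < /proc/$(pgrep -o kubelet)/cmdline | grep -- '--hostname-override'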
4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture
Result: PASS
Audit:
If using a Kubelet configuration file, check that there is an entry to set eventRecordQPS: to an appropriate level.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that eventRecordQPS is set to an appropriate level for the cluster.
Expected results:
...
eventRecordQPS: 50
...
Returned value:
...
eventRecordQPS: 50
...
4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate
Result: NOT APPLICABLE
The --tls-cert-file and --tls-private-key-file flags are mutually exclusive with the serverTLSBootstrap: true setting (also recommended in Control 4.2.11). These flags are only required when you generate and distribute your own certificates to the nodes, whereas serverTLSBootstrap: true uses the CSR certificate bootstrapping method, which both bootstraps and automatically rotates the serving certificates.
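For reference only: if you did generate and distribute your own certificates, the corresponding KubeletConfiguration fields would look roughly as follows (paths are illustrative and not part of this setup):
tlsCertFile: /var/lib/kubelet/pki/kubelet.crt        # illustrative path to the node's serving certificate
tlsPrivateKeyFile: /var/lib/kubelet/pki/kubelet.key  # illustrative path to the matching private key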
4.2.10 Ensure that the --rotate-certificates argument is not set to false
Result: PASS
Audit:
If using a Kubelet configuration file, check that there is an entry to set rotateCertificates: to true.
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that rotateCertificates is set to true.
Expected results:
...
rotateCertificates: true
...
Returned value:
...
rotateCertificates: true
...
4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true
Result: PASS
This check can be ignored if serverTLSBootstrap is true in the kubelet config file.
Pass the following configuration while creating the vCluster:
vcluster.yaml:
privateNodes:
  enabled: true
  kubelet:
    config:
      serverTLSBootstrap: true
      featureGates:
        RotateKubeletServerCertificate: true

Create the vCluster using the above values file:
vcluster create my-vcluster -f vcluster.yaml --connect=false
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that serverTLSBootstrap is set to true.
Expected results:
...
serverTLSBootstrap: true
...
Returned value:
...
serverTLSBootstrap: true
...
When both serverTLSBootstrap is true and the RotateKubeletServerCertificate feature gate is enabled, the Kubelet requests its serving certificate from the certificates.k8s.io API instead of self-signing one. An approver must then approve the resulting certificate signing requests (CSRs).
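After the CSR is approved, the kubelet writes the signed serving certificate to its PKI directory. A minimal verification sketch; the path below is the usual kubeadm-style location and may differ on your nodes:
# The symlink points at the currently active serving certificate.
ls -l /var/lib/kubelet/pki/kubelet-server-current.pem
# Show who issued the certificate and when it expires.
openssl x509 -in /var/lib/kubelet/pki/kubelet-server-current.pem -noout -issuer -enddate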
4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers
Result: PASS
Pass the following configuration while creating the vCluster:
vcluster.yaml:
privateNodes:
  enabled: true
  kubelet:
    config:
      tlsCipherSuites:
        - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
        - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
        - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
        - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"

Create the vCluster using the above values file:
vcluster create my-vcluster -f vcluster.yaml --connect=false
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that the tlsCipherSuites field is set to one or more of the cipher suites listed below:
TLS_AES_128_GCM_SHA256
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
TLS_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_GCM_SHA384
Expected results:
...
tlsCipherSuites:
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
...
Returned value:
...
tlsCipherSuites:
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
...
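To verify the ciphers the running kubelet actually negotiates, rather than only the config file, one option is to probe its HTTPS port from a host that can reach the node. A sketch assuming nmap is available and the default kubelet port 10250:
# Replace NODE_IP with the worker node's address; the script lists the accepted cipher suites per TLS version.
nmap --script ssl-enum-ciphers -p 10250 NODE_IP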
4.2.13 Ensure that a limit is set on pod PIDs
Result: PASS
Pass the following configuration while creating the vCluster:
vcluster.yaml:
privateNodes:
  enabled: true
  kubelet:
    config:
      podPidsLimit: 1000000

Create the vCluster using the above values file:
vcluster create my-vcluster -f vcluster.yaml --connect=false
Run the following command against each node:
cat /var/lib/kubelet/config.yaml
Verify that podPidsLimit (the config file equivalent of --pod-max-pids) is set correctly.
Expected results:
...
podPidsLimit: 1000000
...
Returned value:
...
podPidsLimit: 1000000
...
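The live kubelet configuration can also be read through the API server's node proxy to confirm the limit took effect. A sketch with an illustrative node name:
# Requires access to the nodes/proxy subresource inside the vCluster.
kubectl get --raw "/api/v1/nodes/my-node/proxy/configz" | grep -o '"podPidsLimit":[0-9]*'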
4.3 kube-proxy
4.3.1 Ensure that the kube-proxy metrics service is bound to localhost
Result: PASS
Audit:
kube-proxy runs as a pod inside the vCluster and its configuration is backed by a ConfigMap named "kube-proxy" in the kube-system namespace.
Run the following command inside the virtual cluster:
kubectl get cm kube-proxy -n kube-system -o jsonpath='{.data.config\.conf}' | grep metricsBindAddress
Verify that metricsBindAddress is left at its default (an empty value, which binds the metrics endpoint to 127.0.0.1:10249) or is explicitly set to a localhost address.
Expected results:
metricsBindAddress: ""
Returned value:
metricsBindAddress: ""
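To confirm the endpoint is in fact only reachable on localhost, it can be queried from a debug container attached to the kube-proxy pod. A sketch assuming the default metrics port 10249:
KUBE_PROXY_POD=$(kubectl get pods -n kube-system -l k8s-app=kube-proxy -o jsonpath='{.items[0].metadata.name}')
# The request succeeds over 127.0.0.1 because the debug container shares the pod's network namespace.
kubectl debug -it $KUBE_PROXY_POD --image=busybox --target=kube-proxy -n kube-system -q -- \
  wget -qO- http://127.0.0.1:10249/metrics | head -n 5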