
Control plane security configuration

Limited vCluster Tenancy Configuration Support

This feature is only available when using the following worker node types:

  • Private Nodes

    This section covers security recommendations for the direct configuration of Kubernetes control plane processes. The assessment focuses on API server configuration and security settings, controller manager security parameters, scheduler security configurations, and general control plane security practices.

    Assessment focus for vCluster involves checking the extraArgs configurations in your vcluster.yaml file and ensuring proper security parameters are set for the virtualized API server, controller manager, and scheduler. Since vCluster virtualizes the control plane components, verification requires examining container processes rather than traditional system-level checks.
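
    The audits below share one pattern: read the control plane process command line inside the syncer container, then check a flag's value. As a minimal sketch (the `flag_value` helper and the sample command line are illustrative, not part of vCluster), the extraction step can be done in plain shell:

```shell
# flag_value: print the value of a --flag=value argument from a process command line.
# Illustrative helper only; assumes flags use the --name=value form shown in this guide.
flag_value() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^$2=//p"
}

cmdline="/binaries/kube-apiserver --anonymous-auth=false --profiling=false"
flag_value "$cmdline" "--anonymous-auth"   # prints: false
```

    In practice the command line comes from `kubectl exec ... ps -ef | grep kube-apiserver`, as shown in the per-control audits below.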

    Control numbering

    The control numbers used throughout this guide (1.1.1, 1.2.1, etc.) correlate directly to the official CIS Kubernetes Benchmark control numbers. This allows you to cross-reference with the official CIS documentation and maintain consistency with standard security frameworks.

    note

    For auditing each control, create the vCluster using default values as shown below, unless specified otherwise.

    vcluster create my-vcluster --connect=false

    1.1 Master node configuration files​

    1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/kube-apiserver.yaml in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file permissions check defined in this control is not applicable to vCluster.

    1.1.2 Ensure that the API server pod specification file ownership is set to root:root​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/kube-apiserver.yaml in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file ownership check defined in this control is not applicable to vCluster.

    1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/kube-controller-manager.yaml in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file permissions check defined in this control is not applicable to vCluster.

    1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/kube-controller-manager.yaml in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file ownership check defined in this control is not applicable to vCluster.

    1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/kube-scheduler.yaml in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file permissions check defined in this control is not applicable to vCluster.

    1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/kube-scheduler.yaml in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file ownership check defined in this control is not applicable to vCluster.

    1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/etcd.yaml in kubeadm-based clusters. When embedded etcd is used, etcd runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file permissions check defined in this control is not applicable to vCluster.

    1.1.8 Ensure that the etcd pod specification file ownership is set to root:root​

    Result: NOT APPLICABLE

    vCluster does not rely on static pod manifests stored on the host, such as /etc/kubernetes/manifests/etcd.yaml in kubeadm-based clusters. When embedded etcd is used, etcd runs as a separate binary inside the same container as the syncer. Because the architecture does not use the static pod mechanism, the file ownership check defined in this control is not applicable to vCluster.

    1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive​

    Result: NOT APPLICABLE

    vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster's CNI plugin. As a result, there are no CNI configuration files such as /etc/cni/net.d/*.conf present within the vCluster container. You should evaluate this control on the underlying host cluster as it is not applicable in vCluster environments.

    1.1.10 Ensure that the Container Network Interface file ownership is set to root:root​

    Result: NOT APPLICABLE

    vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster's CNI plugin. As a result, there are no CNI configuration files such as /etc/cni/net.d/*.conf present within the vCluster container. You should evaluate this control on the underlying host cluster as it is not applicable in vCluster environments.

    1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive​

    Result: PASS

    1. Get the etcd data directory, passed as an argument to --data-dir, from the following command:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd
    2. Run the audit command to verify the permissions on the data directory. If they do not match the expected result, run the following command to set the appropriate permissions:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- chmod 700 /data/etcd

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/etcd

    Verify that the etcd data directory permissions are set to 700 or more restrictive.

    Expected results:

    permissions has value 700, expected 700 or more restrictive

    Returned value:

    permissions=700
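
    "700 or more restrictive" means the mode may set no permission bit beyond those in 0700. As a small sketch of that comparison (the `is_at_most` helper name is illustrative, not part of the audit tooling), a bitmask check in shell:

```shell
# is_at_most: succeed when octal mode $1 sets no bit outside octal mode $2.
# "700 or more restrictive" then means: is_at_most <mode> 700 succeeds.
is_at_most() {
  [ $(( 0$1 & ~0$2 & 0777 )) -eq 0 ]
}

is_at_most 700 700 && echo "700 ok"
is_at_most 600 700 && echo "600 ok"
is_at_most 755 700 || echo "755 too permissive"
```

    The same check applies to the later file-permission controls by swapping in the 600 baseline.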

    1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd​

    Result: NOT APPLICABLE

    The control recommends that the etcd data directory be owned by the etcd user and group (etcd:etcd) to follow least privilege principles. However, in vCluster, etcd is embedded and runs as root within the syncer container. There is no separate etcd user present in the container. The directory ownership check defined in this control is not applicable in the context of vCluster.

    1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/admin.conf

    Verify that the admin.conf file permissions are 600 or more restrictive.

    Expected results:

    permissions has value 600, expected 600 or more restrictive

    Returned value:

    permissions=600

    1.1.14 Ensure that the admin.conf file ownership is set to root:root​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/admin.conf

    Verify that the admin.conf file ownership is set to root:root.

    Expected results:

    root:root

    Returned value:

    root:root

    1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive​

    Result: PASS

    Get the scheduler kubeconfig file, passed as an argument to --kubeconfig, from the following command:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/scheduler.conf

    Verify that the scheduler kubeconfig file permissions are set to 600 or more restrictive.

    Expected results:

    permissions has value 600, expected 600 or more restrictive

    Returned value:

    permissions=600

    1.1.16 Ensure that the scheduler.conf file ownership is set to root:root​

    Result: PASS

    Get the scheduler kubeconfig file, passed as an argument to --kubeconfig, from the following command:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/scheduler.conf

    Verify that the scheduler kubeconfig file ownership is set to root:root.

    Expected results:

    root:root

    Returned value:

    root:root

    1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive​

    Result: PASS

    Get the controller-manager kubeconfig file, passed as an argument to --kubeconfig, from the following command:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/controller-manager.conf

    Verify that the controller-manager kubeconfig file permissions are set to 600 or more restrictive.

    Expected results:

    permissions has value 600, expected 600 or more restrictive

    Returned value:

    permissions=600

    1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root​

    Result: PASS

    Get the controller-manager kubeconfig file, passed as an argument to --kubeconfig, from the following command:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/controller-manager.conf

    Verify that the controller-manager kubeconfig file ownership is set to root:root.

    Expected results:

    root:root

    Returned value:

    root:root

    1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -not -user root -o -not -group root | wc -l | grep -q '^0$' && echo 'All files owned by root' || echo 'Some files not owned by root'"

    Verify that the ownership of all files and directories in this hierarchy is set to root:root.

    Expected results:

    All files owned by root

    Returned value:

    All files owned by root

    1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive​

    Result: PASS

    Run the audit command to verify the permissions on the certificate files. If they do not match the expected result, run the following command to set the appropriate permissions:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec chmod 600 {} \;"

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec stat -c permissions=%a {} \;"

    Verify that the permissions on all the certificate files are 600 or more restrictive.

    Expected results:

    permissions on all the certificate files are 600 or more restrictive

    Returned value:

    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
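
    The stat output above can be screened without cluster access. As an illustrative filter (the `check_modes` function name is an assumption, not part of vCluster), flag any mode that grants group or other access:

```shell
# check_modes: read lines like "permissions=600" and report any mode
# that grants group or other access (i.e. looser than 600).
check_modes() {
  while IFS='=' read -r _ mode; do
    [ $(( 0$mode & 0077 )) -ne 0 ] && echo "too permissive: $mode"
  done
}

printf 'permissions=600\npermissions=644\n' | check_modes   # prints: too permissive: 644
```

    Pipe the audit command's output into the filter; no output means every certificate passes.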

    1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600​

    Result: PASS

    Run the audit command to verify the permissions on the key files. If they do not match the expected result, run the following command to set the appropriate permissions:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec chmod 600 {} \;"

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec stat -c permissions=%a {} \;"

    Verify that the permissions on all the key files are 600 or more restrictive.

    Expected results:

    permissions on all the key files are 600 or more restrictive

    Returned value:

    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600
    permissions=600

    1.2 API Server​

    1.2.1 Ensure that the --anonymous-auth argument is set to false​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --anonymous-auth=false
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --anonymous-auth argument is set to false.

    Expected results:

    '--anonymous-auth' is equal to 'false'

    Returned value:

    41 root      0:07 /binaries/kube-apiserver --service-cluster-ip-range=10.96.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --profiling=false --advertise-address=127.0.0.1 --endpoint-reconciler-type=none --anonymous-auth=false

    1.2.2 Ensure that the --token-auth-file parameter is not set​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --token-auth-file parameter is not set.

    Expected results:

    '--token-auth-file' is not present

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100
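
    Absence checks like this one can be scripted against the returned command line. As a hedged sketch (the `flag_present` helper is illustrative; it assumes the --name or --name=value forms shown in this guide), a presence test in shell:

```shell
# flag_present: succeed when --flag appears (bare or with =value) in a command line.
flag_present() {
  printf '%s\n' "$1" | tr ' ' '\n' | grep -Eq "^$2(=.*)?$"
}

cmdline="/binaries/kube-apiserver --profiling=false --secure-port=6443"
flag_present "$cmdline" "--token-auth-file" || echo "--token-auth-file is not set"
```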

    1.2.3 Ensure that the --DenyServiceExternalIPs is set​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --enable-admission-plugins=DenyServiceExternalIPs
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the DenyServiceExternalIPs argument exists as a string value in --enable-admission-plugins.

    Expected results:

    'DenyServiceExternalIPs' argument exists as a string value in the --enable-admission-plugins list.

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100

    1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --kubelet-client-certificate and --kubelet-client-key arguments exist and they are set as appropriate.

    Expected results:

    '--kubelet-client-certificate' is present AND '--kubelet-client-key' is present

    Returned value:

    12 root      0:32 /binaries/kube-apiserver --service-cluster-ip-range=10.128.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://my-vcluster-etcd:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --profiling=false --advertise-address=127.0.0.1 --endpoint-reconciler-type=none --kubelet-client-certificate=/data/pki/apiserver-kubelet-client.crt --kubelet-client-key=/data/pki/apiserver-kubelet-client.key --endpoint-reconciler-type=none --egress-selector-config-file=/data/konnectivity/egress.yaml --admission-control-config-file=/etc/kubernetes/admission-control.yaml --anonymous-auth=false --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,EventRateLimit,NodeRestriction --encryption-provider-config=/etc/encryption/encryption-config.yaml --request-timeout=300s --service-account-lookup=true 
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

    1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "pgrep -f 'kube-apiserver' | head -1 | xargs -I {} cat /proc/{}/cmdline | tr '\0' ' '"

    Verify that the --kubelet-certificate-authority argument exists and is set as appropriate.

    Expected results:

    '--kubelet-certificate-authority' is present

    Returned value:

    /binaries/kube-apiserver --service-cluster-ip-range=10.128.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --profiling=false --advertise-address=127.0.0.1 --endpoint-reconciler-type=none --kubelet-client-certificate=/data/pki/apiserver-kubelet-client.crt --kubelet-client-key=/data/pki/apiserver-kubelet-client.key --endpoint-reconciler-type=none --egress-selector-config-file=/data/konnectivity/egress.yaml --admission-control-config-file=/etc/kubernetes/admission-control.yaml --anonymous-auth=false --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,EventRateLimit,NodeRestriction --encryption-provider-config=/etc/encryption/encryption-config.yaml --feature-gates=RotateKubeletServerCertificate=true --request-timeout=300s --service-account-lookup=true --kubelet-certificate-authority=/data/pki/ca.crt 
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

    1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --authorization-mode argument exists and is not set to AlwaysAllow.

    Expected results:

    'AlwaysAllow' argument does not exist as a string value in the --authorization-mode list.

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100

    1.2.7 Ensure that the --authorization-mode argument includes Node​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --authorization-mode=Node
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --authorization-mode argument exists and is set to a value that includes Node.

    Expected results:

    'Node' argument exists as a string value in the --authorization-mode list.

    Returned value:

    47 root      0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node

    1.2.8 Ensure that the --authorization-mode argument includes RBAC​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --authorization-mode argument exists and is set to a value that includes RBAC.

    Expected results:

    'RBAC' argument exists as a string value in the --authorization-mode list.

    Returned value:

    47 root      0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node

    1.2.9 Ensure that the admission control plugin EventRateLimit is set​

    Result: PASS

    1. Follow the Kubernetes documentation and set the desired limits in a configuration file. Create a config map in the vCluster namespace that contains the configuration file:

      admission-control.yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: admission-control
        namespace: vcluster-my-vcluster
      data:
        admission-control.yaml: |
          apiVersion: apiserver.config.k8s.io/v1
          kind: AdmissionConfiguration
          plugins:
            - name: EventRateLimit
              configuration:
                apiVersion: eventratelimit.admission.k8s.io/v1alpha1
                kind: Configuration
                limits:
                  - type: Server
                    qps: 50
                    burst: 100
    2. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --enable-admission-plugins=EventRateLimit
                - --admission-control-config-file=/etc/kubernetes/admission-control.yaml
        statefulSet:
          persistence:
            addVolumes:
              - name: admission-control
                configMap:
                  name: admission-control
            addVolumeMounts:
              - name: admission-control
                mountPath: /etc/kubernetes
    3. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    4. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --enable-admission-plugins argument is set to a value that includes EventRateLimit.

    Expected results:

    'EventRateLimit' argument exists as a string value in the --enable-admission-plugins list.

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml
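The membership check for admission plugins can be scripted the same way. A sketch with a shortened sample command line (capture the real one with the `kubectl exec` command from step 4):

```shell
# Shortened sample of the audited command line; capture the real one with the
# kubectl exec command from the audit step above.
CMDLINE='/binaries/kube-apiserver --profiling=false --enable-admission-plugins=EventRateLimit,NodeRestriction'

# Pull out the comma-separated plugin list and test for membership.
plugins=$(printf '%s\n' "$CMDLINE" | tr ' ' '\n' \
  | grep -- '^--enable-admission-plugins=' | cut -d= -f2-)

case ",$plugins," in
  *,EventRateLimit,*) echo "PASS: EventRateLimit is enabled" ;;
  *)                  echo "FAIL: enabled plugins are '$plugins'" ;;
esac
```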

    1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that if the --enable-admission-plugins argument is set, its value does not include AlwaysAdmit.

    Expected results:

    'AlwaysAdmit' argument does not exist as a string value in the --enable-admission-plugins list.

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml

    1.2.11 Ensure that the admission control plugin AlwaysPullImages is set​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --enable-admission-plugins=AlwaysPullImages
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --enable-admission-plugins argument is set to a value that includes AlwaysPullImages.

    Expected results:

    'AlwaysPullImages' argument exists as a string value in the --enable-admission-plugins list.

    Returned value:

    45 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages

    1.2.12 Ensure that the admission control plugin ServiceAccount is set​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --disable-admission-plugins argument is set to a value that does not include ServiceAccount.

    Expected results:

    --disable-admission-plugins is not set.

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
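This control is a negative check: it passes when `--disable-admission-plugins` is absent entirely, or present without `ServiceAccount`. A sketch covering both cases, using a shortened sample command line:

```shell
# Shortened sample; the default vCluster command line does not set
# --disable-admission-plugins at all, which also satisfies this control.
CMDLINE='/binaries/kube-apiserver --profiling=false --secure-port=6443'

# Empty when the flag is absent; otherwise the comma-separated disable list.
disabled=$(printf '%s\n' "$CMDLINE" | tr ' ' '\n' \
  | grep -- '^--disable-admission-plugins=' | cut -d= -f2-)

case ",$disabled," in
  *,ServiceAccount,*) echo "FAIL: ServiceAccount admission plugin is disabled" ;;
  *)                  echo "PASS: ServiceAccount admission plugin is not disabled" ;;
esac
```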

    1.2.13 Ensure that the admission control plugin NamespaceLifecycle is set​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --disable-admission-plugins argument is set to a value that does not include NamespaceLifecycle.

    Expected results:

    --disable-admission-plugins is not set.

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

    1.2.14 Ensure that the admission control plugin NodeRestriction is set​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --enable-admission-plugins=NodeRestriction
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --enable-admission-plugins argument is set to a value that includes NodeRestriction.

    Expected results:

    'NodeRestriction' argument exists as a string value in the --enable-admission-plugins list.

    Returned value:

    44 root      0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=NodeRestriction

    1.2.15 Ensure that the --profiling argument is set to false​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --profiling argument is set to false.

    Expected results:

    '--profiling' is equal to 'false'

    Returned value:

    45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
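Since this control requires an exact value, a simple whole-argument match is enough. A sketch with a shortened sample command line:

```shell
# Shortened sample; capture the real command line with the kubectl exec
# command from the audit step above.
CMDLINE='/binaries/kube-apiserver --secure-port=6443 --profiling=false'

# grep -x matches the whole argument, so --profiling=falsely would not pass.
if printf '%s\n' "$CMDLINE" | tr ' ' '\n' | grep -qx -- '--profiling=false'; then
  echo "PASS: --profiling=false"
else
  echo "FAIL: --profiling is not set to false"
fi
```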

    1.2.16 Ensure that the --audit-log-path argument is set​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --audit-log-path=/var/log/audit.log
        statefulSet:
          persistence:
            addVolumes:
              - name: audit-log
                emptyDir: {}
            addVolumeMounts:
              - name: audit-log
                mountPath: /var/log
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --audit-log-path argument is set as appropriate.

    Expected results:

    '--audit-log-path' is present

    Returned value:

    45 root      0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log

    1.2.17 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --audit-log-path=/var/log/audit.log
                - --audit-log-maxage=30
        statefulSet:
          persistence:
            addVolumes:
              - name: audit-log
                emptyDir: {}
            addVolumeMounts:
              - name: audit-log
                mountPath: /var/log
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --audit-log-maxage argument is set to 30 or as appropriate.

    Expected results:

    '--audit-log-maxage' is greater or equal to 30

    Returned value:

    45 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxage=30
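Because the expected result is "30 or greater" rather than an exact value, the check needs a numeric comparison. A sketch with a shortened sample command line:

```shell
# Shortened sample; the numeric comparison is the point here, since the CIS
# check is "30 or greater", not exact equality.
CMDLINE='/binaries/kube-apiserver --audit-log-path=/var/log/audit.log --audit-log-maxage=30'

maxage=$(printf '%s\n' "$CMDLINE" | tr ' ' '\n' \
  | grep -- '^--audit-log-maxage=' | cut -d= -f2-)

# Fail both when the flag is missing and when the value is below 30.
if [ -n "$maxage" ] && [ "$maxage" -ge 30 ]; then
  echo "PASS: --audit-log-maxage=$maxage"
else
  echo "FAIL: --audit-log-maxage is '$maxage'"
fi
```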

    1.2.18 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --audit-log-path=/var/log/audit.log
                - --audit-log-maxbackup=10
        statefulSet:
          persistence:
            addVolumes:
              - name: audit-log
                emptyDir: {}
            addVolumeMounts:
              - name: audit-log
                mountPath: /var/log
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --audit-log-maxbackup argument is set to 10 or as appropriate.

    Expected results:

    '--audit-log-maxbackup' is greater or equal to 10

    Returned value:

    44 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxbackup=10

    1.2.19 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --audit-log-path=/var/log/audit.log
                - --audit-log-maxsize=100
        statefulSet:
          persistence:
            addVolumes:
              - name: audit-log
                emptyDir: {}
            addVolumeMounts:
              - name: audit-log
                mountPath: /var/log
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --audit-log-maxsize argument is set to 100 or as appropriate.

    Expected results:

    '--audit-log-maxsize' is greater or equal to 100

    Returned value:

    43 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxsize=100

    1.2.20 Ensure that the --request-timeout argument is set as appropriate​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --request-timeout=300s
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --request-timeout argument is either not set or set to an appropriate value.

    Expected results:

    '--request-timeout' is set to 300s

    Returned value:

    43 root      0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --request-timeout=300s

    1.2.21 Ensure that the --service-account-lookup argument is set to true​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --service-account-lookup=true
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that if the --service-account-lookup argument exists it is set to true.

    Expected results:

    '--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'

    Returned value:

    43 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --service-account-lookup=true
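This control has a two-branch pass condition: the flag may be absent (the default is true) or explicitly set to true. A sketch with a shortened sample command line:

```shell
# Shortened sample; this control passes both when the flag is absent (the
# kube-apiserver default is true) and when it is explicitly set to true.
CMDLINE='/binaries/kube-apiserver --secure-port=6443 --service-account-lookup=true'

lookup=$(printf '%s\n' "$CMDLINE" | tr ' ' '\n' \
  | grep -- '^--service-account-lookup=' | cut -d= -f2-)

if [ -z "$lookup" ] || [ "$lookup" = "true" ]; then
  echo "PASS: --service-account-lookup is absent or true"
else
  echo "FAIL: --service-account-lookup=$lookup"
fi
```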

    1.2.22 Ensure that the --service-account-key-file argument is set as appropriate​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --service-account-key-file argument exists and is set as appropriate.

    Expected results:

    '--service-account-key-file' argument exists and is set appropriately

    Returned value:

    45 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

    1.2.23 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
        backingStore:
          etcd:
            embedded:
              enabled: true
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --etcd-certfile and --etcd-keyfile arguments exist and they are set as appropriate.

    Expected results:

    '--etcd-certfile' is present AND '--etcd-keyfile' is present

    Returned value:

    47 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

    1.2.24 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --tls-cert-file and --tls-private-key-file arguments exist and they are set as appropriate.

    Expected results:

    '--tls-cert-file' is present AND '--tls-private-key-file' is present

    Returned value:

    45 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
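Presence checks like this one need both flags to exist, regardless of their values. A sketch with a shortened sample command line:

```shell
# Shortened sample; both flags must be present for this control to pass.
CMDLINE='/binaries/kube-apiserver --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key'

# True when the given --flag= appears anywhere on the command line.
has_flag() {
  printf '%s\n' "$1" | tr ' ' '\n' | grep -q -- "^$2="
}

if has_flag "$CMDLINE" --tls-cert-file && has_flag "$CMDLINE" --tls-private-key-file; then
  echo "PASS: TLS serving certificate and key are configured"
else
  echo "FAIL: one or both TLS flags are missing"
fi
```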

    1.2.25 Ensure that the --client-ca-file argument is set as appropriate​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

    Verify that the --client-ca-file argument exists and it is set as appropriate.

    Expected results:

    '--client-ca-file' is present

    Returned value:

    45 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

    1.2.26 Ensure that the --etcd-cafile argument is set as appropriate​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
        backingStore:
          etcd:
            embedded:
              enabled: true
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --etcd-cafile argument exists and it is set as appropriate.

    Expected results:

    '--etcd-cafile' is present

    Returned value:

    47 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

    1.2.27 Ensure that the --encryption-provider-config argument is set as appropriate​

    Result: PASS

    1. Follow the Kubernetes documentation to configure an EncryptionConfiguration file. Generate a 32-byte key using the following command:

      head -c 32 /dev/urandom | base64
    2. Create an encryption configuration file with the base64-encoded key created previously:

      encryption-config.yaml
      apiVersion: apiserver.config.k8s.io/v1
      kind: EncryptionConfiguration
      resources:
        - resources:
            - secrets
          providers:
            - aescbc:
                keys:
                  - name: key1
                    secret: <base64-encoded-32-byte-key>
            - identity: {}
    3. Create a secret in the vCluster namespace from the configuration file:

      kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster
    4. Create the vCluster, referencing the secret as follows:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --encryption-provider-config=/etc/encryption/encryption-config.yaml
        statefulSet:
          persistence:
            addVolumes:
              - name: encryption-config
                secret:
                  secretName: encryption-config
            addVolumeMounts:
              - name: encryption-config
                mountPath: /etc/encryption
                readOnly: true
    5. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    6. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

      Verify that the --encryption-provider-config argument is set to an EncryptionConfiguration file. Additionally, ensure that the file covers all desired resources, especially secrets.

    Expected results:

    '--encryption-provider-config' is present

    Returned value:

    45 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --encryption-provider-config=/etc/encryption/encryption-config.yaml
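Steps 1 and 2 can be combined into a single script. This is a sketch assuming a POSIX shell with `head`, `base64`, and `/dev/urandom` available; the file name matches the one mounted at `/etc/encryption`:

```shell
# Generate a fresh 32-byte AES key and render the EncryptionConfiguration.
KEY="$(head -c 32 /dev/urandom | base64 | tr -d '\n')"
cat > encryption-config.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${KEY}
      - identity: {}
EOF
echo "wrote encryption-config.yaml"
```

The rendered file can then be stored as a secret and mounted into the vCluster pod exactly as shown in step 3 and step 4.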

    1.2.28 Ensure that encryption providers are appropriately configured​

    Result: PASS

    1. Follow the same configuration steps as in control 1.2.27 to set up encryption providers.

    2. Create the vCluster using the values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /etc/encryption/encryption-config.yaml

      Verify that aescbc, kms, or secretbox is set as the encryption provider for all the desired resources.

    Expected results:

    aescbc is set as the encryption provider for the configured resources

    Returned value:

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: <base64-encoded-32-byte-key>
          - identity: {}
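The provider-order check can also be scripted: the first provider listed for a resource is the one used for writes, so it must be a strong provider rather than `identity`. A minimal sketch, where the `CONFIG` value is a stand-in for the file read from the pod:

```shell
# CONFIG is a sample; in practice read it from the pod with the
# kubectl exec command shown above.
CONFIG='apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}'

# The line after "providers:" names the provider used for writes.
FIRST_PROVIDER="$(printf '%s\n' "$CONFIG" | awk '/providers:/{getline; gsub(/[^a-z]/,""); print; exit}')"
case "$FIRST_PROVIDER" in
  aescbc|kms|secretbox) echo "PASS: first provider is $FIRST_PROVIDER" ;;
  *)                    echo "FAIL: first provider is $FIRST_PROVIDER" ;;
esac
```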

    1.2.29 Ensure that the API Server only makes use of Strong Cryptographic Ciphers​

    Result: PASS

    1. Pass the following configuration as arguments to the API Server while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            apiServer:
              extraArgs:
                - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "pgrep -f 'kube-apiserver' | head -1 | xargs -I {} cat /proc/{}/cmdline | tr '\0' ' '"

      Verify that the --tls-cipher-suites argument is set with approved cipher suites.

    Expected results:

    '--tls-cipher-suites' contains valid elements from approved cipher suite list

    Returned value:

    /binaries/kube-apiserver --service-cluster-ip-range=10.128.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --profiling=false --advertise-address=127.0.0.1 --endpoint-reconciler-type=none --kubelet-client-certificate=/data/pki/apiserver-kubelet-client.crt --kubelet-client-key=/data/pki/apiserver-kubelet-client.key --endpoint-reconciler-type=none --egress-selector-config-file=/data/konnectivity/egress.yaml --admission-control-config-file=/etc/kubernetes/admission-control.yaml --anonymous-auth=false --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,EventRateLimit,NodeRestriction --encryption-provider-config=/etc/encryption/encryption-config.yaml --feature-gates=RotateKubeletServerCertificate=true --request-timeout=300s --service-account-lookup=true --kubelet-certificate-authority=/data/pki/ca.crt --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
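To verify the configured suites programmatically, each entry in the comma-separated list can be compared against an approved set. A minimal sketch, where the `SUITES` value is a trimmed stand-in for the flag value extracted from the command line above:

```shell
# Approved suites from the configuration in step 1.
APPROVED="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
# SUITES is a trimmed sample; in practice extract the --tls-cipher-suites
# value from the captured command line.
SUITES="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"

RESULT=PASS
for s in $(printf '%s' "$SUITES" | tr ',' ' '); do
  case " $APPROVED " in
    *" $s "*) ;;                          # suite is on the approved list
    *) RESULT=FAIL; echo "unapproved cipher: $s" ;;
  esac
done
echo "$RESULT"
```

Every configured suite must appear in the approved list; a single unapproved entry fails the control.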

    1.3 Controller Manager​

    1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate​

    Result: PASS

    1. Pass the following configuration while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            controllerManager:
              extraArgs:
                - --terminated-pod-gc-threshold=12500
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

      Verify that the --terminated-pod-gc-threshold argument is set as appropriate.

    Expected results:

    '--terminated-pod-gc-threshold' is present

    Returned value:

    98 root      0:01 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --terminated-pod-gc-threshold=12500

    1.3.2 Ensure that the --profiling argument is set to false​

    Result: PASS

    1. Pass the following configuration while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            controllerManager:
              extraArgs:
                - --profiling=false
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

      Verify that the --profiling argument is set to false.

    Expected results:

    '--profiling' is equal to 'false'

    Returned value:

    98 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --profiling=false

    1.3.3 Ensure that the --use-service-account-credentials argument is set to true​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

    Verify that the --use-service-account-credentials argument is set to true.

    Expected results:

    '--use-service-account-credentials' is not equal to 'false'

    Returned value:

    102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

    1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

    Verify that the --service-account-private-key-file argument is set as appropriate.

    Expected results:

    '--service-account-private-key-file' is present

    Returned value:

    102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

    1.3.5 Ensure that the --root-ca-file argument is set as appropriate​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

    Verify that the --root-ca-file argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate.

    Expected results:

    '--root-ca-file' is present

    Returned value:

    102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

    1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true​

    Result: PASS

    Audit:

    Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

    By default, RotateKubeletServerCertificate is set to "true". Verify that it has not been disabled.

    Expected results:

    --feature-gates=RotateKubeletServerCertificate=false is not present

    Returned value:

    102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

    1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1​

    Result: PASS

    Audit: Run the following command against the vCluster pod:

    kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

    Verify that the --bind-address argument is set to 127.0.0.1.

    Expected results:

    '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present

    Returned value:

    102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl
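Since controls 1.3.1 through 1.3.7 all inspect the same kube-controller-manager process, they can be audited in one pass. A minimal sketch, where the `CMDLINE` value is a trimmed stand-in for the real `ps` output captured with the kubectl exec command shown above:

```shell
# CMDLINE is a trimmed sample of the controller-manager command line.
CMDLINE="/binaries/kube-controller-manager --bind-address=127.0.0.1 --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --profiling=false --terminated-pod-gc-threshold=12500"

# check <pattern> <description>: PASS if the pattern appears on the command line.
check() {
  if printf '%s\n' "$CMDLINE" | grep -q -- "$1"; then
    echo "PASS: $2"
  else
    echo "FAIL: $2"
  fi
}

check '--terminated-pod-gc-threshold=' '1.3.1 terminated-pod-gc-threshold is set'
check '--profiling=false' '1.3.2 profiling disabled'
check '--use-service-account-credentials=true' '1.3.3 service account credentials in use'
check '--service-account-private-key-file=' '1.3.4 SA private key file set'
check '--root-ca-file=' '1.3.5 root CA file set'
check '--bind-address=127.0.0.1' '1.3.7 bind address is loopback'

# 1.3.6 is an absence check: the feature gate must not be disabled.
if printf '%s\n' "$CMDLINE" | grep -q 'RotateKubeletServerCertificate=false'; then
  echo 'FAIL: 1.3.6 RotateKubeletServerCertificate disabled'
else
  echo 'PASS: 1.3.6 RotateKubeletServerCertificate not disabled'
fi
```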

    1.4 Scheduler​

    1.4.1 Ensure that the --profiling argument is set to false​

    Result: PASS

    1. Pass the following configuration as arguments to the scheduler while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
            scheduler:
              extraArgs:
                - --profiling=false
        advanced:
          virtualScheduler:
            enabled: true
      sync:
        fromHost:
          nodes:
            enabled: true
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

      Verify that the --profiling argument is set to false.

    Expected results:

    '--profiling' is equal to 'false'

    Returned value:

     98 root      0:01 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false --profiling=false

    1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1​

    Result: PASS

    1. Pass the following configuration while creating the vCluster:

      vcluster.yaml
      controlPlane:
        distro:
          k8s:
            enabled: true
        advanced:
          virtualScheduler:
            enabled: true
      sync:
        fromHost:
          nodes:
            enabled: true
    2. Create the vCluster using the above values file:

      vcluster create my-vcluster -f vcluster.yaml --connect=false
    3. Run the following command against the vCluster pod:

      kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

      Verify that the --bind-address argument is set to 127.0.0.1.

    Expected results:

    '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present

    Returned value:

    88 root      0:00 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false
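Both scheduler controls inspect the same process, so they can be checked together. A minimal sketch, where the `CMDLINE` value is a trimmed stand-in for the real `ps` output captured above:

```shell
# CMDLINE is a trimmed sample of the kube-scheduler command line.
CMDLINE="/binaries/kube-scheduler --bind-address=127.0.0.1 --leader-elect=false --profiling=false"

# Controls 1.4.1 and 1.4.2 both reduce to a flag-presence check.
for want in "--profiling=false" "--bind-address=127.0.0.1"; do
  if printf '%s\n' "$CMDLINE" | grep -q -- "$want"; then
    echo "PASS: $want"
  else
    echo "FAIL: $want"
  fi
done
```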