
BSI APP.4.4.A14+A15 #12158

Open
sluetze wants to merge 5 commits into master

Conversation

@sluetze (Contributor) commented Jul 16, 2024

Description:

To check against BSI APP.4.4.A14, this PR adds a new rule:

master_taint_noschedule.

This rule checks whether the master taint is set on master nodes. Since we never know what kind of setup we are dealing with, the rule only checks that the taint is set at_least_once; we can conclude that it is mostly controlled by the scheduler component and should be identical on every master node.
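
For reference, the effect the rule looks for can be inspected directly. A minimal sketch in the same oc/jq style as the rule's instructions (the label selector is used here for illustration; the output shape varies per cluster):

# List the NoSchedule master taints actually applied to master nodes
$ oc get nodes -l node-role.kubernetes.io/master -o json | jq '.items[].spec.taints[]? | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule")'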

This PR also adds a missing identifier and sets the BSI profile for automatic referencing.

Rationale:

  • The BSI profile was requested by our customers

Review Hints:

While we could also check whether .spec.mastersSchedulable is set in the schedulers.config.openshift.io manifest, this key is not set by default. That's why I moved to checking the effect instead of the configuration.
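
To illustrate: on a default cluster a configuration-based query like the following sketch may print nothing at all, even though the NoSchedule taint is in force on every master node:

# .spec.mastersSchedulable is not set by default, so this can return empty output
$ oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.mastersSchedulable}'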

@openshift-merge-robot openshift-merge-robot added the needs-rebase Used by openshift-ci bot. label Jul 16, 2024

openshift-ci bot commented Jul 16, 2024

Hi @sluetze. Thanks for your PR.

I'm waiting for a ComplianceAsCode member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci openshift-ci bot added the needs-ok-to-test Used by openshift-ci bot. label Jul 16, 2024

github-actions bot commented Jul 16, 2024

This datastream diff is auto-generated by the Compare DS/Generate Diff check.

Full diff:
OCIL for rule 'xccdf_org.ssgproject.content_rule_accounts_no_clusterrolebindings_default_service_account' differs.
--- ocil:ssg-accounts_no_clusterrolebindings_default_service_account_ocil:questionnaire:1
+++ ocil:ssg-accounts_no_clusterrolebindings_default_service_account_ocil:questionnaire:1
@@ -1,7 +1,7 @@
 Run the following command to retrieve a list of ClusterRoleBindings that are
 associated to the default service account:
 $ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
-There should be no ClusterRoleBindings associated with the the default service account 
+There should be no ClusterRoleBindings associated with the the default service account
 in any namespace.
       Is it the case that default service account is given permissions using ClusterRoleBindings?
       
OCIL for rule 'xccdf_org.ssgproject.content_rule_accounts_no_rolebindings_default_service_account' differs.
--- ocil:ssg-accounts_no_rolebindings_default_service_account_ocil:questionnaire:1
+++ ocil:ssg-accounts_no_rolebindings_default_service_account_ocil:questionnaire:1
@@ -1,7 +1,7 @@
 Run the following command to retrieve a list of RoleBindings that are
 associated to the default service account:
 $ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
-There should be no RoleBindings associated with the the default service account 
+There should be no RoleBindings associated with the the default service account
 in any namespace.
       Is it the case that default service account is given permissions using RoleBindings?
       
New content has different text for rule 'xccdf_org.ssgproject.content_rule_accounts_restrict_service_account_tokens'.
--- xccdf_org.ssgproject.content_rule_accounts_restrict_service_account_tokens
+++ xccdf_org.ssgproject.content_rule_accounts_restrict_service_account_tokens
@@ -7,9 +7,6 @@
 running in the pod explicitly needs to communicate with the API server.
 To ensure pods do not automatically mount tokens, set
 automountServiceAccountToken to false.
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -35,6 +32,9 @@
 [reference]:
 5.1.6
 
+[reference]:
+APP.4.4.A9
+
 [rationale]:
 Mounting service account tokens inside pods can provide an avenue
 for privilege escalation attacks where an attacker is able to

New content has different text for rule 'xccdf_org.ssgproject.content_rule_accounts_unique_service_account'.
--- xccdf_org.ssgproject.content_rule_accounts_unique_service_account
+++ xccdf_org.ssgproject.content_rule_accounts_unique_service_account
@@ -10,9 +10,6 @@
        
 where service_account_name is the name of a service account
 that is needed in the project namespace.
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -38,6 +35,9 @@
 [reference]:
 5.1.5
 
+[reference]:
+APP.4.4.A9
+
 [rationale]:
 Kubernetes provides a default service account which is used by
 cluster workloads where no specific service account is assigned to the pod.

New content has different text for rule 'xccdf_org.ssgproject.content_rule_api_server_anonymous_auth'.
--- xccdf_org.ssgproject.content_rule_api_server_anonymous_auth
+++ xccdf_org.ssgproject.content_rule_api_server_anonymous_auth
@@ -26,9 +26,6 @@
 Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterrolebindings API endpoint to the local /apis/rbac.authorization.k8s.io/v1/clusterrolebindings file.
 
 [reference]:
-APP.4.4.A3
-
-[reference]:
 CIP-003-8 R6
 
 [reference]:
@@ -52,6 +49,9 @@
 [reference]:
 1.2.1
 
+[reference]:
+APP.4.4.A3
+
 [rationale]:
 When enabled, requests that are not rejected by other configured
 authentication methods are treated as anonymous requests. These requests

New content has different text for rule 'xccdf_org.ssgproject.content_rule_general_node_separation'.
--- xccdf_org.ssgproject.content_rule_general_node_separation
+++ xccdf_org.ssgproject.content_rule_general_node_separation
@@ -5,12 +5,23 @@
 [description]:
 Use Nodes or Clusters to isolate Workloads with high protection requirements.
 
-Run the following command and review the pods and how they are deployed on Nodes. $ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-" 
+Run the following command and review the pods and how they are deployed on Nodes.
+$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-" 
 You can use labels or other data as custom field which helps you to identify parts of an application.
-Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters with workloads of lower protection requirements.
+Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters
+with workloads of lower protection requirements.
+
+[reference]:
+APP.4.4.A14
 
 [reference]:
 APP.4.4.A15
 
 [rationale]:
-Assigning workloads with high protection requirements to specific nodes creates and additional boundary (the node) between workloads of high protection requirements and workloads which might follow less strict requirements. An adversary which attacked a lighter protected workload now has additional obstacles for their movement towards the higher protected workloads.
+Assigning workloads with high protection requirements to specific nodes creates and additional
+boundary (the node) between workloads of high protection requirements and workloads which might
+follow less strict requirements. An adversary which attacked a lighter protected workload now has
+additional obstacles for their movement towards the higher protected workloads.
+
+[ident]:
+CCE-88903-0

New content has different text for rule 'xccdf_org.ssgproject.content_rule_kubeadmin_removed'.
--- xccdf_org.ssgproject.content_rule_kubeadmin_removed
+++ xccdf_org.ssgproject.content_rule_kubeadmin_removed
@@ -14,9 +14,6 @@
 [warning]:
 This rule's check operates on the cluster configuration dump.
 Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/kube-system/secrets/kubeadmin API endpoint to the local /api/v1/namespaces/kube-system/secrets/kubeadmin file.
-
-[reference]:
-APP.4.4.A3
 
 [reference]:
 CIP-004-6 R2.2.2
@@ -94,6 +91,9 @@
 5.1.1
 
 [reference]:
+APP.4.4.A3
+
+[reference]:
 CNTR-OS-000030
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_liveness_readiness_probe_in_workload'.
--- xccdf_org.ssgproject.content_rule_liveness_readiness_probe_in_workload
+++ xccdf_org.ssgproject.content_rule_liveness_readiness_probe_in_workload
@@ -3,22 +3,22 @@
 Ensure that all workloads have liveness and readiness probes
 
 [description]:
-Configuring Kubernetes liveness and readiness probes is essential for ensuring the security and 
+Configuring Kubernetes liveness and readiness probes is essential for ensuring the security and
 reliability of a system. These probes actively monitor container health and readiness, facilitating
-automatic actions like restarting or rescheduling unresponsive instances for improved reliability. 
-They play a proactive role in issue detection, allowing timely problem resolution and contribute 
+automatic actions like restarting or rescheduling unresponsive instances for improved reliability.
+They play a proactive role in issue detection, allowing timely problem resolution and contribute
 to efficient scaling and traffic distribution.
 
 [reference]:
 APP.4.4.A11
 
 [rationale]:
-Many applications running for long periods of time eventually transition to broken states, and 
+Many applications running for long periods of time eventually transition to broken states, and
 cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy
 such situations.
-Sometimes, applications are temporarily unable to serve traffic. For example, an application might 
+Sometimes, applications are temporarily unable to serve traffic. For example, an application might
 need to load large data or configuration files during startup, or depend on external services after
-startup. In such cases, you don't want to kill the application, but you don't want to send it 
-requests either. Kubernetes provides readiness probes to detect and mitigate these situations. 
-A pod with containers reporting that they are not ready does not receive traffic through Kubernetes 
+startup. In such cases, you don't want to kill the application, but you don't want to send it
+requests either. Kubernetes provides readiness probes to detect and mitigate these situations.
+A pod with containers reporting that they are not ready does not receive traffic through Kubernetes
 Services.

OCIL for rule 'xccdf_org.ssgproject.content_rule_liveness_readiness_probe_in_workload' differs.
--- ocil:ssg-liveness_readiness_probe_in_workload_ocil:questionnaire:1
+++ ocil:ssg-liveness_readiness_probe_in_workload_ocil:questionnaire:1
@@ -1,4 +1,4 @@
-Run the following command to retrieve a list of deployments, daemonsets and statefulsets that 
+Run the following command to retrieve a list of deployments, daemonsets and statefulsets that
 do not have liveness or readiness probes set for their containers:
 $ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'
 

New content has different text for rule 'xccdf_org.ssgproject.content_rule_kubelet_anonymous_auth'.
--- xccdf_org.ssgproject.content_rule_kubelet_anonymous_auth
+++ xccdf_org.ssgproject.content_rule_kubelet_anonymous_auth
@@ -14,9 +14,6 @@
   anonymous:
     enabled: false
   ...
-
-[reference]:
-APP.4.4.A3
 
 [reference]:
 CIP-003-8 R6
@@ -39,6 +36,9 @@
 [reference]:
 4.2.2
 
+[reference]:
+APP.4.4.A3
+
 [rationale]:
 When enabled, requests that are not rejected by other configured
 authentication methods are treated as anonymous requests. These

New content has different text for rule 'xccdf_org.ssgproject.content_rule_configure_network_policies'.
--- xccdf_org.ssgproject.content_rule_configure_network_policies
+++ xccdf_org.ssgproject.content_rule_configure_network_policies
@@ -17,9 +17,6 @@
     and persist it to the local
     /apis/operator.openshift.io/v1/networks/cluster#35e33d6dc1252a03495b35bd1751cac70041a511fa4d282c300a8b83b83e3498
     file.
-
-[reference]:
-APP.4.4.A7
 
 [reference]:
 CIP-003-8 R6
@@ -52,6 +49,9 @@
 5.3.1
 
 [reference]:
+APP.4.4.A7
+
+[reference]:
 CNTR-OS-000100
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces'.
--- xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces
+++ xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces
@@ -20,9 +20,6 @@
     and persist it to the local
     /api/v1/namespaces#f673748db2dd4e4f0ad55d10ce5e86714c06da02b67ddb392582f71ef81efab2
     file.
-
-[reference]:
-APP.4.4.A7
 
 [reference]:
 CIP-003-8 R4
@@ -133,6 +130,9 @@
 5.3.2
 
 [reference]:
+APP.4.4.A7
+
+[reference]:
 CNTR-OS-000100
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_project_config_and_template_network_policy'.
--- xccdf_org.ssgproject.content_rule_project_config_and_template_network_policy
+++ xccdf_org.ssgproject.content_rule_project_config_and_template_network_policy
@@ -32,10 +32,10 @@
     file.
 
 [reference]:
-APP.4.4.A7
+SRG-APP-000039-CTR-000110
 
 [reference]:
-SRG-APP-000039-CTR-000110
+APP.4.4.A7
 
 [reference]:
 CNTR-OS-000110

New content has different text for rule 'xccdf_org.ssgproject.content_rule_rbac_least_privilege'.
--- xccdf_org.ssgproject.content_rule_rbac_least_privilege
+++ xccdf_org.ssgproject.content_rule_rbac_least_privilege
@@ -25,15 +25,6 @@
 oc adm policy remove-role-from-group role groupname
 
 NOTE: For additional information. https://docs.openshift.com/container-platform/latest/authentication/using-rbac.html
-
-[reference]:
-APP.4.4.A3
-
-[reference]:
-APP.4.4.A7
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 AC-3
@@ -111,6 +102,15 @@
 5.2.10
 
 [reference]:
+APP.4.4.A3
+
+[reference]:
+APP.4.4.A7
+
+[reference]:
+APP.4.4.A9
+
+[reference]:
 CNTR-OS-000090
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_rbac_wildcard_use'.
--- xccdf_org.ssgproject.content_rule_rbac_wildcard_use
+++ xccdf_org.ssgproject.content_rule_rbac_wildcard_use
@@ -9,9 +9,6 @@
 wildcard * which matches all items. This violates the
 principle of least privilege and leaves a cluster in a more
 vulnerable state to privilege abuse.
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -37,6 +34,9 @@
 [reference]:
 5.1.3
 
+[reference]:
+APP.4.4.A9
+
 [rationale]:
 The principle of least privilege recommends that users are
 provided only the access required for their role and nothing

New content has different text for rule 'xccdf_org.ssgproject.content_rule_ocp_insecure_allowed_registries_for_import'.
--- xccdf_org.ssgproject.content_rule_ocp_insecure_allowed_registries_for_import
+++ xccdf_org.ssgproject.content_rule_ocp_insecure_allowed_registries_for_import
@@ -24,9 +24,6 @@
 Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /apis/config.openshift.io/v1/images/cluster file.
 
 [reference]:
-APP.4.4.A12
-
-[reference]:
 CM-5(3)
 
 [reference]:
@@ -34,6 +31,9 @@
 
 [reference]:
 5.5.1
+
+[reference]:
+APP.4.4.A12
 
 [reference]:
 CNTR-OS-000010

New content has different text for rule 'xccdf_org.ssgproject.content_rule_ocp_insecure_registries'.
--- xccdf_org.ssgproject.content_rule_ocp_insecure_registries
+++ xccdf_org.ssgproject.content_rule_ocp_insecure_registries
@@ -19,9 +19,6 @@
 Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /apis/config.openshift.io/v1/images/cluster file.
 
 [reference]:
-APP.4.4.A12
-
-[reference]:
 CM-5(3)
 
 [reference]:
@@ -29,6 +26,9 @@
 
 [reference]:
 5.5.1
+
+[reference]:
+APP.4.4.A12
 
 [reference]:
 CNTR-OS-000010

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scansettingbinding_exists'.
--- xccdf_org.ssgproject.content_rule_scansettingbinding_exists
+++ xccdf_org.ssgproject.content_rule_scansettingbinding_exists
@@ -12,9 +12,6 @@
 [warning]:
 This rule's check operates on the cluster configuration dump.
 Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/compliance.openshift.io/v1alpha1/scansettingbindings?limit=5 API endpoint to the local /apis/compliance.openshift.io/v1alpha1/scansettingbindings?limit=5 file.
-
-[reference]:
-APP.4.4.A13
 
 [reference]:
 CIP-003-8 R1.3
@@ -83,6 +80,9 @@
 SRG-APP-000472-CTR-001170
 
 [reference]:
+APP.4.4.A13
+
+[reference]:
 CNTR-OS-000910
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scansettings_have_schedule'.
--- xccdf_org.ssgproject.content_rule_scansettings_have_schedule
+++ xccdf_org.ssgproject.content_rule_scansettings_have_schedule
@@ -21,13 +21,13 @@
     file.
 
 [reference]:
-APP.4.4.A13
-
-[reference]:
 SI-6(b)
 
 [reference]:
 SRG-APP-000473-CTR-001175
+
+[reference]:
+APP.4.4.A13
 
 [reference]:
 CNTR-OS-000920

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_drop_container_capabilities'.
--- xccdf_org.ssgproject.content_rule_scc_drop_container_capabilities
+++ xccdf_org.ssgproject.content_rule_scc_drop_container_capabilities
@@ -8,9 +8,6 @@
 capabilities, the appropriate Security Context Constraints (SCCs)
 should set all capabilities as * or a list of capabilities in
 requiredDropCapabilities.
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -36,6 +33,9 @@
 [reference]:
 5.2.9
 
+[reference]:
+APP.4.4.A9
+
 [rationale]:
 By default, containers run with a default set of capabilities as assigned
 by the Container Runtime which can include dangerous or highly privileged

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_container_allowed_capabilities'.
--- xccdf_org.ssgproject.content_rule_scc_limit_container_allowed_capabilities
+++ xccdf_org.ssgproject.content_rule_scc_limit_container_allowed_capabilities
@@ -47,9 +47,6 @@
     file.
 
 [reference]:
-APP.4.4.A9
-
-[reference]:
 CIP-003-8 R6
 
 [reference]:
@@ -73,6 +70,9 @@
 [reference]:
 5.2.8
 
+[reference]:
+APP.4.4.A9
+
 [rationale]:
 By default, containers run with a default set of capabilities as assigned
 by the Container Runtime which can include dangerous or highly privileged

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_host_dir_volume_plugin'.
--- xccdf_org.ssgproject.content_rule_scc_limit_host_dir_volume_plugin
+++ xccdf_org.ssgproject.content_rule_scc_limit_host_dir_volume_plugin
@@ -7,12 +7,6 @@
 necessary. To prevent containers from using the host filesystem
 the appropriate Security Context Constraints (SCCs) should set
 allowHostDirVolumePlugin to false.
-
-[reference]:
-APP.4.4.A4
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 AC-6
@@ -25,6 +19,12 @@
 
 [reference]:
 5.2.12
+
+[reference]:
+APP.4.4.A4
+
+[reference]:
+APP.4.4.A9
 
 [reference]:
 CNTR-OS-000660

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_host_ports'.
--- xccdf_org.ssgproject.content_rule_scc_limit_host_ports
+++ xccdf_org.ssgproject.content_rule_scc_limit_host_ports
@@ -9,9 +9,6 @@
 should set allowHostPorts to false.
 
 [reference]:
-APP.4.4.A9
-
-[reference]:
 CM-6
 
 [reference]:
@@ -19,6 +16,9 @@
 
 [reference]:
 SRG-APP-000142-CTR-000330
+
+[reference]:
+APP.4.4.A9
 
 [reference]:
 CNTR-OS-000660

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_ipc_namespace'.
--- xccdf_org.ssgproject.content_rule_scc_limit_ipc_namespace
+++ xccdf_org.ssgproject.content_rule_scc_limit_ipc_namespace
@@ -7,12 +7,6 @@
 namespace. To prevent containers from getting access to a host's
 IPC namespace, the appropriate Security Context Constraints (SCCs)
 should set allowHostIPC to false.
-
-[reference]:
-APP.4.4.A4
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -39,6 +33,12 @@
 5.2.3
 
 [reference]:
+APP.4.4.A4
+
+[reference]:
+APP.4.4.A9
+
+[reference]:
 CNTR-OS-000660
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_net_raw_capability'.
--- xccdf_org.ssgproject.content_rule_scc_limit_net_raw_capability
+++ xccdf_org.ssgproject.content_rule_scc_limit_net_raw_capability
@@ -8,12 +8,6 @@
 to launch a network attack on another container or cluster. To disable the
 CAP_NET_RAW capability, the appropriate Security Context Constraints (SCCs)
 should set NET_RAW in requiredDropCapabilities.
-
-[reference]:
-APP.4.4.A4
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -39,6 +33,12 @@
 [reference]:
 5.2.7
 
+[reference]:
+APP.4.4.A4
+
+[reference]:
+APP.4.4.A9
+
 [rationale]:
 By default, containers run with a default set of capabilities as assigned
 by the Container Runtime which can include dangerous or highly privileged

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_network_namespace'.
--- xccdf_org.ssgproject.content_rule_scc_limit_network_namespace
+++ xccdf_org.ssgproject.content_rule_scc_limit_network_namespace
@@ -7,12 +7,6 @@
 namespace. To prevent containers from getting access to a host's
 network namespace, the appropriate Security Context Constraints (SCCs)
 should set allowHostNetwork to false.
-
-[reference]:
-APP.4.4.A4
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -39,6 +33,12 @@
 5.2.4
 
 [reference]:
+APP.4.4.A4
+
+[reference]:
+APP.4.4.A9
+
+[reference]:
 CNTR-OS-000660
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_privilege_escalation'.
--- xccdf_org.ssgproject.content_rule_scc_limit_privilege_escalation
+++ xccdf_org.ssgproject.content_rule_scc_limit_privilege_escalation
@@ -8,9 +8,6 @@
 To prevent containers from escalating privileges,
 the appropriate Security Context Constraints (SCCs)
 should set allowPrivilegeEscalation to false.
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -36,6 +33,9 @@
 [reference]:
 5.2.5
 
+[reference]:
+APP.4.4.A9
+
 [rationale]:
 Privileged containers have access to more of the Linux Kernel
 capabilities and devices. If a privileged container were

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_privileged_containers'.
--- xccdf_org.ssgproject.content_rule_scc_limit_privileged_containers
+++ xccdf_org.ssgproject.content_rule_scc_limit_privileged_containers
@@ -7,12 +7,6 @@
 to run. To prevent containers from running as privileged containers,
 the appropriate Security Context Constraints (SCCs) should set
 allowPrivilegedContainer to false.
-
-[reference]:
-APP.4.4.A4
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -42,6 +36,12 @@
 5.2.1
 
 [reference]:
+APP.4.4.A4
+
+[reference]:
+APP.4.4.A9
+
+[reference]:
 CNTR-OS-000660
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_process_id_namespace'.
--- xccdf_org.ssgproject.content_rule_scc_limit_process_id_namespace
+++ xccdf_org.ssgproject.content_rule_scc_limit_process_id_namespace
@@ -7,12 +7,6 @@
 ID namespace. To prevent containers from getting access to a host's
 process ID namespace, the appropriate Security Context Constraints (SCCs)
 should set allowHostPID to false.
-
-[reference]:
-APP.4.4.A4
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -39,6 +33,12 @@
 5.2.2
 
 [reference]:
+APP.4.4.A4
+
+[reference]:
+APP.4.4.A9
+
+[reference]:
 CNTR-OS-000660
 
 [reference]:

New content has different text for rule 'xccdf_org.ssgproject.content_rule_scc_limit_root_containers'.
--- xccdf_org.ssgproject.content_rule_scc_limit_root_containers
+++ xccdf_org.ssgproject.content_rule_scc_limit_root_containers
@@ -7,12 +7,6 @@
 To prevent containers from running as root user,
 the appropriate Security Context Constraints (SCCs) should set
 .runAsUser.type to MustRunAsRange.
-
-[reference]:
-APP.4.4.A4
-
-[reference]:
-APP.4.4.A9
 
 [reference]:
 CIP-003-8 R6
@@ -39,6 +33,12 @@
 5.2.6
 
 [reference]:
+APP.4.4.A4
+
+[reference]:
+APP.4.4.A9
+
+[reference]:
 CNTR-OS-000660
 
 [reference]:


Start a new ephemeral environment with changes proposed in this pull request:

ocp4 (from CTF) Environment (using Fedora as testing environment)
Open in Gitpod

Fedora Testing Environment
Open in Gitpod

Oracle Linux 8 Environment
Open in Gitpod

@openshift-merge-robot openshift-merge-robot removed the needs-rebase Used by openshift-ci bot. label Jul 16, 2024

github-actions bot commented Jul 16, 2024

🤖 A k8s content image for this PR is available at:
ghcr.io/complianceascode/k8scontent:12158
This image was built from commit: 2a52623

Click here to see how to deploy it

If you already have the Compliance Operator deployed:
utils/build_ds_container.py -i ghcr.io/complianceascode/k8scontent:12158

Otherwise deploy the content and operator together by checking out ComplianceAsCode/compliance-operator and:
CONTENT_IMAGE=ghcr.io/complianceascode/k8scontent:12158 make deploy-local


codeclimate bot commented Jul 16, 2024

Code Climate has analyzed commit 2a52623 and detected 0 issues on this pull request.

The test coverage on the diff in this pull request is 100.0% (50% is the threshold).

This pull request will bring the total coverage in the repository to 59.4% (0.0% change).

View more on Code Climate.

@marcusburghardt marcusburghardt added OpenShift OpenShift product related. BSI PRs or issues for the BSI profile. labels Jul 31, 2024
@BhargaviGudi (Collaborator):

QE: /lgtm
Verification passed with 4.17.0-0.nightly-2024-08-18-131731 + compliance-operator + #12158
Verified the content and confirmed the rule instructions work as expected.

$ oc get ccr | grep no-clusterrolebin
upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account   PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of ClusterRoleBindings that are
associated to the default service account:
$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
There should be no ClusterRoleBindings associated with the the default service account
in any namespace.
Is it the case that default service account is given permissions using ClusterRoleBindings?$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
[]
$ oc get ccr | grep no-rolebinding
upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account          PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of RoleBindings that are
associated to the default service account:
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
There should be no RoleBindings associated with the the default service account
in any namespace.
Is it the case that default service account is given permissions using RoleBindings?$ 
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
[]
$ oc get ccr | grep ristrict-service
$ oc get ccr | grep restrict-service
upstream-ocp4-bsi-accounts-restrict-service-account-tokens                  MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-accounts-restrict-service-account-tokens -o=jsonpath={.instructions}
For each pod in the cluster, review the pod specification and
ensure that pods that do not need to explicitly communicate with
the API server have automountServiceAccountToken
configured to false.
Is it the case that service account token usage needs review?$ 
$ oc get ccr | grep account-unique-service
$ oc get ccr | grep accounts-unique-service
upstream-ocp4-bsi-accounts-unique-service-account                           MANUAL   medium
$ oc get ccr | grep general-node
upstream-ocp4-bsi-general-node-separation                                   MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-general-node-separation -o=jsonpath={.instructions}
Run the following command and review the pods and how they are deployed on nodes. $ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-"
You can use labels or other data as custom field which helps you to identify parts of an application.
Ensure that applications with high protection requirements are not colocated on nodes or in clusters with workloads of lower protection requirements.
Is it the case that Application placement on Nodes and Clusters needs review?$ 
$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-"
NAME                                                                                          NAMESPACE                                          APP      NODE
$ oc get ccr | grep liveness-readiness
upstream-ocp4-bsi-liveness-readiness-probe-in-workload                      MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-liveness-readiness-probe-in-workload -o=jsonpath={.instructions}
Run the following command to retrieve a list of deployments, daemonsets and statefulsets that
do not have liveness or readiness probes set for their containers:
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'

Make sure that there is output nothing in the result or there are valid reason not to set a
readiness or liveness probe for those workloads.
Is it the case that Liveness or readiness probe is not set?$ 
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'
[]
$ oc get ccr | grep master-taint
upstream-ocp4-bsi-master-taint-noschedule                                   PASS     medium
$ oc get ccr upstream-ocp4-bsi-master-taint-noschedule -o=jsonpath={.instructions}
Run the following command to see if control planes are schedulable
$oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'
for each master node, there should be an output of a key with the NoSchedule effect.

By editing the cluster scheduler you can centrally configure the masters as schedulable or not
by setting .spec.mastersSchedulable to true.
Use $oc edit schedulers.config.openshift.io cluster to configure the scheduling.
Is it the case that Control Plane is schedulable?$ 
$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'
{
  "key": "node-role.kubernetes.io/master",
  "effect": "NoSchedule"
}
{
  "key": "node-role.kubernetes.io/master",
  "effect": "NoSchedule"
}
{
  "key": "node-role.kubernetes.io/master",
  "effect": "NoSchedule"
}
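
As a non-interactive alternative to the oc edit step mentioned in the instructions above, a merge patch along these lines should work (a sketch, not part of the verified run):

# Sketch: centrally mark masters as non-schedulable without an interactive edit
$ oc patch schedulers.config.openshift.io cluster --type merge -p '{"spec":{"mastersSchedulable":false}}'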
$ oc get ccr | grep scansetting-has-auto
upstream-ocp4-bsi-scansetting-has-autoapplyremediations                     PASS     medium
$ oc get ccr upstream-ocp4-bsi-scansetting-has-autoapplyremediations -o=jsonpath={.instructions}
Run the following command to retrieve the scansettingbindings in the system:
oc get scansettings -ojson | jq '.items[].autoApplyRemediations'
If a scansetting is defined to set the autoApplyRemediation attribute, the above
filter will return at least one 'true'. Run the following jq query to identify the non-compliant scansettings objects:
oc get scansettings -ojson | jq -r '[.items[] | select(.autoApplyRemediation != "" or .autoApplyRemediation != null) | .metadata.name]'
Is it the case that compliance operator is not automatically remediating the cluster?$ 
$ oc get scansettings -ojson | jq '.items[].autoApplyRemediations'
true
null
true
$ oc get scansettings -ojson | jq -r '[.items[] | select(.autoApplyRemediation != "" or .autoApplyRemediation != null) | .metadata.name]'
[
  "auto-rem-ss",
  "default",
  "default-auto-apply"
]
$ oc get ccr | grep no-clusterbindings-de
$ oc get ccr | grep no-clusterrole
upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account   PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of ClusterRoleBindings that are
associated to the default service account:
$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
There should be no ClusterRoleBindings associated with the the default service account
in any namespace.
Is it the case that default service account is given permissions using ClusterRoleBindings?$ 
$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
[]
$ oc get ccr | grep no-rolebindings
upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account          PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of RoleBindings that are
associated to the default service account:
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
There should be no RoleBindings associated with the the default service account
in any namespace.
Is it the case that default service account is given permissions using RoleBindings?$ 
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
[]
$ oc get ccr | grep general-network
upstream-ocp4-bsi-general-network-separation                                MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-general-network-separation -o=jsonpath={.instructions}
Create separate Ingress Controllers for the API and your Applications. Also setup your environment in a way, that Control Plane Nodes are in another network than your worker nodes. If you implement multiple Nodes for different purposes evaluate if these should be in different network segments (i.e. Infra-Nodes, Storage-Nodes, ...).
Is it the case that Network separation needs review?$ 
$ oc get ccr | grep probe-in-workload
upstream-ocp4-bsi-liveness-readiness-probe-in-workload                      MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-liveness-readiness-probe-in-workload -o=jsonpath={.instructions}
Run the following command to retrieve a list of deployments, daemonsets and statefulsets that
do not have liveness or readiness probes set for their containers:
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'

Make sure that there is output nothing in the result or there are valid reason not to set a
readiness or liveness probe for those workloads.
Is it the case that Liveness or readiness probe is not set?$ 
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'
[]

@yuumasato yuumasato self-assigned this Sep 17, 2024
@yuumasato yuumasato added this to the 0.1.75 milestone Sep 20, 2024
@yuumasato (Member) left a comment:

@sluetze This looks great.
It just needs a rebase/conflict resolution though.

Labels: BSI, needs-ok-to-test, OpenShift