Defined notes and rules for BSI APP.4.4.A19 #12155
base: master
Conversation
Hi @benruland. Thanks for your PR. I'm waiting for a ComplianceAsCode member to verify that this patch is reasonable to test. If it is, they should reply with the appropriate command. Once the patch is verified, the new status will be reflected. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
🤖 A k8s content image for this PR is available at: Click here to see how to deploy it. If you already have Compliance Operator deployed, use the content image; otherwise deploy the content and operator together by checking out ComplianceAsCode/compliance-operator.
6b090f8 to 3b28165 (Compare)
Rules include OpenShift node high availability and spreading over zones, and Anti-Affinity/TopologySpreadConstraints for Workload
3b28165 to 8f3cabb (Compare)
Code Climate has analyzed commit 8f3cabb and detected 0 issues on this pull request. The test coverage on the diff in this pull request is 100.0% (50% is the threshold). This pull request will bring the total coverage in the repository to 59.4% (0.0% change). View more on Code Climate.
applications of the clusters, and the pods of the applications SHOULD be distributed across
several fire zones based on the location data of the corresponding nodes so that the failure of a
fire zone will not lead to the failure of an application.
notes: >-
  TBD
Section 1: OpenShift support topology labels to differentiate between failure zones. To achieve

Suggested change:
-Section 1: OpenShift support topology labels to differentiate between failure zones. To achieve
+Section 1: OpenShift supports topology labels to differentiate between failure zones. To achieve
distribution across nodes and zones needs to be configured during deployment using affinity /
anti-affinity rules or topology spread constraints.

Single Node OpenShift (SNO) is not highly available and therefore not compliant with this control.
status: pending
Suggested change:
-status: pending
+status: automated

This seems to be pretty automated. "Automated" is also applicable if all technically feasible checks are done.
This approach ensures that a single node or AZ failure does not lead to total application
downtime, as workloads are balanced and resources are efficiently utilized.
identifiers: {}

The unique identifier is missing; add it by running
./utils/rule_dir_json.py
followed by
python utils/fix_rules.py --product products/ocp4/product.yml add-cce --cce-pool redhat anti_affinity_or_topology_spread_constraints_in_deployment
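For context, the configuration this new rule looks for can be sketched as a minimal Deployment manifest showing both accepted mechanisms. All names, labels, and the image below are illustrative placeholders, not values taken from this PR:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      # Option 1: spread replicas evenly across zones; maxSkew 1 keeps imbalance minimal.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: example-app
      # Option 2: anti-affinity keeps replicas off the same node.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: example-app
      containers:
        - name: app
          image: registry.example.com/app:latest  # placeholder image
```

Either mechanism alone would satisfy the rule as described; showing both here is only for illustration.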
@@ -0,0 +1,83 @@
documentation_complete: true
title: 'Ensure deployments have either anti-affinity rules or topology spread constraints'

Suggested change:
-title: 'Ensure deployments have either anti-affinity rules or topology spread constraints'
+title: 'Ensure Deployments have either Anti-Affinity Rules or Topology Spread Constraints'

As per the style guide.
There might be deployments that do not require high availability or spreading across nodes.
To limit the number of false positives, this rule only checks deployments with a replica count
of more than one. For deployments with one replica neither anti-affinity rules nor topology

Suggested change:
-of more than one. For deployments with one replica neither anti-affinity rules nor topology
+of more than one. For deployments with one replica, neither anti-affinity rules nor topology

A comma that helps readability.
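The replica-count logic described above can be sketched as a small check. This is an illustrative reimplementation of the rule's intent, not the actual check shipped in this PR:

```python
def deployment_needs_spread_config(deployment: dict) -> bool:
    """Return True when a Deployment violates the rule's intent:
    more than one replica, but neither anti-affinity rules nor
    topology spread constraints configured."""
    spec = deployment.get("spec", {})
    replicas = spec.get("replicas", 1)
    if replicas <= 1:
        # Single-replica deployments are skipped to limit false positives.
        return False

    pod_spec = spec.get("template", {}).get("spec", {})
    has_anti_affinity = bool((pod_spec.get("affinity") or {}).get("podAntiAffinity"))
    has_spread = bool(pod_spec.get("topologySpreadConstraints"))
    return not (has_anti_affinity or has_spread)
```

Either configuration mechanism is sufficient; only multi-replica deployments with neither are flagged.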
To ensure, that the OpenShift control plane stays accessible on outage of a single master node, a
number of 3 control plane nodes is required.

Suggested change:
-To ensure, that the OpenShift control plane stays accessible on outage of a single master node, a
-number of 3 control plane nodes is required.
+To ensure that the OpenShift control plane stays accessible on outage of a single master node,
+three control plane nodes are required.
rationale: |-
  A highly-available OpenShift control plane requires 3 control plane nodes. This allows etcd to have
  a functional quorum state, when a single control plane node is unavailable.

Suggested change:
-A highly-available OpenShift control plane requires 3 control plane nodes. This allows etcd to have
+A highly available OpenShift control plane requires three control plane nodes. This allows etcd to have
identifiers: {}

The unique identifier is missing; add it by running
./utils/rule_dir_json.py
followed by
python utils/fix_rules.py --product products/ocp4/product.yml add-cce --cce-pool redhat three_control_plane_nodes
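The quorum rationale behind the three-node requirement follows from simple arithmetic: an etcd cluster needs a strict majority of members to stay functional. A small illustrative sketch (not part of the rule itself):

```python
def etcd_quorum(n: int) -> int:
    """Majority of members needed for an n-member etcd cluster."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """Members that can fail while the cluster keeps quorum."""
    return n - etcd_quorum(n)
```

With three control plane nodes, quorum is 2 and one node may fail; with only two nodes, the fault tolerance is 0, which is why three is the minimum for a highly available control plane.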
@@ -0,0 +1,52 @@
documentation_complete: true
title: 'Ensure worker nodes are distribute across three failure zones'

Suggested change:
-title: 'Ensure worker nodes are distribute across three failure zones'
+title: 'Ensure Worker Nodes are Distributed Across Three Failure Zones'
This label is automatically assigned to each node by cloud providers but might need to be managed
manually in other environments.
identifiers: {}

The unique identifier is missing; add it by running
./utils/rule_dir_json.py
followed by
python utils/fix_rules.py --product products/ocp4/product.yml add-cce --cce-pool redhat worker_nodes_in_two_zones_or_more
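The zone-distribution check described above amounts to counting distinct values of the `topology.kubernetes.io/zone` label across worker nodes. A hypothetical sketch (node names and zone values are made up for illustration):

```python
def zones_covered(node_zone_labels: dict) -> int:
    """Count distinct failure zones among worker nodes, based on each
    node's topology.kubernetes.io/zone label value."""
    return len({zone for zone in node_zone_labels.values() if zone})

# Illustrative node-to-zone mapping, e.g. as gathered from
# `oc get nodes -L topology.kubernetes.io/zone`.
nodes = {
    "worker-1": "eu-central-1a",
    "worker-2": "eu-central-1b",
    "worker-3": "eu-central-1c",
}
```

A cluster passes the intent of the rule when `zones_covered(nodes)` reaches the required number of failure zones.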
@benruland Hi, do you plan to act on @sluetze's feedback?
Description:
Notes / Rules for BSI APP4.4.A19
Rationale:
As we have multiple customers asking for a BSI profile to be included in the compliance-operator, we are contributing a profile. To provide a better review process, the individual controls are implemented as separate PRs.
This is a follow-up of #11659. It was broken up for better reviewability.