OSDOCS-12054: 4.16.14 z-stream RN #82313
base: enterprise-4.16
Conversation
@obrown1205: This pull request references OSDOCS-12054 which is a valid jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
🤖 Tue Sep 24 14:30:36 - Prow CI generated the docs preview:
@obrown1205: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/label peer-review-needed
Issued: 24 September 2024

{product-title} release {product-version}.14 is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2024:6824[RHSA-2024:6687] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHSA-2024:6827[RHSA-2024:6827] advisory.
Mismatch between URL and link-text:
{product-title} release {product-version}.14 is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2024:6824[RHSA-2024:6824] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHSA-2024:6827[RHSA-2024:6827] advisory.
[id="ocp-4-16-14-insight-operator-collecting-cr-data"]
===== Collecting data from the {rh-openstack-first} on OpenStack Services cluster resources with the Insight Operator
* Insight Operator can collect data from the Red Hat OpenStack on OpenShift Services (RHOSO) cluster resources: `OpenStackControlPlane`, `OpenStackDataPlaneNodeSet`, `OpenStackDataPlaneDeployment` and `OpenStackVersions`. (link:https://issues.redhat.com/browse/OCPBUGS-38021[*OCPBUGS-38021*])
Add "the following" (maybe), and a comma:
* Insight Operator can collect data from the following Red Hat OpenStack on OpenShift Services (RHOSO) cluster resources: `OpenStackControlPlane`, `OpenStackDataPlaneNodeSet`, `OpenStackDataPlaneDeployment`, and `OpenStackVersions`. (link:https://issues.redhat.com/browse/OCPBUGS-38021[*OCPBUGS-38021*])
* Previously, when templates were defined for each failure domain, the installation program required an external connection to download the OVA in vSphere. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-41885[*OCPBUGS-41885*])
* Previously, when the Operator Lifecycle Manager (OLM) evaluated a potential upgrade, it used the dynamic client list for all custom resource (CR) instances in the cluster. For clusters with a large number of CRs, that could result in timeouts from the apiserver and stranded upgrades. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-41677[*OCPBUGS-41677*])
* Previously, when Operator Lifecycle Manager (OLM) evaluated a potential upgrade, it used the dynamic client list for all custom resource (CR) instances in the cluster. For clusters with a large number of CRs, that could result in timeouts from the API server and stranded upgrades. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-41677[*OCPBUGS-41677*])
* Previously, when deploying a cluster into an Amazon Virtual Private Cloud (VPC) with multiple CIDR blocks, the installation program failed. With this release, network settings are updated to support VPCs with multiple CIDR blocks. (link:https://issues.redhat.com/browse/OCPBUGS-39496[*OCPBUGS-39496*])
* Previously, the order of an Ansible playbook was modified to run before the metadata.json file was created, which caused issues with older versions of Ansible. With this release, the playbook is more tolerant of missing files and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-39287[*OCPBUGS-39287*])
* Previously, the order of an Ansible playbook was modified to run before the `metadata.json` file was created, which caused issues with older versions of Ansible. With this release, the playbook is more tolerant of missing files and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-39287[*OCPBUGS-39287*])
* Previously, the installation program failed to install an {product-title} cluster in the `eu-es` (Madrid, Spain) region on a {ibm-power-server-title} platform that was configured as an `e980` system type. With this release, the installation program no longer fails to install a cluster in this environment. (link:https://issues.redhat.com/browse/OCPBUGS-38502[*OCPBUGS-38502*])
* Previously, proxying for IDP communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname was no longer available. As a consequence, proxying was not done correctly for the OAUTH server pod. It did not distinguish between protocols that require proxying (http/s) and protocols that do not (ldap://). In addition, it did not honor the `no_proxy` variable that is configured in the `HostedCluster.spec.configuration.proxy` spec. With this release, you can configure the proxy on the Konnectivity sidecar of the OAUTH server so that traffic is routed appropriately, honoring your `no_proxy` settings. As a result, the OAUTH server can communicate properly with identity providers when a proxy is configured for the hosted cluster. (link:https://issues.redhat.com/browse/OCPBUGS-38058[*OCPBUGS-38058*])
Minor suggestions, and maybe break up the long paragraph:
* Previously, proxying for IDP communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname were no longer available. As a consequence, proxying was not done correctly for the OAUTH server pod. It did not distinguish between protocols that require proxying (`http/s`) and protocols that do not (`ldap://`). In addition, it did not honor the `no_proxy` variable that is configured in the `HostedCluster.spec.configuration.proxy` spec.
+
With this release, you can configure the proxy on the Konnectivity sidecar of the OAUTH server so that traffic is routed appropriately, honoring your `no_proxy` settings. As a result, the OAUTH server can communicate properly with identity providers when a proxy is configured for the hosted cluster. (link:https://issues.redhat.com/browse/OCPBUGS-38058[*OCPBUGS-38058*])
Normally I would say plain (no backtick) "HTTP/S" and "LDAP" for generically referencing the protocols, but with the inclusion of the ://
in "ldap://", I'm not sure whether those are meant to be verbatim.
Edit: Looking at the note in the linked code change (https://github.com/openshift/hypershift/pull/4381/files#diff-8da8bfd0c13ccc31ad6b7f192c2363f68c33f402f760d1be8f5d32f8f8d93d34R206-R211), I might actually switch this to the plaintext uppercase style:
* Previously, proxying for IDP communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname were no longer available. As a consequence, proxying was not done correctly for the OAUTH server pod. It did not distinguish between protocols that require proxying (HTTP/S) and protocols that do not (LDAP). In addition, it did not honor the `no_proxy` variable that is configured in the `HostedCluster.spec.configuration.proxy` spec.
+
With this release, you can configure the proxy on the Konnectivity sidecar of the OAUTH server so that traffic is routed appropriately, honoring your `no_proxy` settings. As a result, the OAUTH server can communicate properly with identity providers when a proxy is configured for the hosted cluster. (link:https://issues.redhat.com/browse/OCPBUGS-38058[*OCPBUGS-38058*])
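For context on the field discussed in this bullet, a hosted cluster's proxy is configured under `HostedCluster.spec.configuration.proxy`. A minimal illustrative sketch follows; the cluster name, proxy hostnames, and `noProxy` entries are placeholders, not values taken from this PR:

```yaml
# Illustrative sketch only: HostedCluster proxy settings (placeholder values).
# After this fix, the OAUTH server's Konnectivity sidecar honors noProxy,
# so IDP hosts listed there (for example, an LDAP server) bypass the proxy.
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example-hosted-cluster        # placeholder name
spec:
  configuration:
    proxy:
      httpProxy: http://proxy.example.com:3128    # placeholder
      httpsProxy: http://proxy.example.com:3128   # placeholder
      noProxy: ldap.example.com,.cluster.local    # hosts that must not be proxied
```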
* Previously, the `iptables-alerter` pod did not handle errors from `crictl`, which could cause the pod to incorrectly log events from `host-network` pods or cause pod restarts. With this release, the errors are handled correctly so that these issues no longer persist. (link:https://issues.redhat.com/browse/OCPBUGS-37763[*OCPBUGS-37763*])
* Previously introduced previously IPv6 support with UPI type installation caused an issue with naming OpenStack resources, which manifests itself on spinning up two UPI installation on the same OpenStack cloud. The outcome of this will set network, subnets and router happen to have the same name, which will interfere with one setup and prevent from deployment of the other. Now, all the names for mentioned resources will be unique per OpenShift deployment. (link:https://issues.redhat.com/browse/OCPBUGS-36855[*OCPBUGS-36855*])
Not sure what the fix is here, maybe this:
* Previously introduced IPv6 support with UPI type installation caused an issue with naming OpenStack resources, which manifests itself on spinning up two UPI installation on the same OpenStack cloud. The outcome of this will set network, subnets and router happen to have the same name, which will interfere with one setup and prevent from deployment of the other. Now, all the names for mentioned resources will be unique per OpenShift deployment. (link:https://issues.redhat.com/browse/OCPBUGS-36855[*OCPBUGS-36855*])
Nit: suggest an alternative to "spinning up" if possible, e.g.:
* Previously introduced previously IPv6 support with UPI type installation caused an issue with naming OpenStack resources, which manifests itself on starting two UPI installations on the same OpenStack cloud. The outcome of this will set network, subnets and router happen to have the same name, which will interfere with one setup and prevent from deployment of the other. Now, all the names for mentioned resources will be unique per OpenShift deployment. (link:https://issues.redhat.com/browse/OCPBUGS-36855[*OCPBUGS-36855*])
Not sure the fix on this part, maybe this:
* Previously introduced previously IPv6 support with UPI type installation caused an issue with naming OpenStack resources, which manifests itself on spinning up two UPI installation on the same OpenStack cloud. The outcome of this will set network, subnets, and routers to have the same name, which will interfere with one setup and prevent deployment of the other. Now, all the names for mentioned resources will be unique per OpenShift deployment. (link:https://issues.redhat.com/browse/OCPBUGS-36855[*OCPBUGS-36855*])
* Previously, some safe `sysctls` were erroneously omitted from the allow list. With this release, the `sysctls` are added back to the allow list and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-29403[*OCPBUGS-29403*])
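As background for the allow-list fix above, safe sysctls are requested per pod through the pod security context. A generic Kubernetes sketch follows; the pod name and image are placeholders, and the sysctl shown is a commonly allowed safe one, not necessarily among those restored by this fix:

```yaml
# Illustrative sketch only: requesting a safe sysctl in a pod spec.
# Safe sysctls are namespaced per pod and allowed by default; the fix
# above restores erroneously omitted entries to that allow list.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example                 # placeholder name
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.ip_local_port_range   # a commonly allowed safe sysctl
      value: "32768 60999"
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```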
* Previously, when an OpenShift cluster was upgraded from version 4.14 to 4.15, the vCenter cluster field was not populated in the configuration form of the UI. The infrastructure cluster resource did not have information for upgraded clusters. With this release, the UI uses the 'cloud-provider-config' config map for the vCenter cluster value and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-41619[*OCPBUGS-41619*])
* Previously, when an {product-title} cluster was upgraded from version 4.14 to 4.15, the vCenter cluster field was not populated in the configuration form of the UI. The infrastructure cluster resource did not have information for upgraded clusters. With this release, the UI uses the `cloud-provider-config` config map for the vCenter cluster value and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-41619[*OCPBUGS-41619*])
* Previously, during the same scrape, Prometheus would drop samples from the same series and only consider one of them, even when they had different timestamps. When this issue occurred continuously, it triggered the `PrometheusDuplicateTimestamps` alert. With this release, all samples are now ingested if they meet the other conditions. (link:https://issues.redhat.com/browse/OCPBUGS-39179[*OCPBUGS-39179*])
* Previously, when a folder was undefined and the datacenter was located in a datacenter folder, an incorrect folder structure was created starting from the root of the vCenter server. By using the Govmomi `DatacenterFolders.VmFolder`, it used an incorrect path. With this release, the folder structure uses the datacenter inventory path and joins it with the virtual machine (VM) and cluster ID value, and the issue is resolved.(link:https://issues.redhat.com/browse/OCPBUGS-39082[*OCPBUGS-39082*])
* Previously, when a folder was undefined and the datacenter was located in a datacenter folder, an incorrect folder structure was created starting from the root of the vCenter server. By using the Govmomi `DatacenterFolders.VmFolder`, it used an incorrect path. With this release, the folder structure uses the datacenter inventory path and joins it with the virtual machine (VM) and cluster ID value, and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-39082[*OCPBUGS-39082*])
Version(s):
4.16
Issue:
OSDOCS-12054
Link to docs preview:
https://82313--ocpdocs-pr.netlify.app/openshift-enterprise/latest/release_notes/ocp-4-16-release-notes.html#ocp-4-16-14_release-notes
QE review:
N/A
Additional information:
Errata URLs will return 404 until go-live (9/24/24)