Red Hat Security Advisory 2021-5086-06

Posted Dec 14, 2021
Authored by Red Hat | Site access.redhat.com

Red Hat Security Advisory 2021-5086-06 - Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation provides highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. Issues addressed include a path sanitization vulnerability.

tags | advisory
systems | linux, redhat
advisories | CVE-2020-8565, CVE-2021-32803, CVE-2021-32804, CVE-2021-33195, CVE-2021-33197, CVE-2021-33198, CVE-2021-34558, CVE-2021-37701, CVE-2021-37712
SHA-256 | 774e5117e6048e40bc0540ccd8f805fad79e574958c9975e3e273b6f6ba3280c

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update
Advisory ID: RHSA-2021:5086-01
Product: Red Hat OpenShift Data Foundation
Advisory URL: https://access.redhat.com/errata/RHSA-2021:5086
Issue date: 2021-12-13
CVE Names: CVE-2020-8565 CVE-2021-32803 CVE-2021-32804
CVE-2021-33195 CVE-2021-33197 CVE-2021-33198
CVE-2021-34558 CVE-2021-37701 CVE-2021-37712
=====================================================================

1. Summary:

Updated images that include numerous enhancements, security fixes, and bug
fixes are now available for Red Hat OpenShift Data Foundation 4.9.0 on Red
Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Data Foundation is software-defined storage integrated
with and optimized for the Red Hat OpenShift Container Platform. Red Hat
OpenShift Data Foundation provides highly scalable, production-grade
persistent storage for stateful applications running in the Red Hat
OpenShift Container Platform. In addition to persistent storage, Red Hat
OpenShift Data Foundation provisions a multicloud data management service
with an S3 compatible API.

Security Fix(es):

* kubernetes: Incomplete fix for CVE-2019-11250 allows for token leak in
logs when logLevel >= 9 (CVE-2020-8565)

* nodejs-tar: Insufficient symlink protection allowing arbitrary file
creation and overwrite (CVE-2021-32803)

* nodejs-tar: Insufficient absolute path sanitization allowing arbitrary
file creation and overwrite (CVE-2021-32804; illustrated in the sketch
after this list)

* golang: net: lookup functions may return invalid host names
(CVE-2021-33195)

* golang: net/http/httputil: ReverseProxy forwards connection headers if
the first one is empty (CVE-2021-33197; see the header-stripping sketch
after this list)

* golang: math/big.Rat: may cause a panic or an unrecoverable fatal error
if passed inputs with very large exponents (CVE-2021-33198)

* golang: crypto/tls: certificate of wrong type is causing TLS client to
panic (CVE-2021-34558)

* nodejs-tar: insufficient symlink protection due to directory cache
poisoning using symbolic links allowing arbitrary file creation and
overwrite (CVE-2021-37701)

* nodejs-tar: insufficient symlink protection due to directory cache
poisoning using symbolic links allowing arbitrary file creation and
overwrite (CVE-2021-37712)
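
The four nodejs-tar flaws above are variants of a single extraction class:
archive entries that use absolute paths, ".." components, or previously
planted symlinks to write outside the destination directory. The following
Go sketch illustrates the checks involved; it is an illustration of the
technique, not the patched nodejs-tar code, and "archive.tar" and the
"out" directory are placeholders.

// A Go sketch of the safe-extraction checks whose absence underlies the
// nodejs-tar CVEs above; illustrative only.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

// extract unpacks src into destDir, rejecting entries that would escape it.
func extract(src io.Reader, destDir string) error {
	tr := tar.NewReader(src)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil // archive finished cleanly
		}
		if err != nil {
			return err
		}
		// CVE-2021-32804 class: reject absolute paths and ".." traversal.
		name := filepath.Clean(hdr.Name)
		if filepath.IsAbs(name) || name == ".." ||
			strings.HasPrefix(name, ".."+string(os.PathSeparator)) {
			return fmt.Errorf("unsafe entry path %q", hdr.Name)
		}
		target := filepath.Join(destDir, name)
		// CVE-2021-32803/-37701/-37712 class: refuse to write through a
		// symlink an earlier entry may have planted. (A full defense walks
		// every ancestor up to destDir; this checks only the direct parent.)
		if fi, err := os.Lstat(filepath.Dir(target)); err == nil &&
			fi.Mode()&os.ModeSymlink != 0 {
			return fmt.Errorf("parent of %q is a symlink", hdr.Name)
		}
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, 0o755); err != nil {
				return err
			}
		case tar.TypeReg:
			if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
				return err
			}
			f, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
			if err != nil {
				return err
			}
			if _, err := io.Copy(f, tr); err != nil {
				f.Close()
				return err
			}
			if err := f.Close(); err != nil {
				return err
			}
		default:
			// Symlinks, devices, etc. are skipped in this sketch.
		}
	}
}

func main() {
	f, err := os.Open("archive.tar")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	if err := extract(f, "out"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Likewise for CVE-2021-33197: the vulnerable ReverseProxy skipped
hop-by-hop header removal when the first Connection value was empty. A
minimal sketch of stripping such headers defensively in front of a proxy
(the backend address is a placeholder):

// A Go sketch of defensively deleting hop-by-hop headers before
// forwarding; useful only as an illustration of the hardening the
// upstream fix restored.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// stripHopByHop deletes every header named by Connection, tolerating empty
// comma-separated tokens (the case the vulnerable code mishandled).
func stripHopByHop(h http.Header) {
	for _, v := range h.Values("Connection") {
		for _, field := range strings.Split(v, ",") {
			if field = strings.TrimSpace(field); field != "" {
				h.Del(field)
			}
		}
	}
	h.Del("Connection")
}

func main() {
	backend, err := url.Parse("http://127.0.0.1:9000") // placeholder backend
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)
	inner := proxy.Director
	proxy.Director = func(req *http.Request) {
		stripHopByHop(req.Header) // defense in depth on unpatched Go builds
		inner(req)
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}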

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

These updated images include numerous enhancements and bug fixes. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat OpenShift Data Foundation Release Notes for
information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/4.9_release_notes/index

All Red Hat OpenShift Data Foundation users are advised to upgrade to
these updated images, which provide numerous bug fixes and enhancements.

3. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258

4. Bugs fixed (https://bugzilla.redhat.com/):

1810525 - [GSS][RFE] [Tracker for Ceph # BZ # 1910272] Deletion of data is not allowed after the Ceph cluster reaches osd-full-ratio threshold.
1853638 - [RFE] - Can Force deletion of noobaa-db be automatically handled in case on hosting node shutdown (similar to OSD & MONS)
1886638 - CVE-2020-8565 kubernetes: Incomplete fix for CVE-2019-11250 allows for token leak in logs when logLevel >= 9
1890438 - collect rados objects created by cephcsi to store internal mapping
1890978 - [External] Improve error logging in ocs-operator
1892709 - NooBaa storage class should be deleted when uninstalling
1901954 - Allow restoring snapshot to a different pool than parent volume
1910790 - [AWS 1AZ] [OCP 4.7 / OCS 4.6] ocs-operator stuck in Installing phase, and noobaa-db-0 pod in a Pending state
1927782 - With graceful mode, storagecluster/cephcluster deletion should be blocked if OBC based on RGW SC still exists
1929242 - [GSS] [RFE] QoS and limits in Object Bucket Claims in OpenShift for RGW
1932396 - [TRACKER for BZ #1943619] - RGW does not handle "Expect: 100-continue" answers from http requests not needing it
1934625 - [must-gather]improve logging and mention all instances in MG terminal log
1956285 - [must-gather] log collection for some ceph cmd failed with timeout: fork system call failed: Resource temporarily unavailable
1959793 - [RBD][Thick] PVC restored from a snapshot or cloned from a thick provisioned PVC, is not thick provisioned
1964083 - [RFE] ocs-must-gather should collect logs for RegionalDR
1965322 - Error code 500 is used when page not found
1968510 - OCS uninstall should check for Volumesnapshots before proceeding with graceful Uninstall
1968606 - OCS CSV Status moves to Failed and Installs again when a StorageCluster is created
1969216 - Rook may recreate a file system with existing pools
1973256 - [Tracker for BZ #1975608] [Mon Recovery testing(bz1965768)] After replacing degraded cephfs with new cephfs, the cephfs app-pod created before mon corruption is not accessible
1975272 - [RFE] [KMS] Add support for auto-detection of the Vault KV version
1975581 - OCS is still using deprecated api v1beta1
1979244 - [KMS] Keys are still listed in vault after deleting encrypted PVCs while using kv-v2 secret engine
1979502 - Multi-Cloud Object Gateway with v4.7 RC5 performs slow compared to v4.6 with mongodb - read flows
1980818 - RBD Snapshot of encrypted PVC may fail with error "BUG: "xyz" and "xyz" have the same VolID (xyz) set!? Call stack: goroutine 118 [running]:
1981331 - New namespace OBCs stuck in Pending, even though underlying bucketclass is Ready
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1983756 - [Multus] Deletion of CephBlockPool get stuck and blocks creation of new pools
1984284 - [GSS] [4.9 clone] Standalone Object Gateway is failing to connect
1984334 - [RFE] [4.9 clone] VAULT_BACKEND parameter should be added to the csi-kms-connection-details
1984396 - Failing the only OSD of a node on a 3 node cluster doesn't create blocking PDBs
1984735 - [External Mode] Monitoring spec is getting reset in CephCluster resource
1985074 - must-gather is skipping the ceph collection when there are two must-gather-helper pods
1986444 - Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted
1986794 - [Tracker for Ceph BZ #2000434] [RBD][Thick] PVC is not reaching Bound state
1987806 - [External] External Cluster resources are not updated even after updating the JSON input secret
1988518 - ocs-metrics-exporter runAsNonRoot error
1989482 - odf-operator.v4.9.0-43.ci fails to install
1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names
1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty
1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents
1990230 - ocs-operator.v4.9.0-50.ci csv not found
1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
1991822 - ocs-operator.v4.9.0-53.ci: install strategy failed with "noobaa-operator" is invalid
1992472 - How to add toleration to OCS pods for any non OCS taints?
1994261 - odf-operator.v4.9.0-91.ci fails to install with odf-console
1994577 - openshift-storage namespace is not created automatically when installing odf-operator
1994584 - ocs-operator is not hidden when installed as a dependency of odf-operator 4.9
1994602 - Icon for odf-operator is missing during and post installation
1994606 - Details missing on the card while installing ODF Operator via UI
1994687 - [vSphere]: csv ocs-registry:4.9.0-91.ci is in Installing phase
1995009 - Warning to create storage system is missing in odf operator Details page
1995056 - OCS_CSV_NAME is not correct in odf-operator-manager-config configmap
1995271 - [GSS] noobaa-db-pg-0 Pod get in stuck CrashLoopBackOff state when enabling hugepages
1995718 - [External Mode] Script "ceph-external-cluster-details-exporter.py" incompatible with python 2.x
1997237 - [UI] 404: Page Not Found error when creating storage system
1997624 - [KMS] PV encryption fails when using vault namespace functionality in ODF 4.9
1997738 - RBD pvc creation fails on VMware
1997922 - ODF-Operator installed failed because odf-console pod is in ImagePullBackOff
1998851 - Disabling Ceph File System also disable Ceph RBD VolumeSnapshotStorageClass
1999050 - [UI] Storage system link on OpenShift Data Foundation Page is not redirecting to the details page of the particular storage system
1999731 - CVE-2021-37701 nodejs-tar: Insufficient symlink protection due to directory cache poisoning using symbolic links allowing arbitrary file creation and overwrite
1999739 - CVE-2021-37712 nodejs-tar: Insufficient symlink protection due to directory cache poisoning using symbolic links allowing arbitrary file creation and overwrite
1999748 - Update OCP CSI sidecar to 4.9
1999763 - odf-operator.v4.9.0-119.ci failed with version in range: 4.9.0-119.ci for noobaa-operator
1999767 - [Tracker for Ceph BZ #2002557] Raw capacity is not shown in block pool page
2000082 - 2 of the odf quick starts guides 'getting started' and 'configuration & management' doesn't disappear from UI on odf-operator uninstallation
2000098 - ODF operator 4.9.0-120.ci fails to install on s390x due to incompatible ibm-storage-odf-plugin Docker image
2000143 - OCS 4.8 to ODF 4.9 upgrade failed on OCP 4.9 AWS cluster
2000190 - Current must-gather doesn't collect some of the odf-operator and storagesystem related logs for odf 4.9
2000579 - [UI] ODF tab under Storage found missing and appears and then disappears again with 404: Page not found error message on URL reload
2000588 - Bucket creation is failing from nooba management console
2000860 - Hide thick provisioning from the UI
2000865 - Revert the creation of thick provisioned storage class in 4.9
2001482 - StorageCluster stuck in Progressing state for MCG only deployment
2001539 - [UI] ODF Overview showing two different status for the same storage system
2001580 - [UI] Title "OpenShift Data Foundation" should be used instead of "OpenShift Data Foundation Overview"
2001970 - Openshift Data Foundation navitem missing under "Storage" navsection.
2002225 - [Tracker for Ceph BZ #2002398] [4.9.0-129.ci]: ceph health in WARN state due to mon.a crashed
2003444 - [IBM] odf metrics data is not shown in odf-console
2003904 - odf-operator.v4.9.0-136.ci fails to install with odf-console in ImagePullBackOff
2004003 - [External Mode] rook TLS certificate was not created
2004013 - [DR] After performing failover mirroringStatus reports image_health: ERROR
2004030 - Storagecluster should have a reference under storageSystem
2004824 - remove ibm-console from the odf-operator
2005103 - Bucket replication does not work (objects are not replicated)
2005290 - namespace: openshift-storage label missing for few OCS alerts
2005812 - [Recovery DOC Tracker 1978769] recover documents when ceph cluster filled up because of CephFS snapshots and clones
2005838 - [Tracker for Ceph BZ #1981186] [DR] Rbdmirror pod keeps showing unable to find a keyring
2005843 - [DR] odfmo-controller-manager pod label should be set based on Deployment name
2005937 - Not able to add toleration for MDS pods via StorageCluster yaml
2006176 - [DR] token-exchange-agent pod logs flooded with failed to sync and Failed to watch messages
2006865 - Ceph alerts that are auto-resolved should not be fired
2007130 - [External Mode] Exporter script fails if multiple monitoring endpoints are provided
2007202 - VolumeReplication condition "Degraded" never moves to "False" when recovering from a split brain
2007212 - VolumeReplication condition "Degraded" never moves to "False" when recovering from a split brain
2007377 - CephObjectStore does not update the RGW configuration period if 'period --commit' fails in the first reconcile
2007717 - ODF 4.9 is failing to deploy
2010041 - Backport to 4.9: Inject default label to noobaa_system_reconciler
2010185 - MirrorPeer status always in ExchangedSecret Phase
2010188 - [DR] odfmo-controller-manager pod label should be set based on Deployment name
2010194 - [DR] token-exchange-agent pod logs flooded with failed to sync and Failed to watch messages
2010202 - Backport to 4.9: Adding the ability to change/add labels in service monitor
2011225 - NooBaa Operator repo generates invalid CSV

5. References:

https://access.redhat.com/security/cve/CVE-2020-8565
https://access.redhat.com/security/cve/CVE-2021-32803
https://access.redhat.com/security/cve/CVE-2021-32804
https://access.redhat.com/security/cve/CVE-2021-33195
https://access.redhat.com/security/cve/CVE-2021-33197
https://access.redhat.com/security/cve/CVE-2021-33198
https://access.redhat.com/security/cve/CVE-2021-34558
https://access.redhat.com/security/cve/CVE-2021-37701
https://access.redhat.com/security/cve/CVE-2021-37712
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYbei+dzjgjWX9erEAQj9ow//QhaukAiKpIm579W/smWGhYvMRojDp0gq
vx74LCpSz03rPPyZxeoaDxGNDJohtpKaqGXvBn+2yyrul/F6zrsN4YIuaILre4EI
Hk8BKm+0LsG6INfLvTGNIhjW36fXb+vgR+Iv7tDQ85swAoC6e9JWFingqeSZTi6h
jE6HlcVDox57X0cntB3o6D1nqJlASTMwi09tg6R0yknRunuXpUwVHdQBqx4xDpr7
74OyifVqJKpJ46xVg01LZPBuUdKhFIzU6q60JNFMOTN6m9oaDfVg35eaRti3QomV
0BJkosldZkl3DdNy1FlPDj3xETn23DGEd4O00uvtW4Wh0Nr0z1Xi28h8Rz4TrtEG
r+90LQG36HmJd/13eEIJh3Q5YYLOivr+y/HwZ0lzx8jYHGMjx8gEkb/TxSOMEqs0
rRnLB+o4qogObIyog+TntW3A6HqYQ2KPqLIOoyc/ybEqaPazSlyV9hlSBZYMGOKh
AL7dF+siGIAUjlFBwmFGBquJYkKHncMkrE7R71Nj5qQtFoIKUUO9RQwHUHgrSRRg
anFkMTnu3Y7SxdzuHPxxG8kear2T9u5zk0nTQvL5mKDCbGCsukJgx0+t0WSWCkZz
BjRx0Ey2Uvsul9HYcYABz9w2pWip4SXOws3k/eF23Tg2GAHALIm5GcFuL6ByvUHx
+7kNvf3sAbA=
=CApk
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce