
Red Hat Security Advisory 2018-2179-01
Posted Jul 11, 2018
Authored by Red Hat | Site access.redhat.com

Red Hat Security Advisory 2018-2179-01 - Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. Issues addressed include a replay attack.

tags | advisory
systems | linux, redhat
advisories | CVE-2018-10861, CVE-2018-1128, CVE-2018-1129
SHA-256 | dcc4b3046d8cff4c77cd181b7bb36d7967e583f5ca3b5fab4427296c02f4669b

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: Red Hat Ceph Storage 3.0 security and bug fix update
Advisory ID: RHSA-2018:2179-01
Product: Red Hat Ceph Storage
Advisory URL: https://access.redhat.com/errata/RHSA-2018:2179
Issue date: 2018-07-11
CVE Names: CVE-2018-1128 CVE-2018-1129 CVE-2018-10861
=====================================================================

1. Summary:

An update for ceph is now available for Red Hat Ceph Storage for Ubuntu
16.04.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.

Security Fix(es):

* ceph: cephx protocol is vulnerable to replay attack (CVE-2018-1128)

* ceph: cephx uses weak signatures (CVE-2018-1129)

* ceph: ceph-mon does not perform authorization on OSD pool ops
(CVE-2018-10861)

For more details about the security issue(s), including the impact, a CVSS
score, and other related information, refer to the CVE page(s) listed in
the References section.

Bug Fix(es):

* Previously, Ceph RADOS Gateway (RGW) instances in zones configured for
multi-site replication would crash if configured to disable sync
("rgw_run_sync_thread = false"). Therefore, multi-site replication
environments could not start dedicated non-replication RGW instances. With
this update, the "rgw_run_sync_thread" option can be used to configure RGW
instances that will not participate in replication even if their zone is
replicated. (BZ#1552202)
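
For example, an RGW instance intended to serve requests without participating
in replication could be configured roughly as follows in "ceph.conf". The
instance name "rgw.static1" is illustrative and not taken from this advisory:

  [client.rgw.static1]
  rgw_run_sync_thread = false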

* Previously, when increasing "max_mds" from "1" to "2", if the Metadata
Server (MDS) daemon was in the starting/resolve state for a long period of
time, then restarting the MDS daemon led to an assertion failure. This caused
the Ceph File System (CephFS) to be in a degraded state. With this update,
increasing "max_mds" no longer causes CephFS to enter a degraded state.
(BZ#1566016)

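For reference, "max_mds" is typically raised with a command of the following
form; the file system name "cephfs" is illustrative:

  # raise the number of active MDS ranks for the file system from 1 to 2
  ceph fs set cephfs max_mds 2
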
* Previously, the transition to containerized Ceph left behind some
"ceph-disk" unit files. The files were harmless, but appeared as failing
units. With this
update, executing the
"switch-from-non-containerized-to-containerized-ceph-daemons.yml" playbook
disables the "ceph-disk" unit files too. (BZ#1577846)
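
The playbook is normally invoked from a "ceph-ansible" checkout, for example
as shown below; the inventory file name and playbook path are illustrative
and may differ between installations:

  ansible-playbook -i hosts \
    infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml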

* Previously, the "entries_behind_master" metric output from the "rbd
mirror image status" CLI tool did not always reduce to zero under synthetic
workloads. This could cause a false alarm that there is an issue with RBD
mirroring replications. With this update, the metric is now updated
periodically without the need for an explicit I/O flush in the workload.
(BZ#1578509)
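
The metric can be inspected with the CLI tool named above, for example as
follows; the pool and image names are illustrative:

  # the status output for the image includes the entries_behind_master counter
  rbd mirror image status data/vm-disk-1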

* Previously, when using the "pool create" command with
"expected_num_objects", placement group (PG) directories were not
pre-created at pool creation time as expected, resulting in performance
drops when filestore splitting occurred. With this update, the
"expected_num_objects" parameter is now passed through to filestore
correctly, and PG directories for the expected number of objects are
pre-created at pool creation time. (BZ#1579039)
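
As a sketch of the affected workflow, a replicated pool with a pre-declared
object count could be created roughly as follows; the pool name, placement
group counts, CRUSH rule, and object count are illustrative:

  ceph osd pool create mypool 128 128 replicated replicated_rule 1000000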

* Previously, internal RADOS Gateway (RGW) multi-site sync logic behaved
incorrectly when attempting to sync containers with S3 object versioning
enabled. Objects in versioning-enabled containers would fail to sync in
some scenarios, for example when using "s3cmd sync" to mirror a filesystem
directory. With this update, RGW multi-site replication logic has been
corrected for the known failure cases. (BZ#1580497)

* Previously, when restarting OSD daemons, the "ceph-ansible" restart script
iterated over all the daemons by listing the units with
"systemctl list-units". Under certain circumstances, the output of the
command contained extra spaces, which caused parsing, and therefore the
restart, to fail. With this update, the underlying code has been changed to
handle the extra spaces.

* Previously, the Ceph RADOS Gateway (RGW) server treated negative
byte-range object requests ("bytes=0--1") as invalid. Applications that
expect the AWS behavior for negative or other invalid range requests saw
unexpected errors and could fail. With this update, a new option
"rgw_ignore_get_invalid_range" has been added to RGW. When
"rgw_ignore_get_invalid_range" is set to "true", the RGW behavior for
invalid range requests is backwards compatible with AWS.
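
For example, the new option could be enabled for an affected RGW instance
with a "ceph.conf" entry along these lines; the instance name is
illustrative:

  [client.rgw.gateway1]
  rgw_ignore_get_invalid_range = true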

3. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258
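
Since this erratum covers the Ubuntu 16.04 builds of Red Hat Ceph Storage,
the updated packages are normally applied with APT once the Red Hat
repositories are already configured on the host, for example:

  sudo apt-get update
  sudo apt-get upgrade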

4. Bugs fixed (https://bugzilla.redhat.com/):

1575866 - CVE-2018-1128 ceph: cephx protocol is vulnerable to replay attack
1576057 - CVE-2018-1129 ceph: cephx uses weak signatures
1593308 - CVE-2018-10861 ceph: ceph-mon does not perform authorization on OSD pool ops

5. References:

https://access.redhat.com/security/cve/CVE-2018-1128
https://access.redhat.com/security/cve/CVE-2018-1129
https://access.redhat.com/security/cve/CVE-2018-10861
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2018 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBW0ZKz9zjgjWX9erEAQgRgA//ScZYMxaIUOg2jWoe8J+5Ez/py8CvZxDR
RJw5g1jzzKBqbbHp24+TWGiZn08WcXcpX3f0o4rPyBSL+Vyp2FcedyyxEqc3cueN
I/xGuT7+VM0rm3IiLobMecZ2aLU/I4RF0T1pKtfLT5NvtIiUOJFZ9vl6gHu+sea4
oSm5i3JXZjZZ0N36Y8zyBZAkMKj3Fu2RdFWm0SCkwP78neYiz82bfLJJGh3Y/zn9
nBiAZ/Kwqblrc8fg1HZEe3yAH1qDcON9XnwKsqc7hlHwnsrq2GkKdW63xk1FKIS2
+22M5G31I+2oafyN++G6k0UHOkm5+B099Fwuy+0lNiYAO5mFDwN8HldxSzP/6uYv
ebD7U0BC5Ybn/SMpa1NzCmK8CNhwtIrhfr7H7cJ3m5yzebT6Gyqtf63gsFBC2+BQ
wcgv90FHPvRGuzwc/NLBzIae3MvdK8qAP4fcw5CLoGmrEDOUI5uqzdxXGihPoHf0
AHcC4Hrf75GGhOfhhfm8ZGtuBCHhx7+QFMY6h/DQYx6z7NiS1Sd9diGlaSJS2KZ1
LHD5xLYb6UN2KyQbOfZaJYek0IO/kUQX1UhDfti2wCv76dmY8Rvzgj8yx9kGN/Ub
/wM0LpyUzWLlGMpahUo1YDky4CtR0PxogdauLFZ8DA7rGCBwuE9Ga2NLagmLREHq
q7JI4CKE2ME=
=1KKu
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce