
Red Hat Security Advisory 2018-2179-01

Posted Jul 11, 2018
Authored by Red Hat

Red Hat Security Advisory 2018-2179-01 - Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. Issues addressed include a replay attack.

tags | advisory
systems | linux, redhat
advisories | CVE-2018-10861, CVE-2018-1128, CVE-2018-1129
MD5 | 680f123b307525ab720c8060a21e44bf

Red Hat Security Advisory

Synopsis: Moderate: Red Hat Ceph Storage 3.0 security and bug fix update
Advisory ID: RHSA-2018:2179-01
Product: Red Hat Ceph Storage
Advisory URL:
Issue date: 2018-07-11
CVE Names: CVE-2018-1128 CVE-2018-1129 CVE-2018-10861

1. Summary:

An update for ceph is now available for Red Hat Ceph Storage 3.0 for Ubuntu.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.

Security Fix(es):

* ceph: cephx protocol is vulnerable to replay attack (CVE-2018-1128)

* ceph: cephx uses weak signatures (CVE-2018-1129)

* ceph: ceph-mon does not perform authorization on OSD pool ops (CVE-2018-10861)

For more details about the security issue(s), including the impact, a CVSS
score, and other related information, refer to the CVE page(s) listed in
the References section.

Bug Fix(es):

* Previously, Ceph RADOS Gateway (RGW) instances in zones configured for
multi-site replication would crash if configured to disable sync
("rgw_run_sync_thread = false"). Therefore, multi-site replication
environments could not start dedicated non-replication RGW instances. With
this update, the "rgw_run_sync_thread" option can be used to configure RGW
instances that will not participate in replication even if their zone is
replicated. (BZ#1552202) (See the configuration sketch after this list.)

* Previously, when increasing "max_mds" from "1" to "2", if the Metadata
Server (MDS) daemon was in the starting/resolve state for a long period of
time, restarting the MDS daemon led to an assertion failure. This caused
the Ceph File System (CephFS) to be in a degraded state. With this update,
increasing "max_mds" no longer causes CephFS to enter a degraded state.
(BZ#1566016) (See the command example after this list.)

* Previously, the transition to containerized Ceph left behind some
"ceph-disk" unit files. The files were harmless, but appeared as failed.
With this update, executing the
"switch-from-non-containerized-to-containerized-ceph-daemons.yml" playbook
disables the "ceph-disk" unit files as well. (BZ#1577846) (See the playbook
example after this list.)

* Previously, the "entries_behind_master" metric output from the "rbd
mirror image status" CLI tool did not always reduce to zero under synthetic
workloads. This could cause a false alarm that there is an issue with RBD
mirroring replications. With this update, the metric is now updated
periodically without the need for an explicit I/O flush in the workload.

* Previously, when using the "pool create" command with
"expected_num_objects", placement group (PG) directories were not
pre-created at pool creation time as expected, resulting in performance
drops when filestore splitting occurred. With this update, the
"expected_num_objects" parameter is now passed through to filestore
correctly, and PG directories for the expected number of objects are
pre-created at pool creation time. (BZ#1579039) (See the pool creation
example after this list.)

* Previously, internal RADOS Gateway (RGW) multi-site sync logic behaved
incorrectly when attempting to sync containers with S3 object versioning
enabled. Objects in versioning-enabled containers would fail to sync in
some scenarios, for example when using "s3cmd sync" to mirror a filesystem
directory. With this update, RGW multi-site replication logic has been
corrected for the known failure cases. (BZ#1580497) (See the sync example
after this list.)

* When restarting OSD daemons, the "ceph-ansible" restart script went
through all the daemons by listing the units with "systemctl list-units".
Under certain circumstances, the output of the command contained extra
spaces, which caused parsing and the restart to fail. With this update, the
underlying code has been changed to handle the extra spaces. (See the
parsing sketch after this list.)

* Previously, the Ceph RADOS Gateway (RGW) server treated negative
byte-range object requests ("bytes=0--1") as invalid. Applications that
expect the AWS behavior for negative or other invalid range requests saw
unexpected errors and could fail. With this update, a new option,
"rgw_ignore_get_invalid_range", has been added to RGW. When
"rgw_ignore_get_invalid_range" is set to "true", the RGW behavior for
invalid range requests is backwards compatible with AWS. (See the range
request example after this list.)
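
As an illustration of the "rgw_run_sync_thread" fix, the following ceph.conf
sketch shows a dedicated non-replicating RGW instance in a replicated zone.
The section name is an assumption used for illustration, not taken from this
advisory.

    # Hypothetical ceph.conf excerpt: this RGW instance serves client
    # traffic only and does not run the multi-site sync thread.
    # "client.rgw.serve-only" is an illustrative instance name.
    [client.rgw.serve-only]
    rgw_run_sync_thread = false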
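
For the "max_mds" fix, a minimal sketch of raising the active MDS count,
assuming a file system named "cephfs" (a placeholder name):

    # Illustrative only; "cephfs" is a placeholder file system name.
    ceph fs set cephfs max_mds 2
    # Confirm that a second MDS has become active.
    ceph fs status cephfs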
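
For the containerization fix, a sketch of running the playbook from a
ceph-ansible checkout; the working directory and inventory file shown are
assumptions:

    # Paths and inventory are illustrative; adjust to your ceph-ansible layout.
    cd /usr/share/ceph-ansible
    ansible-playbook -i hosts \
        infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml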
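
For the "entries_behind_master" fix, a sketch of checking the metric; the
pool and image names are placeholders:

    # "data" and "image1" are placeholder pool and image names.
    rbd mirror image status data/image1
    # The journal status includes an entries_behind_master counter;
    # after this update it converges to zero once the peer has caught
    # up, without an explicit I/O flush in the workload.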
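
For the "expected_num_objects" fix, a sketch of pool creation with a
pre-declared object count, assuming the Luminous-era positional syntax; all
values are illustrative:

    # 128 placement groups, the default replicated CRUSH rule, and an
    # estimate of one million objects so filestore pre-creates the PG
    # directory tree at pool creation time.
    ceph osd pool create mypool 128 128 replicated replicated_rule 1000000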
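
For the multi-site versioning fix, an example of the kind of workload that
previously failed to replicate; the bucket and directory names are
placeholders, and the bucket is assumed to have S3 object versioning
enabled:

    # Mirror a local directory into a versioning-enabled bucket; objects
    # written this way now sync to the secondary zone.
    s3cmd sync ./site-content/ s3://versioned-bucket/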
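
For the OSD restart fix, a rough sketch of whitespace-tolerant unit listing.
This is not the actual ceph-ansible code, only an illustration of parsing
that does not break on extra spaces:

    # awk splits fields on runs of whitespace, so padded columns in the
    # systemctl output do not shift the unit name.
    systemctl list-units 'ceph-osd@*' --no-legend | awk '{print $1}'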
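
For the invalid byte-range fix, a sketch of enabling the new option and
issuing a negative range request; the RGW instance name, endpoint, and
object path are assumptions:

    # Hypothetical ceph.conf excerpt; "client.rgw.gateway1" is illustrative.
    [client.rgw.gateway1]
    rgw_ignore_get_invalid_range = true

    # With the option set, an invalid range such as "bytes=0--1" is
    # ignored and the whole object is returned, matching AWS behavior.
    curl -H "Range: bytes=0--1" http://rgw.example.com/bucket/object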

3. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:
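
As a minimal sketch of applying the packages on an Ubuntu host subscribed to
the Red Hat Ceph Storage repositories (package selection varies by node
role, and the package names shown are assumptions):

    sudo apt-get update
    # Upgrade only the Ceph packages already installed on this node.
    sudo apt-get --only-upgrade install ceph ceph-common radosgw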

4. Bugs fixed:

1575866 - CVE-2018-1128 ceph: cephx protocol is vulnerable to replay attack
1576057 - CVE-2018-1129 ceph: cephx uses weak signatures
1593308 - CVE-2018-10861 ceph: ceph-mon does not perform authorization on OSD pool ops

5. References:

6. Contact:

The Red Hat security contact is <>. More contact
details at

Copyright 2018 Red Hat, Inc.