Red Hat Security Advisory 2018-2179-01

Posted Jul 11, 2018
Authored by Red Hat | Site access.redhat.com

Red Hat Security Advisory 2018-2179-01 - Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. Issues addressed include a cephx replay attack, weak cephx signatures, and missing authorization on OSD pool operations.

tags | advisory
systems | linux, redhat
advisories | CVE-2018-10861, CVE-2018-1128, CVE-2018-1129
MD5 | 680f123b307525ab720c8060a21e44bf

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: Red Hat Ceph Storage 3.0 security and bug fix update
Advisory ID: RHSA-2018:2179-01
Product: Red Hat Ceph Storage
Advisory URL: https://access.redhat.com/errata/RHSA-2018:2179
Issue date: 2018-07-11
CVE Names: CVE-2018-1128 CVE-2018-1129 CVE-2018-10861
=====================================================================

1. Summary:

An update for ceph is now available for Red Hat Ceph Storage for Ubuntu
16.04.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.

Security Fix(es):

* ceph: cephx protocol is vulnerable to replay attack (CVE-2018-1128)

* ceph: cephx uses weak signatures (CVE-2018-1129)

* ceph: ceph-mon does not perform authorization on OSD pool ops
(CVE-2018-10861)

For more details about the security issue(s), including the impact, a CVSS
score, and other related information, refer to the CVE page(s) listed in
the References section.
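As an illustrative (and deliberately hedged) fragment: the upstream cephx v2 hardening that addresses the replay and weak-signature issues introduced protocol-version settings that let a fully updated cluster reject the older, weaker protocol. The option names below are taken from that upstream work and should be verified against the release notes for your build before use:

```
# ceph.conf fragment (illustrative; set these only once every daemon and
# client in the cluster runs a cephx-v2-capable release, or older peers
# will be unable to authenticate)
[global]
cephx_require_version = 2           # monitors require cephx v2
cephx_service_require_version = 2   # services require v2-signed messages
```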

Bug Fix(es):

* Previously, Ceph RADOS Gateway (RGW) instances in zones configured for
multi-site replication would crash if configured to disable sync
("rgw_run_sync_thread = false"). Therefore, multi-site replication
environments could not start dedicated non-replication RGW instances. With
this update, the "rgw_run_sync_thread" option can be used to configure RGW
instances that will not participate in replication even if their zone is
replicated. (BZ#1552202)
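As a sketch of the fixed behavior (the instance name and zone below are made up for illustration), a dedicated non-replicating RGW instance in a replicated zone could now be configured as:

```
# ceph.conf fragment (hypothetical instance name and zone)
[client.rgw.serve-only]
rgw_zone = us-east                # zone is replicated by other instances
rgw_run_sync_thread = false       # this instance serves client traffic only
```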

* Previously, when increasing "max_mds" from "1" to "2", if the Metadata
Server (MDS) daemon was in the starting/resolve state for a long period of
time, restarting the MDS daemon led to an assertion failure. This left the
Ceph File System (CephFS) in a degraded state. With this update, increasing
"max_mds" no longer causes CephFS to enter a degraded state. (BZ#1566016)

* Previously, the transition to containerized Ceph left some "ceph-disk"
systemd unit files in place. The files were harmless, but appeared as
failing units. With this update, executing the
"switch-from-non-containerized-to-containerized-ceph-daemons.yml" playbook
disables the "ceph-disk" unit files as well. (BZ#1577846)

* Previously, the "entries_behind_master" metric output from the "rbd
mirror image status" CLI tool did not always reduce to zero under synthetic
workloads. This could cause a false alarm that there is an issue with RBD
mirroring replications. With this update, the metric is now updated
periodically without the need for an explicit I/O flush in the workload.
(BZ#1578509)

* Previously, when using the "pool create" command with
"expected_num_objects", placement group (PG) directories were not
pre-created at pool creation time as expected, resulting in performance
drops when filestore splitting occurred. With this update, the
"expected_num_objects" parameter is now passed through to filestore
correctly, and PG directories for the expected number of objects are
pre-created at pool creation time. (BZ#1579039)
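As a sketch of the intended usage (pool name, PG counts, and object count are illustrative; verify the argument order against "ceph osd pool create -h" for your release), the expected object count is passed at pool creation time so filestore can pre-create the PG directory hierarchy:

```
# Pre-create PG directories for ~1 million expected objects on a
# filestore-backed pool, avoiding later directory-splitting stalls.
ceph osd pool create mypool 128 128 replicated replicated_rule 1000000
```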

* Previously, internal RADOS Gateway (RGW) multi-site sync logic behaved
incorrectly when attempting to sync containers with S3 object versioning
enabled. Objects in versioning-enabled containers would fail to sync in
some scenarios, for example, when using "s3cmd sync" to mirror a filesystem
directory. With this update, RGW multi-site replication logic has been
corrected for the known failure cases. (BZ#1580497)

* Previously, when restarting OSD daemons, the "ceph-ansible" restart
script iterated over all the daemons by listing the units with "systemctl
list-units". Under certain circumstances, the output of the command
contained extra spaces, which caused parsing, and therefore the restart, to
fail. With this update, the underlying code has been changed to handle the
extra spaces.
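The failure mode can be illustrated with a small shell sketch (this is not the actual ceph-ansible code, just a demonstration of the parsing pitfall): splitting on single spaces breaks when the output contains runs of spaces, while whitespace-collapsing tools do not.

```shell
# A "systemctl list-units" line with irregular spacing (illustrative content)
line='ceph-osd@3.service     loaded active running Ceph object storage daemon'

# Fragile: cut treats every single space as a delimiter, so a run of
# spaces yields empty fields and the second field comes back empty here.
fragile=$(printf '%s\n' "$line" | cut -d' ' -f2)

# Robust: awk collapses runs of whitespace before splitting into fields,
# so the unit name is extracted correctly regardless of extra spaces.
unit=$(printf '%s\n' "$line" | awk '{print $1}')
echo "$unit"
```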

* Previously, the Ceph RADOS Gateway (RGW) server treated negative
byte-range object requests ("bytes=0--1") as invalid. Applications that
expect the AWS behavior for negative or other invalid range requests saw
unexpected errors and could fail. With this update, a new option
"rgw_ignore_get_invalid_range" has been added to RGW. When
"rgw_ignore_get_invalid_range" is set to "true", the RGW behavior for
invalid range requests is backwards compatible with AWS.
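As an illustrative ceph.conf fragment (the instance section name is hypothetical; per the description above, the option defaults to the stricter behavior and must be enabled explicitly):

```
# ceph.conf fragment (hypothetical RGW instance section)
[client.rgw.serve-only]
rgw_ignore_get_invalid_range = true   # AWS-compatible handling of
                                      # invalid byte-range requests
```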

3. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258

4. Bugs fixed (https://bugzilla.redhat.com/):

1575866 - CVE-2018-1128 ceph: cephx protocol is vulnerable to replay attack
1576057 - CVE-2018-1129 ceph: cephx uses weak signatures
1593308 - CVE-2018-10861 ceph: ceph-mon does not perform authorization on OSD pool ops

5. References:

https://access.redhat.com/security/cve/CVE-2018-1128
https://access.redhat.com/security/cve/CVE-2018-1129
https://access.redhat.com/security/cve/CVE-2018-10861
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2018 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBW0ZKz9zjgjWX9erEAQgRgA//ScZYMxaIUOg2jWoe8J+5Ez/py8CvZxDR
RJw5g1jzzKBqbbHp24+TWGiZn08WcXcpX3f0o4rPyBSL+Vyp2FcedyyxEqc3cueN
I/xGuT7+VM0rm3IiLobMecZ2aLU/I4RF0T1pKtfLT5NvtIiUOJFZ9vl6gHu+sea4
oSm5i3JXZjZZ0N36Y8zyBZAkMKj3Fu2RdFWm0SCkwP78neYiz82bfLJJGh3Y/zn9
nBiAZ/Kwqblrc8fg1HZEe3yAH1qDcON9XnwKsqc7hlHwnsrq2GkKdW63xk1FKIS2
+22M5G31I+2oafyN++G6k0UHOkm5+B099Fwuy+0lNiYAO5mFDwN8HldxSzP/6uYv
ebD7U0BC5Ybn/SMpa1NzCmK8CNhwtIrhfr7H7cJ3m5yzebT6Gyqtf63gsFBC2+BQ
wcgv90FHPvRGuzwc/NLBzIae3MvdK8qAP4fcw5CLoGmrEDOUI5uqzdxXGihPoHf0
AHcC4Hrf75GGhOfhhfm8ZGtuBCHhx7+QFMY6h/DQYx6z7NiS1Sd9diGlaSJS2KZ1
LHD5xLYb6UN2KyQbOfZaJYek0IO/kUQX1UhDfti2wCv76dmY8Rvzgj8yx9kGN/Ub
/wM0LpyUzWLlGMpahUo1YDky4CtR0PxogdauLFZ8DA7rGCBwuE9Ga2NLagmLREHq
q7JI4CKE2ME=
=1KKu
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce
