Red Hat Security Advisory 2015-1845-01

Posted Oct 5, 2015
Authored by Red Hat | Site access.redhat.com

Red Hat Security Advisory 2015-1845-01 - Red Hat Gluster Storage is a software-only scale-out storage solution that provides flexible and affordable unstructured data storage. It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. Red Hat Gluster Storage's Unified File and Object Storage is built on OpenStack's Object Storage. A flaw was found in the metadata constraints in Red Hat Gluster Storage's OpenStack Object Storage. By adding metadata in several separate calls, a malicious user could bypass the max_meta_count constraint, and store more metadata than allowed by the configuration.

tags | advisory
systems | linux, redhat
advisories | CVE-2014-8177
SHA-256 | 461ddcf991096b35f17de2c450f919683c621c960e5c6ac5cfb8a2d8e423db13

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: Red Hat Gluster Storage 3.1 update
Advisory ID: RHSA-2015:1845-01
Product: Red Hat Storage
Advisory URL: https://rhn.redhat.com/errata/RHSA-2015-1845.html
Issue date: 2015-10-05
CVE Names: CVE-2014-8177
=====================================================================

1. Summary:

Red Hat Gluster Storage 3.1 Update 1, which fixes one security issue and
several bugs, and adds various enhancements, is now available for Red Hat
Enterprise Linux 6.

Red Hat Product Security has rated this update as having Moderate security
impact. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available from the CVE link in the
References section.

2. Relevant releases/architectures:

Red Hat Gluster Storage NFS 3.1 - x86_64
Red Hat Gluster Storage Nagios 3.1 on RHEL-6 - noarch
Red Hat Gluster Storage Server 3.1 on RHEL-6 - noarch, x86_64
Red Hat Storage Native Client for Red Hat Enterprise Linux 6 - x86_64

3. Description:

Red Hat Gluster Storage is a software-only scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.

Red Hat Gluster Storage's Unified File and Object Storage is built on
OpenStack's Object Storage (swift).

A flaw was found in the metadata constraints in Red Hat Gluster Storage's
OpenStack Object Storage (swiftonfile). By adding metadata in several
separate calls, a malicious user could bypass the max_meta_count
constraint, and store more metadata than allowed by the configuration.
(CVE-2014-8177)
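
As an illustration of the request pattern the advisory describes (a
sketch only; the endpoint, token, and metadata names below are
placeholders, not values from this advisory), each individual update can
stay within the per-request metadata limit while the metadata
accumulated on the target grows past max_meta_count:

  # Each POST carries a single metadata item and passes the
  # per-request check on its own, but repeated POSTs add further
  # items until the configured max_meta_count is exceeded.
  curl -X POST -H "X-Auth-Token: $TOKEN" \
       -H "X-Container-Meta-Key1: value1" \
       http://swift.example.com:8080/v1/AUTH_test/container
  curl -X POST -H "X-Auth-Token: $TOKEN" \
       -H "X-Container-Meta-Key2: value2" \
       http://swift.example.com:8080/v1/AUTH_test/container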

This update also fixes numerous bugs and adds various enhancements. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in
the References section, for information on the most significant of these
changes.

This advisory introduces the following new features (illustrative command
sketches follow the list):

* Gdeploy is a tool that automates the process of creating, formatting,
and mounting bricks. When setting up a fresh cluster, gdeploy can be the
preferred way to set up the cluster, as manually executing numerous commands
can be error prone. The advantages of using gdeploy include automated
brick creation, flexibility in choosing the drives to configure (sd, vd,
etc.), and flexibility in naming the logical volumes (LV) and volume groups
(VG). (BZ#1248899)

* The gstatus command is now fully supported. The gstatus command provides
an easy-to-use, high-level view of the health of a trusted storage pool
with a single command. It gathers information about the health of a Red Hat
Gluster Storage trusted storage pool for distributed, replicated,
distributed-replicated, dispersed, and distributed-dispersed volumes.
(BZ#1250453)

* You can now recover a bad file detected by BitRot from a replicated
volume. The information about the bad file will be logged in the scrubber
log file located at /var/log/glusterfs/scrub.log. (BZ#1238171)

* Two tailored tuned profiles are introduced to improve performance for
specific Red Hat Gluster Storage workloads: rhgs-sequential-io, which
improves performance for large-file, sequential I/O workloads, and
rhgs-random-io, which improves performance for small-file, random I/O
workloads. (BZ#1251360)
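
The command sketches below illustrate how these features are typically
exercised. Host, device, and volume names are placeholders, and the
exact gdeploy configuration keys, gstatus options, and recovery steps
should be confirmed against the Red Hat Gluster Storage 3.1
documentation; this is an illustrative outline, not the documented
procedure.

  # gdeploy: describe hosts, backend devices, and the volume to create
  # in a single configuration file, then run it.
  gdeploy -c gluster_deploy.conf

  # gstatus: single-command health view of the trusted storage pool.
  gstatus

  # BitRot: locate the corrupted file reported by the scrubber, then
  # trigger self-heal on the replicated volume (assumed outline; the
  # file and volume names come from your own logs).
  grep -i "alert" /var/log/glusterfs/scrub.log
  gluster volume heal VOLNAME

  # Tuned profiles: activate the profile that matches the workload.
  tuned-adm profile rhgs-random-io
  tuned-adm profile rhgs-sequential-io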

All users of Red Hat Gluster Storage are advised to apply this update.

4. Solution:

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
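
On a registered system the erratum is normally applied with yum; the
article above describes the supported procedure in full, and the
following is only a minimal sketch:

  # Pull in the updated packages, then restart the affected gluster
  # services (or reboot) as the referenced article directs.
  yum update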

5. Bugs fixed (https://bugzilla.redhat.com/):

1027723 - Quota: volume-reset shouldn't remove quota-deem-statfs, unless explicitly specified, when quota is enabled.
1064265 - quota: allowed to set soft-limit %age beyond 100%
1076033 - Unknown Key: <bricks> are reported when the glusterd was restarted
1091936 - Incase of ACL not set on a file, nfs4_getfacl should return a default acl
1134288 - "Unable to get transaction opinfo for transaction ID" error messages in glusterd logs
1178100 - [USS]: gluster volume reset <vol-name>, resets the uss configured option but snapd process continues to run
1213893 - rebalance stuck at 0 byte when auth.allow is set
1215816 - 1 mkdir generates tons of log messages from dht and disperse xlators
1225452 - [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
1226665 - gf_store_save_value fails to check for errors, leading to emptying files in /var/lib/glusterd/
1226817 - nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
1227724 - Quota: Used space of the volume is wrongly calculated
1227759 - Write performance from a Windows client on 3-way replicated volume decreases substantially when one brick in the replica set is brought down
1228135 - [Bitrot] Gluster v set <volname> bitrot enable command succeeds, which is not supported to enable bitrot
1228158 - nfs-ganesha: error seen while delete node "Error: unable to create resource/fence device 'nfs5-cluster_ip-1', 'nfs5-cluster_ip-1' already exists on this system"
1229606 - Quota: " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
1229621 - Quota: Seeing error message in brick logs "E [posix-handle.c:157:posix_make_ancestryfromgfid] 0-vol0-posix: could not read the link from the gfid handle /rhs/brick1/b1/.glusterfs/a3/f3/a3f3664f-df98-448e-b5c8-924349851c7e (No such file or directory)"
1231080 - Snapshot: When soft limit is reached and auto-delete is enabled, create snapshot doesn't log anything in log files
1232216 - [geo-rep]: UnboundLocalError: local variable 'fd' referenced before assignment
1232569 - [Backup]: Glusterfind list shows the session as corrupted on the peer node
1234213 - [Backup]: Password of the peer nodes prompted whenever a glusterfind session is deleted.
1234399 - `gluster volume heal <vol-name> split-brain' changes required for entry-split-brain
1234610 - ACL created on a dht.linkto file on files that skipped rebalance
1234708 - Volume option cluster.enable-shared-storage is not listed in "volume set help-xml" output
1235182 - quota: marker accounting miscalculated when renaming a file while a write is in progress
1235571 - snapd crashed due to stack overflow
1235971 - nfs-ganesha: ganesha-ha.sh --status is actually the same as "pcs status"
1236038 - Data Loss: Remove-brick commit passes when the remove-brick process has not even started (due to killing glusterd)
1236546 - [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
1236672 - quota: brick crashes when create and remove performed in parallel
1236990 - glfsheal crashed
1238070 - snapd/quota/nfs runs on the RHGS node, even after that node was detached from trusted storage pool
1238071 - Quota: Quota Daemon doesn't start after node reboot
1238111 - Detaching a peer from the cluster doesn't remove snap related info and peer probe initiated from that node fails
1238116 - Gluster-swift object server leaks fds in failure cases (when exceptions are raised)
1238118 - nfs-ganesha: coredump for ganesha process post executing the volume start twice
1238147 - Object expirer daemon times out and raises exception while attempting to expire a million objects
1238171 - Not able to recover the corrupted file on Replica volume
1238398 - Unable to examine file in metadata split-brain after setting `replica.split-brain-choice' attribute to a particular replica
1238977 - Scrubber log should mark file corrupted message as Alert not as information
1239021 - AFR: gluster v restart force or brick process restart doesn't heal the files
1239075 - [geo-rep]: rename followed by deletes causes ESTALE
1240614 - Gluster nfs started running on one of the nodes of ganesha cluster, even though ganesha was running on it
1240657 - Deceiving log messages like "Failing STAT on gfid : split-brain observed. [Input/output error]" reported
1241385 - [Backup]: Glusterfind pre attribute '--output-prefix' not working as expected in case of DELETEs
1241761 - nfs-ganesha: coredump "pthread_spin_lock () from /lib64/libpthread.so.0"
1241807 - Brick crashed after a complete node failure
1241862 - EC volume: Replace bricks is not healing version of root directory
1241871 - Symlink mount fails for nfs-ganesha volume
1242803 - Quota list on a volume hangs after glusterd restart on a node.
1243542 - [RHEV-RHGS] App VMs paused due to IO error caused by split-brain, after initiating remove-brick operation
1243722 - glusterd crashed when a client which doesn't support SSL tries to mount a SSL enabled gluster volume
1243886 - huge mem leak in posix xattrop
1244415 - Enabling management SSL on a gluster cluster already configured can crash glusterd
1244527 - DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
1245162 - python-argparse not installed as a dependency package
1245165 - Some times files are not getting signed
1245536 - [RHGS-AMI] Same UUID generated across instances
1245542 - quota/marker: errors in log file 'Failed to get metadata for'
1245897 - gluster snapshot status --xml gives back unexpected non xml output
1245915 - snap-view:mount crash if debug mode is enabled
1245919 - USS: Take ref on root inode
1245924 - [Snapshot] Scheduler should check vol-name exists or not before adding scheduled jobs
1246946 - critical message seen in glusterd log file, when detaching a peer, but no functional loss
1247445 - [upgrade] After in-service software upgrade from RHGS 2.1 to RHGS 3.1, self-heal daemon is not coming online
1247537 - yum groups for RHGS Server and Console are listed under Available Language Groups instead of Available groups
1248899 - [Feature 3.1.1 gdeploy] Develop tool to setup thinp backend and create Gluster volumes
1249989 - [GSS] python-gluster packages not being treated as dependent package for gluster-swift packages
1250453 - [Feature]: Qualify gstatus to 3.1.1 release
1250821 - [RHGS 3.1 RHEL-7 AMI] RHEL-7 repo disabled by default, NFS and samba repos enabled by default
1251360 - Update RHGS tuned profiles for RHEL-6
1251925 - .trashcan is listed as container and breaks object expiration in gluster-swift
1253141 - [RHGS-AMI] RHUI repos not accessible on RHGS-3.1 RHEL-7 AMI
1254432 - gstatus: Overcommit field show wrong information when one of the node is down
1254514 - gstatus: Status message doesn't show the storage node name which is down
1254866 - gstatus: Running gstatus with -b option gives error
1254991 - gdeploy: unmount doesn't remove fstab entries
1255015 - gdeploy: unmount fails with fstype parameter
1255308 - Inconsistent data returned when objects are modified from file interface
1255471 - [libgfapi] crash when NFS Ganesha Volume is 100% full
1257099 - gdeploy: checks missing for brick mounts when there are existing physical volumes
1257162 - gdeploy: volume force option doesn't work as expected
1257468 - gdeploy: creation of thin pool stuck after brick cleanup
1257509 - Disperse volume: df -h on a nfs mount throws Invalid argument error
1257525 - CVE-2014-8177 gluster-swift metadata constraints are not correctly enforced
1258434 - gdeploy: peer probe issues during an add-brick operation with fresh hosts
1258810 - gdeploy: change all references to brick_dir in config file
1258821 - gdeploy: inconsistency in the way backend setup and volume creation uses brick_dirs value
1259750 - DHT: Few files are missing after remove-brick operation
1260086 - snapshot: from nfs-ganesha mount no content seen in .snaps/<snapshot-name> directory
1260982 - gdeploy: ENOTEMPTY errors when gdeploy fails
1262236 - glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
1262291 - `getfattr -n replica.split-brain-status <file>' command hung on the mount
1263094 - nfs-ganesha crashes due to usage of invalid fd in glfs_close
1263581 - nfs-ganesha: nfsd coredumps once quota limits cross while creating a file larger than the quota limit set
1263653 - dht: Avoid double unlock in dht_refresh_layout_cbk

6. Package List:

Red Hat Gluster Storage NFS 3.1:

Source:
nfs-ganesha-2.2.0-9.el6rhs.src.rpm

x86_64:
nfs-ganesha-2.2.0-9.el6rhs.x86_64.rpm
nfs-ganesha-debuginfo-2.2.0-9.el6rhs.x86_64.rpm
nfs-ganesha-gluster-2.2.0-9.el6rhs.x86_64.rpm

Red Hat Gluster Storage Nagios 3.1 on RHEL-6:

Source:
gluster-nagios-common-0.2.2-1.el6rhs.src.rpm
nagios-server-addons-0.2.2-1.el6rhs.src.rpm

noarch:
gluster-nagios-common-0.2.2-1.el6rhs.noarch.rpm
nagios-server-addons-0.2.2-1.el6rhs.noarch.rpm

Red Hat Gluster Storage Server 3.1 on RHEL-6:

Source:
gdeploy-1.0-12.el6rhs.src.rpm
gluster-nagios-addons-0.2.5-1.el6rhs.src.rpm
gluster-nagios-common-0.2.2-1.el6rhs.src.rpm
glusterfs-3.7.1-16.el6rhs.src.rpm
gstatus-0.65-1.el6rhs.src.rpm
openstack-swift-1.13.1-6.el6ost.src.rpm
redhat-storage-server-3.1.1.0-2.el6rhs.src.rpm
swiftonfile-1.13.1-5.el6rhs.src.rpm
vdsm-4.16.20-1.3.el6rhs.src.rpm

noarch:
gdeploy-1.0-12.el6rhs.noarch.rpm
gluster-nagios-common-0.2.2-1.el6rhs.noarch.rpm
openstack-swift-1.13.1-6.el6ost.noarch.rpm
openstack-swift-account-1.13.1-6.el6ost.noarch.rpm
openstack-swift-container-1.13.1-6.el6ost.noarch.rpm
openstack-swift-doc-1.13.1-6.el6ost.noarch.rpm
openstack-swift-object-1.13.1-6.el6ost.noarch.rpm
openstack-swift-proxy-1.13.1-6.el6ost.noarch.rpm
redhat-storage-server-3.1.1.0-2.el6rhs.noarch.rpm
swiftonfile-1.13.1-5.el6rhs.noarch.rpm
vdsm-cli-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-debug-plugin-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-gluster-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-hook-ethtool-options-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-hook-faqemu-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-hook-openstacknet-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-hook-qemucmdline-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-jsonrpc-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-python-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-python-zombiereaper-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-reg-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-tests-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-xmlrpc-4.16.20-1.3.el6rhs.noarch.rpm
vdsm-yajsonrpc-4.16.20-1.3.el6rhs.noarch.rpm

x86_64:
gluster-nagios-addons-0.2.5-1.el6rhs.x86_64.rpm
gluster-nagios-addons-debuginfo-0.2.5-1.el6rhs.x86_64.rpm
glusterfs-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-api-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-api-devel-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-cli-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-client-xlators-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-debuginfo-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-devel-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-fuse-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-ganesha-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-geo-replication-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-libs-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-rdma-3.7.1-16.el6rhs.x86_64.rpm
glusterfs-server-3.7.1-16.el6rhs.x86_64.rpm
gstatus-0.65-1.el6rhs.x86_64.rpm
gstatus-debuginfo-0.65-1.el6rhs.x86_64.rpm
python-gluster-3.7.1-16.el6rhs.x86_64.rpm
vdsm-4.16.20-1.3.el6rhs.x86_64.rpm
vdsm-debuginfo-4.16.20-1.3.el6rhs.x86_64.rpm

Red Hat Storage Native Client for Red Hat Enterprise Linux 6:

Source:
glusterfs-3.7.1-16.el6.src.rpm

x86_64:
glusterfs-3.7.1-16.el6.x86_64.rpm
glusterfs-api-3.7.1-16.el6.x86_64.rpm
glusterfs-api-devel-3.7.1-16.el6.x86_64.rpm
glusterfs-cli-3.7.1-16.el6.x86_64.rpm
glusterfs-client-xlators-3.7.1-16.el6.x86_64.rpm
glusterfs-debuginfo-3.7.1-16.el6.x86_64.rpm
glusterfs-devel-3.7.1-16.el6.x86_64.rpm
glusterfs-fuse-3.7.1-16.el6.x86_64.rpm
glusterfs-libs-3.7.1-16.el6.x86_64.rpm
glusterfs-rdma-3.7.1-16.el6.x86_64.rpm
python-gluster-3.7.1-16.el6.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/
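
As a brief sketch, an individually downloaded package can be checked
against the Red Hat key with rpm; the key file path shown is the
location normally shipped on Red Hat Enterprise Linux and is assumed
here:

  # Import the Red Hat release key (if not already imported), then
  # verify the package signature and digests before installation.
  rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  rpm -K glusterfs-3.7.1-16.el6rhs.x86_64.rpm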

7. References:

https://access.redhat.com/security/cve/CVE-2014-8177
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Technical_Notes/index.html

8. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2015 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iD4DBQFWElmqXlSAg2UNWIIRAoS4AJjPtCNBvpCBGOdoLCrTVZKPEU/EAJ9BOd7U
q65kLOt2tI8lW5GXiAps1w==
=zCq3
-----END PGP SIGNATURE-----


--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce