Red Hat Security Advisory 2015-1495-01
Posted Aug 3, 2015
Authored by Red Hat | Site access.redhat.com

Red Hat Security Advisory 2015-1495-01 - Red Hat Gluster Storage is a software-only scale-out storage solution that provides flexible and affordable unstructured data storage. It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. Red Hat Gluster Storage's Unified File and Object Storage is built on OpenStack's Object Storage. A flaw was found in the metadata constraints in OpenStack Object Storage. By adding metadata in several separate calls, a malicious user could bypass the max_meta_count constraint and store more metadata than the configuration allows.

tags | advisory
systems | linux, redhat
advisories | CVE-2014-5338, CVE-2014-5339, CVE-2014-5340, CVE-2014-7960
SHA-256 | 84376bdb91826099c8d1fa4579e5493c43a6f53f2686c6e646e7dfa8e57ef9c7

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

=====================================================================
Red Hat Security Advisory

Synopsis: Important: Red Hat Gluster Storage 3.1 update
Advisory ID: RHSA-2015:1495-01
Product: Red Hat Storage
Advisory URL: https://rhn.redhat.com/errata/RHSA-2015-1495.html
Issue date: 2015-07-29
CVE Names: CVE-2014-5338 CVE-2014-5339 CVE-2014-5340
CVE-2014-7960
=====================================================================

1. Summary:

Red Hat Gluster Storage 3.1, which fixes multiple security issues and
several bugs and adds various enhancements, is now available.

Red Hat Product Security has rated this update as having Important security
impact. Common Vulnerability Scoring System (CVSS) base scores, which give
detailed severity ratings, are available for each vulnerability from the
CVE links in the References section.

2. Relevant releases/architectures:

Red Hat Gluster Storage NFS 3.1 - x86_64
Red Hat Gluster Storage Nagios 3.1 on RHEL-6 - noarch, x86_64
Red Hat Gluster Storage Server 3.1 on RHEL-6 - noarch, x86_64
Red Hat Storage Native Client for Red Hat Enterprise Linux 5 - x86_64
Red Hat Storage Native Client for Red Hat Enterprise Linux 6 - x86_64

3. Description:

Red Hat Gluster Storage is a software-only scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.

Red Hat Gluster Storage's Unified File and Object Storage is built on
OpenStack's Object Storage (swift).

A flaw was found in the metadata constraints in OpenStack Object Storage
(swift). By adding metadata in several separate calls, a malicious user
could bypass the max_meta_count constraint, and store more metadata than
allowed by the configuration. (CVE-2014-7960)
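
As a minimal illustration of the flaw (the endpoint, token, and container
names here are hypothetical, and this sketch assumes container metadata,
which Swift merges across requests): each individual request stays under
the per-request constraints, but the accumulated metadata can grow past
max_meta_count.

    # Hypothetical endpoint and token; every POST merges one more
    # X-Container-Meta-* entry into the container's stored metadata.
    for i in $(seq 1 200); do
        curl -s -X POST \
             -H "X-Auth-Token: $TOKEN" \
             -H "X-Container-Meta-Key$i: value$i" \
             "$SWIFT_URL/v1/AUTH_test/mycontainer"
    done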

Multiple flaws were found in check-mk, a plug-in for the Nagios monitoring
system, which is used to provide monitoring and alerts for the Red Hat
Gluster Storage network and infrastructure: a reflected cross-site
scripting flaw due to improper output encoding, a flaw that could allow
attackers to write .mk files to arbitrary file system locations, and a flaw
that could possibly allow remote attackers to execute code in the wato
(web-based administration) module due to the unsafe use of Python's pickle
module to deserialize untrusted data.
(CVE-2014-5338, CVE-2014-5339, CVE-2014-5340)
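
A generic demonstration of why unpickling untrusted data is dangerous
(this is not check-mk code): a pickle stream can name any importable
callable, so loading attacker-controlled bytes can run arbitrary commands.

    # Hand-built pickle stream that resolves os.system and calls it
    # with "id"; never unpickle data from an untrusted source.
    python -c 'import pickle; pickle.loads(b"cos\nsystem\n(S\x27id\x27\ntR.")'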

This update also fixes numerous bugs and adds various enhancements. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in
the References section, for information on the most significant of these
changes.

This advisory introduces the following new features:

* NFS-Ganesha is now supported in a highly available active-active
environment. If an NFS-Ganesha server that is connected to an NFS client
running a particular application crashes, the NFS client is transparently
failed over to another NFS-Ganesha server without any administrative
intervention.
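
A hedged sketch of the cluster definition file (the VIP_<hostname>=<ip>
form is quoted in bug 1225507 below; the remaining key names, hostnames,
and addresses are illustrative assumptions):

    # /etc/ganesha/ganesha-ha.conf -- sourced as shell, so dotted
    # hostnames must not appear in variable names (see bug 1225507).
    HA_NAME="ganesha-ha-demo"
    HA_CLUSTER_NODES="nfs1.example.com,nfs2.example.com"
    VIP_nfs1="192.0.2.101"
    VIP_nfs2="192.0.2.102"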

* The snapshot scheduler creates snapshots automatically at a configured
interval. Snapshots can be created every hour, on a particular day of the
week, on a particular day of the month, or in a particular month.
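
A brief sketch of the scheduler workflow (the job name, cron-style
schedule, and volume name are illustrative; snap_scheduler.py ships with
glusterfs 3.7):

    # Initialise the scheduler, then add a job that snapshots the
    # volume at the top of every hour.
    snap_scheduler.py init
    snap_scheduler.py add "hourly-snaps" "0 * * * *" "myvol"
    snap_scheduler.py list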

* You can now create a clone of a snapshot. The clone is writable and
behaves like a regular volume, and a new volume can be created from it.
Snapshot cloning is a technology preview feature.
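
For example (snapshot and clone names are illustrative):

    # Clone an existing snapshot, then start the resulting volume.
    gluster snapshot clone myclone mysnap
    gluster volume start myclone
    gluster volume info myclone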

* Red Hat Gluster Storage supports network encryption using TLS/SSL.
Red Hat Gluster Storage uses TLS/SSL for authentication and authorization,
in place of the home-grown authentication framework used for normal
connections.
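
A hedged example of enabling encryption on a volume (volume and client
names are illustrative; the auth.ssl-allow option also appears in bug
1228127 below):

    # Enable TLS/SSL on the I/O path and restrict which certificate
    # common names may connect.
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'client1.example.com,client2.example.com'
    # Management-path encryption is switched on by creating this
    # marker file on every node (compare bug 1242367).
    touch /var/lib/glusterd/secure-access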

* BitRot detection identifies silent corruption of data, where the disk
gives no indication to the storage software layer that an error has
occurred. It also helps catch backend tampering with bricks, where data is
manipulated directly on the bricks without going through FUSE, NFS, or any
other access protocol.
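
A short sketch of the corresponding commands (the volume name is
illustrative; scrub pause/resume and scrub frequency appear in bugs
1224240 and 1226132 below):

    # Enable BitRot detection, set a scrub frequency, and
    # pause/resume the scrubber.
    gluster volume bitrot myvol enable
    gluster volume bitrot myvol scrub-frequency daily
    gluster volume bitrot myvol scrub pause
    gluster volume bitrot myvol scrub resume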

* Glusterfind is a utility that lists the files that were modified since
the previous backup session. This list of files can then be used by any
industry-standard backup application.
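
A sketch of the workflow (session, volume, and output file names are
illustrative; the create/pre/post subcommands match the glusterfind bugs
listed below):

    # Create a session, generate the changed-file list, hand it to
    # the backup tool, then mark the session as backed up.
    glusterfind create backupsession myvol
    glusterfind pre backupsession myvol /tmp/outfile.txt
    # ... run the backup application against /tmp/outfile.txt ...
    glusterfind post backupsession myvol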

* The Parallel Network File System (pNFS) is part of the NFS v4.1 protocol
and allows compute clients to access storage devices directly and in
parallel. pNFS is a technology preview feature.
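
For example, a pNFS-capable client mounts the volume over NFS v4.1 (server
and volume names are illustrative):

    # pNFS requires an NFS v4.1 mount.
    mount -t nfs -o vers=4.1 nfs1.example.com:/myvol /mnt/pnfs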

* Tiering improves performance and compliance in a Red Hat Gluster Storage
environment. It serves as an enabling technology for other enhancements by
combining cost-effective or archivally oriented storage for the majority
of user data with high-performance storage that absorbs the majority of
the I/O workload. Tiering is a technology preview feature.
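
A hedged sketch of attaching and detaching a hot tier (brick paths are
illustrative; the attach-tier/detach-tier commands match the tiering bugs
listed below):

    # Attach a replicated hot tier, and later drain and remove it.
    gluster volume attach-tier myvol replica 2 \
        hot1:/bricks/ssd1 hot2:/bricks/ssd2
    gluster volume detach-tier myvol start
    gluster volume detach-tier myvol status
    gluster volume detach-tier myvol commit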

All users of Red Hat Gluster Storage are advised to apply this update.

4. Solution:

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

5. Bugs fixed (https://bugzilla.redhat.com/):

826836 - Error reading info on glusterfsd service while removing 'Gluster File System' group via yum
871727 - [RHEV-RHS] Bringing down one storage node in a pure replicate volume (1x2) moved one of the VM to paused state.
874745 - [SELinux] [RFE] [RHGS] Red Hat Storage daemons need SELinux confinement
980043 - quota: regex in logging message
987980 - Dist-geo-rep : after remove brick commit from the machine having multiple bricks, the change_detector becomes xsync.
990029 - [RFE] enable gfid to path conversion
1002991 - Dist-geo-rep: errors in log related to syncdutils.py and monitor.py (status is Stable though)
1006840 - Dist-geo-rep : After data got synced, on slave volume; few directories(owner is non privileged User) have different permission then master volume
1008826 - [RFE] Dist-geo-rep : remove-brick commit(for brick(s) on master volume) should kill geo-rep worker process for the bricks getting removed.
1009351 - [RFE] Dist-geo-rep : no need of restarting other geo replication instances when they receives 'ECONNABORTED' on remove-brick commit of some other brick
1010327 - Dist-geo-rep : session status is defunct after syncdutils.py errors in log
1021820 - quota: quotad.socket in /tmp
1023416 - quota: limit set cli issues with setting in Bytes(B) or without providing the type(size)
1026831 - Dist-geo-rep : In the newly added node, the gsyncd uses xsync as change_detector instead of changelog,
1027142 - Dist-geo-rep : After remove brick commit it should stop the gsyncd running on the removed node
1027693 - Quota: features.quota-deem-statfs is "on" even after disabling quota.
1027710 - [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled.
1028965 - Dist-geo-rep : geo-rep config shows ignore_deletes as true always, even though its not true.
1029104 - dist-geo-rep: For a node with has passive status, "crawl status" is listed as "Hybrid Crawl"
1031515 - Dist-geo-rep : too much logging in slave gluster logs when there are some 20 million files for xsync to crawl
1032445 - Dist-geo-rep : When an active brick goes down and comes back, the gsyncd associated with it starts using xsync as change_detector
1039008 - Dist-geo-rep : After checkpoint set, status detail doesn't show updated checkpoint info until second execution.
1039674 - quota: ENOTCONN parodically seen in logs when setting hard/soft timeout during I/O.
1044344 - Assertion failed:uuid null while running getfattr on a file in a directory which has quota limit set
1047481 - DHT: Setfattr doesn't take rebalance into consideration
1048122 - [SNAPSHOT] : gluster snapshot delete doesnt provide option to delete all / multiple snaps of a given volume
1054154 - dist-geo-rep : gsyncd crashed in syncdutils.py while removing a file.
1059255 - dist-geo-rep : checkpoint doesn't reach because checkpoint became stale.
1062401 - RFE: move code for directory tree setup on hcfs to standalone script
1063215 - gluster cli crashed upon running 'heal info' command with the binaries compiled with -DDEBUG
1082659 - glusterfs-api package should pull glusterfs package as dependency
1083024 - [SNAPSHOT]: Setting config snap-max-hard-limit values require correction in output in different scenarios
1085202 - [SNAPSHOT]: While rebalance is in progress as part of remove-brick the snapshot creation fails with prevalidation
1093838 - Brick-sorted order of filenames in RHS directory harms Hadoop mapreduce performance
1098093 - [SNAPSHOT]: setting the -ve values in snapshot config should result in proper message
1098200 - [SNAPSHOT]: Stale options (Snap volume) needs to be removed from volume info
1101270 - quota a little bit lower than max LONG fails
1101697 - [barrier] Spelling correction in glusterd log message while enabling/disabling barrier
1102047 - [RFE] Need gluster cli command to retrieve current op-version on the RHS Node
1103971 - quota: setting limit to 16384PB shows wrong stat with list commands
1104478 - [SNAPSHOT] Create snaphost failed with error "unbarrier brick opfailed with the error quorum is not met"
1109111 - While doing yum update observed error reading information on service glusterfsd: No such file or directory
1109689 - [SNAPSHOT]: once we reach the soft-limit and auto-delete is set to disable than we warn user which is not logged into the logs
1110715 - "error reading information on service glusterfsd: No such file or directory" in install.log
1113424 - Dist-geo-rep : geo-rep throws wrong error messages when incorrect commands are executed.
1114015 - [SNAPSHOT]: setting config valuses doesn't delete the already created snapshots,but wrongly warns the user that it might delete
1114976 - nfs-ganesha: logs inside the /tmp directory
1116084 - Quota: Null client error messages are repeatedly written to quotad.log.
1117172 - DHT : - rename of files failed with 'No such File or Directory' when Source file was already present and all sub-volumes were up
1117270 - [SNAPSHOT]: error message for invalid snapshot status should be aligned with error messages of info and list
1120907 - [RFE] Add confirmation dialog to to snapshot restore operation
1121560 - [SNAPSHOT]: Output message when a snapshot create is issued when multiple bricks are down needs to be improved
1122064 - [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes back
1127401 - [EARLY ACCESS] ignore-deletes option is not something you can configure
1130998 - [SNAPSHOT]: "man gluster" needs modification for few snapshot commands
1131044 - DHT : - renaming same file from multiple mount failed with - 'Structure needs cleaning' error on all mount
1131418 - remove-brick: logs display the error related to "Operation not permitted"
1131968 - [SNAPSHOT]: snapshoted volume is read only but it shows rw attributes in mount
1132026 - [SNAPSHOT]: nouuid is appended for every snapshoted brick which causes duplication if the original brick has already nouuid
1132337 - CVE-2014-5338 CVE-2014-5339 CVE-2014-5340 check-mk: multiple flaws fixed in versions 1.2.4p4 and 1.2.5i4
1134690 - [SNAPSHOT]: glusterd crash while snaphshot creation was in progress
1139106 - [RFE] geo-rep mount broker setup has to be simplified.
1140183 - dist-geo-rep: Concurrent renames and node reboots results in slave having both source and destination of file with destination being 0 byte sticky file
1140506 - [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
1141433 - [SNAPSHOT]: output correction in setting snap-max-hard/soft-limit for system/volume
1144088 - dist-gep-rep: Files on master and slave are not in sync after file renames on master volume.
1147627 - dist-geo-rep: Few symlinks not synced to slave after an Active node got rebooted
1150461 - CVE-2014-7960 openstack-swift: Swift metadata constraints are not correctly enforced
1150899 - FEATURE REQUEST: Add "disperse" feature from GlusterFS 3.6
1156637 - Gluster small-file creates do not scale with brick count
1160790 - RFE: bandwidth throttling of geo-replication
1165663 - [USS]: Inconsistent behaviour when a snapshot is default deactivated and when it is activated and than deactivated
1171662 - libgfapi crashes in glfs_fini for RDMA type volumes
1176835 - [USS] : statfs call fails on USS.
1177911 - [USS]:Giving the wrong input while setting USS fails as expected but gluster v info shows the wrong value set in features.uss
1178130 - quota: quota list displays double the size of previous value, post heal completion.
1179701 - dist-geo-rep: Geo-rep skipped some files after replacing a node with the same hostname and IP
1181108 - [RFE] While creating a snapshot the timestamp has to be appended to the snapshot name.
1183988 - DHT:Quota:- brick process crashed after deleting .glusterfs from backend
1186328 - [SNAPSHOT]: Refactoring snapshot functions from glusterd-utils.c
1195659 - rhs-hadoop package is missing dependencies
1198021 - [SNAPSHOT]: Schedule snapshot creation with frequency ofhalf-hourly ,hourly,daily,weekly,monthly and yearly
1201712 - [georep]: Transition from xsync to changelog doesn't happen once the brick is brought online
1201732 - [dist-geo-rep]:Directory not empty and Stale file handle errors in geo-rep logs during deletes from master in history/changelog crawl
1202388 - [SNAPSHOT]: After a volume which has quota enabled is restored to a snap, attaching another node to the cluster is not successful
1203901 - NFS: IOZone tests hang, disconnects and hung tasks seen in logs.
1204044 - [geo-rep] stop-all-gluster-processes.sh fails to stop all gluster processes
1208420 - [SELinux] [SMB]: smb service fails to start with SELINUX enabled on RHEL6.6 and RHS 3.0.4 samba rpms
1209132 - RHEL7:Need samba build for RHEL7
1211839 - While performing in-service software update, glusterfs-geo-replication and glusterfs-cli packages are updated even when glusterfsd or distributed volume is up
1212576 - Inappropriate error message generated when non-resolvable hostname is given for peer in 'gluster volume create' command for distribute-replicate volume creation
1212701 - Remove replace-brick with data migration support from gluster cli
1213245 - Volume creation fails with error "host is not in 'Peer in Cluster' state"
1213325 - SMB:Clustering entries not removed from smb.conf even after stopping the ctdb volume when selinux running in permissive mode
1214211 - NFS logs are filled with system.posix_acl_access messages
1214253 - [SELinux] [glusterfsd] SELinux is preventing /usr/sbin/glusterfsd from write access on the sock_file /var/run/glusterd.socket
1214258 - [SELinux] [glusterfsd] SELinux is preventing /usr/sbin/glusterfsd from unlink access on the sock_file /var/run/glusterd.socket
1214616 - nfs-ganesha: iozone write test is causing nfs server crash
1215430 - erasure coded volumes can't read large directory trees
1215635 - [SELinux] [ctdb] SELinux is preventing /bin/bash from execute access on the file /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
1215637 - [SELinux] [RHGS-3.1] AVC's of all the executable hooks under /var/lib/glusterd/hooks/ on RHEL-6.7
1215640 - [SELinux] [smb] SELinux is preventing /usr/sbin/smbd from execute_no_trans access on the file /usr/sbin/smbd
1215885 - [SELinux] SMB: WIth selinux in enforcing mode the mount to a gluster volume on cifs fails with i/o error.
1216941 - [SELinux] RHEL7:SMB: ctdbd does not have write permissions on fuse mount when SELinux is enabled
1217852 - enable all HDP/GlusterFS stacks
1218902 - [SELinux] [SMB]: RHEL7.1- SELinux policy for all AVC's on Samba and CTDB
1219793 - Dependency problem due to glusterfs-api depending on glusterfs instead of only glusterfs-libs [rhel-6]
1220999 - [SELinux] [nfs-ganesha]: Volume export fails when SELinux is in Enforcing mode - RHEL-6.7
1221344 - Change hive/warehouse perms from 0755 to 0775
1221585 - [RFE] Red Hat Gluster Storage server support on RHEL 7 platform.
1221612 - Minor tweaks for Samba spec to fix build issues found in QA
1221743 - glusterd not starting after a fresh install of 3.7.0-1.el6rhs build due to missing library files
1222442 - I/O's hanging on tiered volumes (NFS)
1222776 - [geo-rep]: With tarssh the file is created at slave but it doesnt get sync
1222785 - [Virt-RHGS] Creating a image on gluster volume using qemu-img + gfapi throws error messages related to rpc_transport
1222856 - [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
1223201 - Simplify creation and set-up of meta-volume (shared storage)
1223205 - [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down
1223206 - "Snap_scheduler disable" should have different return codes for different failures.
1223209 - [Snapshot] Do not run scheduler if ovirt scheduler is running
1223225 - cli correction: if tried to create multiple bricks on same server shows replicate volume instead of disperse volume
1223238 - Update of glusterfs native client rpms in RHEL 7 rh-common channel for RHGS 3.1
1223299 - Data Tiering:Frequency counters not working
1223677 - [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
1223695 - [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
1223715 - Though brick demon is not running, gluster vol status command shows the pid
1223738 - Allow only lookup and delete operation on file that is in split-brain
1223906 - Downstream bz for vdsm dist-git
1224033 - [Backup]: Crash observed when glusterfind pre is run on a dist-rep volume
1224043 - [Backup]: Incorrect error message displayed when glusterfind post is run with invalid volume name
1224046 - [Backup]: Misleading error message when glusterfind delete is given with non-existent volume
1224065 - gluster nfs-ganesha enable command failed
1224068 - [Backup]: Packages to be installed for glusterfind api to work
1224076 - [Backup]: Glusterfind not working with change-detector as 'changelog'
1224077 - Directories are missing on the mount point after attaching tier to distribute replicate volume.
1224081 - Detaching tier start failed on dist-rep volume
1224086 - Detach tier commit failed on a dist-rep volume
1224109 - [Backup]: Unable to create a glusterfind session
1224126 - NFS logs are filled with system.posix_acl_access messages
1224159 - data tiering:detach-tier start command fails with "Commit failed on localhost"
1224164 - data tiering: detach tier status not working
1224165 - SIGNING FAILURE Error messages are poping up in the bitd log
1224175 - Glusterd fails to start after volume restore, tier attach and node reboot
1224183 - quota: glusterfsd crash once quota limit-usage is executed
1224215 - nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful
1224218 - BitRot :- bitd is not signing Objects if more than 3 bricks are present on same node
1224229 - BitRot :- If peer in cluster doesn't have brick then its should not start bitd on that node and should not create partial volume file
1224232 - BitRot :- In case of NFS mount, Object Versioning and file signing is not working as expected
1224236 - [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
1224239 - [Data Tiering] : Attaching a replica 2 hot tier to a replica 3 volume changes the volume topology to nx2 - causing inconsistent data between bricks in the replica set
1224240 - BitRot :- scrub pause/resume should give proper error message if scrubber is already paused/resumed and Admin tries to perform same operation
1224246 - [SNAPSHOT] : Appending time stamp to snap name while using scheduler to create snapshots should be removed.
1224609 - /etc/redhat-storage-release needs update to provide identity for RHGS Server 3.1.0
1224610 - nfs-ganesha: execution of script ganesha-ha.sh throws a error for a file
1224615 - NFS-Ganesha : Building downstream NFS-Ganesha rpms for 3.1
1224618 - Ganesha server became unresponsive after successfull failover
1224619 - nfs-ganesha:delete node throws error and pcs status also notifies about failures, in fact I/O also doesn't resume post grace period
1224629 - RHEL7: Samba build for rhel7 fails to install with dependency errors.
1224639 - 'glusterd.socket' file created by rpm scriptlet is not cleaned-up properly post installation
1224658 - RHEL7: CTDB build needed for RHEl7 with dependencies resolved
1224662 - [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
1225338 - [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state
1225371 - peers connected in the middle of a transaction are participating in the transaction
1225417 - Disks not visible in Storage devices tab on clicking Sync option
1225507 - nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs,error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found
1226132 - [RFE] Provide hourly scrubbing option
1226167 - tiering: use sperate log/socket/pid file for tiering
1226168 - Do not allow detach-tier commands on a non tiered volume
1226820 - Brick process crashed during self-heal process
1226844 - NFS-Ganesha: ACL should not be enabled by default
1226863 - nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start
1226889 - [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink
1226898 - [SELinux] redhat-storage-server should stop disabling SELinux
1227029 - glusterfs-devel: 3.7.0-3.el6 client package fails to install on dependency
1227179 - GlusterD fills the logs when the NFS-server is disabled
1227187 - The tiering feature requires counters.
1227197 - Disperse volume : Memory leak in client glusterfs
1227241 - cli/tiering:typo errors in tiering
1227311 - nfs-ganesha: 8 node pcs cluster setup fails
1227317 - Updating rfc.sh to point to the downstream branch.
1227326 - [SELinux] [BVT]: SELinux throws AVC errors while running DHT automation on RHEL-7.1
1227469 - should not spawn another migration daemon on graph switch
1227618 - [geo-rep]: use_meta_volume config option should be validated for its values
1227649 - linux untar hanged after the bricks are up in a 8+4 config
1227691 - [Backup]: Rename is getting recorded as a MODIFY entry in output file
1227704 - [Backup]: Glusterfind create should display a msg if the session is successfully created
1227709 - Not able to export volume using nfs-ganesha
1227869 - [Quota] The root of the volume on which the quota is set shows the volume size more than actual volume size, when checked with "df" command.
1228017 - [Backup]: Crash observed when glusterfind pre is run after deleting a directory containing files
1228127 - Volume needs restart after editing auth.ssl-allow list for volume options which otherwise has to be automatic
1228150 - nfs-ganesha: Upcall infrastructure support
1228152 - [RFE] nfs-ganesha: pNFS for RHGS 3.1
1228153 - nfs-ganesha: Fix gfapi.log location
1228155 - [RFE]nfs-ganesha: ACL feature support
1228164 - [Snapshot] Python crashes with trace back notification when shared storage is unmount from Storage Node
1228173 - [geo-rep]: RENAME are not synced to slave when quota is enabled.
1228222 - Disable pNFS by default for nfs 4.1 mount
1228225 - nfs-ganesha : Performance improvement for pNFS
1228246 - Data tiering:UI: volume status of a tier volume shows all bricks as hot bricks
1228247 - [Backup]: File movement across directories does not get captured in the output file in a X3 volume
1228294 - Disperse volume : Geo-replication failed
1228315 - [RFE] Provide nfs-ganesha for RHGS 3.1 on RHEL7
1228495 - [Backup]: Glusterfind pre fails with htime xattr updation error resulting in historical changelogs not available
1228496 - Disperse volume : glusterfsd crashed
1228525 - Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time
1228529 - Disperse volume : glusterfs crashed
1228597 - [Backup]: Chown/chgrp for a directory does not get recorded as a MODIFY entry in the outfile
1228598 - [Backup]: Glusterfind session(s) created before starting the volume results in 'changelog not available' error, eventually
1228626 - nfs-ganesha: add node fails to add a new node to the cluster
1228674 - Need to upgrade CTDB to version 2.5.5
1229202 - VDSM service is not running without mom in RHEL-7
1229242 - data tiering:force Remove brick is detaching-tier
1229245 - Data Tiering:Replica type volume not getting converted to tier type after attaching tier
1229248 - Data Tiering:UI:changes required to CLI responses for attach and detach tier
1229251 - Data Tiering; Need to change volume info details like type of volume and number of bricks when tier is attached to a EC(disperse) volume
1229256 - Incorrect and unclear "vol info" o/p for tiered volume
1229257 - Incorrect vol info post detach on disperse volume
1229260 - Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency)
1229261 - data tiering: do not allow tiering related volume set options on a regular volume
1229263 - Data Tiering:do not allow detach-tier when the volume is in "stopped" status
1229266 - [Tiering] : Attaching another node to the cluster which has a tiered volume times out
1229268 - Files migrated should stay on a tier for a full cycle
1229274 - tiering:glusterd crashed when trying to detach-tier commit force on a non-tiered volume.
1229567 - context of access control translator should be updated properly for GF_POSIX_ACL_*_KEY xattrs
1229569 - FSAL_GLUSTER : inherit ACLs is not working properly for group write permissions
1229607 - nfs-ganesha: unexporting a volume fails and nfs-ganesha process coredumps
1229623 - [Backup]: Glusterfind delete does not delete the session related information present in $GLUSTERD_WORKDIR
1229664 - [Backup]: Glusterfind create/pre/post/delete prompts for password of the peer node
1229667 - nfs-ganesha: gluster nfs-ganesha disable Error : Request timed out
1229674 - [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions
1230101 - [glusterd] glusterd crashed while trying to remove a bricks - one selected from each replica set - after shrinking nX3 to nX2 to nX1
1230129 - [SELinux]: [geo-rep]: AVC logged in RHEL6.7 during geo-replication setup between master and slave volume
1230186 - disable ping timer between glusterds
1230202 - [SELinux] [Snapshot] : avc logged in RHEL 6.7 set up during snapshot creation
1230252 - [New] - Creating a brick using RAID6 on RHEL7 gives unexpected exception
1230269 - [SELinux]: [geo-rep]: RHEL7.1 can not initialize the geo-rep session between master and slave volume, Permission Denied
1230513 - Disperse volume : data corruption with appending writes in 8+4 config
1230522 - Disperse volume : client crashed while running IO
1230607 - [geo-rep]: RHEL7.1: rsync should be made dependent package for geo-replication
1230612 - Disperse volume : NFS and Fuse mounts hung with plain IO
1230635 - Snapshot daemon failed to run on newly created dist-rep volume with uss enabled
1230646 - Not able to create snapshots for geo-replicated volumes when session is created with root user
1230764 - RHGS-3.1 op-version need to be corrected
1231166 - Disperse volume : fuse mount hung on renames on a distributed disperse volume
1231210 - [New] - xfsprogs should be pulled in as part of vdsm installation.
1231223 - Snapshot: When Cluster.enable-shared-storage is enable, shared storage should get mount after Node reboot
1231635 - glusterd crashed when testing heal full on replaced disks
1231647 - [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
1231651 - nfs-ganesha: 100% CPU usage with upcall feature enabled
1231732 - Renamed Files are missing after self-heal
1231771 - glusterd: Porting logging messages to new logging framework
1231775 - protocol client : Porting log messages to a new framework
1231776 - protocol server : Porting log messages to a new framework
1231778 - nfs : porting log messages to a new framework
1231781 - dht: Porting logging messages to new logging framework
1231782 - rdma : porting log messages to a new framework
1231784 - performance translators: Porting logging messages to new logging framework
1231788 - libgfapi : porting log messages to a new framework
1231792 - libglusterfs: Porting log messages to new framework and allocating segments
1231797 - tiering: Porting log messages to new framework
1231813 - Packages downgraded in RHGS 3.1 ISO image as compared to RHS 3.0.4 ISO image
1231831 - [RHGSS-3.1 ISO] redhat-storage-server package is not available in the ISO
1231835 - [RHGSS-3.1 ISO] ISO is based out of RHEL-6.6 and and not RHEL-6.7
1232159 - Incorrect mountpoint for lv with existing snapshot lv
1232230 - [geo-rep]: Directory renames are not captured in changelog hence it doesn't sync to the slave and glusterfind output
1232237 - [Backup]: Directory creation followed by its subsequent movement logs a NEW entry with the old path
1232272 - [New] - gluster-nagios-addons is not present in default ISO installation.
1232428 - [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
1232603 - upgrade and install tests failing for RHGS 3.1 glusterfs client packages due to failed dependencies on glusterfs-client-xlators
1232609 - [geo-rep]: RHEL7.1 segmentation faults are observed on all the master nodes
1232624 - gluster v set help needs to be updated for cluster.enable-shared-storage option
1232625 - Data Tiering: Files not getting promoted once demoted
1232641 - while performing in-service software upgrade, gluster-client-xlators, glusterfs-ganesha, python-gluster package should not get installed when distributed volume up
1232691 - [RHGS] RHGS 3.1 ISO menu title is obsolete
1233033 - nfs-ganesha: ganesha-ha.sh --refresh-config not working
1233062 - [Backup]: Modify after a rename is getting logged as a rename entry (only) in the outfile
1233147 - [Backup]: Rename and simultaneous movement of a hardlink logs an incorrect entry of RENAME
1233248 - glusterfsd, quotad and gluster-nfs process crashed while running nfs-sanity on a SSL enabled volume
1233486 - [RHGS client on RHEL 5] Failed to build *3.7.1-4 due to missing files
1233575 - [geo-rep]: Setting meta volume config to false when meta volume is stopped/deleted leads geo-rep to faulty
1233694 - Quota: Porting logging messages to new logging framework
1234419 - [geo-rep]: Feature fan-out fails with the use of meta volume config
1234720 - glusterd: glusterd crashes while importing a USS enabled volume which is already started
1234725 - [New] - Bricks fail to restore when a new node is added to the cluster and rebooted when having management and data on two different interfaces
1234916 - nfs-ganesha:acls enabled and "rm -rf" causes ganesha process crash
1235121 - nfs-ganesha: pynfs failures
1235147 - FSAL_GLUSTER : symlinks are not working properly if acl is enabled
1235225 - [geo-rep]: set_geo_rep_pem_keys.sh needs modification in gluster path to support mount broker functionality
1235244 - Missing trusted.ec.config xattr for files after heal process
1235540 - peer probe results in Peer Rejected(Connected)
1235544 - Upcall: Directory or file creation should send cache invalidation requests to parent directories
1235547 - Discrepancy in the rcu build for rhel 7
1235599 - Update of rhs-hadoop packages in RHEL 6 RH-Common Channel for RHGS 3.1 release
1235613 - [SELinux] SMB: SELinux policy to be set for /usr/sbin/ctdbd_wrapper.
1235628 - Provide and use a common way to do reference counting of (internal) structures
1235735 - glusterfsd crash observed after upgrading from 3.0.4 to 3.1
1235776 - libxslt package in RHGS 3.1 advisory is older in comparison to already released package
1236556 - Ganesha volume export failed
1236980 - [SELinux]: RHEL7.1CTDB node goes to DISCONNECTED/BANNED state when multiple nodes are rebooted
1237053 - Consecutive volume start/stop operations when ganesha.enable is on, leads to errors
1237063 - SMB:smb encrypt details to be updated in smb.conf man page for samba
1237065 - [ISO] warning: %post(samba-vfs-glusterfs-0:4.1.17-7.el6rhs.x86_64) scriptlet failed, exit status 255 seen in install.log
1237085 - SMB: smb3 encryption doesn't happen when smb encrypt is set to enabled for global and for share
1237165 - Incorrect state created in '/var/lib/nfs/statd'
1238149 - FSAL_GLUSTER : avoid possible memory corruption for inherit acl
1238156 - FSAL_GLUSTER : all operations on deadlink will fail when acl is enabled
1238979 - Though nfs-ganesha is not selected while installation, packages is getting installed
1239057 - ganesha volume export fails in rhel7.1
1239108 - Gluster commands timeout on SSL enabled system, after adding new node to trusted storage pool
1239280 - glusterfsd crashed after volume start force
1239317 - quota+afr: quotad crash "afr_local_init (local=0x0, priv=0x7fddd0372220, op_errno=0x7fddce1434dc) at afr-common.c:4112"
1240168 - Glustershd crashed
1240196 - Unable to pause georep session if one of the nodes in cluster is not part of master volume.
1240228 - [SELinux] samba-vfs-glusterfs should have a dependency on selinux packages (RHEL-6.7)
1240233 - [SELinux] samba-vfs-glusterfs should have a dependency on some selinux packages (RHEL-7.1)
1240245 - Disperse volume: NFS crashed
1240251 - [SELinux] ctdb should have a dependency on selinux packages (RHEL-6.7)
1240253 - [SELinux] ctdb should have a dependency on selinux packages (RHEL-7.1)
1240617 - Disperse volume : rebalance failed with add-brick
1240782 - Quota: Larger than normal perf hit with quota enabled.
1240800 - Package on ISO RHGSS-3.1-20150707.n.0-RHS-x86_64-DVD1.iso missing in yum repos
1241150 - quota: marker accounting can get miscalculated after upgrade to 3.7
1241366 - nfs-ganesha: add-node logic does not copy the "/etc/ganesha/exports" directory to the correct path on the newly added node
1241449 - [ISO] RHGSS-3.1-RHEL-7-20150708.n.0-RHGSS-x86_64-dvd1.iso installation fails - NoSuchPackage: teamd
1241772 - rebase gstatus to latest upstream
1241839 - nfs-ganesha: bricks crash while executing acl related operation for named group/user
1241843 - CTDB:RHEL7: Yum remove/install ctdb gives error in pre_uninstall and post_install sections and fails to remove ctdb package
1241996 - [ISO] RHEL 7.1 based RHGS ISO uses workstation not server
1242162 - [ISO] RHEL 7.1 based RHGS ISO does not have "openssh-server" installed and thus prevents ssh login
1242367 - with Management SSL on, 'gluster volume create' crashes glusterd
1242423 - Disperse volume : client glusterfs crashed while running IO
1242487 - [SELinux] nfs-ganesha: AVC denied for nfs-ganesha.service , ganesha cluster setup fails in Rhel7
1242543 - replacing a offline brick fails with "replace-brick" command
1242767 - SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable.
1243297 - [ISO] Packages missing in RHGS-3.1 el7 ISO
1243358 - NFS hung while running IO - Malloc/free deadlock
1243725 - Do not install RHEL-6 tuned profiles in RHEL-7 based RHGS
1243732 - [New] - vdsm: wrong package mapping
1244338 - Cannot install IPA server and client (sssd-common) on RHGS 3.1 on RHEL 7 because of version conflict in libldb
1245563 - [RHGS-AMI] Root partition too small and not configurable
1245896 - rebuild rhsc-doc with latest doc build
1245988 - repoclosure complains that pcs-0.9.139-9.el6.x86_64 has unresolved dependencies
1246128 - RHGSS-3.1-RHEL-6-20150722.2-RHS-x86_64-DVD1.iso contains glusterfs-resource-agents which should be removed
1246216 - i686 packages in RHGS ISO that are absent in puddle repos [el6]

6. Package List:

Red Hat Storage Native Client for Red Hat Enterprise Linux 5:

Source:
glusterfs-3.7.1-11.el5.src.rpm

x86_64:
glusterfs-3.7.1-11.el5.x86_64.rpm
glusterfs-api-3.7.1-11.el5.x86_64.rpm
glusterfs-api-devel-3.7.1-11.el5.x86_64.rpm
glusterfs-cli-3.7.1-11.el5.x86_64.rpm
glusterfs-client-xlators-3.7.1-11.el5.x86_64.rpm
glusterfs-debuginfo-3.7.1-11.el5.x86_64.rpm
glusterfs-devel-3.7.1-11.el5.x86_64.rpm
glusterfs-fuse-3.7.1-11.el5.x86_64.rpm
glusterfs-libs-3.7.1-11.el5.x86_64.rpm
glusterfs-rdma-3.7.1-11.el5.x86_64.rpm
python-gluster-3.7.1-11.el5.x86_64.rpm

Red Hat Gluster Storage NFS 3.1:

Source:
nfs-ganesha-2.2.0-5.el6rhs.src.rpm

x86_64:
nfs-ganesha-2.2.0-5.el6rhs.x86_64.rpm
nfs-ganesha-debuginfo-2.2.0-5.el6rhs.x86_64.rpm
nfs-ganesha-gluster-2.2.0-5.el6rhs.x86_64.rpm
nfs-ganesha-nullfs-2.2.0-5.el6rhs.x86_64.rpm

Red Hat Gluster Storage Nagios 3.1 on RHEL-6:

Source:
check-mk-1.2.6p1-3.el6rhs.src.rpm
gluster-nagios-common-0.2.0-1.el6rhs.src.rpm
nagios-plugins-1.4.16-12.el6rhs.src.rpm
nagios-server-addons-0.2.1-4.el6rhs.src.rpm
nrpe-2.15-4.1.el6rhs.src.rpm
pnp4nagios-0.6.22-2.1.el6rhs.src.rpm
pynag-0.9.1-1.el6rhs.src.rpm
python-cpopen-1.3-4.el6_5.src.rpm

noarch:
gluster-nagios-common-0.2.0-1.el6rhs.noarch.rpm
nagios-server-addons-0.2.1-4.el6rhs.noarch.rpm
pynag-0.9.1-1.el6rhs.noarch.rpm
pynag-examples-0.9.1-1.el6rhs.noarch.rpm

x86_64:
check-mk-1.2.6p1-3.el6rhs.x86_64.rpm
check-mk-debuginfo-1.2.6p1-3.el6rhs.x86_64.rpm
check-mk-livestatus-1.2.6p1-3.el6rhs.x86_64.rpm
nagios-plugins-1.4.16-12.el6rhs.x86_64.rpm
nagios-plugins-debuginfo-1.4.16-12.el6rhs.x86_64.rpm
nagios-plugins-dummy-1.4.16-12.el6rhs.x86_64.rpm
nagios-plugins-nrpe-2.15-4.1.el6rhs.x86_64.rpm
nagios-plugins-ping-1.4.16-12.el6rhs.x86_64.rpm
nrpe-debuginfo-2.15-4.1.el6rhs.x86_64.rpm
pnp4nagios-0.6.22-2.1.el6rhs.x86_64.rpm
pnp4nagios-debuginfo-0.6.22-2.1.el6rhs.x86_64.rpm
python-cpopen-1.3-4.el6_5.x86_64.rpm
python-cpopen-debuginfo-1.3-4.el6_5.x86_64.rpm

Red Hat Gluster Storage Server 3.1 on RHEL-6:

Source:
augeas-1.0.0-10.el6.src.rpm
clufter-0.11.2-1.el6.src.rpm
cluster-3.0.12.1-73.el6.src.rpm
clustermon-0.16.2-31.el6.src.rpm
corosync-1.4.7-2.el6.src.rpm
ctdb2.5-2.5.5-7.el6rhs.src.rpm
fence-virt-0.2.3-19.el6.src.rpm
gluster-nagios-addons-0.2.4-4.el6rhs.src.rpm
gluster-nagios-common-0.2.0-1.el6rhs.src.rpm
glusterfs-3.7.1-11.el6rhs.src.rpm
gstatus-0.64-3.1.el6rhs.src.rpm
libqb-0.17.1-1.el6.src.rpm
libtalloc-2.1.1-4.el6rhs.src.rpm
nagios-plugins-1.4.16-12.el6rhs.src.rpm
nrpe-2.15-4.1.el6rhs.src.rpm
openais-1.1.1-7.el6.src.rpm
openstack-swift-1.13.1-4.el6ost.src.rpm
pacemaker-1.1.12-8.el6.src.rpm
pcs-0.9.139-9.el6.src.rpm
python-blivet-1.0.0.2-1.el6rhs.src.rpm
python-cpopen-1.3-4.el6_5.src.rpm
python-eventlet-0.14.0-1.el6.src.rpm
python-greenlet-0.4.2-1.el6.src.rpm
python-keystoneclient-0.9.0-5.el6ost.src.rpm
python-prettytable-0.7.2-1.el6.src.rpm
python-pyudev-0.15-2.el6rhs.src.rpm
redhat-storage-logos-60.0.20-1.el6rhs.src.rpm
redhat-storage-server-3.1.0.3-1.el6rhs.src.rpm
resource-agents-3.9.5-24.el6.src.rpm
ricci-0.16.2-81.el6.src.rpm
userspace-rcu-0.7.9-2.el6rhs.src.rpm
vdsm-4.16.20-1.2.el6rhs.src.rpm

noarch:
clufter-cli-0.11.2-1.el6.noarch.rpm
clufter-lib-ccs-0.11.2-1.el6.noarch.rpm
clufter-lib-general-0.11.2-1.el6.noarch.rpm
clufter-lib-pcs-0.11.2-1.el6.noarch.rpm
gluster-nagios-common-0.2.0-1.el6rhs.noarch.rpm
openstack-swift-1.13.1-4.el6ost.noarch.rpm
openstack-swift-account-1.13.1-4.el6ost.noarch.rpm
openstack-swift-container-1.13.1-4.el6ost.noarch.rpm
openstack-swift-doc-1.13.1-4.el6ost.noarch.rpm
openstack-swift-object-1.13.1-4.el6ost.noarch.rpm
openstack-swift-proxy-1.13.1-4.el6ost.noarch.rpm
python-blivet-1.0.0.2-1.el6rhs.noarch.rpm
python-eventlet-0.14.0-1.el6.noarch.rpm
python-eventlet-doc-0.14.0-1.el6.noarch.rpm
python-keystoneclient-0.9.0-5.el6ost.noarch.rpm
python-keystoneclient-doc-0.9.0-5.el6ost.noarch.rpm
python-prettytable-0.7.2-1.el6.noarch.rpm
python-pyudev-0.15-2.el6rhs.noarch.rpm
redhat-storage-logos-60.0.20-1.el6rhs.noarch.rpm
redhat-storage-server-3.1.0.3-1.el6rhs.noarch.rpm
vdsm-cli-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-debug-plugin-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-gluster-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-hook-ethtool-options-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-hook-faqemu-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-hook-openstacknet-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-hook-qemucmdline-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-jsonrpc-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-python-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-python-zombiereaper-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-reg-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-tests-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-xmlrpc-4.16.20-1.2.el6rhs.noarch.rpm
vdsm-yajsonrpc-4.16.20-1.2.el6rhs.noarch.rpm

x86_64:
augeas-1.0.0-10.el6.x86_64.rpm
augeas-debuginfo-1.0.0-10.el6.x86_64.rpm
augeas-devel-1.0.0-10.el6.x86_64.rpm
augeas-libs-1.0.0-10.el6.x86_64.rpm
ccs-0.16.2-81.el6.x86_64.rpm
clufter-debuginfo-0.11.2-1.el6.x86_64.rpm
cluster-cim-0.16.2-31.el6.x86_64.rpm
cluster-debuginfo-3.0.12.1-73.el6.x86_64.rpm
cluster-snmp-0.16.2-31.el6.x86_64.rpm
clusterlib-3.0.12.1-73.el6.x86_64.rpm
clusterlib-devel-3.0.12.1-73.el6.x86_64.rpm
clustermon-debuginfo-0.16.2-31.el6.x86_64.rpm
cman-3.0.12.1-73.el6.x86_64.rpm
corosync-1.4.7-2.el6.x86_64.rpm
corosync-debuginfo-1.4.7-2.el6.x86_64.rpm
corosynclib-1.4.7-2.el6.x86_64.rpm
corosynclib-devel-1.4.7-2.el6.x86_64.rpm
ctdb2.5-2.5.5-7.el6rhs.x86_64.rpm
ctdb2.5-debuginfo-2.5.5-7.el6rhs.x86_64.rpm
fence-virt-0.2.3-19.el6.x86_64.rpm
fence-virt-debuginfo-0.2.3-19.el6.x86_64.rpm
fence-virtd-0.2.3-19.el6.x86_64.rpm
fence-virtd-checkpoint-0.2.3-19.el6.x86_64.rpm
fence-virtd-libvirt-0.2.3-19.el6.x86_64.rpm
fence-virtd-multicast-0.2.3-19.el6.x86_64.rpm
fence-virtd-serial-0.2.3-19.el6.x86_64.rpm
gfs2-utils-3.0.12.1-73.el6.x86_64.rpm
gluster-nagios-addons-0.2.4-4.el6rhs.x86_64.rpm
gluster-nagios-addons-debuginfo-0.2.4-4.el6rhs.x86_64.rpm
glusterfs-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-api-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-api-devel-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-cli-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-client-xlators-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-debuginfo-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-devel-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-fuse-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-ganesha-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-geo-replication-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-libs-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-rdma-3.7.1-11.el6rhs.x86_64.rpm
glusterfs-server-3.7.1-11.el6rhs.x86_64.rpm
gstatus-0.64-3.1.el6rhs.x86_64.rpm
gstatus-debuginfo-0.64-3.1.el6rhs.x86_64.rpm
libqb-0.17.1-1.el6.x86_64.rpm
libqb-debuginfo-0.17.1-1.el6.x86_64.rpm
libqb-devel-0.17.1-1.el6.x86_64.rpm
libtalloc-2.1.1-4.el6rhs.x86_64.rpm
libtalloc-debuginfo-2.1.1-4.el6rhs.x86_64.rpm
libtalloc-devel-2.1.1-4.el6rhs.x86_64.rpm
libvirt-debuginfo-0.10.2-54.el6.x86_64.rpm
libvirt-lock-sanlock-0.10.2-54.el6.x86_64.rpm
modcluster-0.16.2-31.el6.x86_64.rpm
nagios-plugins-1.4.16-12.el6rhs.x86_64.rpm
nagios-plugins-debuginfo-1.4.16-12.el6rhs.x86_64.rpm
nagios-plugins-ide_smart-1.4.16-12.el6rhs.x86_64.rpm
nagios-plugins-procs-1.4.16-12.el6rhs.x86_64.rpm
nrpe-2.15-4.1.el6rhs.x86_64.rpm
nrpe-debuginfo-2.15-4.1.el6rhs.x86_64.rpm
openais-1.1.1-7.el6.x86_64.rpm
openais-debuginfo-1.1.1-7.el6.x86_64.rpm
openaislib-1.1.1-7.el6.x86_64.rpm
openaislib-devel-1.1.1-7.el6.x86_64.rpm
pacemaker-1.1.12-8.el6.x86_64.rpm
pacemaker-cli-1.1.12-8.el6.x86_64.rpm
pacemaker-cluster-libs-1.1.12-8.el6.x86_64.rpm
pacemaker-cts-1.1.12-8.el6.x86_64.rpm
pacemaker-debuginfo-1.1.12-8.el6.x86_64.rpm
pacemaker-doc-1.1.12-8.el6.x86_64.rpm
pacemaker-libs-1.1.12-8.el6.x86_64.rpm
pacemaker-libs-devel-1.1.12-8.el6.x86_64.rpm
pacemaker-remote-1.1.12-8.el6.x86_64.rpm
pcs-0.9.139-9.el6.x86_64.rpm
pcs-debuginfo-0.9.139-9.el6.x86_64.rpm
pytalloc-2.1.1-4.el6rhs.x86_64.rpm
pytalloc-devel-2.1.1-4.el6rhs.x86_64.rpm
python-clufter-0.11.2-1.el6.x86_64.rpm
python-cpopen-1.3-4.el6_5.x86_64.rpm
python-cpopen-debuginfo-1.3-4.el6_5.x86_64.rpm
python-gluster-3.7.1-11.el6rhs.x86_64.rpm
python-greenlet-0.4.2-1.el6.x86_64.rpm
python-greenlet-debuginfo-0.4.2-1.el6.x86_64.rpm
python-greenlet-devel-0.4.2-1.el6.x86_64.rpm
resource-agents-3.9.5-24.el6.x86_64.rpm
resource-agents-debuginfo-3.9.5-24.el6.x86_64.rpm
resource-agents-sap-3.9.5-24.el6.x86_64.rpm
ricci-0.16.2-81.el6.x86_64.rpm
ricci-debuginfo-0.16.2-81.el6.x86_64.rpm
userspace-rcu-0.7.9-2.el6rhs.x86_64.rpm
userspace-rcu-debuginfo-0.7.9-2.el6rhs.x86_64.rpm
userspace-rcu-devel-0.7.9-2.el6rhs.x86_64.rpm
vdsm-4.16.20-1.2.el6rhs.x86_64.rpm
vdsm-debuginfo-4.16.20-1.2.el6rhs.x86_64.rpm

Red Hat Storage Native Client for Red Hat Enterprise Linux 6:

Source:
glusterfs-3.7.1-11.el6.src.rpm

x86_64:
glusterfs-3.7.1-11.el6.x86_64.rpm
glusterfs-api-3.7.1-11.el6.x86_64.rpm
glusterfs-api-devel-3.7.1-11.el6.x86_64.rpm
glusterfs-cli-3.7.1-11.el6.x86_64.rpm
glusterfs-client-xlators-3.7.1-11.el6.x86_64.rpm
glusterfs-debuginfo-3.7.1-11.el6.x86_64.rpm
glusterfs-devel-3.7.1-11.el6.x86_64.rpm
glusterfs-fuse-3.7.1-11.el6.x86_64.rpm
glusterfs-libs-3.7.1-11.el6.x86_64.rpm
glusterfs-rdma-3.7.1-11.el6.x86_64.rpm
python-gluster-3.7.1-11.el6.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2014-5338
https://access.redhat.com/security/cve/CVE-2014-5339
https://access.redhat.com/security/cve/CVE-2014-5340
https://access.redhat.com/security/cve/CVE-2014-7960
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Technical_Notes/index.html

8. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2015 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iD8DBQFVuGYqXlSAg2UNWIIRAs3HAKC5TfYm5iz1TjOyacyQQI6tQNflYACeMxpw
DXQ4TOrVl3XI0Q1olVF1WxE=
=heKB
-----END PGP SIGNATURE-----


--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce