
Red Hat Security Advisory 2017-0486-01

Posted Mar 23, 2017
Authored by Red Hat | Site access.redhat.com

Red Hat Security Advisory 2017-0486-01 - Red Hat Gluster Storage is a software-only scale-out storage solution that provides flexible and affordable unstructured data storage. It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. The following packages have been upgraded to a later upstream version: glusterfs, redhat-storage-server, vdsm. A security issue has been addressed.

tags | advisory
systems | linux, redhat
advisories | CVE-2015-1795
SHA-256 | 05ccadb8422bd3f3bd16a938142cda7e5d16ceec2b9a6a2f0b766b2576986aac


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update
Advisory ID: RHSA-2017:0486-01
Product: Red Hat Gluster Storage
Advisory URL: https://rhn.redhat.com/errata/RHSA-2017-0486.html
Issue date: 2017-03-23
CVE Names: CVE-2015-1795
=====================================================================

1. Summary:

An update is now available for Red Hat Gluster Storage 3.2 on Red Hat
Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Gluster Storage Server 3.2 on RHEL-7 - noarch, x86_64
Red Hat Storage Native Client for Red Hat Enterprise Linux 7 - noarch, x86_64

3. Description:

Red Hat Gluster Storage is a software-only scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.

The following packages have been upgraded to a later upstream version:
glusterfs (3.8.4), redhat-storage-server (3.2.0.2), vdsm (4.17.33).
(BZ#1362376)

Security Fix(es):

* It was found that the glusterfs-server RPM package would write a file
with a predictable name into the world-readable /tmp directory. A local
attacker could potentially use this flaw to escalate their privileges to
root by modifying the shell script during the installation of the
glusterfs-server package (see the sketch below). (CVE-2015-1795)

This issue was discovered by Florian Weimer of Red Hat Product Security.
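
The underlying flaw class can be illustrated with a short, hypothetical
shell sketch; this is not the actual %pretrans script shipped in the
package, and the file names are placeholders:

  # Vulnerable pattern: a fixed, predictable path in the world-writable
  # /tmp can be pre-created or symlinked by a local attacker before the
  # script is executed as root.
  echo "commands" > /tmp/glusterfs_pretrans.sh
  sh /tmp/glusterfs_pretrans.sh

  # Safer pattern: mktemp(1) creates the file atomically, with an
  # unpredictable name and restrictive permissions.
  tmpfile=$(mktemp /tmp/glusterfs_pretrans.XXXXXX)
  echo "commands" > "$tmpfile"
  sh "$tmpfile"
  rm -f -- "$tmpfile"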

Bug Fix(es):

* Bricks remain stopped if server quorum is no longer met, or if server
quorum is disabled, to ensure that bricks in maintenance are not started
incorrectly. (BZ#1340995)

* The metadata cache translator has been updated to improve Red Hat Gluster
Storage performance when reading small files. (BZ#1427783)

* The 'gluster volume add-brick' command is no longer allowed when the
replica count is being increased and any replica bricks are unavailable
(see the example after this list). (BZ#1404989)

* Split-brain resolution commands work regardless of whether client-side
heal or the self-heal daemon is enabled (see the example after this
list). (BZ#1403840)
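
As an illustration of the add-brick safeguard above, a replica-count
increase of the following form is now rejected while any existing
replica brick is down (VOLNAME, host names, and brick paths are
placeholders):

  # Convert a replica 2 volume to replica 3 by adding one brick per
  # replica set; this fails if an existing replica brick is offline.
  gluster volume add-brick VOLNAME replica 3 server3:/rhgs/brick1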
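
As an illustration of the split-brain fix above, the resolution
commands take forms like the following (VOLNAME and FILE are
placeholders; verify the exact syntax against the RHGS documentation):

  # Keep the copy of FILE with the latest modification time:
  gluster volume heal VOLNAME split-brain latest-mtime FILE

  # Or name a specific brick as the source copy:
  gluster volume heal VOLNAME split-brain source-brick server1:/rhgs/brick1 FILE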

Enhancement(s):

* Red Hat Gluster Storage now provides Transport Layer Security support
for Samba and NFS-Ganesha (illustrated after this list). (BZ#1340608,
BZ#1371475)

* A new reset-sync-time option enables resetting the sync time attribute
to zero when required (illustrated after this list). (BZ#1205162)

* Tiering demotions are now triggered at most 5 seconds after a
high-watermark breach event. Administrators can use the
cluster.tier-query-limit volume parameter to specify the number of
records extracted from the heat database during demotion (illustrated
after this list). (BZ#1361759)

* The /var/log/glusterfs/etc-glusterfs-glusterd.vol.log file is now named
/var/log/glusterfs/glusterd.log. (BZ#1306120)

* The 'gluster volume attach-tier/detach-tier' commands are deprecated
in favor of the new commands, 'gluster volume tier VOLNAME
attach/detach' (illustrated after this list). (BZ#1388464)

* The HA_VOL_SERVER parameter in the ganesha-ha.conf file is no longer used
by Red Hat Gluster Storage. (BZ#1348954)

* The volfile server role can now be passed to another server when a
server is unavailable (illustrated after this list). (BZ#1351949)

* Ports can now be reused when they stop being used by another service.
(BZ#1263090)

* The thread pool limit for the rebalance process is now dynamic, and is
determined based on the number of available cores. (BZ#1352805)

* Brick verification at reboot now uses UUID instead of brick path.
(BZ#1336267)

* LOGIN_NAME_MAX is now used as the maximum length for the slave user
instead of _POSIX_LOGIN_NAME_MAX, allowing for up to 256 characters
including the NULL byte. (BZ#1400365)

* The client identifier is now included in the log message to make it
easier to determine which client failed to connect. (BZ#1333885)
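
Illustrative commands for the enhancements marked above follow. All
volume names, host names, brick paths, and values are placeholders,
and the exact syntax should be verified against the Red Hat Gluster
Storage 3.2 documentation.

For the Transport Layer Security enhancement, in-flight encryption is
enabled per volume (certificates must already be provisioned on the
servers and clients):

  gluster volume set VOLNAME client.ssl on
  gluster volume set VOLNAME server.ssl on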
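
For the reset-sync-time enhancement, a geo-replication session can be
deleted with its sync time reset to zero, so that a recreated session
starts from scratch:

  gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL delete reset-sync-time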
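
For the tiering enhancement, the number of records extracted from the
heat database per demotion cycle is set as a volume option (100 is an
arbitrary example value):

  gluster volume set VOLNAME cluster.tier-query-limit 100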
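
For the tier command change, the deprecated and preferred forms compare
as follows:

  # Deprecated:
  gluster volume attach-tier VOLNAME replica 2 server1:/hot/brick1 server2:/hot/brick2
  # Preferred:
  gluster volume tier VOLNAME attach replica 2 server1:/hot/brick1 server2:/hot/brick2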
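
For the volfile server enhancement, a FUSE mount can list fallback
volfile servers to contact when the primary is unavailable:

  mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/VOLNAME /mnt/glusterfs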

4. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258
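
On a registered Red Hat Enterprise Linux 7 system the update is
typically applied with yum; this is a general sketch, and the article
above remains authoritative:

  yum update
  # or restrict the transaction to the packages in this advisory:
  yum update "glusterfs*" redhat-storage-server "vdsm*"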

5. Bugs fixed (https://bugzilla.redhat.com/):

1168606 - [USS]: setting the uss option to on fails when volume is in stopped state
1200927 - CVE-2015-1795 glusterfs: glusterfs-server %pretrans rpm script temporary file issue
1205162 - [georep]: If a georep session is recreated the existing files which are deleted from slave doesn't get sync again from master
1211845 - glusterd: response not aligned
1240333 - [geo-rep]: original directory and renamed directory both at the slave after rename on master
1241314 - when enable-shared-storage is enabled, volume get still shows that the option is disabled
1245084 - [RFE] changes needed in snapshot info command's xml output.
1248998 - [AFR]: Files not available in the mount point after converting Distributed volume type to Replicated one.
1256483 - Unreleased packages in RHGS 3.1 AMI [RHEL 7]
1256524 - [RFE] reset brick
1257182 - Rebalance is not considering the brick sizes while fixing the layout
1258267 - 1 mkdir generates tons of log messages from dht xlator
1263090 - glusterd: add brick command should re-use the port for listening which is freed by remove-brick.
1264310 - DHT: Rebalance hang while migrating the files of disperse volume
1278336 - nfs client I/O stuck post IP failover
1278385 - Data Tiering:Detach tier operation should be resilient(continue) when the volume is restarted
1278394 - gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
1278900 - check_host_list() should be more robust
1284873 - Poor performance of directory enumerations over SMB
1286038 - glusterd process crashed while setting the option "cluster.extra-hash-regex"
1286572 - [FEAT] DHT - rebalance - rebalance status o/p should be different for 'fix-layout' option, it should not show 'Rebalanced-files' , 'Size', 'Scanned' etc as it is not migrating any files.
1294035 - gluster fails to propagate permissions on the root of a gluster export when adding bricks
1296796 - [DHT]: Rebalance info for remove brick operation is not showing after glusterd restart
1298118 - Unable to get the client statedump, as /var/run/gluster directory is not available by default
1299841 - [tiering]: Files of size greater than that of high watermark level should not be promoted
1306120 - [GSS] [RFE] Change the glusterd log file name to glusterd.log
1306656 - [GSS] - Brick ports changed after configuring I/O and management encryption
1312199 - [RFE] quota: enhance quota enable and disable process
1315544 - [GSS] -Gluster NFS server crashing in __mnt3svc_umountall
1317653 - EINVAL errors while aggregating the directory size by quotad
1318000 - [GSS] - Glusterd not operational due to snapshot conflicting with nfs-ganesha export file in "/var/lib/glusterd/snaps"
1319078 - files having different Modify and Change date on replicated brick
1319886 - gluster volume info --xml returns 0 for nonexistent volume
1324053 - quota/cli: quota list with path not working when limit is not set
1325821 - gluster snap status xml output shows incorrect details when the snapshots are in deactivated state
1326066 - [hc][selinux] AVC denial messages seen in audit.log while starting the volume in HCI environment
1327952 - rotated FUSE mount log is using to populate the information after log rotate.
1328451 - observing " Too many levels of symbolic links" after adding bricks and then issuing a replace brick
1332080 - [geo-rep+shard]: Files which were synced to slave before enabling shard doesn't get sync/remove upon modification
1332133 - glusterd + bitrot : unable to create clone of snapshot. error "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:
1332542 - Tiering related core observed with "uuid_is_null () message".
1333406 - [HC]: After bringing down and up of the bricks VM's are getting paused
1333484 - slow readdir performance in SMB clients
1333749 - glusterd: glusterd provides stale port information when a volume is recreated with same brick path
1333885 - client ID should logged when SSL connection fails
1334664 - Excessive errors messages are seen in hot tier's brick logs in a tiered volume
1334858 - [Perf] : ls-l is not as performant as it used to be on older RHGS builds
1335029 - set errno in case of inode_link failures
1336267 - [scale]: Bricks not started after node reboot.
1336339 - Sequential volume start&stop is failing with SSL enabled setup.
1336377 - Polling failure errors getting when volume is started&stopped with SSL enabled setup.
1336764 - Bricks doesn't come online after reboot [ Brick Full ]
1337391 - [Bitrot] Need a way to set scrub interval to a minute, for ease of testing
1337444 - [Bitrot]: Scrub status- Certain fields continue to show previous run's details, even if the current run is in progress
1337450 - [Bitrot+Sharding] Scrub status shows incorrect values for 'files scrubbed' and 'files skipped'
1337477 - [Volume Scale] Volume start failed with "Error : Request timed out" after successfully creating & starting around 290 gluster volumes using heketi-cli
1337495 - [Volume Scale] gluster node randomly going to Disconnected state after scaling to more than 290 gluster volumes
1337565 - "nfs-grace-monitor" timed out messages observed
1337811 - [GSS] - enabling glusternfs with nfs.rpc-auth-allow to many hosts failed
1337836 - [Volume Scale] heketi-cli should not attempt to stop and delete a volume as soon as it receives a CLI timeout (120sec) but instead wait until the frame-timeout of 600sec
1337863 - [SSL] : I/O hangs when run from multiple clients on an SSL enabled volume
1338615 - [SSL] : gluster v set help does not show ssl options
1338748 - SAMBA : Error and warning messages related to xlator/features/snapview-client.so adding up to the windows client log on performing IO operations
1339159 - [geo-rep]: Worker died with [Errno 2] No such file or directory
1340338 - "volume status inode" command is getting timed out if number of files are more in the mount point
1340608 - [RFE] : Support SSL enabled volume via SMB v3
1340756 - [geo-rep]: AttributeError: 'Popen' object has no attribute 'elines'
1340995 - Bricks are starting when server quorum not met.
1341934 - [Bitrot]: Recovery fails of a corrupted hardlink (and the corresponding parent file) in a disperse volume
1342459 - [Bitrot]: Sticky bit files considered and skipped by the scrubber, instead of getting ignored.
1343178 - [Stress/Scale] : I/O errors out from gNFS mount points during high load on an erasure coded volume,Logs flooded with Error messages.
1343320 - [GSS] Gluster fuse client crashed generating core dump
1343695 - [Disperse] : Assertion Failed Error messages in rebalance log post add-brick/rebalance.
1344322 - [geo-rep]: Worker crashed with OSError: [Errno 9] Bad file descriptor
1344651 - tiering : Multiple brick processes crashed on tiered volume while taking snapshots
1344675 - Stale file handle seen on the mount of dist-disperse volume when doing IOs with nfs-ganesha protocol
1344826 - [geo-rep]: Worker crashed with "KeyError: "
1344908 - [geo-rep]: If the data is copied from .snaps directory to the master, it doesn't get sync to slave [First Copy]
1345732 - SAMBA-DHT : Crash seen while rename operations in cifs mount and windows access of share mount
1347251 - fix the issue of Rolling upgrade or non-disruptive upgrade of disperse or erasure code volume to work
1347257 - spurious heal info as pending heal entries never end on an EC volume while IOs are going on
1347625 - [geo-rep] Stopped geo-rep session gets started automatically once all the master nodes are upgraded
1347922 - nfs-ganesha disable doesn't delete nfs-ganesha folder from /var/run/gluster/shared_storage
1347923 - ganesha.enable remains on in volume info file even after we disable nfs-ganesha on the cluster.
1348949 - ganesha/scripts : [RFE] store volume related configuration in shared storage
1348954 - ganesha/glusterd : remove 'HA_VOL_SERVER' from ganesha-ha.conf
1348962 - ganesha/scripts : copy modified export file during refresh-config
1351589 - [RFE] Eventing for Gluster
1351732 - gluster volume status <volume> client" isn't showing any information when one of the nodes in a 3-way Distributed-Replicate volume is shut down
1351825 - yum groups install RH-Gluster-NFS-Ganesha fails due to outdated nfs-ganesha-nullfs
1351949 - management connection loss when volfile-server goes down
1352125 - Error: quota context not set inode (gfid:nnn) [Invalid argument]
1352805 - [GSS] Rebalance crashed
1353427 - [RFE] CLI to get local state representation for a cluster
1354260 - quota : rectify quota-deem-statfs default value in gluster v set help command
1356058 - glusterd doesn't scan for free ports from base range (49152) if last allocated port is greater than base port
1356804 - Healing of one of the file not happened during upgrade from 3.0.4 to 3.1.3 ( In-service )
1359180 - Make client.io-threads enabled by default
1359588 - [Bitrot - RFE]: On demand scrubbing option to scrub
1359605 - [RFE] Simplify Non Root Geo-replication Setup
1359607 - [RFE] Non root Geo-replication Error logs improvements
1359619 - [GSS]"gluster vol status all clients --xml" get malformed at times, causes gstatus to fail
1360807 - [RFE] Generate events in GlusterD
1360978 - [RFE]Reducing number of network round trips
1361066 - [RFE] DHT Events
1361068 - [RFE] Tier Events
1361078 - [ RFE] Quota Events
1361082 - [RFE]: AFR events
1361084 - [RFE]: EC events
1361086 - [RFE]: posix events
1361098 - Feature: Entry self-heal performance enhancements using more granular changelogs
1361101 - [RFE] arbiter for 3 way replication
1361118 - [RFE] Geo-replication Events
1361155 - Upcall related events
1361170 - [Bitrot - RFE]: Bitrot Events
1361184 - [RFE] Provide snapshot events for the new eventing framework
1361513 - EC: Set/unset dirty flag for all the update operations
1361519 - [Disperse] dd + rm + ls lead to IO hang
1362376 - [RHEL7] Rebase glusterfs at RHGS-3.2.0 release
1364422 - [libgfchangelog]: If changelogs are not available for the requested time range, no distinguished error
1364551 - GlusterFS lost track of 7,800+ file paths preventing self-heal
1366128 - "heal info --xml" not showing the brick name of offline bricks.
1367382 - [RFE]: events from protocol server
1367472 - [GSS]Quota version not changing in the quota.conf after upgrading to 3.1.1 from 3.0.x
1369384 - [geo-replication]: geo-rep Status is not showing bricks from one of the nodes
1369391 - configuration file shouldn't be marked as executable and systemd complains for it
1370350 - Hosted Engine VM paused post replace-brick operation
1371475 - [RFE] : Support SSL enabled volume via NFS Ganesha
1373976 - [geo-rep]: defunct tar process while using tar+ssh sync
1374166 - [GSS]deleted file from nfs-ganesha export goes in to .glusterfs/unlink in RHGS 3.1.3
1375057 - [RHEL-7]Include vdsm and related dependency packages at RHGS 3.2.0 ISO
1375465 - [RFE] Implement multi threaded self-heal for ec volumes
1376464 - [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
1377062 - /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
1377387 - glusterd experiencing repeated connect/disconnect messages when shd is down
1378030 - glusterd fails to start without installing glusterfs-events package
1378131 - [GSS] - Recording (ffmpeg) processes on FUSE get hung
1378300 - Modifications to AFR Events
1378342 - Getting "NFS Server N/A" entry in the volume status by default.
1378484 - warning messages seen in glusterd logs for each 'gluster volume status' command
1378528 - [SSL] glustershd disconnected from glusterd
1378676 - "transport.address-family: inet" option is not showing in the Vol info for 3.1.3 volume after updating to 3.2.
1378677 - "nfs.disable: on" is not showing in Vol info by default for the 3.1.3 volumes after updating to 3.2
1378867 - Poor smallfile read performance on Arbiter volume compared to Replica 3 volume
1379241 - qemu-img segfaults while creating qcow2 image on the gluster volume using libgfapi
1379919 - VM errors out while booting from the image on gluster replica 3 volume with compound fops enabled
1379924 - gfapi: Fix fd ref leaks
1379963 - [SELinux] [Eventing]: gluster-eventsapi shows a traceback while adding a webhook
1379966 - Volume restart couldn't re-export the volume exported via ganesha.
1380122 - Labelled geo-rep checkpoints hide geo-replication status
1380257 - [RFE] eventsapi/georep: Events are not available for Checkpoint and Status Change
1380276 - Poor write performance with arbiter volume after enabling sharding on arbiter volume
1380419 - gNFS: Revalidate lookup of a file in case of gfid mismatch
1380605 - Error and warning message getting while removing glusterfs-events-3.8.4-2 package
1380619 - Ganesha crashes with segfault while doing refresh-config with 3.2 builds.
1380638 - Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
1380655 - Continuous errors getting in the mount log when the volume mount server glusterd is down.
1380710 - invalid argument warning messages seen in fuse client logs: [2016-09-30 06:34:58.938667] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument]
1380742 - Some tests in pynfs test suite fails with latest 3.2 builds.
1381140 - OOM kill of glusterfs fuse mount process seen on both the clients with one doing rename and the other doing delete of same files
1381353 - Ganesha crashes on volume restarts
1381452 - OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
1381822 - glusterd.log is flooded with socket.management: accept on 11 failed (Too many open files) and glusterd service stops
1381831 - dom_md/ids is always reported in the self-heal info
1381968 - md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
1382065 - SAMBA-ClientIO-Thread : Samba crashes with segfault while doing multiple mount & unmount of volume share with 3.2 builds
1382277 - Incorrect volume type in the "glusterd_state" file generated using CLI "gluster get-state"
1382345 - [RHEL7] SELinux prevents starting of RDMA transport type volumes
1384070 - inconsistent file permissions b/w write permission and sticky bits(---------T ) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
1384311 - [Eventing]: 'gluster vol bitrot <volname> scrub ondemand' does not produce an event
1384316 - [Eventing]: Events not seen when command is triggered from one of the peer nodes
1384459 - Track the client that performed readdirp
1384460 - segment fault while join thread reaper_thr in fini()
1384481 - [SELinux] Snaphsot : Seeing AVC denied messages generated when snapshot and clones are created
1384865 - USS: Snapd process crashed ,doing parallel clients operations
1384993 - refresh-config fails and crashes ganesha when mdcache is enabled on the volume.
1385468 - During rebalance continuous "table not found" warning messages are seen in rebalance logs
1385474 - [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
1385525 - Continuous warning messages getting when one of the cluster node is down on SSL setup.
1385561 - [Eventing]: BRICK_CONNECTED and BRICK_DISCONNECTED events seen at every heartbeat when a brick-is-killed/volume-stopped
1385605 - fuse mount point not accessible
1385606 - 4 of 8 bricks (2 dht subvols) crashed on systemic setup
1386127 - Remove-brick status output is showing status of fix-layout instead of original remove-brick status output
1386172 - [Eventing]: UUID is showing zeros in the event message for the peer probe operation.
1386177 - SMB[md-cache]:While multiple connect and disconnect of samba share hang is seen and other share becomes inaccessible
1386185 - [Eventing]: 'gluster volume tier <volname> start force' does not generate a TIER_START event
1386280 - Rebase of redhat-release-server to that of RHEL-7.3
1386366 - The FUSE client log is filling up with posix_acl_default and posix_acl_access messages
1386472 - [Eventing]: 'VOLUME_REBALANCE' event messages have an incorrect volume name
1386477 - [Eventing]: TIER_DETACH_FORCE and TIER_DETACH_COMMIT events seen even after confirming negatively
1386538 - pmap_signin event fails to update brickinfo->signed_in flag
1387152 - [Eventing]: Random VOLUME_SET events seen when no operation is done on the gluster cluster
1387204 - [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
1387205 - SMB:[MD-Cache]:while connecting and disconnecting samba share multiple times from a windows client , saw multiple crashes
1387501 - Asynchronous Unsplit-brain still causes Input/Output Error on system calls
1387544 - [Eventing]: BRICK_DISCONNECTED events seen when a tier volume is stopped
1387558 - libgfapi core dumps
1387563 - [RFE]: md-cache performance enhancement
1388464 - throw warning to show that older tier commands are deprecated and will be removed.
1388560 - I/O Errors seen while accessing VM images on gluster volumes using libgfapi
1388711 - Needs more testings of rebalance for the distributed-dispersed volume
1388734 - glusterfs can't self heal character dev file for invalid dev_t parameters
1388755 - Checkpoint completed event missing master node detail
1389168 - glusterd: Display proper error message and fail the command if S32gluster_enable_shared_storage.sh hook script is not present during gluster volume set all cluster.enable-shared-storage <enable/disable> command
1389422 - SMB[md-cache Private Build]:Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
1389661 - Refresh config fails while exporting subdirectories within a volume
1390843 - write-behind: flush stuck by former failed write
1391072 - SAMBA : Unable to play video files in samba share mounted over windows system
1391093 - [Samba-Crash] : Core logs were generated while working on random IOs
1391808 - [setxattr_cbk] "Permission denied" warning messages are seen in logs while running pjd-fstest suite
1392299 - [SAMBA-mdcache]Read hungs and leads to disconnect of samba share while creating IOs from one client & reading from another client
1392761 - During sequential reads backtraces are seen leading to IO hung
1392837 - A hard link is lost during rebalance+lookup
1392895 - Failed to enable nfs-ganesha after disabling nfs-ganesha cluster
1392899 - stat of file is hung with possible deadlock
1392906 - Input/Output Error seen while running iozone test on nfs-ganesha+mdcache enabled volume.
1393316 - OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
1393526 - [Ganesha] : Ganesha crashes intermittently during nfs-ganesha restarts.
1393694 - The directories get renamed when data bricks are offline in 4*(2+1) volume
1393709 - [Compound FOPs] Client side IObuff leaks at a high pace consumes complete client memory and hence making gluster volume inaccessible
1393758 - I/O errors on FUSE mount point when reading and writing from 2 clients
1394219 - Better logging when reporting failures of the kind "<file-path> Failing MKNOD as quorum is not met"
1394752 - Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
1395539 - ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
1395541 - Lower version of package "redhat-release-server" is present in the RHEL7 RHGS3.2 ISO
1395574 - netstat: command not found message is seen in /var/log/messages when IO's are running.
1395603 - [RFE] JSON output for all Events CLI commands
1395613 - Delayed Events if any one Webhook is slow
1396166 - self-heal info command hangs after triggering self-heal
1396361 - Scheduler : Scheduler should not depend on glusterfs-events package
1396449 - [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off
1397257 - capture volume tunables in get-state dump
1397267 - File creation fails with Input/output error + FUSE logs throws "invalid argument: inode [Invalid argument]"
1397286 - Wrong value in Last Synced column during Hybrid Crawl
1397364 - [compound FOPs]: file operation hangs with compound fops
1397430 - PEER_REJECT, EVENT_BRICKPATH_RESOLVE_FAILED, EVENT_COMPARE_FRIEND_VOLUME_FAILED are not seen
1397450 - NFS-Ganesha:Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
1397681 - [Eventing]: EVENT_POSIX_HEALTH_CHECK_FAILED event not seen when brick underlying filesystem crashed
1397846 - [Compound FOPS]: seeing lot of brick log errors saying matching lock not found for unlock
1398188 - [Arbiter] IO's Halted and heal info command hung
1398257 - [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
1398261 - After ganesha node reboot/shutdown, portblock process goes to FAILED state
1398311 - [compound FOPs]:in replica pair one brick is down the other Brick process and fuse client process consume high memory at a increasing pace
1398315 - [compound FOPs]: Memory leak while doing FOPs with brick down
1398331 - With compound fops on, client process crashes when a replica is brought down while IO is in progress
1398798 - [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
1399100 - GlusterFS client crashes during remove-brick operation
1399105 - possible memory leak on client when writing to a file while another client issues a truncate
1399476 - IO got hanged while doing in-service update from 3.1.3 to 3.2
1399598 - [USS,SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
1399698 - AVCs seen when ganesha cluster nodes are rebooted
1399753 - "Insufficient privileges" messages observed in pcs status for nfs_unblock resource agent [RHEL7]
1399757 - Ganesha services are not stopped when pacemaker quorum is lost
1400037 - [Arbiter] Fixed layout failed on the volume after remove-brick while rmdir is progress
1400057 - self-heal not happening, as self-heal info lists the same pending shards to be healed
1400068 - [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
1400093 - ls and move hung on disperse volume
1400395 - Memory leak in client-side background heals.
1400599 - [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
1401380 - [Compound FOPs] : Memory leaks while doing deep directory creation
1401806 - [GANESHA] Volume restart(stop followed by start) does not reexporting the volume
1401814 - [Arbiter] Directory lookup failed with 11(EAGAIN) leading to rebalance failure
1401817 - glusterfsd crashed while taking snapshot using scheduler
1401869 - Rebalance not happened, which triggered after adding couple of bricks.
1402360 - CTDB:NFS: CTDB failover doesn't work because of SELinux AVC's
1402683 - Installation of latest available RHGS3.2 RHEL7 ISO is failing with error in the "SOFTWARE SELECTION"
1402774 - Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
1403120 - Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
1403672 - Snapshot: After snapshot restore failure , snapshot goes into inconsistent state
1403770 - Incorrect incrementation of volinfo refcnt during volume start
1403840 - [GSS]xattr 'replica.split-brain-status' shows the file is in data-splitbrain but "heal split-brain latest-mtime" fails
1404110 - [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op
1404541 - Found lower version of packages "python-paramiko", "python-httplib2" and "python-netifaces" in the latest RHGS3.2 RHEL7 ISO.
1404569 - glusterfs-rdma package is not pulled while doing layered installation of RHGS 3.2 on RHEL7 and not present in RHGS RHEL7 ISO also by default and vdsm-cli pkg not pulled during lay.. Installation
1404633 - GlusterFS process crashed after add-brick
1404982 - VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume/replica volume
1404989 - Fail add-brick command if replica count changes
1404996 - gNFS: nfs.disable option to be set to off on existing volumes after upgrade to 3.2 and on for new volumes on 3.2
1405000 - Remove-brick rebalance failed while rm -rf is in progress
1405299 - fuse mount crashed when VM installation is in progress & one of the brick killed
1405302 - vm does not boot up when first data brick in the arbiter volume is killed.
1406025 - [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
1406322 - repeated operation failed warnings in gluster mount logs with disperse volume
1406401 - [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
1406723 - [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
1408112 - [Arbiter] After Killing a brick writes drastically slow down
1408413 - [ganesha + EC]posix compliance rename tests failed on EC volume with nfs-ganesha mount.
1408426 - with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
1408576 - [Ganesha+SSL] : Bonnie++ hangs during rewrites.
1408639 - [Perf] : Sequential Writes are off target by 12% on EC backed volumes over FUSE
1408641 - [Perf] : Sequential Writes have regressed by ~25% on EC backed volumes over SMB3
1408655 - [Perf] : mkdirs are 85% slower on EC
1408705 - [GNFS+EC] Cthon failures/issues with Lock/Special Test cases on disperse volume with GNFS mount
1408836 - [ganesha+ec]: Contents of original file are not seen when hardlink is created
1409135 - [Replicate] "RPC call decoding failed" leading to IO hang & mount inaccessible
1409472 - brick crashed on systemic setup due to eventing regression
1409563 - [SAMBA-SSL] Volume Share hungs when multiple mount & unmount is performed over a windows client on a SSL enabled cluster
1409782 - NFS Server is not coming up for the 3.1.3 volume after updating to 3.2.0 ( latest available build )
1409808 - [Mdcache] clients being served wrong information about a file, can lead to file inconsistency
1410025 - Extra lookup/fstats are sent over the network when a brick is down.
1410406 - ganesha service crashed on all nodes of ganesha cluster on disperse volume when doing lookup while copying files remotely using scp
1411270 - [SNAPSHOT] With all USS plugin enable .snaps directory is not visible in cifs mount as well as windows mount
1411329 - OOM kill of glusterfsd during continuous add-bricks
1411617 - Spurious split-brain error messages are seen in rebalance logs
1412554 - [RHV-RHGS]: Application VM paused after add brick operation and VM didn't comeup after power cycle.
1412955 - Quota: After upgrade from 3.1.3 to 3.2 , gluster quota list command shows "No quota configured on volume repvol"
1413351 - [Scale] : Brick process oom-killed and rebalance failed.
1413513 - glusterfind: After glusterfind pre command execution all temporary files and directories /usr/var/lib/misc/glusterfsd/glusterfind/<session>/<volume>/ should be removed
1414247 - client process crashed due to write behind translator
1414663 - [GANESHA] Cthon lock test case is failing on nfs-ganesha mounted Via V3
1415101 - glustershd process crashed on systemic setup
1415583 - [Stress] : SHD Logs flooded with "Heal Failed" messages,filling up "/" quickly
1417177 - Split brain resolution must check for all the bricks to be up to avoiding serving of inconsistent data(visible on x3 or more)
1417955 - [RFE] Need to have group cli option to set all md-cache options using a single command
1418011 - [RFE] disable client.io-threads on replica volume creation
1418603 - Lower version packages ( heketi, libtiff ) present in RHGS3.2.0 RHEL7 ISO.
1418901 - Include few more options in virt file
1419859 - [Perf] : Renames are off target by 28% on EC FUSE mounts
1420324 - [GSS] The bricks once disconnected not connects back if SSL is enabled
1420635 - Modified volume options not synced once offline nodes comes up.
1422431 - multiple glusterfsd process crashed making the complete subvolume unavailable
1422576 - [RFE]: provide an cli option to reset the stime while deleting the geo-rep session
1425740 - Disconnects in nfs mount leads to IO hang and mount inaccessible
1426324 - common-ha: setup after teardown often fails
1426559 - heal info is not giving correct output
1427783 - Improve read performance on tiered volumes

6. Package List:

Red Hat Gluster Storage Server 3.2 on RHEL-7:

Source:
glusterfs-3.8.4-18.el7rhgs.src.rpm
redhat-storage-server-3.2.0.2-1.el7rhgs.src.rpm
vdsm-4.17.33-1.1.el7rhgs.src.rpm

noarch:
python-gluster-3.8.4-18.el7rhgs.noarch.rpm
redhat-storage-server-3.2.0.2-1.el7rhgs.noarch.rpm
vdsm-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-cli-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-debug-plugin-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-hook-ethtool-options-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-hook-faqemu-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-hook-openstacknet-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-hook-qemucmdline-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-infra-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-jsonrpc-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-python-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-tests-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-xmlrpc-4.17.33-1.1.el7rhgs.noarch.rpm
vdsm-yajsonrpc-4.17.33-1.1.el7rhgs.noarch.rpm

x86_64:
glusterfs-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-api-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-api-devel-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-cli-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-client-xlators-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-debuginfo-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-devel-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-events-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-fuse-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-ganesha-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-geo-replication-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-libs-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-rdma-3.8.4-18.el7rhgs.x86_64.rpm
glusterfs-server-3.8.4-18.el7rhgs.x86_64.rpm

Red Hat Storage Native Client for Red Hat Enterprise Linux 7:

Source:
glusterfs-3.8.4-18.el7.src.rpm

noarch:
python-gluster-3.8.4-18.el7.noarch.rpm

x86_64:
glusterfs-3.8.4-18.el7.x86_64.rpm
glusterfs-api-3.8.4-18.el7.x86_64.rpm
glusterfs-api-devel-3.8.4-18.el7.x86_64.rpm
glusterfs-cli-3.8.4-18.el7.x86_64.rpm
glusterfs-client-xlators-3.8.4-18.el7.x86_64.rpm
glusterfs-debuginfo-3.8.4-18.el7.x86_64.rpm
glusterfs-devel-3.8.4-18.el7.x86_64.rpm
glusterfs-fuse-3.8.4-18.el7.x86_64.rpm
glusterfs-libs-3.8.4-18.el7.x86_64.rpm
glusterfs-rdma-3.8.4-18.el7.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/
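
For example, the signature of a downloaded package can be checked with
rpm (the signing key must already be imported into the RPM database;
the file name is taken from the list above):

  rpm -K glusterfs-3.8.4-18.el7rhgs.x86_64.rpm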

7. References:

https://access.redhat.com/security/cve/CVE-2015-1795
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/3.2_release_notes/

8. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2017 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iD8DBQFY03g6XlSAg2UNWIIRAjWOAJ0UTc4GcDXKGwXruZfKnxgtk1Me0gCdHAN5
0LfDqpcqxs8oTpS7jVq/hXg=
=xR1v
-----END PGP SIGNATURE-----


--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce