
Red Hat Security Advisory 2018-2607-01

Posted Sep 4, 2018
Authored by Red Hat | Site access.redhat.com

Red Hat Security Advisory 2018-2607-01 - GlusterFS is a key building block of Red Hat Gluster Storage. It is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnections into one large, parallel network file system. Issues addressed include arbitrary code execution, buffer overflow, denial of service, improper deserialization, information exposure, and privilege escalation vulnerabilities.

tags | advisory, remote, denial of service, overflow, local, vulnerability
systems | linux, redhat
advisories | CVE-2018-10904, CVE-2018-10907, CVE-2018-10911, CVE-2018-10913, CVE-2018-10914, CVE-2018-10923, CVE-2018-10926, CVE-2018-10927, CVE-2018-10928, CVE-2018-10929, CVE-2018-10930
SHA-256 | 1869d3dbb0d19201b396114a7ac010439cd91183d33b11fbfc38ece6f506392a
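
The SHA-256 checksum above can be used to verify the integrity of a downloaded copy of this advisory. A minimal Python sketch, assuming only that the saved file's path is passed as the first argument:

    import hashlib
    import sys

    EXPECTED = "1869d3dbb0d19201b396114a7ac010439cd91183d33b11fbfc38ece6f506392a"

    def sha256_of(path):
        # Hash the file in chunks so it never has to fit in memory at once.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        digest = sha256_of(sys.argv[1])  # path to the saved advisory file
        print("OK" if digest == EXPECTED else "MISMATCH", digest)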

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
Red Hat Security Advisory

Synopsis: Important: Red Hat Gluster Storage security, bug fix, and enhancement update
Advisory ID: RHSA-2018:2607-01
Product: Red Hat Gluster Storage
Advisory URL: https://access.redhat.com/errata/RHSA-2018:2607
Issue date: 2018-09-04
CVE Names: CVE-2018-10904 CVE-2018-10907 CVE-2018-10911
CVE-2018-10913 CVE-2018-10914 CVE-2018-10923
CVE-2018-10926 CVE-2018-10927 CVE-2018-10928
CVE-2018-10929 CVE-2018-10930
=====================================================================

1. Summary:

Updated glusterfs packages that fix multiple security issues and bugs, and
add various enhancements are now available for Red Hat Gluster Storage 3.4
on Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact
of Important. A Common Vulnerability Scoring System (CVSS) base score,
which gives a detailed severity rating, is available for each vulnerability
from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Gluster Storage Server 3.4 on RHEL-7 - noarch, x86_64
Red Hat Storage Native Client for Red Hat Enterprise Linux 7 - x86_64

3. Description:

GlusterFS is a key building block of Red Hat Gluster Storage. It is based
on a stackable user-space design and can deliver exceptional performance
for diverse workloads. GlusterFS aggregates various storage servers over
network interconnections into one large, parallel network file system.

Security Fix(es):

* glusterfs: Unsanitized file names in debug/io-stats translator can allow
remote attackers to execute arbitrary code (CVE-2018-10904)

* glusterfs: Stack-based buffer overflow in server-rpc-fops.c allows remote
attackers to execute arbitrary code (CVE-2018-10907)

* glusterfs: I/O to arbitrary devices on storage server (CVE-2018-10923)

* glusterfs: Device files can be created in arbitrary locations
(CVE-2018-10926)

* glusterfs: File status information leak and denial of service
(CVE-2018-10927)

* glusterfs: Improper resolution of symlinks allows for privilege
escalation (CVE-2018-10928)

* glusterfs: Arbitrary file creation on storage server allows for execution
of arbitrary code (CVE-2018-10929)

* glusterfs: Files can be renamed outside volume (CVE-2018-10930)

* glusterfs: Improper deserialization in dict.c:dict_unserialize() can
allow attackers to read arbitrary memory (CVE-2018-10911)

* glusterfs: remote denial of service of gluster volumes via
posix_get_file_contents function in posix-helpers.c (CVE-2018-10914)

* glusterfs: Information Exposure in posix_get_file_contents function in
posix-helpers.c (CVE-2018-10913)

For more details about the security issue(s), including the impact, a CVSS
score, and other related information, refer to the CVE page(s) listed in
the References section.
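
To tell whether a given host still runs a build older than the fixed packages listed in section 6 below, the installed glusterfs version can be compared against the fixed 3.12.2-18.el7rhgs build using rpm's own version-comparison routine. A sketch, assuming the rpm Python bindings are installed; the choice of glusterfs-server as the package to query is illustrative:

    import subprocess
    import rpm  # rpm Python bindings (rpm-python on RHEL 7)

    # Fixed (epoch, version, release) for the server packages in this advisory.
    FIXED_EVR = ("0", "3.12.2", "18.el7rhgs")

    def installed_evr(pkg="glusterfs-server"):
        # Query rpm for the installed epoch/version/release of the package.
        out = subprocess.check_output(
            ["rpm", "-q", "--qf", "%{EPOCH}|%{VERSION}|%{RELEASE}", pkg]
        ).decode()
        epoch, version, release = out.split("|")
        return ("0" if epoch == "(none)" else epoch, version, release)

    if __name__ == "__main__":
        evr = installed_evr()
        # labelCompare returns a negative value when the first EVR is older.
        if rpm.labelCompare(evr, FIXED_EVR) < 0:
            print("glusterfs-server %s-%s predates the fixed build" % evr[1:])
        else:
            print("glusterfs-server is at or beyond the fixed build")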

Red Hat would like to thank Michael Hanselmann (hansmi.ch) for reporting
these issues.

Additional Changes:

These updated glusterfs packages include numerous bug fixes and
enhancements. Space precludes documenting all of these changes in this
advisory. Users are directed to the Red Hat Gluster Storage 3.4 Release
Notes for information on the most significant of these changes:

https://access.redhat.com/site/documentation/en-US/red_hat_gluster_storage/3.4/html/3.4_release_notes/

All users of Red Hat Gluster Storage are advised to upgrade to these
updated packages, which provide numerous bug fixes and enhancements.

4. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258
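
On a subscribed system with the appropriate repositories enabled, applying the erratum reduces to a yum update of the glusterfs packages. The sketch below merely wraps that in Python; the package glob and the unattended -y flag are choices, not requirements, and running brick or client processes still need a restart to pick up the fixes:

    import subprocess

    def update_glusterfs():
        # "yum check-update" exits 100 when updates are pending, 0 when none.
        status = subprocess.call(["yum", "check-update", "glusterfs*"])
        if status == 100:
            subprocess.check_call(["yum", "-y", "update", "glusterfs*"])
        elif status == 0:
            print("glusterfs packages already up to date")
        else:
            raise RuntimeError("yum check-update failed with status %d" % status)

    if __name__ == "__main__":
        update_glusterfs()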

5. Bugs fixed (https://bugzilla.redhat.com/):

1118770 - DHT: If directory creation is in progress and a rename of that directory comes from another mount point, then after both operations a few files are not accessible or listed on the mount, and more than one directory has the same gfid
1167789 - DHT: Rebalance- Misleading log messages from __dht_check_free_space function
1186664 - AFR: 3-way-replication: gluster volume set cluster.quorum-count should validate the maximum number of bricks to accept
1215556 - Disperse volume: rebalance and quotad crashed
1226874 - nfs-ganesha: if pcs cluster setup fails, the nfs-ganesha process should not start
1234884 - Selfheal on a volume stops at a particular point and does not resume for a long time
1260479 - DHT: While removing the brick, rebalance is trying to migrate files to a brick which doesn't have space, due to which migration is failing
1262230 - [quorum]: Replace brick happens even when quorum is not met.
1277924 - Though files are in split-brain able to perform writes to the file
1282318 - DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists' , 'setting xattrs on <old_filename> failed (File exists)'
1282731 - Entry heal messages in glustershd.log while no entries shown in heal info
1283045 - Index entries are not being purged in case the file does not exist
1286092 - Duplicate files seen on mount point while trying to create files which are greater than the brick size
1286820 - [GSS] [RFE] Addition of "summary" option in "gluster volume heal" command.
1288115 - [RFE] Pass slave volume in geo-rep as read-only
1293332 - [geo-rep+tiering]: Hot tier bricks changelogs reports rsync failure
1293349 - AFR can ignore zero-size files while checking for split-brain
1294412 - [RFE]: Start glusterd even when glusterd is unable to resolve the brick path.
1299740 - [geo-rep]: On cascaded setup, for every entry there is a setattr recorded in the changelogs of the slave
1301474 - [GSS] Intermittent file creation failures while doing concurrent writes on a distributed volume with more than 40 bricks
1319271 - auth.allow and auth.reject not working when hosts are specified as hostnames/FQDNs
1324531 - [GSS] [RFE] Create trash directory only when it is enabled
1330526 - adding brick to a single brick volume to convert to replica is not triggering self heal
1333705 - gluster volume heal info "healed" and "heal-failed" showing wrong information
1338693 - [geo-rep]: [Errno 16] Device or resource busy: '/tmp/gsyncd-aux-mount-5BA95I'
1339054 - Need to improve remove-brick failure message when the brick process is down.
1339765 - Permission denied errors in the brick logs
1341190 - conservative merge happening on a x3 volume for a deleted file
1342785 - [geo-rep]: Worker crashes with permission denied during hybrid crawl caused via replace brick
1345828 - SAMBA-DHT : Rename ends up creates nested directories with same gfid
1356454 - DHT: slow readdirp performance
1360331 - default timeout of 5min not honored for analyzing split-brain files post setfattr replica.split-brain-heal-finalize
1361209 - Need to throw the right error message when the self-heal daemon is disabled and the user tries to trigger a manual heal
1369312 - [RFE] DHT performance improvements for directory operations
1369420 - AVC denial message getting related to glusterd in the audit.log
1375094 - [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
1378371 - "ganesha.so cannot open" warning message in glusterd log in non ganesha setup.
1384762 - glusterd status showing failed when it's stopped in RHEL7
1384979 - glusterd crashed and core dumped
1384983 - split-brain observed with arbiter & replica 3 volume.
1388218 - Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
1392905 - Rebalance should skip the file if the file has hardlinks instead of failing
1397798 - MTSH: multithreaded self heal hogs cpu consistently over 150%
1401969 - Bringing down data bricks in cyclic order results in arbiter brick becoming the source for heal.
1406363 - [GSS][RFE] Provide option to control heal load for disperse volume
1408158 - IO is paused for at least one and a half minutes when one of the EC volume's hosting cluster nodes goes down.
1408354 - [GSS] gluster fuse client losing connection to gluster volume frequently
1409102 - [Arbiter] IO Failure and mount point inaccessible after killing a brick
1410719 - [GSS] [RFE] glusterfs-ganesha package installation needs to work when the glusterfs process is running
1413005 - [Remove-brick] Lookup failed errors are seen in rebalance logs during rm -rf
1413959 - [RFE] Need a way to resolve gfid split brains
1414456 - [GSS]Entry heal pending for directories which has symlinks to a different replica set
1419438 - gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
1419807 - [Perf]: 25% regression on sequential reads on EC over SMB3
1425681 - [Glusterd] Volume operations fail on a (tiered) volume because of a stale lock held by one of the nodes
1426042 - performance/write-behind should respect window-size & trickling-writes should be configurable
1436673 - Restore atime/mtime for symlinks and other non-regular files.
1442983 - Unable to acquire lock for gluster volume leading to 'another transaction in progress' error
1444820 - Script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh failing
1446046 - glusterd: TLS verification fails when using intermediate CA instead of self-signed certificates
1448334 - [GSS]glusterfind pre crashes with "UnicodeDecodeError: 'utf8' codec can't decode" error when the `--no-encode` is used
1449638 - Poor write speed performance of fio test on distributed-disperse volume
1449867 - [GSS] glusterd fails to start
1452915 - healing fails with wrong error when one of the glusterd holds a lock
1459101 - [GSS] low sequential write performance on distributed dispersed volume on RHGS 3.2
1459895 - Brick Multiplexing: Gluster volume start force complains with command "Error : Request timed out" when there are multiple volumes
1460639 - [Stress] : IO errored out with ENOTCONN.
1460918 - [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
1461695 - glusterd crashed and core dumped, when the network interface is down
1463112 - EC version not updating to latest post healing when another brick is down
1463114 - [GSS][RFE] Log entry of files skipped/failed during rebalance operation
1463592 - [Parallel-Readdir]Warning messages in client log saying 'parallel-readdir' is not recognized.
1463964 - heal info shows root directory as "Possibly undergoing heal" when heal is pending and the heal daemon is disabled
1464150 - [GSS] Unable to delete snapshot because it's in use
1464350 - [RFE] Posix xlator needs to reserve disk space to prevent the brick from getting full.
1466122 - Event webhook should work with HTTPS urls
1466129 - Add generated HMAC token in header for webhook calls
1467536 - Seeing timer errors in the rebalance logs
1468972 - [GSS][RFE] Improve geo-replication logging
1470566 - [RFE] Support changing from distribute to replicate with no active client operations
1470599 - log messages appear stating mkdir failed on the new brick while adding a brick to increase replica count.
1470967 - [GSS] geo-replication failed due to ENTRY failures on slave volume
1472757 - Running sysbench on vm disk from plain distribute gluster volume causes disk corruption
1474012 - [geo-rep]: Incorrect last sync "0" during history crawl after upgrade/stop-start
1474745 - [RFE] Reserved port range for Gluster
1475466 - [geo-rep]: Scheduler help needs correction for description of --no-color
1475475 - [geo-rep]: Improve the output message to reflect the real failure with schedule_georep script
1475779 - quota: directories don't get healed on newly added bricks when quota is full on a sub-directory
1475789 - As long as appends keep happening on a file healing never completes on a brick when another brick is brought down in between
1476827 - scripts: invalid test in S32gluster_enable_shared_storage.sh
1476876 - [geo-rep]: RSYNC throwing internal errors
1477087 - [geo-rep] master worker crash with interrupted system call
1477250 - Negative Test: glusterd crashes for some of the volume options if set at cluster level
1478395 - Extreme Load from self-heal
1479335 - [GSS]glusterfsd is reaching 1200% CPU utilization
1480041 - zero byte files with null gfid getting created on the brick instead of directory.
1480042 - More useful error - replace 'not optimal'
1480188 - writes amplified on brick with gluster-block
1482376 - IO errors on gluster-block device
1482812 - [afr] split-brain observed on T files post hardlink and rename in x3 volume
1483541 - [geo-rep]: Slave has more entries than Master in multiple hardlink/rename scenario
1483730 - [GSS] glusterfsd (brick) process crashed
1483828 - DHT: readdirp fails to read some directories.
1484113 - [geo-rep+qr]: Crashes observed at slave from qr_lookup_sbk during rename/hardlink/rebalance cases
1484446 - [GSS] [RFE] Control Gluster process/resource using cgroup through tunables
1487495 - client-io-threads option not working for replicated volumes
1488120 - Moving multiple temporary files to the same destination concurrently causes ESTALE error
1489876 - cli must throw a warning to discourage use of x2 volume which will be deprecated
1491785 - Poor write performance on gluster-block
1492591 - [GSS] Error No such file or directory for new file writes
1492782 - self-heal daemon stuck
1493085 - Sharding sends all application sent fsyncs to the main shard file
1495161 - [GSS] Few brick processes are consuming more memory after patching 3.2
1498391 - [RFE] Changelog option in a gluster volume disables with no warning
1498730 - The output of the "gluster help" command is difficult to read
1499644 - Eager lock should be present for both metadata and data transactions
1499784 - [Downstream Only] : Retain cli and scripts for nfs-ganesha integration
1499865 - [RFE] Implement DISCARD FOP for EC
1500704 - gfapi: API needed to set lk_owner
1501013 - [fuse-bridge] - Make event-history option configurable and have it disabled by default.
1501023 - Make choose-local configurable through `volume-set` command
1501253 - [GSS]Issues in accessing renamed file from multiple clients
1501345 - [QUOTA] man page of gluster should be updated to list quota commands
1501885 - "replace-brick" operation on a distribute volume kills all the glustershd daemon process in a cluster
1502812 - [GSS] Client segfaults when grepping $UUID.meta files on EC vol.
1503167 - [Geo-rep]: Make changelog batch size configurable
1503173 - [Geo-rep] Master and slave mounts are not accessible to take client profile info
1503174 - [Geo-rep]: symlinks trigger faulty geo-replication state (rsnapshot usecase)
1503244 - socket poller error in glusterd logs
1504234 - [GSS] gluster volume status command is missing in man page
1505363 - Brick Multiplexing: stale brick processes getting created and volume status shows brick as down (pkill glusterfsd glusterfs, glusterd restart)
1507361 - [GSS] glusterfsd processes consuming high memory on all gluster nodes from trusted pool
1507394 - [GSS] Not able to create snapshot
1508780 - [RHEL7] rebase RHGS 3.4.0 to upstream glusterfs-3.12.2
1508999 - [Fuse Sub-dir] After performing add-brick on a volume, doing rm -rf * on a subdir mount point fails with "Transport endpoint is not connected"
1509102 - In distribute volume after glusterd restart, brick goes offline
1509191 - detach start does not kill the tierd
1509810 - [Disperse] Implement open fd heal for disperse volume
1509830 - Improve performance with xattrop update.
1509833 - [Disperse] : Improve heal info command to handle obvious cases
1510725 - [GSS] glusterfsd (brick) process crashed
1511766 - The number of bytes of the quota specified in version 3.7 or later is incorrect
1511767 - After detach tier start glusterd log flooded with "0-transport: EPOLLERR - disconnecting now" messages
1512496 - Not all files synced using geo-replication
1512963 - [GSS] Writing data to file on gluster volume served by ctdb/samba causes bricks to crash
1515051 - bug-1247563.t is failing on master
1516249 - help for volume profile is not in man page
1517463 - [bitrot] scrub ondemand reports its start as success without additional detail
1517987 - [GSS] high mem/cpu usage, brick processes not starting and ssl encryption issues while testing CRS scaling with multiplexing (500-800 vols)
1518260 - EC DISCARD doesn't punch hole properly
1519076 - glusterfs client crash when removing directories
1519740 - [GSS]ganesha-gfapi log is filling at rate of 1gb/hr
1520767 - 500%-600% CPU utilisation when one brick is down in an EC volume
1522833 - high memory usage by glusterd on executing gluster volume set operations
1523216 - fuse xlator uses block size and fragment size 128KB leading to rounding off in df output
1527309 - entries not getting cleared post healing of softlinks (stale entries showing up in heal info)
1528566 - Performance Drop observed when cluster.eager-lock turned on
1528733 - memory leak: get-state leaking memory in small amounts
1529072 - parallel-readdir = TRUE prevents directory listing
1529451 - glusterd leaks memory when vol status is issued
1530146 - dht_(f)xattrop does not implement migration checks
1530325 - Brick multiplexing: glustershd fails to start on a volume force start after a brick is down
1530512 - clean up port map on brick disconnect
1530519 - disperse eager-lock degrades performance for file create workloads
1531041 - Use after free in cli_cmd_volume_create_cbk
1534253 - remove ExclusiveArch directive from SPEC file
1534530 - spec: unpackaged files found for RHEL-7 client build
1535281 - possible memleak in glusterfsd process with brick multiplexing on
1535852 - glusterfind is extremely slow if there are lots of changes
1537357 - [RFE] - get-state option should mark profiling enabled flag at volume level
1538366 - [GSS] Git clone --bare --mirror of git bundle fails when cloning on gluster storage
1539699 - tests/bugs/cli/bug-822830.t fails on Centos 7 and locally
1540600 - glusterd fails to attach brick during restart of the node
1540664 - Files are unavailable on the mount point
1540908 - Do lock conflict check correctly for wait-list
1540961 - The used space in the volume increases when the volume is expanded
1541122 - Improve geo-rep pre-validation logs
1541830 - Volume wrong size
1541932 - A down brick is incorrectly considered to be online and makes the volume start without any brick available
1543068 - [CIOT] : Gluster CLI says "io-threads : enabled" on existing volumes post upgrade.
1543296 - After upgrade from RHGS-3.3.1 async(7.4) to RHGS-3.4(7.5) peer state went to peer rejected (connected).
1544382 - Geo-replication is faulty on latest RHEL7.5 Snapshot2.0
1544451 - [GSS] log-level=ERROR mount option not working, W level messages rapidly filling up storage
1544824 - [Ganesha] : Cluster creation fails on selinux enabled/enforced nodes.
1544852 - build: glusterfs.spec %post ganesha is missing %{?rhel} test
1545277 - Brick process crashed after upgrade from RHGS-3.3.1 async(7.4) to RHGS-3.4(7.5)
1545486 - [RFE] Generic support of fuse sub dir export at RHGS
1545523 - [GSS] AIX client failed to write a temporary file to a gluster volume via gNFS.
1545570 - DHT calls dht_lookup_everywhere for 1xn volumes
1546075 - Hook up script for managing SELinux context on bricks failed to execute post volume creation
1546717 - Removing directories from multiple clients throws ESTALE errors
1546941 - [Rebalance] ENOSPC errors on few files in rebalance logs
1546945 - [Rebalance] "Migrate file failed: <filepath>: failed to get xattr [No data available]" warnings in rebalance logs
1546960 - Typo error in __dht_check_free_space function log message
1547012 - Bricks getting assigned to different pids depending on whether brick path is IP or hostname based
1547903 - Stale entries of snapshots need to be removed from /var/run/gluster/snaps
1548337 - hitting EIO error when a brick is restarted in ecvolume
1548829 - [BMux] : Stale brick processes on the nodes after vol deletion.
1549023 - Observing continuous "disconnecting socket" error messages on client glusterd logs
1550315 - [GSS] ACL settings on directories is different on newly added bricks compared to original bricks after rebalance completion
1550474 - Don't display copyright and any upstream specific information in gluster --version
1550771 - [GSS] Duplicate directory created on newly added bricks after rebalancing volume
1550896 - No rollback of renames on succeeded subvols during failure
1550918 - More than 330% CPU utilization by glusterfsd while IO in progress
1550982 - After setting storage.reserve limits, df from client shows increased volume used space though the mount point is empty
1550991 - fallocate created data set is crossing storage reserve space limits resulting 100% brick full
1551186 - [Ganesha] Duplicate volume export entries in ganesha.conf causing volume unexport to fail
1552360 - memory leak in pre-op in replicate volumes for every write
1552414 - Take full lock on files in 3 way replication
1552425 - Make afr_fsync a transaction
1553677 - [Remove-brick] Many files were not migrated from the decommissioned bricks; commit results in data loss
1554291 - When storage reserve limit is reached, appending data to an existing file throws EROFS error
1554905 - Creating a replica 2 volume throws split brain possibility warning - which has a link to upstream Docs.
1555261 - After a replace brick command, self-heal takes some time to start healing files on disperse volumes
1556895 - [RHHI]Fuse mount crashed with only one VM running with its image on that volume
1557297 - Pause/Resume of geo-replication with wrong user specified returns success
1557365 - [RFE] DHT : Enable lookup-optimize by default
1557551 - quota crawler fails w/ TLS enabled
1558433 - vmcore generated due to discard file operation
1558463 - Rebase redhat-release-server from RHEL-7.5
1558515 - [RFE][RHEL7] update redhat-storage-server build for RHGS 3.4.0
1558517 - [RFE] [RHEL7] product certificate update for RHEL 7.5
1558948 - linux untar errors out at completion during disperse volume in-service upgrade
1558989 - 60% regression on small-file creates from 3.3.1
1558990 - 30% regression on small-file reads from 3.3.1
1558991 - 19% regression on small-file appends from 3.3.1
1558993 - 60% regression in small-file deletes from 3.3.1
1558994 - 47% regression in mkdir from 3.3.1
1558995 - 30% regression on small-file rmdirs from 3.3.1
1559084 - [EC] Read performance of EC volume exported over gNFS is significantly lower than write performance
1559452 - Volume status inode is broken with brickmux
1559788 - Remove use-compound-fops feature
1559831 - [RHHI] FUSE mount crash while running one Engine VM on replicated volume
1559884 - Linkto files visible in mount point
1559886 - Brick process hung, and looks like a deadlock in inode locks
1560955 - After performing remove-brick followed by add-brick operation, brick went into offline state
1561733 - Rebalance failures on a dispersed volume with lookup-optimize enabled
1561999 - rm command hangs in fuse_request_send
1562744 - [EC] slow heal speed on disperse volume after brick replacement
1563692 - Linux kernel untar failed with "xz: (stdin): Read error: Invalid argument" immediate after add-brick
1563804 - Client can create denial of service (DOS) conditions on server
1565015 - [Ganesha] File Locking test is failing on ganesha v3 protocol
1565119 - Rebalance on few nodes doesn't seem to complete - stuck at FUTEX_WAIT
1565399 - [GSS] geo-rep in faulty session due to OSError: [Errno 95] Operation not supported
1565577 - [geo-rep]: Lot of changelogs retries and "dict is null" errors in geo-rep logs
1565962 - Disable features.selinux
1566336 - [GSS] Pending heals are not getting completed in CNS environment
1567001 - [Ganesha+EC] Bonnie failed with I/O error while crefi and parallel lookup were going on in parallel from 4 clients
1567100 - "Directory selfheal failed: Unable to form layout " log messages seen on client
1567110 - Make cluster.localtime-logging not to be visible in gluster v get
1567899 - growing glusterd memory usage with connected RHGSWA
1568297 - Disable choose-local in groups virt and gluster-block
1568374 - timer: Possible race condition between gf_timer_* routines
1568655 - [GSS] symbolic links to read-only filesystem causing geo-replication session to enter faulty state
1568896 - [geo-rep]: geo-replication scheduler is failing due to unsuccessful umount
1569457 - EIO errors on some operations when volume has mixed brick versions on a disperse volume
1569490 - [geo-rep]: in-service upgrade fails, session in FAULTY state
1569951 - Amends in volume profile option 'gluster-block'
1570514 - [RFE] make RHGS version available with glusterfs-server package
1570541 - [Ganesha] Ganesha enable command errors out while setting up ganesha on 4 nodes of a 5-node gluster cluster
1570582 - Build failed due to accessing rpc->refcount in the wrong way in quota.c
1570586 - Glusterd crashed on a few (master) nodes
1571645 - Remove unused variable
1572043 - [Geo-rep]: Status in ACTIVE/Created state
1572075 - glusterfsd crashing because of RHGS WA?
1572087 - Redundant synchronization in rename codepath for a single subvolume DHT
1572570 - [GSS] Glusterfind process crashes with UnicodeDecodeError
1572585 - Remove-brick failed on Distributed volume while rm -rf is in-progress
1575539 - [GSS] Glusterd memory leaking in gf_gld_mt_linebuf
1575555 - [GSS] Warning messages generated for the removal of extended attribute security.ima flooding client logs
1575557 - [Ganesha] "Gluster nfs-ganesha enable" commands sometimes give output as "failed" with "Unlocking failed" error messages, even though the cluster is up and healthy in the backend
1575840 - brick crash seen while creating and deleting two volumes in loop
1575877 - [geo-rep]: Geo-rep scheduler fails
1575895 - DHT Log flooding in mount log "key=trusted.glusterfs.dht.mds [Invalid argument]"
1577051 - [Remove-brick+Rename] Failure count shows zero though there are file migration failures
1578647 - If parallel-readdir is enabled, the readdir-optimize option even when it is set to on it behaves as off
1579981 - When the customer tries to migrate an RHV 4.1 disk from one storage domain to another, the glusterfsd core dumps.
1580120 - [Ganesha] glusterfs (posix-acl xlator layer) checks for "write permission" instead for "file owner" during open() when writing to a file
1580344 - Remove EIO from the dht_inode_missing macro
1581047 - [geo-rep+tiering]: Hot and Cold tier brick changelogs report rsync failure
1581057 - writes succeed when only good brick is down in 1x3 volume
1581184 - After creating and starting 601 volumes, self heal daemon went down and seeing continuous warning messages in glusterd log
1581219 - centos regression fails for tests/bugs/replicate/bug-1292379.t
1581231 - quota crawler not working unless lookup is done from mount
1581553 - [distribute]: Excessive 'dict is null' errors in geo-rep logs
1581647 - Brick process crashed immediate after volume start with force option
1582066 - Inconsistent access permissions on directories after bringing back the down sub-volumes
1582119 - 'custom extended attributes' set on a directory are not healed after bringing back the down sub-volumes
1582417 - [Geo-rep]: Directory renames are not synced in hybrid crawl
1583047 - changelog: Changelog is not capturing rename of files
1588408 - Fops are sent to glusterd and uninitialized brick stack when client reconnects to brick
1592666 - lookup not assigning gfid if file is not present in all bricks of replica
1593865 - shd crash on startup
1594658 - Block PVC fails to mount on Jenkins pod
1597506 - Introduce database group profile (to be only applied for CNS)
1597511 - introduce cluster.daemon-log-level option
1597654 - "gluster vol heal <volname> info" is locked forever
1597768 - br-state-check.t crashed while brick multiplex is enabled
1598105 - core dump generated while doing file system operations
1598356 - delay gluster-blockd start until all bricks comeup
1598384 - [geo-rep]: [Errno 2] No such file or directory
1599037 - [GSS] Cleanup stale (unusable) XSYNC changelogs.
1599362 - memory leak in get-state when geo-replication session is configured
1599823 - [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick"
1599998 - When reserve limits are reached, append on an existing file after a truncate operation results in a hang
1600057 - crash on glusterfs_handle_brick_status of the glusterfsd
1600790 - Segmentation fault while using gfapi while getting volume utilization
1601245 - [Ganesha] Ganesha crashed in mdcache_alloc_and_check_handle while running bonnie and untars with parallel lookups
1601298 - CVE-2018-10904 glusterfs: Unsanitized file names in debug/io-stats translator can allow remote attackers to execute arbitrary code
1601314 - [geo-rep]: Geo-replication not syncing renamed symlink
1601331 - dht: Crash seen in thread dht_dir_attr_heal
1601642 - CVE-2018-10907 glusterfs: Stack-based buffer overflow in server-rpc-fops.c allows remote attackers to execute arbitrary code
1601657 - CVE-2018-10911 glusterfs: Improper deserialization in dict.c:dict_unserialize() can allow attackers to read arbitrary memory
1607617 - CVE-2018-10914 glusterfs: remote denial of service of gluster volumes via posix_get_file_contents function in posix-helpers.c
1607618 - CVE-2018-10913 glusterfs: Information Exposure in posix_get_file_contents function in posix-helpers.c
1608352 - glusterfsd process crashed in a multiplexed configuration during cleanup of a single brick-graph triggered by volume-stop.
1609163 - Fuse mount of volume fails when gluster_shared_storage is enabled
1609724 - brick (glusterfsd) crashed at in quota_lookup
1610659 - CVE-2018-10923 glusterfs: I/O to arbitrary devices on storage server
1611151 - turn off disperse-other-eager-lock by default to avoid performance hit on simultaneous lookups
1612098 - Brick not coming up on a volume after rebooting the node
1612658 - CVE-2018-10927 glusterfs: File status information leak and denial of service
1612659 - CVE-2018-10928 glusterfs: Improper resolution of symlinks allows for privilege escalation
1612660 - CVE-2018-10929 glusterfs: Arbitrary file creation on storage server allows for execution of arbitrary code
1612664 - CVE-2018-10930 glusterfs: Files can be renamed outside volume
1613143 - CVE-2018-10926 glusterfs: Device files can be created in arbitrary locations
1615338 - Rebalance status shows wrong count of "Rebalanced-files" if the file has hardlinks
1615440 - turn off brick multiplexing for stand alone RHGS
1615911 - [geo-rep]: No such file or directory when a node is shut down and brought back
1619416 - memory grows until swap is 100% utilized and some brick daemons crash during creation of large numbers of small files
1619538 - Snapshot status fails with commit failure
1620469 - Brick process NOT ONLINE for heketidb and block-hosting volume
1620765 - posix_mknod does not update trusted.pgfid.xx xattr correctly
1622029 - [geo-rep]: geo-rep reverse sync in FO/FB can accidentally delete the content at the original master in case of gfid conflict in 3.4.0 without explicit user rmdir
1622452 - Bricks for heketidb and some other volumes not ONLINE in gluster volume status

6. Package List:

Red Hat Gluster Storage Server 3.4 on RHEL-7:

Source:
glusterfs-3.12.2-18.el7rhgs.src.rpm
redhat-release-server-7.5-11.el7rhgs.src.rpm
redhat-storage-server-3.4.0.0-1.el7rhgs.src.rpm

noarch:
glusterfs-resource-agents-3.12.2-18.el7rhgs.noarch.rpm
redhat-storage-server-3.4.0.0-1.el7rhgs.noarch.rpm

x86_64:
glusterfs-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-api-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-api-devel-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-cli-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-client-xlators-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-debuginfo-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-devel-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-events-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-fuse-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-ganesha-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-geo-replication-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-libs-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-rdma-3.12.2-18.el7rhgs.x86_64.rpm
glusterfs-server-3.12.2-18.el7rhgs.x86_64.rpm
python2-gluster-3.12.2-18.el7rhgs.x86_64.rpm
redhat-release-server-7.5-11.el7rhgs.x86_64.rpm

Red Hat Storage Native Client for Red Hat Enterprise Linux 7:

Source:
glusterfs-3.12.2-18.el7.src.rpm

x86_64:
glusterfs-3.12.2-18.el7.x86_64.rpm
glusterfs-api-3.12.2-18.el7.x86_64.rpm
glusterfs-api-devel-3.12.2-18.el7.x86_64.rpm
glusterfs-cli-3.12.2-18.el7.x86_64.rpm
glusterfs-client-xlators-3.12.2-18.el7.x86_64.rpm
glusterfs-debuginfo-3.12.2-18.el7.x86_64.rpm
glusterfs-devel-3.12.2-18.el7.x86_64.rpm
glusterfs-fuse-3.12.2-18.el7.x86_64.rpm
glusterfs-libs-3.12.2-18.el7.x86_64.rpm
glusterfs-rdma-3.12.2-18.el7.x86_64.rpm
python2-gluster-3.12.2-18.el7.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/
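
Signature verification can be scripted as well. A hedged sketch that shells out to rpm's built-in check; the package filename is one of those listed above, and the Red Hat key is assumed to already be imported into the rpm database:

    import subprocess

    def signature_ok(rpm_path):
        # "rpm -K" (alias for --checksig) verifies the package digests and
        # GPG signature; a non-zero exit status means verification failed.
        proc = subprocess.Popen(["rpm", "-K", rpm_path], stdout=subprocess.PIPE)
        out, _ = proc.communicate()
        return proc.returncode == 0, out.decode().strip()

    if __name__ == "__main__":
        ok, detail = signature_ok("glusterfs-3.12.2-18.el7rhgs.x86_64.rpm")
        print("verified" if ok else "verification FAILED", "-", detail)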

7. References:

https://access.redhat.com/security/cve/CVE-2018-10904
https://access.redhat.com/security/cve/CVE-2018-10907
https://access.redhat.com/security/cve/CVE-2018-10911
https://access.redhat.com/security/cve/CVE-2018-10913
https://access.redhat.com/security/cve/CVE-2018-10914
https://access.redhat.com/security/cve/CVE-2018-10923
https://access.redhat.com/security/cve/CVE-2018-10926
https://access.redhat.com/security/cve/CVE-2018-10927
https://access.redhat.com/security/cve/CVE-2018-10928
https://access.redhat.com/security/cve/CVE-2018-10929
https://access.redhat.com/security/cve/CVE-2018-10930
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/site/documentation/en-US/red_hat_gluster_storage/3.4/html/3.4_release_notes/

8. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2018 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBW44lxdzjgjWX9erEAQiUvxAAkYQAuC+FPEFtlw2o857mnAgENS1O3VSZ
NT3vI7fTDLuhJVgPyVvdsAQvVldryuTVnS6encimoWkRiCIQ4eeVbCglKqT4nc3L
BS6SpLr4WwBcMJFU2TmwiOdvS565y9nIbRynp2atCoav0MlfdyRR7W1bBZpKZRt0
MdUBzu6kXQc4x18XBu1YyY7r1UtSclNNH3/zuvc+PSrfdJCbGiy/8ShenMuwevT9
C9hvVHyUPL8tRNmUaNujjZwC1pl2j+NltVbNp9GW/SasNkJ36m9lonL1cYdhxrqS
5mpma59e2jiHi5p0bPyzAEgp0F6QQOW/uYi5vEd2BLl59NjNDOVy0rx5tdLRsMEq
QQ1FCP1tIkZGZRtOxbUDxzm+/OEPU1qmOjh6diiv+3t430jd2JAmLm0nWTxN+PNv
unXB6webzyPhRYYP5HYlKCnYjemOc0R6oq/G3TF5BIW/VvqQWdngrTGIkDTJ3oGa
QqwQsEnU1JhkMg+j9SqIyQ+FTmjRdx5PnSGF3/JCUy2s1YG+uz922IUJEqyRtjro
pxP+H7BvYuO7aWK/E17F6FylImN+34ErJ8+BwnENB6jxN7RpBGdMmbWnaZx1H0Hy
HeUAmDzm3QmoT84lhvt9/oLAFkk0n50fx3cGQAt4TPcugh31vlFMf1Gort5Mpll1
cy5PrHwY8Nc=
=YJ/G
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce