Package Release Info

rook-1.4.5+git5.ge3c837f8-bp153.1.48

Update Info: Base Release
Available in Package Hub: 15 SP3

Platforms

AArch64
ppc64le
s390x
x86-64

Subpackages

rook
rook-ceph-helm-charts
rook-integration
rook-k8s-yaml
rook-rookflex

Change Logs

* Fri Oct 02 2020 Mike Latimer <mlatimer@suse.com>
- Update to v1.4.5
  * Update the CSI driver to v3.1.1 (#6340)
  * Fix drive group deployment failure (#6267)
  * Fix OBC upgrade from 1.3 to 1.4 external cluster (#6353)
  * Remove user unlink while deleting OBC (#6338)
  * Enable RBAC in the helm chart for enabling monitoring (#6352)
  * Disable the encryption keyring parameter, which is not necessary
    after opening the block (#6350)
  * Improve reconcile performance in clusters with many OSDs on
    PVCs (#6330)
  * Only one external cluster secret supported in import script (#6343)
  * Allow OSD PVC template name to be set to any value (#6307)
  * OSD prepare job was failing due to low aio-max-nr setting (#6284)
  * During upgrade assume a pod spec changed if diff checking fails (#6272)
  * Merge config from the rook-config-override configmap into the default
    global config file (#6252); see the sketch after this entry
- Package all sample yaml files in rook-k8s-yaml
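  For reference, a minimal sketch of the rook-config-override
  configmap referenced above; the namespace and the option shown are
  illustrative, not taken from this changelog:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: rook-config-override
        namespace: rook-ceph          # cluster namespace (illustrative)
      data:
        config: |
          [global]
          osd_pool_default_size = 3   # any ceph.conf option (illustrative)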
* Tue Sep 29 2020 Mike Latimer <mlatimer@suse.com>
- Update helm chart version to match rook product version plus
  the current release number
* Tue Sep 29 2020 Mike Latimer <mlatimer@suse.com>
- Update to v1.4.4
  * Fix upgrade to v1.4.3 hanging for clusters on PVC, caused by changing
    label selectors on the mons (#6256)
  * Remove osd status configmap for nodes with long names (#6235)
  * Allow running rgw daemons from an external cluster (#6226)
- Create symlinks in /usr/local/bin for toolbox.sh and rook to
  ensure compatibility with upstream sample yamls
* Mon Sep 21 2020 Stefan Haas <stefan.haas@suse.com>
- Fixed spec file:
  * operator.yaml was not being changed to use the SUSE images
* Thu Sep 17 2020 Mike Latimer <mlatimer@suse.com>
- helm chart, manifests:
  * Fix tolerations
  * Update SUSE documentation URL in NOTES.txt
* Thu Sep 17 2020 Mike Latimer <mlatimer@suse.com>
- ceph: fix drive group deployment failure (bsc#1176170)
- helm chart, manifests:
  * Add tolerations to cluster & CRDs
  * Require kubeVersion >= 1.11
  * Use rbac.authorization.k8s.io/v1
  * Add affinities for label schema
  * Set Rook log level to DEBUG
  * Remove FlexVolume agent
  * Require currentNamespaceOnly=true (see the values sketch after this
    entry)
  * Replace NOTES.txt with SUSE specific version
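  For reference, a hedged values.yaml sketch matching the toleration
  and currentNamespaceOnly items above; the keys are assumed for the
  chart of this era, and the toleration itself is illustrative:

      currentNamespaceOnly: true
      tolerations:
        - key: node-role.kubernetes.io/master   # illustrative toleration
          operator: Exists
          effect: NoSchedule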
* Tue Sep 15 2020 Mike Latimer <mlatimer@suse.com>
- Include operator and common yamls in manifest package
* Sat Sep 12 2020 Mike Latimer <mlatimer@suse.com>
- Update to v1.4.3
  * The Ceph-CSI driver was being unexpectedly removed by the garbage
    collector in some clusters. For details on applying a fix during
    the upgrade to this patch release, see the upstream steps. (#616)
  * Add storageClassDeviceSet label to osd pods (#6225)
  * DNS suffix issue for OBCs in custom DNS suffix clusters (#6234)
  * Cleanup mon canary pvc if the failover failed (#6224)
  * Only enable mgr init container if the dashboard is enabled (#6198)
  * cephobjectstore monitoring goroutine must be stopped during
    uninstall (#6208)
  * Remove NParts and Cache_Size from MDCACHE block in the NFS
    configuration (#6207)
  * Purge a down osd with a job created by the admin (#6127)
  * Do not use label selector on external mgr service (#6142)
  * Allow uninstall even if volumes still exist with a new CephCluster
    setting (#6145)
* Thu Sep 10 2020 Mike Latimer <mlatimer@suse.com>
- Update to v1.4.2
  - Patch release focusing on small feature additions and bug fixes.
  * Improve check for LVM on the host to allow installing of OSDs (#6175)
  * Set the OSD prepare resource limits (#6118)
  * Allow memory limits below recommended settings (#6116)
  * Use full DNS suffix for object endpoint with OBCs (#6170)
  * Remove the CSI driver lifecycle preStop hook (#6141)
  * External cluster optional settings for provisioners (#6048)
  * Operator watches nodes that match OSD placement rules (#6156)
  * Allow the user to add labels to the cluster daemon pods
    (#6084 #6082); see the labels sketch after this entry
  * Fix vulnerability in package golang.org/x/text (#6136)
  * Add expansion support for encrypted osd on pvc (#6126)
  * Do not use realPath for OSDs on PVCs (#6120, @leseb)
  * Example object store manifests updated for consistency (#6123)
  * Separate topology spread constraints for osd prepare jobs and
    osd daemons (#6103)
  * Pass CSI resources as strings in the helm chart (#6104)
  * Improve callCephVolume() for list and prepare (#6059)
  * Improved multus support for the CSI driver configuration (#5740)
  * Object store healthcheck yaml examples (#6090)
  * Add support for wal encrypted device on pvc (#6062)
  * Updated helm usage in documentation (#6086)
  * More details for RBD Mirroring documentation (#6083)
- Build process changes:
  - Set CSI sidecar versions through _service, and set all versions in
    code through a single patch file
    + csi-images-SUSE.patch
  - csi-dummy-images.patch
  - Use github.com/SUSE/rook and suse-release-1.4 tag in update.sh
  - Create module dependencies through _service, and store these dependencies
    in vendor.tar.gz (replacing rook-[version]-vendor.tar.xz)
  - Modify build commands to include "-mod=vendor" to use new vendor tarball
  - Add CSI sidecars as BuildRequires, in order to determine versions through
    _service process
  - Replace %setup of vendor tarball with a simple tar extraction
  - Move registry detection to %prep, and set correct registry through a
    search and replace on the SUSE_REGISTRY string
  - Use variables to track rook, ceph and cephcsi versions
  - Add '#!BuildTag' and 'appVersion' to Chart.yaml
  - Add required versioning to helm chart
  - Leave ceph-csi templates in /etc, and include them in the main rook
    package.
  - csi-template-paths.patch
  - Include only designated yaml examples in rook-k8s-yaml package
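  A minimal sketch of the daemon pod labels feature noted above
  (#6084 #6082), assuming the CephCluster spec.labels layout used by
  upstream Rook; keys and values are illustrative:

      apiVersion: ceph.rook.io/v1
      kind: CephCluster
      metadata:
        name: rook-ceph
        namespace: rook-ceph
      spec:
        labels:
          all:              # applied to all daemon pods (illustrative)
            team: storage
          osd:              # applied only to OSD pods (illustrative)
            tier: fast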
* Mon Aug 10 2020 Stefan Haas <stefan.haas@suse.com>
- Update to v1.4.0:
  * Ceph-CSI 3.0 is deployed by default
  * Multi Architecture docker images are published (amd64 and arm64)
  * Create/delete beta snapshots for RBD; support for alpha snapshots is removed.
  * Create PVCs from RBD snapshots and PVCs
  * Support ROX volumes for RBD and CephFS
  * The dashboard for the ceph object store will be enabled if the dashboard module is enabled.
  * An admission controller enhances CRD validations (Experimental);
    it is not enabled by default.
  * Support for Ceph CRDs is provided. Some validations for CephClusters
    are included, and a framework for additional validations is in place
    for other CRDs.
  * RGW Multisite is available through new CRDs for zones, zone groups, and realms. (Experimental)
  * CephObjectStore CRD changes:
    + Health displayed in the Status field
    + Run health checks on the object store endpoint by creating a
      bucket and writing to it periodically.
    + The endpoint is stored for reference in the Status field
  * OSD changes:
    + OSDs on PVC now support multipath and crypt device types.
    + OSDs on PVC can now be encrypted by setting encrypted: true on the
      storageClassDeviceSet (see the sketch after this entry).
    + OSDs can now be provisioned using Ceph's Drive Groups definitions
      for Ceph Octopus v15.2.5+.
    + OSDs can be provisioned on device paths such as
      /dev/disk/by-path/pci-HHHH:HH:HH.H that contain colons (:)
  * A new CephRBDMirror CR configures the RBD mirroring daemons. The RBD
    mirror settings were previously included in the CephCluster CR.
  * Multus support is improved, though still experimental
    + Added support for the Whereabouts IPAM
  * CephCluster CRD changes:
    + Converted to use the controller-runtime framework
    + Added settings to configure health checks as well as pod liveness
      probes.
  * CephBlockPool CRD has a new parameters field that allows setting any
    Ceph pool property on a given pool (see the sketch after this entry)
  * OBC changes:
    + Updated the lib bucket provisioner version to support
      multithreading
    + Added support for quotas, with options for object count and total
      size.
  * Prometheus monitoring for external clusters is now possible; refer
    to the external cluster documentation
  * The operator will check for the presence of the lvm2 package on the host where OSDs will run. If not available, the prepare job will fail. This will prevent issues of OSDs not restarting on node reboot.
  * Added a new label ceph_daemon_type to Ceph daemon pods.
  * Added a toolbox job example for running a script with Ceph commands, similar to running commands in the Rook toolbox.
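  A minimal sketch of an encrypted OSD-on-PVC storageClassDeviceSet
  as described above; the set name, count, size, and storage class are
  illustrative:

      # CephCluster spec excerpt (illustrative)
      storage:
        storageClassDeviceSets:
          - name: set1
            count: 3
            encrypted: true           # new in v1.4.0
            volumeClaimTemplates:
              - metadata:
                  name: data
                spec:
                  resources:
                    requests:
                      storage: 10Gi
                  storageClassName: local-storage
                  volumeMode: Block
                  accessModes:
                    - ReadWriteOnce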
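  And a sketch of the new CephBlockPool parameters field; the pool
  name and the property shown are illustrative:

      apiVersion: ceph.rook.io/v1
      kind: CephBlockPool
      metadata:
        name: replicapool
        namespace: rook-ceph
      spec:
        replicated:
          size: 3
        parameters:
          compression_mode: aggressive  # any Ceph pool property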
* Wed May 27 2020 Stefan Haas <stefan.haas@suse.com>
- Update to v1.3.4:
  * Finalizer for OBC cleanup (#5436)
  * Remove invalid MDS deactivate command during upgrade (#5278)
  * Enable verbose logging for LVM commands (#5515)
  * Set external creds if admin key is available (#5507)
  * Fail more gracefully for an unsupported Ceph version (#5503)
  * Set pg_num_min on new rgw metadata pools (#5489)
  * Object store deployment failed to start on OpenShift (#5468)
  * Relax OBC error handling and user deletion (#5465)
  * Create missing secret on external cluster (#5450)
  * Python script to generate needed external cluster resources (#5388)
  * Docs: clarify required version of helm for upgrades (#5445)
  * CSI priority class example update (#5443)
  * Set test default pool size to one (#5428)
  * Remove invalid verbose params from lv activate (#5438)
* Wed Apr 22 2020 Stefan Haas <stefan.haas@suse.com>
- Update to v1.3.1:
  * Stop the pool controller from staying in a reconcile loop (#5173)
  * Update the rgw service port during upgrade (#5228)
- Removed orchestrator-cli-rename.patch as it was merged upstream
* Mon Apr 20 2020 Stefan Haas <stefan.haas@suse.com>
- Update to v1.3.0:
  * Ceph: revert mgr to minimal privilege (#5183)
  * Enable the Ceph CSI v2.0.1 driver by default in Rook (#5162)
  * ceph: add liveness probe to mon, mds and osd daemons (#5128)
  * Ceph: prevent pre-existing lvms from wipe (#4966)
* Fri Jan 31 2020 Kristoffer Gronlund <kgronlund@suse.com>
- Package helm charts for the rook operator for ceph (SES-799)
Version: 1.2.7+git0.g1acfd182-bp152.1.58
* Tue Mar 31 2020 Kristoffer Gronlund <kgronlund@suse.com>
- Update to v1.2.7 (bsc#1168160):
  * Apply the expected lower PG count for rgw metadata pools (#5091)
  * Reject devices smaller than 5GiB for OSDs (#5089)
  * Add extra check for filesystem to skip boot volumes for OSD configuration (#5022)
  * Avoid duplication of mon pod anti-affinity (#4998)
  * Update service monitor definition during upgrade (#5078)
  * Resizer container fix due to misinterpretation of the cephcsi version (#5073)
  * Set ResourceVersion for Prometheus rules (#4528)
  * Upgrade doc clarification for RBAC related to the helm chart (#5054)
* Wed Mar 18 2020 Kristoffer Gronlund <kgronlund@suse.com>
- Update to v1.2.6:
  * Update default Ceph version to v14.2.8 (#4960)
  * Fix for OSDs on PVCs that were crashing on Ceph v14.2.8 (#4960)
  * Mount /udev so the osds can discover device info (#5001)
  * Query for the topology.kubernetes.io labels in K8s 1.17 or newer for the CRUSH hierarchy (#4989); see the node-label sketch after this entry
  * Log a warning when useAllNodes is true, but nodes are defined in the cluster CR ([commit](https://github.com/rook/rook/pull/4974/commits/69c9ed4206f47644687733396d87022e93d312a3))
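  For reference, a sketch of the node labels queried for the CRUSH
  hierarchy; the label values are illustrative:

      # labels on a Kubernetes node object (K8s 1.17 or newer)
      metadata:
        labels:
          topology.kubernetes.io/region: region1
          topology.kubernetes.io/zone: zone1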
* Tue Mar 10 2020 Kristoffer Gronlund <kgronlund@suse.com>
- ceph: orchestrator cli name change
  * Add orchestrator-cli-rename.patch
* Thu Feb 20 2020 Kristoffer Gronlund <kgronlund@suse.com>
- ceph: populate CSI configmap for external cluster
* Tue Feb 18 2020 Kristoffer Gronlund <kgronlund@suse.com>
- Update to v1.2.4:
  * Stop garbage collector from deleting the CSI driver unexpectedly (#4820)
  * Upgrade legacy OSDs created with partitions created by Rook (#4799)
  * Ability to set the pool target_size_ratio (#4803)
  * Improve detection of drain canaries and log significant node drain scheduling events (#4679)
  * Sort flexvolume docs and update for kubespray (#4747)
  * Add OpenShift common issues documentation (#4764)
  * Improved integration test when cleaning devices (#4796)
* Mon Jan 27 2020 Kristoffer Gronlund <kgronlund@suse.com>
- Package helm charts for the rook operator for ceph (SES-799)
* Mon Jan 27 2020 Kristoffer Gronlund <kgronlund@suse.com>
- Update to v1.2.2:
  * Allow multiple clusters to set useAllDevices (#4692)
  * Operator starts all mons before checking quorum if they are all down (#4531)
  * Ability to disable the crash controller (#4533)
  * Document monitoring options for the cluster CR (#4698)
  * Apply node topology labels to PV-backed OSDs in upgrade from v1.1 (#4616)
  * Update examples to Ceph version v14.2.6 (#4653)
  * Allow integration tests in minimal config to run on multiple K8s versions (#4674)
  * Wrong pod name and hostname shown in alert CephMonHighNumberOfLeaderChanges (#4665)
  * Set hostname properly in the CRUSH map for non-portable OSDs on PVCs (#4658)
  * Update OpenShift example manifest to watch all namespaces for clusters (#4668)
  * Use min_size defaults set by Ceph instead of overriding with Rook's defaults (#4638)
  * CSI driver handling of upgrade from OCP 4.2 to OCP 4.3 (#4650)
  * Add support for the k8s 1.17 failure domain labels (#4626)
  * Add option to the cluster CR to continue upgrade even with unclean PGs (#4617)
  * Add K8s 1.11 back to the integration tests as the minimum version (#4673)
  * Fixed replication factor flag and the master addresses (#4625)
* Wed Jan 08 2020 Kristoffer Gronlund <kgronlund@suse.com>
- Update to v1.2.1:
  * Add missing env var `ROOK_CEPH_MON_HOST` for OSDs (#4589)
  * Avoid logging sensitive info when debug logging is enabled (#4568)
  * Add missing vol mount for encrypted osds (#4583)
  * Bump ceph-operator memory limit to 256Mi (#4561)
  * Fix object bucket provisioner when rgw not on port 80 (#4508)
* Fri Dec 20 2019 Kristoffer Gronlund <kgronlund@suse.com>
- Update to v1.2.0:
  * Security audit completed by Trail of Bits found no major concerns
  * Ceph: Added a new "crash collector" daemon to send crash telemetry
    to the Ceph dashboard, support for priority classes, and a new
    CephClient resource to create user credentials
  * The minimum version of Kubernetes supported by Rook changed from
    1.11 to 1.12.
  * Device filtering is now configurable by the user through an
    environment variable
    + A new environment variable, DISCOVER_DAEMON_UDEV_BLACKLIST, lets
      the user blacklist devices (see the sketch after this entry)
    + If no devices are specified, default values are used for the
      blacklist
  * The topology setting has been removed from the CephCluster CR. To
    configure the OSD topology, node labels must be applied; see the
    OSD topology documentation. This setting only affects OSDs when
    they are first created, so existing OSDs are not impacted during
    upgrade.
  * The topology settings only apply to bluestore OSDs on raw devices.
    The topology labels are not applied to directory-based OSDs.
  * Creation of new Filestore OSDs on disks is now deprecated.
    Filestore is in sustaining mode in Ceph.
    + The storeType storage config setting is now ignored
    + New OSDs created in directories are always Filestore type
    + New OSDs created on disks are always Bluestore type
    + Preexisting disks provisioned as Filestore OSDs will remain as
    Filestore OSDs
  * Rook will no longer automatically remove OSDs if nodes are removed
    from the cluster CR to avoid the risk of destroying OSDs
    unintentionally. To remove OSDs manually, see the new doc on OSD
    Management
- Update csi-dummy-images.patch
- Update flexvolume-dir.patch
- Drop outdated patch 0001-bsc-1152690-ceph-csi-Driver-will-fail-with-error.patch
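  A hedged sketch of the new blacklist variable, set on the
  rook-ceph-operator deployment; the regular expressions shown are
  illustrative defaults, not taken from this changelog:

      # operator deployment env excerpt (illustrative)
      env:
        - name: DISCOVER_DAEMON_UDEV_BLACKLIST
          value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"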
* Tue Dec 03 2019 Kristoffer Gronlund <kgronlund@suse.com>
- Update rook to v1.1.7:
  * Skip osd prepare job creation if osd daemon exists for the pvc (#4277)
  * Stop osd process more quickly during pod shutdown to reduce IO unresponsiveness (#4328)
  * Add osd anti-affinity to the example of OSDs on PVCs (#4326)
  * Properly set app name on the cmdreporter (#4323)
  * Ensure disruption draining state is set and checked correctly (#4319)
  * Update LVM filter for OSDs on PVCs (#4312)
  * Fix topology logic for disruption drains (#4221)
  * Skip restorecon during ceph-volume configuration (#4260)
  * Added a note around snapshot CRD cleanup (#4302)
  * Storage utilization alert threshold and timing updated (#4286)
  * Silence disruption errors if necessary and add missing errors (#4288)
  * Create csi keys and secrets for external cluster (#4276)
  * Add retry to ObjectUser creation (#4149)
* Wed Nov 06 2019 Kristoffer Gronlund <kgronlund@suse.com>
- Update rook to v1.1.6:
  * Flex driver should not allow attach before detach on a different node (#3582)
  * Properly set the ceph-mgr annotations (#4195)
  * Only trigger an orchestration if the cluster CR changed (#4252)
  * Fix setting rbdGrpcMetricsPort in the helm chart (#4202)
  * Document all helm chart settings (#4202)
  * Support all layers of CRUSH map with node labels (#4236)
  * Skip orchestration restart on device config map update for osd on pvc (#4124)
  * Deduplicate tolerations collected for the drain canary pods (#4220)
  * Role bindings are missing for pod security policies (#3851)
  * Continue with orchestration if a single mon pod fails to start (#4146)
  * OSDs cannot call 'restorecon' when SELinux is enabled (#4214)
  * Use the rook image for drain canary pods (#4213)
  * Allow setting of osd prepare resource limits (#4182)
  * Documentation for object bucket provisioning (#3882)
* Tue Nov 05 2019 Kristoffer Gronlund <kgronlund@suse.com>
- Update rook to v1.1.4:
  * OSD config overrides were ignored for some upgraded OSDs (#4161)
  * Enable restoring a cluster after disaster recovery (#4021)
  * Enable upgrade of OSDs configured on PVCs (#3996)
  * Automatically removing OSDs requires setting removeOSDsIfOutAndSafeToRemove (#4116); see the sketch after this entry
  * Rework csi keys and secrets to use minimal privileges (#4086)
  * Expose OSD prepare pod resource limits (#4083)
  * Minimum K8s version for running OSDs on PVCs is 1.13 (#4009)
  * Add 'rgw.buckets.non-ec' to list of RGW metadataPools (#4087)
  * Hide wrong error for clusterdisruption controller (#4094)
  * Multiple integration test fixes to improve CI stability (#4098)
  * Detect mount fstype more accurately in the flex driver (#4109)
  * Do not override mgr annotations (#4110)
  * Add OSDs to proper buckets in crush hierarchy with topology awareness (#4099)
  * More robust removal of cluster finalizer (#4090)
  * Take activeStandby into account for the CephFileSystem disruption budget (#4075)
  * Update the CSI CephFS registration directory name (#4070)
  * Fix incorrect Ceph CSI doc links (#4081)
  * Remove decimal places for osdMemoryTargetValue monitoring setting (#4046)
  * Relax pre-requisites for external cluster to allow connections to Luminous (#4025)
  * Avoid nodes getting stuck in OrchestrationStatusStarting during OSD config (#3817)
  * Make metrics and liveness port configurable (#4005)
  * Correct system namespace for CSI driver settings during upgrade (#4040)
- Update csi-dummy-images.patch
- Update csi-template-paths.patch
- Update 0001-bsc-1152690-ceph-csi-Driver-will-fail-with-error.patch
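  A minimal sketch of the setting referenced in #4116, assuming it
  sits at the top level of the CephCluster spec as in upstream Rook:

      # CephCluster spec excerpt (illustrative)
      spec:
        removeOSDsIfOutAndSafeToRemove: true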
* Wed Oct 02 2019 Kristoffer Gronlund <kgronlund@suse.com>
- Force use of ceph kernel client driver (bsc#1152690)
- Add 0001-bsc-1152690-ceph-csi-Driver-will-fail-with-error.patch
* Tue Oct 01 2019 Blaine Gardner <blaine.gardner@suse.com>
- Define build shell as /bin/bash for usage of `=~` conditional (bsc#1152559)
* Mon Sep 30 2019 Blaine Gardner <blaine.gardner@suse.com>
- Fix csi-dummy-images.patch to work with Go linker's -X flag (bsc#1152559)
  + update linker flags themselves to remove comments from flags
  + add test to spec file to verify linker flags are working in future
* Thu Sep 26 2019 Blaine Gardner <blaine.gardner@suse.com>
- Fix 2 improper RPM spec variable references in specfile (bsc#1151909)
* Wed Sep 25 2019 Blaine Gardner <blaine.gardner@suse.com>
- Use lightweight git tags when determining Rook version from source in tarball script (bsc#1151909)
  + Build should now be tagged appropriately as version 1.1.1.0 instead of 1.1.0.x
- Override some Rook defaults with linker flags at build time:
  + CSI image -> SUSE image
  + FlexVolume dir (for Kubic)
- Add patches for:
  + updating CSI image to a dummy value later changed at linker time
  + updating CSI template paths to the ones installed by rook-k8s-manifests
  + update the FlexVolume dir path to be compatible with Kubic
- Remove previously applied SUSE-specific changes that are now taken care of by the above patches
- Add patch: csi-dummy-images.patch
- Add patch: csi-template-paths.patch
- Add patch: flexvolume-dir.patch
* Wed Sep 25 2019 Kristoffer Gronlund <kgronlund@suse.com>
- rook-k8s-yaml: Fix YAML indentation of cephcsi image value (bsc#1152008)
* Wed Sep 25 2019 Blaine Gardner <blaine.gardner@suse.com>
- Update Rook to match upstream version v1.1.1 (bsc#1151909)
  + Disable the flex driver by default in new clusters
  + MDB controller to use namespace for checking ceph status
  + CSI liveness container socket file
  + Add list of unusable directories paths
  + Remove helm incompatible chars from values.yaml
  + Fail NFS-ganesha if CephFS is not configured
  + Make lifecycle hook chown less verbose for OSDs
  + Configure LVM settings for rhel8 base image
  + Make kubelet path configurable in operator for csi (#392
  + OSD pods should always use hostname for node selector
  + Deactivate device from lvm when OSD pods are shutting down
  + Add CephNFS to OLM's CSV
  + Tolerations for drain detection canaries
  + Enable ceph-volume debug logs
  + Add documentation for CSI upgrades from v1.0 (#386
  + Add a new skipUpgradeChecks property to allow forcing upgrades (see
    the sketch after this entry)
  + Include CSI image in helm chart values (#385
  + Use HTTP port if SSL is disabled
  + Enable SSL for dashboard by default
  + Enable msgr2 properly during upgrades
  + Nautilus v14.2.4 is the default Ceph image
  + Ensure the ceph-csi secret exists on upgrade
  + Disable the min PG warning if the pg_autoscaler is enabled
  + Disable the warning for bluestore warn on legacy statfs
- Add SUSE-specific changes to manifests (see the operator env sketch
  after this entry):
  + uncomment ROOK_CSI_CEPH_IMAGE var
  + set FlexVolume dir path for Kubic
  + add ROOK_CSI_*_TEMPLATE_PATH configs
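  A minimal sketch of the skipUpgradeChecks property noted above,
  assuming the upstream CephCluster spec placement:

      # CephCluster spec excerpt (illustrative)
      spec:
        skipUpgradeChecks: true   # continue the upgrade despite failed checks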
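  And a hedged operator.yaml env excerpt matching the SUSE-specific
  manifest changes above; the variable names are assumed from upstream
  Rook of this era, and the image and paths are illustrative:

      env:
        - name: ROOK_CSI_CEPH_IMAGE
          value: "registry.suse.com/ses/6/cephcsi/cephcsi"     # illustrative
        - name: FLEXVOLUME_DIR_PATH
          value: "/var/lib/kubelet/volumeplugins"              # Kubic path
        - name: ROOK_CSI_CEPHFS_PLUGIN_TEMPLATE_PATH
          value: "/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml"  # illustrative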
* Mon Sep 16 2019 Kristoffer Gronlund <kgronlund@suse.com>
- rook-k8s-yaml: Revert to buildrequire for ceph (bsc#1151479)