Build architectures: AArch64, ppc64le, s390x, x86-64
- Update to 1.3.4
  * perf: faster speed (especially decoding speed) on recent cpus (haswell+)
  * perf: much better performance associating --long with multi-threading
  * perf: better compression at levels 13-15
  * cli: asynchronous compression by default, for faster experience
    (use --single-thread for former behavior)
  * cli: smoother status report in multi-threading mode
  * cli: added command --fast=#, for faster compression modes
  * cli: fix crash when not overwriting existing files
  * api: `nbThreads` becomes `nbWorkers`: 1 triggers asynchronous mode
    (see the sketch after this entry)
  * api: compression levels can be negative, for even more speed
  * api: ZSTD_getFrameProgression(): get precise progress status of ZSTDMT anytime
  * api: ZSTDMT can accept new compression parameters during compression
  * api: implemented all advanced dictionary decompression prototypes
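  A minimal sketch of the asynchronous multi-threaded compression path described
  above, written in C against the current stable advanced API (ZSTD_c_nbWorkers,
  ZSTD_compressStream2). The 1.3.4 headers exposed the same knobs in the
  experimental section under slightly different names, ZSTD_getFrameProgression()
  still requires ZSTD_STATIC_LINKING_ONLY, and libzstd is assumed to be built
  with multi-threading support; error handling is kept minimal.

    #define ZSTD_STATIC_LINKING_ONLY   /* needed for ZSTD_getFrameProgression() */
    #include <zstd.h>
    #include <stdio.h>

    int main(void)
    {
        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        if (cctx == NULL) return 1;

        /* One worker already switches the context into asynchronous mode;
         * negative compression levels (new in 1.3.4) trade ratio for speed. */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, 4);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, -1);

        static char inBuf[1 << 16], outBuf[1 << 16];
        size_t readBytes;
        while ((readBytes = fread(inBuf, 1, sizeof inBuf, stdin)) > 0) {
            ZSTD_inBuffer input = { inBuf, readBytes, 0 };
            while (input.pos < input.size) {
                ZSTD_outBuffer output = { outBuf, sizeof outBuf, 0 };
                size_t const ret = ZSTD_compressStream2(cctx, &output, &input, ZSTD_e_continue);
                if (ZSTD_isError(ret)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(ret)); return 1; }
                fwrite(outBuf, 1, output.pos, stdout);
            }
            /* Progress can be probed at any time, even while workers are busy. */
            ZSTD_frameProgression const p = ZSTD_getFrameProgression(cctx);
            fprintf(stderr, "\ringested %llu, consumed %llu, produced %llu",
                    p.ingested, p.consumed, p.produced);
        }

        /* Finish the frame: keep calling until everything is flushed. */
        size_t remaining;
        do {
            ZSTD_outBuffer output = { outBuf, sizeof outBuf, 0 };
            ZSTD_inBuffer noInput = { NULL, 0, 0 };
            remaining = ZSTD_compressStream2(cctx, &output, &noInput, ZSTD_e_end);
            if (ZSTD_isError(remaining)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(remaining)); return 1; }
            fwrite(outBuf, 1, output.pos, stdout);
        } while (remaining != 0);

        ZSTD_freeCCtx(cctx);
        return 0;
    }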
- build the static library (dependency for btrfsprogs-static)
- Update to 1.3.3
  * perf: improved zstd_opt strategy (levels 16-19)
  * fix: bug #944: multithreading with shared dictionary and large data, reported by @gsliepen
  * cli: fix: content size written in header by default
  * cli: fix: improved LZ4 format support, by @felixhandte
  * cli: new: hidden command -b -S, to benchmark multiple files and generate one result per file
  * api: change: when setting pledgedSrcSize, use the ZSTD_CONTENTSIZE_UNKNOWN macro value
    to mean "unknown" (see the sketch after this entry)
  * api: fix: support large skippable frames, by @terrelln
  * api: fix: re-using a context could result in a suboptimal block size in some corner-case scenarios
  * api: fix: streaming interface was adding a useless 3-byte null block to small frames
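  A minimal sketch of pledging the source size up front (so it lands in the frame
  header) versus passing ZSTD_CONTENTSIZE_UNKNOWN. It is written against the
  current stable advanced API; in the 1.3.3 headers ZSTD_CCtx_setPledgedSrcSize()
  still sat in the experimental section, and the helper name compress_pledged is
  purely illustrative.

    #include <zstd.h>
    #include <stdio.h>

    static size_t compress_pledged(ZSTD_CCtx* cctx,
                                   void* dst, size_t dstCapacity,
                                   const void* src, size_t srcSize)
    {
        ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
        /* The pledged size goes into the frame header and is verified at the
         * end of the frame; pass ZSTD_CONTENTSIZE_UNKNOWN instead when the
         * total input size is not known up front. */
        ZSTD_CCtx_setPledgedSrcSize(cctx, (unsigned long long)srcSize);

        ZSTD_outBuffer out = { dst, dstCapacity, 0 };
        ZSTD_inBuffer  in  = { src, srcSize, 0 };
        size_t const ret = ZSTD_compressStream2(cctx, &out, &in, ZSTD_e_end);
        if (ZSTD_isError(ret)) return ret;
        /* dst is sized with ZSTD_COMPRESSBOUND, so the frame completes in one call. */
        return out.pos;
    }

    int main(void)
    {
        const char msg[] = "pledgedSrcSize example payload";
        char dst[ZSTD_COMPRESSBOUND(sizeof msg)];

        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        size_t const csize = compress_pledged(cctx, dst, sizeof dst, msg, sizeof msg);
        if (ZSTD_isError(csize)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(csize)); return 1; }

        /* Because the size was pledged, it can be read back from the frame header. */
        printf("content size recorded in frame: %llu bytes\n",
               ZSTD_getFrameContentSize(dst, csize));
        ZSTD_freeCCtx(cctx);
        return 0;
    }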
- Update to 1.3.2
  * new: long range mode, using the --long command (see the sketch after this entry)
  * new: ability to generate and decode magicless frames
  * changed: maximum number of threads reduced to 200, to avoid address space exhaustion in 32-bit mode
  * fix: multi-threading compression works with custom allocators
  * fix: ZSTD_sizeof_CStream() was over-evaluating memory usage
  * fix: a rare compression bug when compression generates very large distances
    in combination with a number of other conditions (only possible at --ultra -22)
  * fix: 32-bit builds can now decode large offsets (levels 21+)
  * cli: added LZ4 frame support by default
  * cli: improved --list output
  * cli: can now split the input file for dictionary training, using command -B#
  * cli: new: clean up operation artefacts on Ctrl-C interruption
  * cli: fix: do not change /dev/null permissions when using command -t with root access
  * cli: fix: write file size in header in multiple-files mode
  * api: added macro ZSTD_COMPRESSBOUND() for static allocation
  * api: new advanced decompression API
  * api: fix: sizeof_CCtx() used to over-estimate
  * build: fix: no-multithread variant compiles without pool.c dependency
  * build: better compatibility with reproducible builds
  * license: changed /examples license to BSD + GPLv2
  * license: fix a few header files to reflect the new license
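  A minimal sketch of the library-side equivalent of the CLI --long mode, using
  the current stable parameter names (the 1.3.2 release exposed long range mode
  through the CLI and the then-experimental advanced API). The helper names are
  illustrative; the point is that the decoder rejects very large windows unless
  explicitly allowed to accept them.

    #include <zstd.h>

    /* Configure a compression context for long-distance matching over a
     * 128 MiB window (2^27), roughly what `zstd --long=27` requests. */
    static void enable_long_mode(ZSTD_CCtx* cctx)
    {
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 27);
    }

    /* The decoder rejects unusually large windows by default, so a matching
     * decompression context must be told to accept them. */
    static void allow_long_mode(ZSTD_DCtx* dctx)
    {
        ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 27);
    }

    int main(void)
    {
        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        ZSTD_DCtx* const dctx = ZSTD_createDCtx();
        enable_long_mode(cctx);
        allow_long_mode(dctx);
        /* ... stream data through cctx/dctx with ZSTD_compressStream2() /
         *     ZSTD_decompressStream() as usual ... */
        ZSTD_freeCCtx(cctx);
        ZSTD_freeDCtx(dctx);
        return 0;
    }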
- Update to v1.3.1
  * License is now BSD + GPL-2.0
  * See https://github.com/facebook/zstd/releases for the complete changelog.
- Update to v1.1.4
  * See https://github.com/facebook/zstd/releases for details.
- Drop zstd-lib-no-rebuild.patch
- Fix group name for the shared library
- Update to version 1.1.1
  * New: cli commands -M#, --memory=, --memlimit=, --memlimit-decompress=
    to limit allowed memory consumption during decompression
  * Improved: slightly better compression ratio at --ultra levels
  * Improved: better memory usage when using the streaming compression API
  * Added: API: ZSTD_initCStream_usingCDict(), ZSTD_initDStream_usingDDict()
    (experimental section; see the sketch after this entry)
  * Changed: zstd_errors.h is now installed within /include (and replaces errors_public.h)
  * Fixed: several sanitizer warnings
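  A minimal round-trip sketch using the dictionary streaming entry points named
  above. They were introduced in the experimental section (hence
  ZSTD_STATIC_LINKING_ONLY) and are marked deprecated in current headers in
  favor of ZSTD_CCtx_refCDict()/ZSTD_DCtx_refDDict(). The inline dictionary
  buffer is a stand-in for one produced by `zstd --train`; error checks are
  omitted for brevity.

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char dictBuf[] = "stand-in dictionary content";   /* hypothetical dictionary */
        const char src[]     = "payload that resembles the dictionary";
        char cBuf[ZSTD_COMPRESSBOUND(sizeof src)];
        char dBuf[sizeof src];

        /* Digest the dictionary once, then reuse it across many streams. */
        ZSTD_CDict* const cdict = ZSTD_createCDict(dictBuf, sizeof dictBuf, 3);
        ZSTD_DDict* const ddict = ZSTD_createDDict(dictBuf, sizeof dictBuf);

        /* Compress with the pre-digested dictionary. */
        ZSTD_CStream* const zcs = ZSTD_createCStream();
        ZSTD_initCStream_usingCDict(zcs, cdict);
        ZSTD_inBuffer  cin  = { src, sizeof src, 0 };
        ZSTD_outBuffer cout = { cBuf, sizeof cBuf, 0 };
        ZSTD_compressStream(zcs, &cout, &cin);
        ZSTD_endStream(zcs, &cout);   /* small single frame: one call flushes it */

        /* Decompress with the matching digested dictionary. */
        ZSTD_DStream* const zds = ZSTD_createDStream();
        ZSTD_initDStream_usingDDict(zds, ddict);
        ZSTD_inBuffer  din  = { cBuf, cout.pos, 0 };
        ZSTD_outBuffer dout = { dBuf, sizeof dBuf, 0 };
        ZSTD_decompressStream(zds, &dout, &din);

        printf("round-trip %s\n",
               dout.pos == sizeof src && memcmp(src, dBuf, dout.pos) == 0 ? "ok" : "mismatch");

        ZSTD_freeCStream(zcs); ZSTD_freeDStream(zds);
        ZSTD_freeCDict(cdict); ZSTD_freeDDict(ddict);
        return 0;
    }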
- Update descriptions
- initial package version 1.1.0 based on https://pbrady.fedorapeople.org/zstd.spec