Package Release Info

openvino-2025.2.0-bp160.1.4

Update Info: Base Release
Available in Package Hub: 16.0

Platforms

AArch64
ppc64le
s390x
x86-64

Subpackages

libopenvino2520
libopenvino_c2520
libopenvino_ir_frontend2520
libopenvino_onnx_frontend2520
libopenvino_paddle_frontend2520
libopenvino_pytorch_frontend2520
libopenvino_tensorflow_frontend2520
libopenvino_tensorflow_lite_frontend2520
openvino-auto-batch-plugin
openvino-auto-plugin
openvino-devel
openvino-hetero-plugin
openvino-intel-cpu-plugin
openvino-intel-npu-plugin
openvino-sample
python3-openvino

Change Logs

* Wed Jun 25 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- openSUSE Leap 16.0 compatibility
* Tue Jun 24 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Remove openvino-gcc5-compatibility.patch file
* Tue Jun 24 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Update to 2025.2.0
  Summary of major features and improvements
- More GenAI coverage and framework integrations to minimize code
  changes
  * New models supported on CPUs & GPUs: Phi-4,
    Mistral-7B-Instruct-v0.3, SD-XL Inpainting 0.1, Stable
    Diffusion 3.5 Large Turbo, Phi-4-reasoning, Qwen3, and
    Qwen2.5-VL-3B-Instruct. Mistral 7B Instruct v0.3 is also
    supported on NPUs.
  * Preview: OpenVINO™ GenAI introduces a text-to-speech
    pipeline for the SpeechT5 TTS model, while the new RAG
    backend offers developers a simplified API that delivers
    reduced memory usage and improved performance.
  * Preview: OpenVINO™ GenAI offers a GGUF Reader for seamless
    integration of llama.cpp based LLMs, with Python and C++
    pipelines that load GGUF models, build OpenVINO graphs,
    and run GPU inference on-the-fly. Validated for popular models:
    DeepSeek-R1-Distill-Qwen (1.5B, 7B), Qwen2.5 Instruct
    (1.5B, 3B, 7B) & Llama-3.2 Instruct (1B, 3B, 8B).
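  A minimal Python sketch of the GGUF flow described above,
  assuming the preview GGUF Reader lets openvino_genai.LLMPipeline
  load a .gguf file directly (the file name is illustrative):

    import openvino_genai

    # Illustrative file name; any of the validated models above
    # (DeepSeek-R1-Distill-Qwen, Qwen2.5 Instruct, Llama-3.2
    # Instruct) should follow the same pattern.
    pipe = openvino_genai.LLMPipeline("qwen2.5-1.5b-instruct-q4_0.gguf", "GPU")
    print(pipe.generate("What is OpenVINO?", max_new_tokens=100))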
- Broader LLM model support and more model compression
  techniques
  * Further optimization of LoRA adapters in OpenVINO GenAI
    for improved LLM, VLM, and text-to-image model performance
    on built-in GPUs. Developers can use LoRA adapters to
    quickly customize models for specialized tasks (see the
    sketch after this list).
  * KV cache compression for CPUs is enabled by default for
    INT8, providing a reduced memory footprint while maintaining
    accuracy compared to FP16. Additionally, it delivers
    substantial memory savings for LLMs with INT4 support compared
    to INT8.
  * Optimizations for Intel® Core™ Ultra Processor Series 2
    built-in GPUs and Intel® Arc™ B Series Graphics with the
    Intel® XMX systolic platform to enhance the performance of
    VLM models and hybrid quantized image generation models, as
    well as improve first-token latency for LLMs through dynamic
    quantization.
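  As an illustration of the LoRA adapter flow above, a minimal
  Python sketch assuming the openvino_genai Adapter/AdapterConfig
  API; the model directory and adapter file names are illustrative:

    import openvino_genai

    # Illustrative paths: an exported OpenVINO model directory and
    # a LoRA adapter in safetensors format.
    adapter = openvino_genai.Adapter("lora_weights.safetensors")
    pipe = openvino_genai.LLMPipeline(
        "model_dir", "GPU", adapters=openvino_genai.AdapterConfig(adapter))
    print(pipe.generate("Write a haiku about GPUs.", max_new_tokens=50))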
- More portability and performance to run AI at the edge, in the
  cloud, or locally.
  * Enhanced Linux* support with the latest GPU driver for
    built-in GPUs on Intel® Core™ Ultra Processor Series 2
    (formerly codenamed Arrow Lake H).
  * Support for INT4 data-free weights compression for ONNX
    models implemented in the Neural Network Compression
    Framework (NNCF).
  * NPU support for FP16-NF4 precision on Intel® Core™ Ultra
    200V Series processors for models with up to 8B parameters is
    enabled through symmetrical and channel-wise quantization,
    improving accuracy while maintaining performance efficiency.
  Support Change and Deprecation Notices
- Discontinued in 2025:
  * Runtime components:
    + The OpenVINO Affinity property API is no longer
    available. It has been replaced with CPU binding
    configurations (ov::hint::enable_cpu_pinning); see the
    sketch after this list.
    + The openvino-nightly PyPI module has been discontinued.
    End users should switch to the Simple PyPI nightly repo
    instead. More information is available in the Release Policy.
  * Tools:
    + The OpenVINO™ Development Tools package (pip install
    openvino-dev) is no longer available for OpenVINO releases
    in 2025.
    + Model Optimizer is no longer available. Consider using the
    new conversion methods instead. For more details, see the
    model conversion transition guide.
    + Intel® Streaming SIMD Extensions (Intel® SSE) are currently
    not enabled in the binary package by default. They are
    still supported in source code form.
    + Legacy prefixes l_, w_, and m_ have been removed from
    OpenVINO archive names.
  * OpenVINO GenAI:
    + StreamerBase::put(int64_t token) has been discontinued.
    + Returning a bool value from the callback streamer is no
    longer accepted; the callback must now return one of the
    three values of the StreamingStatus enum.
    + ChunkStreamerBase is deprecated. Use StreamerBase instead.
  * NNCF create_compressed_model() method is now deprecated.
    nncf.quantize() method is recommended for
    Quantization-Aware Training of PyTorch and TensorFlow models.
  * The OpenVINO Model Server (OVMS) benchmark client in C++
    using the TensorFlow Serving API has been discontinued.
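  For the Affinity-to-CPU-pinning change above, a minimal Python
  sketch (the model file name is illustrative):

    import openvino as ov
    import openvino.properties.hint as hints

    core = ov.Core()
    model = core.read_model("model.xml")  # illustrative file name
    # Explicit CPU pinning hint, replacing the removed Affinity property.
    compiled = core.compile_model(model, "CPU",
                                  {hints.enable_cpu_pinning: True})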
- Deprecated and to be removed in the future:
  * Python 3.9 is now deprecated and will be unavailable after
    OpenVINO version 2025.4.
  * openvino.Type.undefined is now deprecated and will be removed
    with version 2026.0. openvino.Type.dynamic should be used
    instead.
  * APT & YUM Repositories Restructure: Starting with release
    2025.1, users can switch to the new repository structure
    for APT and YUM, which no longer uses year-based
    subdirectories (like “2025”). The old (legacy) structure
    will still be available until 2026, when the change will
    be finalized. Detailed instructions are available on the
    relevant documentation pages:
    + Installation guide - yum
    + Installation guide - apt
  * OpenCV binaries will be removed from Docker images in 2026.
  * Ubuntu 20.04 support will be deprecated in future OpenVINO
    releases due to the end of standard support.
  * “auto shape” and “auto batch size” (reshaping a model in
    runtime) will be removed in the future. OpenVINO’s dynamic
    shape models are recommended instead.
  * macOS x86 is no longer recommended for use due to the
    discontinuation of validation. Full support will be removed
    later in 2025.
  * The openvino namespace of the OpenVINO Python API has been
    redesigned, removing the nested openvino.runtime module.
    The old namespace is now considered deprecated and will be
    discontinued in 2026.0.
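  A minimal illustration of the two Python API migrations noted
  above (the flat namespace and openvino.Type.dynamic):

    # Deprecated spellings, scheduled for removal:
    #   from openvino.runtime import Core
    #   openvino.Type.undefined
    # Current spellings:
    import openvino as ov

    core = ov.Core()
    dynamic_type = ov.Type.dynamic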
* Wed May 21 2025 Andreas Schwab <schwab@suse.de>
- Fix file list for riscv64
* Mon May 05 2025 Dominique Leuenberger <dimstar@opensuse.org>
- Do not force GCC15 on Tumbleweed just yet: follow the distro
  default compiler, like any other package.
* Sat May 03 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Added openvino-gcc5-compatibility.patch to resolve an
  incompatibility in gcc5
* Thu May 01 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Added gcc-14
* Mon Apr 14 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Update to 2025.1.0
- Downgrade from gcc13-c++ to gcc12 due to an incompatibility
  when compiling TBB: C++ libraries (using libstdc++) failed
  with the error: libtbb.so.12: undefined reference to
  `__cxa_call_terminate@CXXABI_1.3.15'
- More GenAI coverage and framework integrations to minimize code
  changes
  * New models supported: Phi-4 Mini, Jina CLIP v1, and Bce
    Embedding Base v1.
  * OpenVINO™ Model Server now supports VLM models, including
    Qwen2-VL, Phi-3.5-Vision, and InternVL2.
  * OpenVINO GenAI now includes image-to-image and inpainting
    features for transformer-based pipelines, such as Flux.1 and
    Stable Diffusion 3 models, enhancing their ability to generate
    more realistic content (see the sketch at the end of this
    entry).
  * Preview: AI Playground now utilizes the OpenVINO GenAI backend
    to enable highly optimized inferencing performance on AI PCs.
- Broader LLM model support and more model compression techniques
  * Reduced binary size through optimization of the CPU plugin and
    removal of the GEMM kernel.
  * Optimization of new kernels for the GPU plugin significantly
    boosts the performance of Long Short-Term Memory (LSTM) models,
    used in many applications, including speech recognition,
    language modeling, and time series forecasting.
  * Preview: Token Eviction implemented in OpenVINO GenAI to reduce
    the memory consumption of KV Cache by eliminating unimportant
    tokens. The current Token Eviction implementation is
    beneficial for tasks where a long sequence is generated, such
    as chatbots and code generation.
  * NPU acceleration for text generation is now enabled in
    OpenVINO™ Runtime and OpenVINO™ Model Server to support the
    power-efficient deployment of VLM models on NPUs for AI PC use
    cases with low concurrency.
- More portability and performance to run AI at the edge, in the
  cloud, or locally.
  * Additional LLM performance optimizations on Intel® Core™ Ultra
    200H series processors for improved 2nd token latency on
    Windows and Linux.
  * Enhanced performance and efficient resource utilization with
    the implementation of Paged Attention and Continuous Batching
    by default in the GPU plugin.
  * Preview: The new OpenVINO backend for Executorch will enable
    accelerated inference and improved performance on Intel
    hardware, including CPUs, GPUs, and NPUs.
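  For the image-to-image and inpainting item above, a minimal
  Python sketch assuming the openvino_genai.InpaintingPipeline
  preview API; the model directory and dummy images are
  illustrative:

    import numpy as np
    import openvino as ov
    import openvino_genai

    # Illustrative model directory; dummy 512x512 RGB buffers stand
    # in for real images, with white mask pixels marking the region
    # to repaint.
    pipe = openvino_genai.InpaintingPipeline("stable-diffusion-3-dir", "GPU")
    image = ov.Tensor(np.zeros((1, 512, 512, 3), dtype=np.uint8))
    mask = ov.Tensor(np.full((1, 512, 512, 3), 255, dtype=np.uint8))
    result = pipe.generate("a red sports car", image, mask)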
* Tue Mar 04 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Disabled JAX plugin beta.
* Sun Feb 09 2025 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Update to 2025.0.0
- More GenAI coverage and framework integrations to minimize code
  changes
  * New models supported: Qwen 2.5, Deepseek-R1-Distill-Llama-8B,
    DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Qwen-1.5B,
    FLUX.1 Schnell and FLUX.1 Dev
  * Whisper Model: Improved performance on CPUs, built-in GPUs,
    and discrete GPUs with GenAI API.
  * Preview: Introducing NPU support for torch.compile, giving
    developers the ability to use the OpenVINO backend to run the
    PyTorch API on NPUs. 300+ deep learning models are enabled from
    the TorchVision, Timm, and TorchBench repositories.
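  For the torch.compile preview above, a minimal Python sketch; the
  tiny Linear model stands in for a real TorchVision/Timm model:

    import torch

    model = torch.nn.Linear(8, 2)  # placeholder for a real model
    # Route the compiled graph through OpenVINO and target the NPU.
    compiled = torch.compile(model, backend="openvino",
                             options={"device": "NPU"})
    print(compiled(torch.randn(1, 8)))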
- Broader Large Language Model (LLM) support and more model
  compression techniques.
  * Preview: Addition of Prompt Lookup to GenAI API improves 2nd
    token latency for LLMs by effectively utilizing predefined
    prompts that match the intended use case.
  * Preview: The GenAI API now offers image-to-image inpainting
    functionality. This feature enables models to generate
    realistic content by inpainting specified modifications and
    seamlessly integrating them with the original image.
  * Asymmetric KV Cache compression is now enabled for INT8 on
    CPUs, resulting in lower memory consumption and improved 2nd
    token latency, especially when dealing with long prompts that
    require significant memory. The option should be explicitly
    specified by the user.
- More portability and performance to run AI at the edge, in the
  cloud, or locally.
  * Support for the latest Intel® Core™ Ultra 200H series
    processors (formerly codenamed Arrow Lake-H)
  * Integration of the OpenVINO™ backend with the Triton
    Inference Server allows developers to utilize the Triton
    server for enhanced model serving performance when deploying
    on Intel CPUs.
  * Preview: A new OpenVINO™ backend integration allows
    developers to leverage OpenVINO performance optimizations
    directly within Keras 3 workflows for faster AI inference on
    CPUs, built-in GPUs, discrete GPUs, and NPUs. This feature is
    available with the latest Keras 3.8 release (see the sketch
    after this list).
  * The OpenVINO Model Server now supports native Windows Server
    deployments, allowing developers to leverage better
    performance by eliminating container overhead and simplifying
    GPU deployment.
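  For the Keras 3 integration above, a minimal Python sketch; note
  that the backend must be selected before keras is imported:

    import os
    os.environ["KERAS_BACKEND"] = "openvino"  # set before importing keras

    import keras
    import numpy as np

    model = keras.Sequential([keras.Input(shape=(4,)),
                              keras.layers.Dense(2)])
    # The OpenVINO backend is inference-only: predict() works,
    # training does not.
    print(model.predict(np.zeros((1, 4), dtype="float32")))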
- Support Change and Deprecation Notices
  * Now deprecated:
    + Legacy prefixes l_, w_, and m_ have been removed from
    OpenVINO archive names.
    + The runtime namespace for the Python API has been marked
    as deprecated and is designated for removal in 2026.0. The
    new namespace structure has been delivered, and migration
    is possible immediately. Details will be communicated
    through warnings and via documentation.
    + NNCF create_compressed_model() method is deprecated.
    nncf.quantize() method is now recommended for
    Quantization-Aware Training of PyTorch and
    TensorFlow models.
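  For the NNCF migration noted here and in the 2025.2 entry, a
  minimal Python sketch of the recommended nncf.quantize() flow;
  the model file name and calibration data are illustrative:

    import numpy as np
    import nncf
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")  # illustrative file name

    # Dummy calibration data; use real preprocessed samples whose
    # shape matches the model input in practice.
    samples = [np.zeros((1, 3, 224, 224), dtype=np.float32)
               for _ in range(8)]
    quantized = nncf.quantize(model, nncf.Dataset(samples))
    ov.save_model(quantized, "model_int8.xml")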