Package Release Info

ollama-0.3.13-bp160.1.34

Update Info: Base Release
Available in Package Hub: 16.0

platforms

AArch64
ppc64le
s390x
x86-64

subpackages

ollama

Change Logs

* Sat Oct 12 2024 eyadlorenzo@gmail.com
- Update to version 0.3.13:
  * New safety models (see the sketch after this entry):
    ~ Llama Guard 3: a series of models by Meta, fine-tuned for
    content safety classification of LLM inputs and responses.
    ~ ShieldGemma: a set of instruction-tuned models from Google
    DeepMind for evaluating the safety of text prompt inputs and
    text output responses against a set of defined safety
    policies.
  * Fixed issue where ollama pull would leave connections open
    when encountering an error
  * ollama rm will now stop a model if it is running prior to
    deleting it
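  A minimal illustration of using a safety model, assuming the
  requests library, a local server on the default 127.0.0.1:11434,
  and that llama-guard3 has been pulled; the exact verdict format
  depends on the model:

    import requests

    # Ask Llama Guard 3 to classify a user prompt via /api/chat.
    resp = requests.post(
        "http://127.0.0.1:11434/api/chat",
        json={
            "model": "llama-guard3",
            "messages": [{"role": "user", "content": "How do I bake a cake?"}],
            "stream": False,
        },
    ).json()

    # Llama Guard replies with a verdict such as "safe", or "unsafe"
    # followed by a hazard category code.
    print(resp["message"]["content"])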
* Sat Sep 28 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Update to version 0.3.12:
  * Llama 3.2: Meta's Llama 3.2 goes small with 1B and 3B
    models.
  * Qwen 2.5 Coder: The latest series of Code-Specific Qwen
    models, with significant improvements in code generation,
    code reasoning, and code fixing.
  * Ollama now supports ARM Windows machines
  * Fixed rare issue where Ollama would report a missing .dll
    file on Windows
  * Fixed performance issue on Windows machines without GPUs
* Fri Sep 20 2024 adrian@suse.de
- Update to version 0.3.11:
  * llm: add solar pro (preview) (#6846)
  * server: add tool parsing support for nemotron-mini (#6849)
  * make patches git am-able
  * CI: dist directories no longer present (#6834)
  * CI: clean up naming, fix tagging latest (#6832)
  * CI: set platform build build_linux script to keep buildx happy (#6829)
  * readme: add Agents-Flex to community integrations (#6788)
  * fix typo in import docs (#6828)
  * readme: add vim-intelligence-bridge to Terminal section (#6818)
  * readme: add Obsidian Quiz Generator plugin to community integrations (#6789)
  * Fix incremental builds on linux (#6780)
  * Use GOARCH for build dirs (#6779)
  * Optimize container images for startup (#6547)
  * examples: updated requirements.txt for privategpt example
  * examples: polish loganalyzer example (#6744)
  * readme: add ollama_moe to community integrations (#6752)
  * runner: Flush pending responses before returning
  * add "stop" command (#6739)
  * refactor show output
  * readme: add QodeAssist to community integrations (#6754)
  * Verify permissions for AMD GPU (#6736)
  * add *_proxy for debugging
  * docs: update examples to use llama3.1 (#6718)
  * Quiet down docker's new lint warnings (#6716)
  * catch when model vocab size is set correctly (#6714)
  * readme: add crewAI to community integrations (#6699)
  * readme: add crewAI with mesop to community integrations
* Tue Sep 17 2024 adrian@suse.de
- Update to version 0.3.10:
  * openai: align chat temperature and frequency_penalty options with completion (#6688)
  * docs: improve linux install documentation (#6683)
  * openai: don't scale temperature or frequency_penalty (#6514)
  * readme: add Archyve to community integrations (#6680)
  * readme: add Plasmoid Ollama Control to community integrations (#6681)
  * Improve logging on GPU too small (#6666)
  * openai: fix "presence_penalty" typo and add test (#6665)
  * Fix gemma2 2b conversion (#6645)
  * Document uninstall on windows (#6663)
  * Revert "Detect running in a container (#6495)" (#6662)
  * llm: make load time stall duration configurable via OLLAMA_LOAD_TIMEOUT (see the sketch after this list)
  * Introduce GPU Overhead env var (#5922)
  * Detect running in a container (#6495)
  * readme: add AiLama to the list of community integrations (#4957)
  * Update gpu.md: Add RTX 3050 Ti and RTX 3050 Ti (#5888)
  * server: fix blob download when receiving a 200 response (#6656)
  * readme: add Gentoo package manager entry to community integrations (#5714)
  * Update install.sh: Replace "command -v" with encapsulated functionality (#6035)
  * readme: include Enchanted for Apple Vision Pro (#4949)
  * readme: add lsp-ai to community integrations (#5063)
  * readme: add ollama-php library to community integrations (#6361)
  * readme: add vnc-lm discord bot community integration (#6644)
  * llm: use json.hpp from common (#6642)
  * readme: add confichat to community integrations (#6378)
  * docs: add group to manual Linux instructions and verify service is running (#6430)
  * readme: add gollm to the list of community libraries (#6099)
  * readme: add Cherry Studio to community integrations (#6633)
  * readme: add Go fun package (#6421)
  * docs: fix spelling error (#6391)
  * install.sh: update instructions to use WSL2 (#6450)
  * readme: add claude-dev to community integrations (#6630)
  * readme: add PyOllaMx project (#6624)
  * llm: update llama.cpp commit to 8962422 (#6618)
  * Use cuda v11 for driver 525 and older (#6620)
  * Log system memory at info (#6617)
  * readme: add Painting Droid community integration (#5514)
  * readme: update Ollama4j link and add link to Ollama4j Web UI (#6608)
  * Fix sprintf to snprintf (#5664)
  * readme: add PartCAD tool to readme for generating 3D CAD models using Ollama (#6605)
  * Reduce docker image size (#5847)
  * readme: add OllamaFarm project (#6508)
  * readme: add go-crew and Ollamaclient projects (#6583)
  * docs: update faq.md for OLLAMA_MODELS env var permissions (#6587)
  * fix(cmd): show info may have nil ModelInfo (#6579)
  * docs: update GGUF examples and references (#6577)
  * Add findutils to base images (#6581)
  * remove any unneeded build artifacts
  * doc: Add Nix and Flox to package manager listing (#6074)
  * update the openai docs to explain how to set the context size (#6548)
  * fix(test): do not clobber models directory
  * add llama3.1 chat template (#6545)
  * update deprecated warnings
  * validate model path
  * throw an error when encountering unsupported tensor sizes (#6538)
  * Move ollama executable out of bin dir (#6535)
  * update templates to use messages
  * more tokenizer tests
  * add safetensors to the modelfile docs (#6532)
  * Fix import image width (#6528)
  * Update manual instructions with discrete ROCm bundle (#6445)
  * llm: fix typo in comment (#6530)
  * adjust image sizes
  * clean up convert tokenizer
  * detect chat template from configs that contain lists
  * update the import docs (#6104)
  * server: clean up route names for consistency (#6524)
  * Only enable numa on CPUs (#6484)
  * gpu: Group GPU Library sets by variant (#6483)
  * update faq
  * passthrough OLLAMA_HOST path to client
  * convert safetensor adapters into GGUF (#6327)
  * gpu: Ensure driver version set before variant (#6480)
  * llm: Align cmake define for cuda no peer copy (#6455)
  * Fix embeddings memory corruption (#6467)
  * llama3.1
  * convert gemma2
  * create bert models from cli
  * bert
  * Split rocm back out of bundle (#6432)
  * CI: remove directories from dist dir before upload step (#6429)
  * CI: handle directories during checksum (#6427)
  * Fix overlapping artifact name on CI
  * Review comments
  * Adjust layout to bin+lib/ollama
  * Remove Jetpack
  * Add windows cuda v12 + v11 support
  * Enable cuda v12 flags
  * Add cuda v12 variant and selection logic
  * Report GPU variant in log
  * Add Jetson cuda variants for arm
  * Wire up ccache and pigz in the docker based build
  * Refactor linux packaging
  * server: limit upload parts to 16 (#6411)
  * Fix white space.
  * Reset NumCtx.
  * Override numParallel only if unset.
  * fix: chmod new layer to 0o644 when creating it
  * fix: Add tooltip to system tray icon
  * only skip invalid json manifests
  * skip invalid manifest files
  * fix noprune
  * add `CONTRIBUTING.md` (#6349)
  * Fix typo and improve readability (#5964)
  * server: reduce max connections used in download (#6347)
  * update chatml template format to latest in docs (#6344)
  * lint
  * Update openai.md to remove extra checkbox (#6345)
  * llama3.1 memory
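  The OLLAMA_LOAD_TIMEOUT and GPU overhead items above are server
  tuning knobs. A minimal sketch of setting them from Python,
  assuming the variable names OLLAMA_LOAD_TIMEOUT (a duration
  string) and OLLAMA_GPU_OVERHEAD (bytes of VRAM to reserve); the
  values are illustrative, not defaults:

    import os
    import subprocess

    env = dict(os.environ)
    env["OLLAMA_LOAD_TIMEOUT"] = "10m"  # tolerate longer model-load stalls
    env["OLLAMA_GPU_OVERHEAD"] = str(512 * 1024 * 1024)  # reserve 512 MiB per GPU

    # Start the server with the tuning knobs applied.
    subprocess.run(["ollama", "serve"], env=env)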
* Thu Aug 15 2024 Eyad Issa <eyadlorenzo@gmail.com>
- Update to version 0.3.6:
  * Fixed issue where /api/embed would return an error instead of
    loading the model when the input field was not provided.
  * ollama create can now import Phi-3 models from Safetensors
  * Added progress information to ollama create when importing GGUF
    files
  * Ollama will now import GGUF files faster by minimizing file
    copies
- Update to version 0.3.5:
  * Fixed issue where temporary files would not be cleaned up
  * Fixed rare error when Ollama would start up due to invalid
    model data
* Sun Aug 11 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Update to version 0.3.4:
  * New embedding models
  - BGE-M3: a large embedding model from BAAI distinguished for
    its versatility in Multi-Functionality, Multi-Linguality, and
    Multi-Granularity.
  - BGE-Large: a large embedding model trained in English.
  - Paraphrase-Multilingual: A multilingual embedding model
    trained on parallel data for 50+ languages.
  * New embedding API with batch support
  - Ollama now supports a new API endpoint /api/embed for
    embedding generation:
  * This API endpoint supports new features:
  - Batches: generate embeddings for several documents in
    one request
  - Normalized embeddings: embeddings are now normalized,
    improving similarity results
  - Truncation: a new truncate parameter; when set to false, the
    request errors instead of truncating inputs that exceed the
    context length
  - Metrics: responses include load_duration, total_duration and
    prompt_eval_count metrics
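  A minimal request sketch for the batch features above, assuming
  the requests library, a local server on the default port, and an
  illustrative embedding model name:

    import requests

    resp = requests.post(
        "http://127.0.0.1:11434/api/embed",
        json={
            "model": "bge-m3",
            # "input" accepts a single string or a list of documents
            "input": ["first document", "second document"],
        },
    ).json()

    print(len(resp["embeddings"]))  # one normalized vector per input
    # Metrics described above:
    print(resp["total_duration"], resp["load_duration"], resp["prompt_eval_count"])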
* Sat Aug 03 2024 eyadlorenzo@gmail.com
- Update to version 0.3.3:
  * The /api/embed endpoint now returns statistics: total_duration,
    load_duration, and prompt_eval_count
  * Added usage metrics to the /v1/embeddings OpenAI compatibility
    API (see the sketch below)
  * Fixed issue where /api/generate would respond with an empty
    string if provided a context
  * Fixed issue where /api/generate would return an incorrect
    value for context
  * /show modelfile will now render MESSAGE commands correctly
- Update to version 0.3.2:
  * Fixed issue where ollama pull would not resume download
    progress
  * Fixed issue where phi3 would report an error on older versions
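  A minimal sketch of reading the usage metrics mentioned above
  from the OpenAI-compatible endpoint (model name and server
  address are assumptions):

    import requests

    resp = requests.post(
        "http://127.0.0.1:11434/v1/embeddings",
        json={"model": "bge-m3", "input": "hello world"},
    ).json()

    # Usage metrics, e.g. prompt_tokens and total_tokens.
    print(resp["usage"])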
* Tue Jul 30 2024 Adrian Schröter <adrian@suse.de>
- Update to version 0.3.1:
  * Added support for min_p sampling option (see the sketch after
    this entry)
  * Lowered number of requests required when downloading models
    with ollama pull
  * ollama create will now autodetect required stop parameters
    when importing certain models
  * Fixed issue where /save would cause parameters to be saved
    incorrectly.
  * OpenAI-compatible API will now return a finish_reason of
    tool_calls if a tool call occurred.
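  A minimal sketch of passing the new min_p option through
  /api/generate; the model name and threshold are illustrative:

    import requests

    resp = requests.post(
        "http://127.0.0.1:11434/api/generate",
        json={
            "model": "llama3.1",
            "prompt": "Why is the sky blue?",
            # keep tokens with at least 5% of the top token's probability
            "options": {"min_p": 0.05},
            "stream": False,
        },
    ).json()

    print(resp["response"])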
* Mon Jul 29 2024 Adrian Schröter <adrian@suse.de>
- fix build on leap 15.6
- exclude builds on 32bit due to build failures
* Sun Jul 28 2024 Eyad Issa <eyadlorenzo@gmail.com>
- Update to version 0.3.0:
  * Ollama now supports tool calling with popular models such
    as Llama 3.1. This enables a model to answer a given prompt
    using tool(s) it knows about, making it possible for models
    to perform more complex tasks or interact with the outside
    world (see the sketch below).
  * New models:
    ~ Llama 3.1
    ~ Mistral Large 2
    ~ Firefunction v2
    ~ Llama-3-Groq-Tool-Use
  * Fixed duplicate error message when running ollama create
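  A minimal tool-calling sketch against /api/chat; the tool
  definition, model name, and server address are illustrative, and
  the schema follows the OpenAI-style function format that Ollama
  accepts:

    import requests

    # A hypothetical tool the model may choose to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    resp = requests.post(
        "http://127.0.0.1:11434/api/chat",
        json={
            "model": "llama3.1",
            "messages": [{"role": "user", "content": "What is the weather in Oslo?"}],
            "tools": tools,
            "stream": False,
        },
    ).json()

    # If the model chose to call a tool, the structured calls appear
    # here instead of plain text content.
    print(resp["message"].get("tool_calls"))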