Releases: containers/ramalama
v0.3.0
What's Changed
- Move man page README.md to full links by @rhatdan in #483
- Allow users to set ctx-size via command line by @rhatdan in #484
- Add --seed option by @rhatdan in #485
- Update install.sh by @jtligon in #493
- Take in fixes from @Churchyard to modernize spec file by @smooge in #494
- Fix up building and pushing OCI Images by @rhatdan in #492
- Fix handling of file_not_found errors by @rhatdan in #499
- Updated nv docs to align with latest WSL2 cuda setup by @bmahabirbu in #503
- Add ramalama convert command by @rhatdan in #500
- Stop checking if command is running in container by @rhatdan in #505
- Add initial CONTRIBUTING.md file by @rhatdan in #507
- Place image name just before command by @ericcurtin in #511
- Simplify install by @ericcurtin in #510
- Fix handling of README.md in docs directory by @rhatdan in #512
- Add installation steps for Podman 5 in CI workflows by @ericcurtin in #508
- Bump to v0.3.0 by @rhatdan in #513
Full Changelog: v0.2.0...v0.3.0
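The new runtime flags above can be exercised from the command line; a minimal sketch, assuming a model alias such as `granite` is available locally (the alias and the target registry path below are illustrative placeholders, not taken from the release notes):

```shell
# Set the context window and sampling seed for a run (flags added in #484 and #485):
ramalama run --ctx-size 4096 --seed 42 granite

# Convert a previously pulled model into an OCI image (command added in #500);
# the destination reference is a placeholder:
ramalama convert granite oci://quay.io/example/granite:latest
```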
v0.2.0
v0.1.3
What's Changed
- Enable GCC Toolset 12 to support AVX VNNI by @nzwulfin in #473
- Failover to OCI when push fails with default push mechanism by @rhatdan in #476
- Fall back to huggingface-cli when pulling via URL fails by @rhatdan in #475
- Revert "Switch to llama-simple-chat" by @rhatdan in #477
- Add support for http, https and file pulls by @rhatdan in #463
- Bump to v0.1.3 by @rhatdan in #479
Full Changelog: v0.1.2...v0.1.3
v0.1.2
What's Changed
- Bump to v0.1.1 by @rhatdan in #450
- Update ggerganov/whisper.cpp digest to f19463e by @renovate in #453
- Switch to llama-simple-chat by @ericcurtin in #454
- Simplify container image build by @ericcurtin in #451
- Update ggerganov/whisper.cpp digest to 83ac284 by @renovate in #455
- cli.py: remove errant slash preventing the loading of user conf file(s) by @FNGarvin in #457
- Update ggerganov/whisper.cpp digest to f02b40b by @renovate in #456
- Switched DGGML_CUDA to ON in cuda containerfile by @bmahabirbu in #459
- Update ggerganov/whisper.cpp digest to bb12cd9 by @renovate in #460
- Update ggerganov/whisper.cpp digest to 01d3bd7 by @renovate in #461
- Update ggerganov/whisper.cpp digest to d24f981 by @renovate in #462
- Docu by @atarlov in #464
- Update ggerganov/whisper.cpp digest to 6266a9f by @renovate in #466
- Fix handling of ramalama login huggingface by @rhatdan in #467
- Support huggingface-cli older than 0.25.0, like on Fedora 40 and 41 by @debarshiray in #468
- Bump to v0.1.2 by @rhatdan in #470
New Contributors
- @FNGarvin made their first contribution in #457
- @atarlov made their first contribution in #464
- @debarshiray made their first contribution in #468
Full Changelog: v0.1.1...v0.1.2
v0.1.1
Full Changelog: v0.1.0...v0.1.1
Mainly to fix an issue on PyPI
v0.1.0
What's Changed
- We can now run models via Kompute in podman-machine by @ericcurtin in #440
- Only do dnf install for cuda images by @ericcurtin in #441
- Add --host=0.0.0.0 if running llama.cpp serve within a container by @rhatdan in #444
- Document the host flag in ramalama.conf file by @rhatdan in #447
- Add granite-8b to shortnames.conf by @rhatdan in #448
- Fix RamaLama container image build by @ericcurtin in #446
- Bump to v0.1.0 by @rhatdan in #449
Full Changelog: v0.0.23...v0.1.0
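The host flag referenced in #444 and #447 lives in the ramalama.conf file; a minimal sketch of the relevant stanza (the section and key names are inferred from the PR titles, so treat this as an assumption rather than the definitive schema):

```toml
[ramalama]
# Bind the llama.cpp server to all interfaces when serving from inside a
# container, mirroring the --host=0.0.0.0 default added in #444.
host = "0.0.0.0"
```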
v0.0.23
What's Changed
- Remove omlmd as a dependency by @ericcurtin in #428
- Check versions match in CI by @ericcurtin in #427
- Fix podman run oci://... by @rhatdan in #429
- Attempt to remove OCI Image if removing as Ollama or Huggingface fails by @rhatdan in #432
- Run does not have generate, so remove it by @rhatdan in #434
- Run the command by default without stderr by @rhatdan in #436
- Closing stderr on podman command is blocking progress information and… by @rhatdan in #438
- Make it easier to test-run manually by @rhatdan in #435
- Install llama-cpp-python[server] by @ericcurtin in #430
Full Changelog: v0.0.22...v0.0.23
v0.0.22
What's Changed
- Bump to v0.0.21 by @rhatdan in #410
- Update ggerganov/whisper.cpp digest to 0377596 by @renovate in #409
- Use subpath for OCI Models by @rhatdan in #411
- Consistency changes by @ericcurtin in #408
- Split out kube.py from model.py by @rhatdan in #412
- Fix mounting of Ollama AI Images into containers. by @rhatdan in #414
- Start an Asahi version by @ericcurtin in #369
- Generate MODEL.yaml file locally rather than just to stdout by @rhatdan in #416
- Bugfix comma by @ericcurtin in #421
- Fix nocontainer mode by @rhatdan in #419
- Update ggerganov/whisper.cpp digest to 31aea56 by @renovate in #425
- Add --generate quadlet/kube to create quadlet and kube.yaml by @rhatdan in #423
- Allow default port to be specified in ramalama.conf file by @rhatdan in #424
- Made run and serve consistent with model exec path. Fixes issue #413 by @bmahabirbu in #426
- Bump to v0.0.22 by @rhatdan in #415
Full Changelog: v0.0.21...v0.0.22
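The `--generate quadlet` option added in #423 emits a systemd quadlet unit for the served model, and #424 makes the default port configurable in ramalama.conf. A rough sketch of the kind of unit it produces (the image name, exec line, and port here are illustrative guesses, not the exact generated output):

```ini
[Unit]
Description=RamaLama model service

[Container]
# Illustrative image and server invocation; the real generator fills these
# in from the model being served.
Image=quay.io/example/mymodel:latest
Exec=llama-server --host 0.0.0.0 --port 8080
PublishPort=8080:8080

[Install]
WantedBy=default.target
```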
v0.0.21
What's Changed
- Fix rpm build by @rhatdan in #350
- Add environment variables for checksums to ramalama container by @rhatdan in #355
- Change default container name for ROCm container image by @ericcurtin in #360
- Allow removal of models specified as shortnames by @rhatdan in #357
- Added a check to the zsh completions generation step by @ericcurtin in #356
- Add vulkan image and show size by @ericcurtin in #353
- Update ggerganov/whisper.cpp digest to 0fbaac9 by @renovate in #363
- Allow pushing of oci images by @rhatdan in #358
- Fix Makefile to be less stringent on failures of zsh by @smooge in #368
- Add support for --authfile and --tls-verify for login by @rhatdan in #364
- Fix incompatible Ollama paths by @swarajpande5 in #370
- Fix shortname paths by @swarajpande5 in #372
- Change to None instead of "" by @ericcurtin in #371
- Kompute build is warning it is missing this package by @ericcurtin in #366
- Add --debug option to show exec_cmd and run_cmd commands by @rhatdan in #373
- Add support for pushing a file into an OCI Model image by @rhatdan in #374
- Replace huggingface-cli download command with simple https client to pull models by @swarajpande5 in #375
- Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.4-1214.1729773476 by @renovate in #380
- Update ggerganov/whisper.cpp digest to c0ea41f by @renovate in #381
- Update ggerganov/whisper.cpp digest to fc49ee4 by @renovate in #382
- Update dependency huggingface/huggingface_hub to v0.26.2 by @renovate in #383
- Update dependency tqdm/tqdm to v4.66.6 - autoclosed by @renovate in #385
- Update ggerganov/whisper.cpp digest to 1626b73 by @renovate in #386
- Support listing and removing newly designed bundled images by @rhatdan in #378
- Fix default conman check by @rhatdan in #389
- Drop in config by @ericcurtin in #379
- Update ggerganov/whisper.cpp digest to 55e4221 by @renovate in #390
- Move run_container to model.py allowing models types to override by @rhatdan in #388
- Update ggerganov/whisper.cpp digest to 19dca2b by @renovate in #392
- Add man page information for ramalama.conf by @rhatdan in #391
- More debug info by @ericcurtin in #394
- Make transport use config by @rhatdan in #395
- Enable containers on macOS to use the GPU by @slp in #397
- chore(deps): update ggerganov/whisper.cpp digest to 4e10afb by @renovate in #398
- Time for removal of huggingface_hub dependency by @ericcurtin in #400
- Mount model.car volumes into container by @rhatdan in #396
- Remove huggingface-hub references from spec file by @ericcurtin in #401
- Packit: disable osh diff scan by @lsm5 in #403
- Make minimal change to allow for ramalama to build on EL9 by @smooge in #404
- reduced the size of the nvidia containerfile by @bmahabirbu in #407
- Move /run/model to /mnt/models to match k8s model.car definition by @rhatdan in #402
- Verify pyproject.py and setup.py have same version by @rhatdan in #405
- Make quadlets work with OCI images by @rhatdan in #406
Full Changelog: v0.0.20...v0.0.21
v0.0.20
What's Changed
- Add support for testing with docker by @rhatdan in #320
- chore(deps): update ggerganov/whisper.cpp digest to d3f7137 by @renovate in #321
- Make changes to spec file to better pass Fedora packaging guidelines by @smooge in #318
- Fix erroneous output in CUDA containerfile by @bmahabirbu in #322
- chore(deps): update ggerganov/whisper.cpp digest to a5abfe6 by @renovate in #323
- There's many cases where macOS support is broken by @ericcurtin in #325
- chore(deps): update dependency huggingface/huggingface_hub to v0.26.0 by @renovate in #328
- State Containerfile is available but not built and pushed by @ericcurtin in #329
- split build from validate in Makefile by @rhatdan in #326
- Remove duplicate GitHub Actions workflow runs in PRs by @p5 in #330
- Add hf:// as an alias to huggingface:// by @ericcurtin in #324
- Break up tests: build, bats, bats-nocontainer, docker, mac-nocontainer by @rhatdan in #331
- Update llama.cpp to fix granite3-moe models by @ericcurtin in #340
- Make sure we specify bash here by @ericcurtin in #337
- If installed in /usr/local, ramalama libs cannot be found by @ericcurtin in #333
- Kompute Containerfile by @ericcurtin in #334
- Add kubernetes.YAML support to ramalama serve by @rhatdan in #327
- Add AI Lab models to shortnames by @MichaelClifford in #345
- Build container images only on changes by @p5 in #332
- Make more spec changes to match RPM evaluation by @smooge in #347
- Free up space for docker tests. by @rhatdan in #343
- Allow the removal of more than one model via rm command by @rhatdan in #344
- Fix spelling mistakes in markdown by @rhatdan in #348
- Bump to v0.0.20 by @rhatdan in #349
New Contributors
- @smooge made their first contribution in #318
- @MichaelClifford made their first contribution in #345
Full Changelog: v0.0.19...v0.0.20
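The hf:// alias added in #324 shortens Hugging Face pulls; both forms below should be equivalent (the model path is an illustrative placeholder, not from the release notes):

```shell
# Long and short transport prefixes for the same model:
ramalama pull huggingface://example-org/example-model-GGUF
ramalama pull hf://example-org/example-model-GGUF
```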