Base Package:
mingw-w64-llama.cpp

Description:
Port of Facebook's LLaMA model in C/C++ (mingw-w64)
Group(s):
-
Repo:
ucrt64
Homepage:
https://github.com/ggerganov/llama.cpp
License(s):
MIT
Version:
r2859.7bd4ffb78-1 (b3091.r0.g2b33896-1 in git)
External:
Anitya: llama.cpp
AUR: b4053
Repology: llama.cpp

Installation:
pacman -S mingw-w64-ucrt-x86_64-llama.cpp
File:
https://mirror.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-llama.cpp-r2859.7bd4ffb78-1-any.pkg.tar.zst
SHA256:
b2545b730d733e3d9435fbe039806e4480716e9c619e0a1d30031a5b2cb5de40
Last Packager:
CI (msys2/msys2-autobuild/9c7e8d31/9047279075)
Build Date:
2024-05-12 00:09:52
Package Size:
3.25 MB
Installed Size:
29.44 MB
Source-Only Tarball:
https://mirror.msys2.org/mingw/sources/mingw-w64-llama.cpp-r2859.7bd4ffb78-1.src.tar.zst
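A downloaded package file can be checked against the SHA256 listed above before installing it manually. A minimal sketch using Python's hashlib; the local file path in the usage comment is an assumption (adjust it to wherever the .pkg.tar.zst was saved):

```python
import hashlib

# Digest listed on this page for the ucrt64 package file.
EXPECTED = "b2545b730d733e3d9435fbe039806e4480716e9c619e0a1d30031a5b2cb5de40"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical local path):
#   sha256_of("mingw-w64-ucrt-x86_64-llama.cpp-r2859.7bd4ffb78-1-any.pkg.tar.zst") == EXPECTED
```

On an MSYS2 shell the same check can be done with `sha256sum` against the downloaded file.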

Dependencies:
Optional Dependencies:
-
Build Dependencies:
Check Dependencies:
-
Provides:
-
Conflicts:
-
Replaces:
-

Provided By:
-
Required By:
-

Files:
/ucrt64/bin/libggml_shared.dll
/ucrt64/bin/libllama.dll
/ucrt64/bin/libllava_shared.dll
/ucrt64/bin/llama.cpp-baby-llama.exe
/ucrt64/bin/llama.cpp-batched-bench.exe
/ucrt64/bin/llama.cpp-batched.exe
/ucrt64/bin/llama.cpp-beam-search.exe
/ucrt64/bin/llama.cpp-benchmark.exe
/ucrt64/bin/llama.cpp-convert-llama2c-to-ggml.exe
/ucrt64/bin/llama.cpp-convert-lora-to-ggml.py
/ucrt64/bin/llama.cpp-convert.py
/ucrt64/bin/llama.cpp-embedding.exe
/ucrt64/bin/llama.cpp-eval-callback.exe
/ucrt64/bin/llama.cpp-export-lora.exe
/ucrt64/bin/llama.cpp-finetune.exe
/ucrt64/bin/llama.cpp-gguf-split.exe
/ucrt64/bin/llama.cpp-gguf.exe
/ucrt64/bin/llama.cpp-gritlm.exe
/ucrt64/bin/llama.cpp-imatrix.exe
/ucrt64/bin/llama.cpp-infill.exe
/ucrt64/bin/llama.cpp-llama-bench.exe
/ucrt64/bin/llama.cpp-llava-cli.exe
/ucrt64/bin/llama.cpp-lookahead.exe
/ucrt64/bin/llama.cpp-lookup-create.exe
/ucrt64/bin/llama.cpp-lookup-merge.exe
/ucrt64/bin/llama.cpp-lookup-stats.exe
/ucrt64/bin/llama.cpp-lookup.exe
/ucrt64/bin/llama.cpp-parallel.exe
/ucrt64/bin/llama.cpp-passkey.exe
/ucrt64/bin/llama.cpp-perplexity.exe
/ucrt64/bin/llama.cpp-quantize-stats.exe
/ucrt64/bin/llama.cpp-quantize.exe
/ucrt64/bin/llama.cpp-retrieval.exe
/ucrt64/bin/llama.cpp-save-load-state.exe
/ucrt64/bin/llama.cpp-server.exe
/ucrt64/bin/llama.cpp-simple.exe
/ucrt64/bin/llama.cpp-speculative.exe
/ucrt64/bin/llama.cpp-test-autorelease.exe
/ucrt64/bin/llama.cpp-test-backend-ops.exe
/ucrt64/bin/llama.cpp-test-chat-template.exe
/ucrt64/bin/llama.cpp-test-grad0.exe
/ucrt64/bin/llama.cpp-test-grammar-integration.exe
/ucrt64/bin/llama.cpp-test-grammar-parser.exe
/ucrt64/bin/llama.cpp-test-json-schema-to-grammar.exe
/ucrt64/bin/llama.cpp-test-llama-grammar.exe
/ucrt64/bin/llama.cpp-test-model-load-cancel.exe
/ucrt64/bin/llama.cpp-test-quantize-fns.exe
/ucrt64/bin/llama.cpp-test-quantize-perf.exe
/ucrt64/bin/llama.cpp-test-rope.exe
/ucrt64/bin/llama.cpp-test-sampling.exe
/ucrt64/bin/llama.cpp-test-tokenizer-0.exe
/ucrt64/bin/llama.cpp-test-tokenizer-1-bpe.exe
/ucrt64/bin/llama.cpp-test-tokenizer-1-spm.exe
/ucrt64/bin/llama.cpp-tokenize.exe
/ucrt64/bin/llama.cpp-train-text-from-scratch.exe
/ucrt64/bin/llama.cpp.exe
/ucrt64/include/ggml-alloc.h
/ucrt64/include/ggml-backend.h
/ucrt64/include/ggml.h
/ucrt64/include/llama.h
/ucrt64/lib/cmake/Llama/LlamaConfig.cmake
/ucrt64/lib/cmake/Llama/LlamaConfigVersion.cmake
/ucrt64/lib/libggml_shared.dll.a
/ucrt64/lib/libllama.dll.a
/ucrt64/lib/libllava_shared.dll.a
/ucrt64/share/licenses/llama.cpp/LICENSE
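Since the package ships CMake config files (/ucrt64/lib/cmake/Llama/LlamaConfig.cmake above), a downstream project built from a UCRT64 shell can locate the library with find_package. A minimal CMakeLists.txt sketch; the project and target names are placeholders, and it assumes the shipped config defines the usual imported `llama` target:

```cmake
cmake_minimum_required(VERSION 3.14)
project(llama_demo CXX)

# Resolved via /ucrt64/lib/cmake/Llama/LlamaConfig.cmake
# when CMake runs inside a UCRT64 environment.
find_package(Llama REQUIRED)

add_executable(demo main.cpp)
target_link_libraries(demo PRIVATE llama)
```

The headers (/ucrt64/include/llama.h, ggml.h) and import libraries (libllama.dll.a) listed above are what this resolves to at build time; the DLLs in /ucrt64/bin are needed at run time.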
Last Update: 2024-11-09 15:55:57