Base Package:
mingw-w64-llama.cpp

Description:
Port of Facebook's LLaMA model in C/C++ (mingw-w64)
Group(s):
-
Repo:
mingw64
Homepage:
https://github.com/ggerganov/llama.cpp
License(s):
MIT
Version:
r2355.e04e04f8f-1
GIT Version:
r2355.e04e04f8f-1
Anitya:
llama.cpp
AUR:
r1110.423db74
Repology:
llama.cpp

Installation:
pacman -S mingw-w64-x86_64-llama.cpp
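Usage (example):
A minimal sketch of running the installed main binary; the model path models/llama-2-7b.Q4_K_M.gguf is a placeholder for any GGUF model file you have downloaded, and -m/-p/-n select the model, the prompt, and the number of tokens to generate.
llama.cpp.exe -m models/llama-2-7b.Q4_K_M.gguf -p "Once upon a time" -n 128
The bundled HTTP server can be started similarly; --port is optional (upstream llama.cpp defaults to 8080).
llama.cpp-server.exe -m models/llama-2-7b.Q4_K_M.gguf --port 8080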
File:
https://mirror.msys2.org/mingw/mingw64/mingw-w64-x86_64-llama.cpp-r2355.e04e04f8f-1-any.pkg.tar.zst
SHA256:
3d121f3a998514120a2a8f68d26e3539bab0d3e866c1fd6779045538613b2632
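Manual verification (example):
If the package file above is downloaded by hand, its checksum can be compared against the SHA256 value and the archive installed directly with pacman; both commands are standard in an MSYS2 shell.
sha256sum mingw-w64-x86_64-llama.cpp-r2355.e04e04f8f-1-any.pkg.tar.zst
pacman -U mingw-w64-x86_64-llama.cpp-r2355.e04e04f8f-1-any.pkg.tar.zst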
Last Packager:
CI (msys2/msys2-autobuild/dad66715/8229697868)
Build Date:
2024-03-11 08:27:34
Package Size:
2.19 MB
Installed Size:
17.40 MB

Dependencies:
Optional Dependencies:
-
Build Dependencies:
Check Dependencies:
-
Provides:
-
Conflicts:
-
Replaces:
-

Provided By:
-
Required By:
-

Files:
/mingw64/bin/libggml_shared.dll
/mingw64/bin/libllama.dll
/mingw64/bin/libllava_shared.dll
/mingw64/bin/llama.cpp-baby-llama.exe
/mingw64/bin/llama.cpp-batched-bench.exe
/mingw64/bin/llama.cpp-batched.exe
/mingw64/bin/llama.cpp-beam-search.exe
/mingw64/bin/llama.cpp-benchmark.exe
/mingw64/bin/llama.cpp-convert-llama2c-to-ggml.exe
/mingw64/bin/llama.cpp-convert-lora-to-ggml.py
/mingw64/bin/llama.cpp-convert.py
/mingw64/bin/llama.cpp-embedding.exe
/mingw64/bin/llama.cpp-export-lora.exe
/mingw64/bin/llama.cpp-finetune.exe
/mingw64/bin/llama.cpp-gguf.exe
/mingw64/bin/llama.cpp-imatrix.exe
/mingw64/bin/llama.cpp-infill.exe
/mingw64/bin/llama.cpp-llama-bench.exe
/mingw64/bin/llama.cpp-llava-cli.exe
/mingw64/bin/llama.cpp-lookahead.exe
/mingw64/bin/llama.cpp-lookup.exe
/mingw64/bin/llama.cpp-parallel.exe
/mingw64/bin/llama.cpp-passkey.exe
/mingw64/bin/llama.cpp-perplexity.exe
/mingw64/bin/llama.cpp-quantize-stats.exe
/mingw64/bin/llama.cpp-quantize.exe
/mingw64/bin/llama.cpp-save-load-state.exe
/mingw64/bin/llama.cpp-server.exe
/mingw64/bin/llama.cpp-simple.exe
/mingw64/bin/llama.cpp-speculative.exe
/mingw64/bin/llama.cpp-test-autorelease.exe
/mingw64/bin/llama.cpp-test-backend-ops.exe
/mingw64/bin/llama.cpp-test-chat-template.exe
/mingw64/bin/llama.cpp-test-grad0.exe
/mingw64/bin/llama.cpp-test-grammar-parser.exe
/mingw64/bin/llama.cpp-test-llama-grammar.exe
/mingw64/bin/llama.cpp-test-model-load-cancel.exe
/mingw64/bin/llama.cpp-test-quantize-fns.exe
/mingw64/bin/llama.cpp-test-quantize-perf.exe
/mingw64/bin/llama.cpp-test-rope.exe
/mingw64/bin/llama.cpp-test-sampling.exe
/mingw64/bin/llama.cpp-test-tokenizer-0-falcon.exe
/mingw64/bin/llama.cpp-test-tokenizer-0-llama.exe
/mingw64/bin/llama.cpp-test-tokenizer-1-bpe.exe
/mingw64/bin/llama.cpp-test-tokenizer-1-llama.exe
/mingw64/bin/llama.cpp-tokenize.exe
/mingw64/bin/llama.cpp-train-text-from-scratch.exe
/mingw64/bin/llama.cpp.exe
/mingw64/include/ggml-alloc.h
/mingw64/include/ggml-backend.h
/mingw64/include/ggml.h
/mingw64/include/llama.h
/mingw64/lib/cmake/Llama/LlamaConfig.cmake
/mingw64/lib/cmake/Llama/LlamaConfigVersion.cmake
/mingw64/lib/libggml_shared.dll.a
/mingw64/lib/libllama.dll.a
/mingw64/lib/libllava_shared.dll.a
/mingw64/share/licenses/llama.cpp/LICENSE
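Linking (example):
A sketch of building against the installed C API from a MINGW64 shell, where /mingw64/include and /mingw64/lib are already on the toolchain's default search paths; main.c is a hypothetical source file that includes llama.h.
gcc main.c -o main.exe -lllama
CMake projects can instead locate the shipped LlamaConfig.cmake via find_package(Llama).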
Last Update: 2024-05-02 09:55:33