
06/07: gnu: llama-cpp: Use OpenBLAS.


From: guix-commits
Subject: 06/07: gnu: llama-cpp: Use OpenBLAS.
Date: Fri, 5 Apr 2024 07:08:44 -0400 (EDT)

cbaines pushed a commit to branch master
in repository guix.

commit d8a63bbcee616f224c10462dbfb117ec009c50d8
Author: John Fremlin <john@fremlin.org>
AuthorDate: Wed Apr 3 23:46:25 2024 -0400

    gnu: llama-cpp: Use OpenBLAS.
    
    For faster prompt processing, OpenBLAS is recommended by
    https://github.com/ggerganov/llama.cpp
    
    * gnu/packages/machine-learning.scm (llama-cpp)[arguments]: Add
     #:configure-flags.
    [native-inputs]: Add pkg-config.
    [propagated-inputs]: Add openblas.
    
    Change-Id: Iaf6f22252da13e2d6f503992878b35b0da7de0aa
    Signed-off-by: Christopher Baines <mail@cbaines.net>
---
 gnu/packages/machine-learning.scm | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index e38d93ea05..e61299a5db 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -541,6 +541,8 @@ Performance is achieved by using the LLVM JIT compiler.")
       (build-system cmake-build-system)
       (arguments
        (list
+        #:configure-flags
+        '(list "-DLLAMA_BLAS=ON" "-DLLAMA_BLAS_VENDOR=OpenBLAS")
         #:modules '((ice-9 textual-ports)
                     (guix build utils)
                     ((guix build python-build-system) #:prefix python:)
@@ -575,8 +577,9 @@ Performance is achieved by using the LLVM JIT compiler.")
               (lambda _
                 (copy-file "bin/main" (string-append #$output 
"/bin/llama")))))))
       (inputs (list python))
+      (native-inputs (list pkg-config))
       (propagated-inputs
-       (list python-numpy python-pytorch python-sentencepiece))
+       (list python-numpy python-pytorch python-sentencepiece openblas))
      (home-page "https://github.com/ggerganov/llama.cpp")
       (synopsis "Port of Facebook's LLaMA model in C/C++")
       (description "This package provides a port to Facebook's LLaMA collection
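For reference, outside of Guix the two configure flags added by this commit correspond to a manual CMake build of llama.cpp against OpenBLAS. A minimal sketch, assuming a llama.cpp source checkout and an OpenBLAS installation discoverable via pkg-config (the `build` directory name is arbitrary):

```shell
# Configure llama.cpp with BLAS support, selecting OpenBLAS as the vendor
# (the same flags this commit passes via #:configure-flags).
cmake -B build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS

# Build with the generated configuration.
cmake --build build
```

Note that these flag names match the llama.cpp version packaged at the time of this commit; upstream has since reorganized its build options.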



reply via email to

[Prev in Thread] Current Thread [Next in Thread]