From: ELPA Syncer
Subject: [elpa] externals/llm 037b00e81b 2/2: Add a new Ollama provider which takes a key (#183)
Date: Sun, 6 Apr 2025 18:59:13 -0400 (EDT)
branch: externals/llm
commit 037b00e81bd470ba1f1cf77ea521cf231ffcb8f7
Author: Andrew Hyatt <ahyatt@gmail.com>
Commit: GitHub <noreply@github.com>
Add a new Ollama provider which takes a key (#183)
This will fix
https://github.com/ahyatt/llm/issues/50#issuecomment-2769598351.
---
 NEWS.org      | 3 ++-
 README.org    | 7 ++++++-
 llm-fake.el   | 2 +-
 llm-ollama.el | 9 ++++++++-
4 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/NEWS.org b/NEWS.org
index 4de046378c..beab8d54a4 100644
--- a/NEWS.org
+++ b/NEWS.org
@@ -1,4 +1,5 @@
-* Version 0.24.3
+* Version 0.25.0
+- Add =llm-ollama-authed= provider, which is like Ollama but takes a key.
- Set Gemini 2.5 Pro to be the default Gemini model
* Version 0.24.2
- Fix issue with some Open AI compatible providers needing models to be passed by giving a non-nil default.
diff --git a/README.org b/README.org
index 40b8e0b7d4..a510937688 100644
--- a/README.org
+++ b/README.org
@@ -101,7 +101,12 @@ In addition to the provider, which you may want multiple of (for example, to cha
- ~:host~: The host that ollama is run on. This is optional and will default to localhost.
- ~:port~: The port that ollama is run on. This is optional and will default to the default ollama port.
- ~:chat-model~: The model name to use for chat. This is not optional for chat use, since there is no default.
-- ~:embedding-model~: The model name to use for embeddings. Only [[https://ollama.com/search?q=&c=embedding][some models]] can be used for embeddings. This is not optional for embedding use, since there is no default.
+- ~:embedding-model~: The model name to use for embeddings. Only [[https://ollama.com/search?q=&c=embedding][some models]] can be used for embeddings. This is not optional for embedding use, since there is no default.
+** Ollama (authed)
+This is a variant of the Ollama provider, which is set up with the same parameters plus:
+- ~:key~: The authentication key of the provider.
+
+The key is used to send a standard =Authorization= header.
** Deepseek
[[https://deepseek.com][Deepseek]] is a company that offers both high-quality reasoning and chat models. This provider connects to their server.
It is also possible to run their model locally as a free model via Ollama. To use the service, you can set it up with the following parameters:
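
Setting the diff aside for a moment: here is a minimal configuration sketch for the new "Ollama (authed)" section added above. The constructor name follows from the cl-defstruct in this commit, but the scheme, host, key source, and model are placeholder assumptions, not values from the commit:

(require 'llm-ollama)

;; Hypothetical values throughout; only the :key slot is new in this commit.
(defvar my-ollama-provider
  (make-llm-ollama-authed
   :scheme "https"
   :host "ollama.example.com"      ; placeholder host
   :key (getenv "OLLAMA_API_KEY")  ; read the key from the environment
   :chat-model "llama3:8b"))       ; any installed Ollama chat model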
diff --git a/llm-fake.el b/llm-fake.el
index 0c81319da1..b20845f3ce 100644
--- a/llm-fake.el
+++ b/llm-fake.el
@@ -92,7 +92,7 @@ message cons. If nil, the response will be a simple vector."
(mapc (lambda (word)
        (setq accum (concat accum word " "))
        (funcall partial-callback (if multi-output `(:text ,accum) accum))
-       (sleep-for 0 100))
+       (sleep-for 0.1))
      (split-string text))
(setf (llm-chat-prompt-interactions prompt)
      (append (llm-chat-prompt-interactions prompt)
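
The sleep-for change above only affects the pacing of the fake provider's simulated streaming. A rough sketch of the code path it drives, assuming the llm library's documented streaming entry point and callback arity (an illustration, not part of this commit):

(require 'llm)
(require 'llm-fake)

;; Words of the fake response now arrive via PARTIAL-CALLBACK roughly
;; every 0.1 seconds, replacing the older two-argument (sleep-for 0 100) call.
(llm-chat-streaming (make-llm-fake)
                    (llm-make-chat-prompt "hello world")
                    (lambda (partial) (message "partial: %s" partial))
                    (lambda (response) (message "final: %s" response))
                    (lambda (type msg) (message "error %s: %s" type msg)))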
diff --git a/llm-ollama.el b/llm-ollama.el
index 40975e093b..a3558a10b4 100644
--- a/llm-ollama.el
+++ b/llm-ollama.el
@@ -1,4 +1,4 @@
-;;; llm-ollama.el --- llm module for integrating with Ollama. -*- lexical-binding: t; package-lint-main-file: "llm.el"; -*-
+;;; llm-ollama.el --- llm module for integrating with Ollama. -*- lexical-binding: t; package-lint-main-file: "llm.el"; byte-compile-docstring-max-column: 200 -*-
;; Copyright (c) 2023-2025 Free Software Foundation, Inc.
@@ -63,6 +63,13 @@ CHAT-MODEL is the model to use for chat queries. It is required.
EMBEDDING-MODEL is the model to use for embeddings. It is required."
  (scheme "http") (host "localhost") (port 11434) chat-model embedding-model)
+(cl-defstruct (llm-ollama-authed (:include llm-ollama))
+  "Similar to llm-ollama, but also with a key."
+  key)
+
+(cl-defmethod llm-provider-headers ((provider llm-ollama-authed))
+  `(("Authorization" . ,(format "Bearer %s" (encode-coding-string (llm-ollama-authed-key provider) 'utf-8)))))
+
;; Ollama's models may or may not be free; we have no way of knowing. There's no
;; way to tell, and no ToS to point out here.
(cl-defmethod llm-nonfree-message-info ((provider llm-ollama))
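
As an illustration of what the new llm-provider-headers method returns (the key below is a made-up placeholder; the result's shape follows directly from the backquoted form above):

(llm-provider-headers
 (make-llm-ollama-authed :key "my-secret-key"))
;; => (("Authorization" . "Bearer my-secret-key"))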