emacs-elpa-diffs
From: ELPA Syncer
Subject: [elpa] externals/llm 36f54afad6: Switch Open AI default embedding model to text-embedding-3-small
Date: Sat, 27 Jan 2024 21:58:15 -0500 (EST)

branch: externals/llm
commit 36f54afad634a865ebd0bac480a5fa5a7de671fb
Author: Andrew Hyatt <ahyatt@gmail.com>
Commit: Andrew Hyatt <ahyatt@gmail.com>

    Switch Open AI default embedding model to text-embedding-3-small
    
    Important: Anyone who has stored embeddings should either regenerate
    embeddings (recommended) or hard-code the old embedding
    model ("text-embedding-ada-002").
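
    For anyone pinning the old model instead of regenerating, a minimal
    sketch in the style of the README's provider setup (the
    ~:embedding-model~ parameter name and the ~llm-refactoring-provider~
    variable are taken as illustrative here, not guaranteed by this commit):

    #+begin_src emacs-lisp
    ;; Pin the pre-change default so previously stored embeddings
    ;; remain comparable with newly generated ones.
    ;; `my-openai-key' is a placeholder for your own key variable.
    (setq llm-refactoring-provider
          (make-llm-openai :key my-openai-key
                           :embedding-model "text-embedding-ada-002"))
    #+end_src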
---
 NEWS.org      | 2 ++
 README.org    | 4 +++-
 llm-openai.el | 4 ++--
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/NEWS.org b/NEWS.org
index b1a25d4add..891c1c1025 100644
--- a/NEWS.org
+++ b/NEWS.org
@@ -1,3 +1,5 @@
+* Version 0.9.1
+- Default to the new "text-embedding-3-small" model for Open AI.  *Important*: Anyone who has stored embeddings should either regenerate embeddings (recommended) or hard-code the old embedding model ("text-embedding-ada-002").
 * Version 0.9
 - Add =llm-chat-token-limit= to find the token limit based on the model.
 - Add request timeout customization.
diff --git a/README.org b/README.org
index 2c0ce50801..6a2d211e48 100644
--- a/README.org
+++ b/README.org
@@ -9,7 +9,7 @@ Certain functionalities might not be available in some LLMs. Any such unsupporte
 
 This package is still in its early stages but will continue to develop as LLMs and functionality are introduced.
 * Setting up providers
-Users of an application that uses this package should not need to install it themselves. The llm module should be installed as a dependency when you install the package that uses it. However, you do need to require the llm module and set up the provider you will be using. Typically, applications will have a variable you can set. For example, let's say there's a package called "llm-refactoring", which has a variable ~llm-refactoring-provider~. You would set it up like so:
+Users of an application that uses this package should not need to install it themselves. The llm package should be installed as a dependency when you install the package that uses it. However, you do need to require the llm module and set up the provider you will be using. Typically, applications will have a variable you can set. For example, let's say there's a package called "llm-refactoring", which has a variable ~llm-refactoring-provider~. You would set it up like so:
 
 #+begin_src emacs-lisp
 (use-package llm-refactoring
@@ -19,6 +19,8 @@ Users of an application that uses this package should not need to install it the
 #+end_src
 
 Here ~my-openai-key~ would be a variable you set up before with your OpenAI key. Or, just substitute the key itself as a string. It's important to remember never to check your key into a public repository such as GitHub, because your key must be kept private. Anyone with your key can use the API, and you will be charged.
+
+For embedding users: if you store the embeddings, you *must* set the embedding model.  Even though there's no way for the llm package to tell whether you are storing it, if the default model changes, you may find yourself storing incompatible embeddings.
 ** Open AI
 You can set up with ~make-llm-openai~, with the following parameters:
 - ~:key~, the Open AI key that you get when you sign up to use Open AI's APIs. Remember to keep this private.  This is non-optional.
diff --git a/llm-openai.el b/llm-openai.el
index 341275c9c8..fd57d0bd93 100644
--- a/llm-openai.el
+++ b/llm-openai.el
@@ -69,7 +69,7 @@ https://api.example.com/v1/chat, then URL should be
   "Return the request to the server for the embedding of STRING.
 MODEL is the embedding model to use, or nil to use the default.."
   `(("input" . ,string)
-    ("model" . ,(or model "text-embedding-ada-002"))))
+    ("model" . ,(or model "text-embedding-3-small"))))
 
 (defun llm-openai--embedding-extract-response (response)
   "Return the embedding from the server RESPONSE."
@@ -113,7 +113,7 @@ This is just the key, if it exists."
             "/") command))
 
(cl-defmethod llm-embedding-async ((provider llm-openai) string vector-callback error-callback)
-  (llm-openai--check-key provider)
+  (llm-openai--check-key provider)  
   (let ((buf (current-buffer)))
     (llm-request-async (llm-openai--url provider "embeddings")
                        :headers (llm-openai--headers provider)
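
The ~llm-embedding-async~ signature shown in the hunk above takes a provider, the input string, and two callbacks. A hedged usage sketch (the callback arities — an embedding vector for success, an error symbol plus message for failure — are assumed from the llm package's conventions, and ~my-provider~ is a placeholder):

#+begin_src emacs-lisp
;; Request an embedding asynchronously from a configured provider.
(llm-embedding-async my-provider
                     "Example text to embed"
                     (lambda (vector)
                       (message "Got embedding of length %d" (length vector)))
                     (lambda (type msg)
                       (message "Embedding failed (%s): %s" type msg)))
#+end_src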


