gnu-emacs-sources

[GNU ELPA] Llm version 0.17.0


From: ELPA update
Subject: [GNU ELPA] Llm version 0.17.0
Date: Sun, 14 Jul 2024 05:03:23 -0400

Version 0.17.0 of package Llm has just been released in GNU ELPA.
You can now find it in M-x list-packages RET.

Llm describes itself as:

  ===================================
  Interface to pluggable llm backends
  ===================================

More at https://elpa.gnu.org/packages/llm.html

## Summary:

                          ━━━━━━━━━━━━━━━━━━━━━━━
                           LLM PACKAGE FOR EMACS
                          ━━━━━━━━━━━━━━━━━━━━━━━
  1 Introduction
  ══════════════

    This library provides an interface for interacting with Large Language
    Models (LLMs). It allows Elisp code to use LLMs while also giving
    end-users the choice of their preferred LLM. This matters because many
    high-quality models exist: some require paid API access, while others
    run locally for free but with lower quality. Applications using LLMs
    can build on this library to work the same way regardless of whether
    the user has a local LLM or is paying for API access.
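    A minimal sketch of that provider-agnostic use might look like the
    following (the Ollama provider and model name are illustrative
    assumptions; any other provider object could be substituted):

    ```elisp
    (require 'llm)
    (require 'llm-ollama)

    ;; The application only holds an opaque provider object; the end-user
    ;; can construct it from any backend (Open AI, Claude, Gemini, ...).
    (defvar my-app-llm-provider
      (make-llm-ollama :chat-model "llama3"))

    ;; The same `llm-chat' call works regardless of which backend is
    ;; behind the provider.
    (llm-chat my-app-llm-provider
              (llm-make-chat-prompt "Summarize this buffer in one sentence."))
    ```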

## Recent NEWS:

1 Version 0.17.0
════════════════

  • Introduced `llm-prompt' for prompt management and creation from
    generators.
  • Removed Gemini and Vertex token counting, because `llm-prompt'
    counts tokens frequently, and a quick estimate is preferable to a
    more expensive, more accurate count.


2 Version 0.16.2
════════════════

  • Fix Open AI's GPT-4o context length, which is lower for most paying
    users than the maximum.


3 Version 0.16.1
════════════════

  • Add support for HTTP / HTTPS proxies.


4 Version 0.16.0
════════════════

  • Add "non-standard params" to set per-provider options.
  • Add default parameters for chat providers.
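  Passing a provider-specific option through a prompt might look like
  this sketch (the `:non-standard-params' keyword and the Ollama-specific
  "top_k" option name are assumptions):

  ```elisp
  ;; Options are passed through to the backend untranslated, so they are
  ;; only meaningful for providers that understand them.
  (llm-make-chat-prompt
   "Name three trees."
   :non-standard-params '(("top_k" . 10)))
  ```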


5 Version 0.15.0
════════════════

  • Move to `plz' backend, which uses `curl'.  This helps move this
    package to a stronger foundation backed by parsing to spec.  Thanks
    to Roman Scherer for contributing the `plz' extensions that enable
    this, which are currently bundled in this package but will
    eventually become their own separate package.
  • Add model context information for Open AI's GPT 4-o.
  • Add model context information for Gemini's 1.5 models.


6 Version 0.14.2
════════════════

  • Fix mangled copyright line (needed to get ELPA version unstuck).
  • Fix Vertex response handling bug.


7 Version 0.14.1
════════════════

  • Fix various issues with the 0.14 release


8 Version 0.14
══════════════

  • Introduce a new way of creating prompts, `llm-make-chat-prompt',
    deprecating the older ways.
  • Improve Vertex error handling
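  A sketch of the newer prompt constructor (the keyword names shown are
  assumptions about its interface):

  ```elisp
  ;; Context and few-shot examples are folded into the prompt object
  ;; instead of being managed by separate calls.
  (llm-make-chat-prompt
   "Translate: good morning"
   :context "You are a terse French translator."
   :examples '(("Translate: hello" . "bonjour")))
  ```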


9 Version 0.13
══════════════

  • Add Claude's new support for function calling.
  • Refactor of providers to centralize embedding and chat logic.
  • Remove connection buffers after use.
  • Fixes to provide more specific error messages for most providers.


10 Version 0.12.3
════════════════

  • Refactor of the non-free warning methods.
  • Add non-free warnings for Gemini and Claude.


11 Version 0.12.2
═════════════════

  • Send connection issues to error callbacks, and fix an error handling
    issue in Ollama.
  • Fix issue where, in some cases, streaming does not work the first
    time attempted.


12 Version 0.12.1
═════════════════

  • Fix issue in `llm-ollama' with not using provider host for sync
    embeddings.
  • Fix issue in `llm-openai' where we were incompatible with some Open
    AI-compatible backends due to assumptions about inconsequential JSON
    details.


13 Version 0.12.0
═════════════════

  • Add provider `llm-claude', for Anthropic's Claude.


14 Version 0.11.0
═════════════════

  • Introduce function calling, now available only in Open AI and
    Gemini.
  • Introduce `llm-capabilities', which returns a list of extra
    capabilities for each backend.
  • Fix issue where logging happened when it wasn't supposed to.
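  Checking a capability before relying on it might look like this sketch
  (`my-provider' is a hypothetical previously constructed provider, and
  the capability symbol checked is an assumption):

  ```elisp
  ;; The list returned by `llm-capabilities' is provider-dependent, so
  ;; callers should branch on it rather than assume support.
  (when (member 'function-calls (llm-capabilities my-provider))
    (message "This backend supports function calling"))
  ```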


15 Version 0.10.0
═════════════════

  • Introduce llm logging (for help with developing against `llm'); set
    `llm-log' to non-nil to enable logging of all interactions with the
    `llm' package.
  • Change the default interaction with ollama to one more suited for
    conversations (thanks to Thomas Allen).
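  Enabling the log while debugging is a one-liner:

  ```elisp
  ;; Log all interactions made through the llm package.
  (setq llm-log t)
  ```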


16 Version 0.9.1
════════════════

  • Default to the new "text-embedding-3-small" model for Open AI.
    *Important*: Anyone who has stored embeddings should either
    regenerate embeddings (recommended) or hard-code the old embedding
    model ("text-embedding-ada-002").
  • Fix response breaking when prompts run afoul of Gemini / Vertex's
    safety checks.
  • Change Gemini streaming to be the correct URL.  This doesn't seem to
    have an effect on behavior.


17 Version 0.9
══════════════

  • Add `llm-chat-token-limit' to find the token limit based on the
    model.
  • Add request timeout customization.
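  A sketch of querying the limit, e.g. to decide how much context to pack
  into a prompt (`my-provider' is a hypothetical provider object):

  ```elisp
  ;; Returns the token limit for the model the provider is configured
  ;; to use, as an integer.
  (llm-chat-token-limit my-provider)
  ```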


18 Version 0.8
══════════════

  • Allow users to change the Open AI URL, to allow for proxies and
    other services that re-use the API.
  • Add `llm-name' and `llm-cancel-request' to the API.
  • Standardize handling of how context, examples and history are folded
    into `llm-chat-prompt-interactions'.
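  Pointing the Open AI provider at a proxy or API-compatible service
  might look like this sketch (the `:url' keyword and the endpoint value
  are assumptions):

  ```elisp
  ;; Any service that re-uses the Open AI API shape can be targeted by
  ;; overriding the base URL.
  (make-llm-openai
   :key (getenv "OPENAI_API_KEY")
   :url "https://my-proxy.example.com/v1/")
  ```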


19 Version 0.7
══════════════

  • Upgrade Google Cloud Vertex to Gemini; previous models are no
    longer available.
  • Added `gemini' provider, which is an alternate endpoint with
    alternate (and easier) authentication and setup compared to Cloud
    Vertex.
  • Provide default for `llm-chat-async' to fall back to streaming if
    not defined for a provider.


20 Version 0.6
══════════════

  • Add provider `llm-llamacpp'.
  • Fix issue with Google Cloud Vertex not responding to messages with a
    system interaction.
  • Fix use of `(pos-eol)' which is not compatible with Emacs 28.1.
  …  …
