guix-devel
Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?)


From: Nicolas Graves
Subject: Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?)
Date: Mon, 03 Apr 2023 22:48:41 +0200

On 2023-04-03 18:07, Ryan Prior wrote:

> Hi there FSF Licensing! (CC: Guix devel, Nicolas Graves) This morning I read 
> through the FSDG to see if it gives any guidance on when machine learning 
> model weights are appropriate for inclusion in a free system. It does not 
> seem to offer much.
>
> Many ML models advertise themselves as "open source", including the 
> llama model that Nicolas (quoted below) is interested in including in 
> Guix. However, according to what I can find in Meta's announcement 
> (https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) and the 
> project's documentation 
> (https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md), the model 
> itself is not covered by the GPLv3 but rather by "a noncommercial license 
> focused on research use cases." I could not find the full text of this license 
> anywhere after 20 minutes of searching; perhaps others have better ideas on how 
> to find it, or perhaps the Meta team would provide a copy if we ask.

Just to be precise about llama: what I proposed was to include the port
of Facebook's code to C++ (llama.cpp, see ticket 62443 on guix-patches),
which itself has a license.

The weights themselves indeed have no license. You can only download
them through torrents, because they were leaked. For this model in
particular, we indeed cannot include them in Guix (also because of
their sheer size).

The other case I mentioned, and one that is more mature, is VOSK speech
recognition, whose model binaries carry an Apache license (you can find
them here: https://alphacephei.com/vosk/models). A minimal packaging
sketch follows below.
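
To make the idea concrete, here is a rough sketch of what such a
package could look like, using the copy build system and the
#:substitutable? #f flag discussed further down. The model name,
version, install path and hash below are placeholders I made up for
illustration, not values taken from the VOSK site:

(define-module (machine-learning-models)
  #:use-module (guix packages)
  #:use-module (guix download)
  #:use-module (guix gexp)
  #:use-module (guix build-system copy)
  #:use-module ((guix licenses) #:prefix license:)
  #:use-module (gnu packages compression)) ; unzip, to unpack the .zip source

;; Hypothetical small English VOSK model; URL, version and hash are
;; placeholders, not verified values.
(define-public vosk-model-small-en-us
  (package
    (name "vosk-model-small-en-us")
    (version "0.15")
    (source (origin
              (method url-fetch)
              (uri (string-append "https://alphacephei.com/vosk/models/"
                                  "vosk-model-small-en-us-" version ".zip"))
              (sha256
               (base32
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system copy-build-system)
    (native-inputs (list unzip))
    (arguments
     ;; The weights are large and need no real build step beyond
     ;; unpacking, so do not offer substitutes for them.
     (list #:substitutable? #f
           #:install-plan #~'(("." "share/vosk/models/small-en-us"))))
    (home-page "https://alphacephei.com/vosk/models")
    (synopsis "Pre-trained English model for VOSK speech recognition")
    (description "Pre-trained model parameters for use with the VOSK
speech recognition toolkit.")
    (license license:asl2.0)))

Whether such definitions should live in machine-learning.scm or in a
dedicated machine-learning-models.scm is exactly the open question
quoted below.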

>
> Free systems will see incentive to include trained models in their 
> distributions to support use cases like automatic live transcription of 
> audio, recognition of objects in photos and video, and natural 
> language-driven help and documentation features. I hope we can update the 
> FSDG to help ensure that any such inclusion fully meets the requirements of 
> freedom for all our users.

Thanks for this email and the question about these guidelines, Ryan. I
would be glad to help if I can.
>
> Cheers,
> Ryan
>
>
> ------- Original Message -------
> On Monday, April 3rd, 2023 at 4:48 PM, Nicolas Graves via "Development of GNU 
> Guix and the GNU System distribution." <guix-devel@gnu.org> wrote:
>
>
>>
>>
>>
>> Hi Guix!
>>
>> I've recently contributed a few tools that make some OSS machine
>> learning programs usable from Guix, namely nerd-dictation for dictation
>> and llama-cpp as a conversational bot.
>>
>> In the first case, I would also like to contribute the parameters of
>> some localized models so that they can be used more easily through Guix.
>> I already raised this subject when submitting these patches, without
>> getting a clear answer.
>>
>> In the case of nerd-dictation, the model parameters that can be used
>> are listed here: https://alphacephei.com/vosk/models
>>
>> One caveat is that all these models can take a lot of space on the
>> servers, a burden that is not useful because no build steps are really
>> needed (except an unzip step). In this case, we can use the
>> #:substitutable? #f flag. You can find an example of some of these
>> packages here:
>> https://git.sr.ht/~ngraves/dotfiles/tree/main/item/packages.scm
>>
>> So my question is: should we add this type of model as packages in
>> Guix? If so, where should we put them? In machine-learning.scm? In a
>> new file machine-learning-models.scm (such a file would never need new
>> modules, and it might avoid some confusion between the tools and the
>> parameters needed to use them)?
>>
>>
>> --
>> Best regards,
>> Nicolas Graves

--
Best regards,
Nicolas Graves


