


From: zamfofex
Subject: Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?)
Date: Tue, 4 Jul 2023 10:05:01 -0300 (BRT)

> On 07/03/2023 6:39 AM -03 Simon Tournier <zimon.toutoune@gmail.com> wrote:
> 
> Well, I do not see any difference between pre-trained weights and icons
> or sound or good fitted-parameters (e.g., the package
> python-scikit-learn has a lot ;-)).  As I said elsewhere, I do not see
> the difference between pre-trained neural network weights and genomic
> references (e.g., the package r-bsgenome-hsapiens-1000genomes-hs37d5).

I feel like, although this might (arguably) not be the case for leela-zero or
Lc0 specifically, for certain machine learning projects a pretrained network
can affect the program’s behavior so deeply that it might be considered a
program itself! Such networks approximate an arbitrary function. The more
complex the model, the more complex the behavior of that function can be, and
thus the closer it comes to being an arbitrary program.

But this “program” has no source code; it is effectively created directly in a
binary form that is difficult to analyse.
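
To make that point concrete, here is a toy sketch (plain Python, written for
this message and not taken from any of the projects discussed): the exact same
network code computes completely different functions depending only on the
weights it is given, so the weights really are the “program”, and the
surrounding code is little more than an interpreter for them.

def step(x):
    # Step activation: fires iff the weighted input is positive.
    return 1 if x > 0 else 0

def forward(weights, x1, x2):
    # A fixed architecture: two hidden units, one output unit.
    (w11, w12, b1), (w21, w22, b2), (v1, v2, c) = weights
    h1 = step(w11 * x1 + w12 * x2 + b1)
    h2 = step(w21 * x1 + w22 * x2 + b2)
    return step(v1 * h1 + v2 * h2 + c)

# Two hand-crafted weight “binaries” for the identical architecture:
XOR_WEIGHTS = ((1, 1, -0.5), (1, 1, -1.5), (1, -1, -0.5))
AND_WEIGHTS = ((1, 1, -1.5), (0, 0, -1.0), (1, 0, -0.5))

for name, w in (("XOR", XOR_WEIGHTS), ("AND", AND_WEIGHTS)):
    table = [forward(w, a, b) for a in (0, 1) for b in (0, 1)]
    print(name, table)  # XOR -> [0, 1, 1, 0], AND -> [0, 0, 0, 1]

Note also that flipping a single number silently changes the function being
computed (e.g. changing the -1 in XOR_WEIGHTS to 0 turns XOR into OR), and
nothing about the weight file itself signals that. With millions of weights
instead of nine, auditing such a file by inspection is essentially hopeless.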

In any case, I feel like the “user autonomy” issue Ludovic was talking about is 
fairly relevant here (as I understand it). For icons, images, and other similar 
kinds of assets, it is easy enough for the user to replace them, or to create 
their own if they want. But for pretrained networks, even when they are under a 
free license, the user might not be able to easily create their own network 
that suits their purposes.

For example, with image recognition software, the maintainers of the program 
might provide data able to recognise a specific set of objects in input 
images, while the user might want to use the program to recognise a different 
kind of object. If it is too costly for the user to train a new network for 
their purposes (in terms of the hardware and time required), the user is 
effectively bound entirely by the decisions of the maintainers of the 
software, and they cannot change it to suit their purposes.

In that sense, there *might* be room for the maintainers to intentionally and 
maliciously bind the user to the kinds of data they want to provide. Perhaps 
even more likely (and even more dangerous), when the data is opaque enough, 
there is room for the maintainers to bias the networks in obscure ways without 
telling the user. You can imagine this being used in the context of, say, text 
generation or translation, with the developers embedding a certain opinion of 
theirs into the network in order to bias people towards it.

But even when not done maliciously, this can still be limiting to the user if 
they are unable to easily train their own networks as a replacement.


