Re: [GNUnet-developers] update mechanism


From: Martin Uecker
Subject: Re: [GNUnet-developers] update mechanism
Date: Sun, 18 Aug 2002 18:42:12 +0200
User-agent: Mutt/1.4i

Hi Igor,

On Sat, Aug 17, 2002 at 10:45:21PM +0300, Igor Wronsky wrote:
> On Sat, 17 Aug 2002, Martin Uecker wrote:
> 
> > IMHO the solution to the spam problem is trust metrics.
> > This basically solved the problem on the web (Google, PageRank).
> 
> How do you propose to do this on gnunet? I admit I don't
> know PageRank by heart, but I think it was based on
> reasoning about how web pages link to one another.
> Is there any natural measure now that can be used?

We can start by rating down submitters who provide false
metadata that can be automatically checked (SHA-1, MD5 and
other hashes). This is more a protection against fraud than
spam prevention, but still a good idea.

Then a node could search for files it has locally stored
and rate highly the people who submitted the same files.
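
A minimal sketch of the idea in Python (the trust database,
score values and function name are my own illustration, not
anything that exists in GNUnet):

  import hashlib

  # Hypothetical local trust database: submitter id -> score.
  trust = {}

  def rate_submission(submitter, claimed_sha1, content=None, local_sha1s=()):
      # If we have the data, verify the advertised SHA-1 against it.
      if content is not None:
          if hashlib.sha1(content).hexdigest() != claimed_sha1:
              # Provably false metadata: rate the submitter down.
              trust[submitter] = trust.get(submitter, 0) - 10
              return
      # Submitter advertised a file we already store and have
      # verified ourselves: rate them up a little.
      if claimed_sha1 in local_sha1s:
          trust[submitter] = trust.get(submitter, 0) + 1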

> I don't see how content-level trust metrics could be done 
> without forcing the users to explicitly rank some 
> content/submitter as good or bad.

The user benefits from doing so because he might get much
better search results if his client ranks the results based
on his personal trust database.
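
For example, assuming each search result carries the
submitter's identity (a sketch building on the trust
database above):

  def rank_results(results, trust):
      # results: list of (submitter, r_block) pairs; unknown
      # submitters default to a neutral score of 0.
      return sorted(results,
                    key=lambda r: trust.get(r[0], 0),
                    reverse=True)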

> If the trust
> was entirely local (and not published), it would
> have to be possible to query only for the content
> inserted by the trusted people (trusted by the user) 
> because otherwise the spam gets propagated again
> (even if locally filtered out in the end). 

The actual data would expire after some time because it
would never be requested. The problem is the R-blocks, which
would still be propagated. This is a problem unique to the
GNUnet design, where a node cannot know whether the data for
a certain R-block is ever requested.

There are two things which must be done to get rid of those
blocks:

* The node storing those blocks should not have an economic
  advantage (earn reputation) by returning them as search
  results.

* The node must somehow learn this fact.

This could be done with trust metrics, because a node can
evaluate the trust of the submitter and discard the R-block
if the submitter is not trusted by the community to provide
good (meta)data.
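
A sketch of such a filter, assuming the community trust
evaluation has already been condensed into a single score
per submitter (the threshold value is arbitrary):

  TRUST_THRESHOLD = 0  # assumed cutoff, not a real GNUnet constant

  def keep_r_block(submitter, trust):
      # Unknown submitters get the benefit of the doubt; known
      # bad ones fall below the threshold and their R-blocks
      # are dropped instead of propagated.
      return trust.get(submitter, 0) >= TRUST_THRESHOLD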

Another way, which is much easier to implement, is feedback
messages. But those are problematic because they might be
misused.

> If e.g. Bob is a person we trust, and we want only stuff
> inserted by Bob, there should be something producible
> only by Bob, but that could be verified by any
> node right when it's passing through and of course queried 
> by us (we need a query like "give me all rootnodes of 
> insertions by Bob and only Bob to the forum Plants on 
> day so-and-so"). I think Christian once tried to explain 
> to me how a similar idea could be done with hashes 
> and public keys, but I didn't quite get it. :(

You could make the hash of the pub key of the submitter a
keyword. It must then be checked that this hash matches the
signature of the R-block.
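
Roughly like this (verify_signature stands in for whatever
signature check the real R-block format uses; the field
names are made up):

  import hashlib

  def r_block_valid_for_keyword(keyword, r_block, verify_signature):
      # The keyword must be the hash of the submitter's pub key ...
      if hashlib.sha1(r_block["pubkey"]).hexdigest() != keyword:
          return False
      # ... and the R-block must actually be signed by that key.
      return verify_signature(r_block["pubkey"],
                              r_block["signature"],
                              r_block["data"])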

> With such a mechanism it would be possible to
> locally decide from whom to request messages,
> and perhaps take a small amount of messages
> inserted by unknown people in addition. Of course people 
> already trusted by us could introduce new pseudonyms 
> (perhaps w/ some certain message type) that we could 
> add to our local trustbook if so desired.
>
> This trust shouldn't be confused with the node trust.
>
> BTW, the same pseudonyms could be included in file
> insertion (voluntarily); we could in the same way
> look only for files inserted by certain pseudonyms.
> This also fits nicely with the planned collection/index
> /directory files, where spamming might be a problem.

Why pseudonyms? Individuals are best identified by the
hashes of their pub keys. Those can't be hijacked because
the owner can prove with a signature that he is the owner of
his pub key.
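
The ownership proof is a simple challenge-response; sketched
here with placeholder sign/verify routines:

  import hashlib, os

  def prove_ownership(identity_hash, pubkey, sign, verify_signature):
      # The claimed identity must be the hash of the presented key ...
      if hashlib.sha1(pubkey).hexdigest() != identity_hash:
          return False
      # ... and only the holder of the private key can sign a
      # fresh challenge, so a hijacker cannot replay old signatures.
      nonce = os.urandom(20)
      return verify_signature(pubkey, sign(nonce), nonce)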


Martin





