
Re: [GNUnet-developers] Trust Inflation / Deflation


From: Alen Peacock
Subject: Re: [GNUnet-developers] Trust Inflation / Deflation
Date: Tue, 22 Mar 2005 21:40:23 -0700

Christian,

  Thanks for the in-depth answers.  You've filled in some nice gaps in
my understanding and swatted away several of my concerns.  I have a
few followup questions:


On Tue, 22 Mar 2005 16:01:18 -0500, Christian Grothoff
<address@hidden> wrote:
> In
> GNUnet, we limit outgoing priorities to a range within a constant factor of
> the observed average of requests that we receive (something like 2 *
> (1+avg-received)).  This ensures that our outgoing priorities are 'plausible'
> and competitive.  If the average item is sold for X, it does not help to
> offer 100 * X  -- in particular since your trust-score is so bad that you're
> not good for X/2.  GNUnet nodes do not try to "guess" on their popularity
> (trust rating) with other nodes.

  Okay, got it.  So outgoing request priorities are capped at
somewhere just over 2x the average incoming priority.  This seems
entirely reasonable, but I'm curious whether any reasoning, or
observations from collected data, went into choosing that value?
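For concreteness, the cap Christian describes could be sketched like this (a toy Python illustration; the function names are my own, and this is not GNUnet's actual C code):

```python
def priority_cap(avg_received: float) -> float:
    """Cap outgoing priority at 2 * (1 + average observed incoming
    priority), as described in Christian's reply."""
    return 2.0 * (1.0 + avg_received)

def choose_outgoing_priority(desired: float, avg_received: float) -> float:
    """Never bid above the 'plausible and competitive' ceiling,
    however much we might want the content."""
    return min(desired, priority_cap(avg_received))
```

So with an observed average of 4, even a node willing to offer 100 would bid at most 10.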

  Also, how does a gnunet client pick an increment from its previous
(rejected) priority?  Have you observed that one strategy for picking
the increment tends to work better than others?

  I think it would be fascinating to simulate /just/ this part of the
trust system, and use [insert your favorite machine learning technique
(or other search) here] to determine what the best strategy is for
getting requests answered fast, without 'spending' too much trust.  I
wonder particularly about the effects of diversity -- in a
'population' of diverse individuals implementing diverse strategies,
the optimal choices would certainly vary with the overall
characteristics of the population.
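A minimal version of the population simulation I have in mind might look like this (every rule here -- the starting trust, the fixed per-round capacity, a random peer earning each payment -- is invented for illustration and is not GNUnet's actual mechanics):

```python
import random

def simulate(strategies, rounds=1000, capacity=2, seed=0):
    """Toy bidding game: each round, every agent bids according to its
    strategy (bounded by its trust); the `capacity` highest bidders get
    served, pay their bids, and a random peer earns each payment."""
    rng = random.Random(seed)
    n = len(strategies)
    trust = [10.0] * n
    served = [0] * n
    for _ in range(rounds):
        bids = [min(trust[i], strategies[i](trust[i], rng)) for i in range(n)]
        winners = sorted(range(n), key=lambda i: bids[i], reverse=True)[:capacity]
        for w in winners:
            trust[w] -= bids[w]           # winner spends its bid
            served[w] += 1
            responder = rng.randrange(n)  # a random peer earns the bid
            trust[responder] += bids[w]
    return trust, served
```

Running a cautious strategy (bid 10% of trust) against an aggressive one (bid 90%) under varying capacity would start to answer the diversity question.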

  Maybe you have studied some of these issues (which is why I'm
asking).  Or maybe I'm making way too much of this, and the system
really reduces to a very simple behavior no matter what the individual
participants do, but my [limited] experience in financial and futures
markets tells me that strategies have to change as the individuals who
participate in the market change.  At any rate, don't take my
questions as criticism -- I'm just very interested in solutions that
are resilient to "gaming" -- Nash equilibria and all that.  Your work
in this area is very appealing.
 
 
> If the existing nodes would actually raise the priorities in the
> network to such extremely high values, the new node would only have to do
> very little to earn trust quickly (since fulfilling any of these requests
> would earn it a lot immediately).

  Ah, I hadn't considered that, though it seems obvious now.


>  However, it is unlikely that the amount of
> trust put into queries ever bubbles up like that, since, as described before,
> nodes try to minimize the amount of trust offered in order to make it just
> high enough to be above the threshold at which responders would drop the
> query to satisfy other requests.

  So are you saying that the system is resilient to 'bidding wars' in general?

  What if I modified a gnunet client so that its sole purpose was to
try to inflate pricing (doesn't care about actually getting its own
requests fulfilled but fulfills as many incoming requests as possible,
etc).  Would its effect be short-lived because it burns through
accumulated trust faster than its peers?  If I had an army of such
clients could I disturb the economy to such a point that nodes end up
spending way too much of their trust, too quickly, resulting in an
economy that is crippled by the loss of knowledge (of trust)?  Or
would it all be futile, my little misbehaving client army never being
able to impact the network significantly before burning itself out (of
trust)?

  I don't expect you to provide detailed answers to my imaginary
scenarios -- just your thoughts on resilience to trust-system attacks
in general.  I realize that it is very hard, perhaps impossible, to
design a system that is resilient to every conceivable attack, and I
know that addressing attacks wasn't the goal of the paper.  But I'm
sure there are a good number of attacks, some good and some poor (not
sure which category the preceding example falls into :) ), which
become possible only because of the economic model.  The above attack
could be considered a specialized DoS that just tries to negate, or at
least lower, the collective knowledge expressed as trust.  I'm
interested in any similar attacks on the trust system / countermeasures
that you've considered.


> Well, you assume that when more nodes join the network it has to become more
> busy.  However, more nodes also means more resources.  Now, yes, the network
> may not scale linearly, but the economy cannot solve basic scalability
> issues: if the network does not scale to a certain size, lots of requests
> will have to be dropped once the network grows beyond that size.  And if that
> is the case, having enough trust to guarantee performance in a network where
> most of the time requests are dropped will have to be a scarce occurrence
> (but it can still be earned by answering requests by others, so the economy
> is not disabled per-se).

  I did imply that the increase in traffic was a result of the network
getting larger, and that is generally true, as you point out -- linear
scaling can't be assumed.

  But for the sake of making the argument I should have made in the
first place :), assume instead that the network simply gets busier
with the same number of nodes.  I'm imagining a graph where the x-axis
is busyness, y-axis #1 is the number of unfulfilled requests, and
y-axis #2 is accumulated trust.  Isn't there a critical crossover
point after which trust no longer accumulates, but diminishes?
Shouldn't this crossover occur somewhere near the point at which nodes
are busy more than 50% of the time?
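The back-of-the-envelope model behind my intuition (the linear earn and spend rates are pure assumptions, not measured GNUnet data): a node earns trust during its idle fraction and spends trust in proportion to how busy the network is, so the net rate crosses zero at 50% busyness when the two rates are symmetric.

```python
def net_trust_rate(busyness: float, earn_per_served: float = 1.0,
                   spend_per_request: float = 1.0) -> float:
    """Net trust accumulated per unit time at a given network busyness b:
    earned while idle (fraction 1 - b), spent getting our own requests
    answered (in proportion to b).  Crossover at b = earn/(earn+spend)."""
    earned = earn_per_served * (1.0 - busyness)
    spent = spend_per_request * busyness
    return earned - spent
```

Asymmetric rates would shift the crossover away from 50%, which is itself a question worth asking of real data.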

  I recognize that this is an unfair question in at least one way --
the trust system in GNUnet is premised on the idea that the economy
/is excess-based/, and I'm asking about what happens when (for
whatever reason) resources are no longer in excess.

  Regards,
  Alen



