
Re: [GNUnet-developers] bug: Session key exchange denied etc


From: I. Wronsky
Subject: Re: [GNUnet-developers] bug: Session key exchange denied etc
Date: Thu, 9 May 2002 01:17:08 +0300 (EEST)

On Wed, 8 May 2002, Christian Grothoff wrote:

> The first issue you mention is a bug, I've fixed it in CVS. 

See, I didn't even need contrib/report.sh to recognize that it
was really there and not just my imagination. Anyway, I'll
humour the other guy and post the specs at the end of this
message. ;)

> The second is a 
> bit more intricate. '128' is the default maximum number of nodes that a GNUnet
> node connects to at the same time. 128 is the size of the hashtable that is
> kept by the connection module. If a host sends us a request to form a 
> connection, we first check if we know that host, if not, we print
> "Session key exchange denied, host XXX unknown!". Unless the other side 
> violates the protocol, this should not happen (error). 

I see it like this: you have over 300 files in the data/hosts/
directory. You have a maximum of 128, an upper limit after which the
loading of hosts into the table is stopped by storage.c/scanDirectory().
Most of the 300 are unresponsive. You try to contact some of the
*ones that were loaded*. Some of the others try to contact you. Is
there a chance that precisely the host that wants to contact you was
never loaded from data/hosts/, so it gets rejected? Then we end up
rejecting good, active hosts because we happened to load other,
inactive nodes instead?
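
To make sure we're talking about the same mechanism, here is roughly how
I picture it (my own names and simplifications, NOT the real
connection.c/storage.c code):

#include <stdio.h>
#include <string.h>

#define MAXNODES 128              /* cap on hosts kept in the table */

struct HostEntry {
    char id[64];                  /* stand-in for the real host identity */
};

static struct HostEntry table[MAXNODES];
static int tableSize;             /* how many were loaded from data/hosts/ */

/* called while scanning data/hosts/; silently stops at the cap,
   so host file #129..#300 never makes it into the table */
static void addKnownHost(const char *id)
{
    if (tableSize >= MAXNODES)
        return;
    strncpy(table[tableSize].id, id, sizeof table[tableSize].id - 1);
    tableSize++;
}

/* what (I think) happens when a session key request arrives */
static int acceptSessionKey(const char *senderId)
{
    int i;
    for (i = 0; i < tableSize; i++)
        if (strcmp(table[i].id, senderId) == 0)
            return 1;             /* known -> key exchange proceeds */
    printf("Session key exchange denied, host %s unknown!\n", senderId);
    return 0;                     /* live but unloaded host gets turned down */
}

int main(void)
{
    addKnownHost("nodeA");        /* pretend only this one fit under the cap */
    acceptSessionKey("nodeB");    /* nodeB is live, but was never loaded */
    return 0;
}

If that picture is right, whether a live node gets accepted depends purely
on whether its host file happened to be among the first 128 scanned.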

I admit I do not know exactly how it should behave, but
this is how it happened here: either [increasing MAXNODES to
512, the next step up above the 300 host files] or [limiting
the files in hosts/ to a couple of currently working hosts
(a number well under 128) and keeping MAXNODES at 128] makes
it work: the rejection messages disappear, content is transmitted
and it starts to write out credit files. Why? A good question.

> connections with 'bad' ones. We MUST limit the number of concurrent 
> connections because each connection slot in the hashtable costs about 2k of 
> memory, so 128 connections are about 256k. If the user wants to 'donate' more 
> memory to gnunetd, it is possible, but this is already a lot.
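
(By that arithmetic, the 512 I bumped it to comes to roughly
512 * 2k = 1M for the table, which I'm happy enough to donate here,
so fair enough that 128 stays the default.)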

You call it a connection too if there are 128 non-responding
nodes loaded into the table (note that readdir() always returns
the files in the same order; no amount of waiting or rescanning
will change that order unless files in hosts/ are deleted by
someone), whereas active nodes try to connect to you and are
turned down as 'bad'? :) Certainly loading the same 128
files on every cronScanForHosts() doesn't do any good.
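
For what it's worth, any plain readdir() loop with a cutoff behaves
like that. Something along these lines (my own sketch, not the actual
scanDirectory()):

#include <dirent.h>
#include <stdio.h>

#define MAXNODES 128

static int scanHosts(const char *dirname)
{
    DIR *dir = opendir(dirname);
    struct dirent *entry;
    int loaded = 0;

    if (dir == NULL)
        return -1;
    /* readdir() returns entries in whatever order the filesystem keeps
       them; that order stays the same from run to run unless files are
       added or removed, so with >128 files the SAME 128 win every pass */
    while ((entry = readdir(dir)) != NULL && loaded < MAXNODES) {
        if (entry->d_name[0] == '.')
            continue;             /* skip "." and ".." and dotfiles */
        printf("loading host file %s\n", entry->d_name);
        loaded++;
    }
    closedir(dir);
    return loaded;
}

int main(void)
{
    return scanHosts("data/hosts") < 0;
}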

> but note that it is not a bug, it was intended like this.

I'm not trying to deny this. Just, most humbly, pointing out
that the system should scale down as well as up, and the
current situation with "seednodes" (or hosts.tar.gz)
containing mostly inoperative nodes, more of them than the
MAXNODES limit, should also be handled gracefully. This
makes me wonder whether every separate node loads the same
128 hosts, the first ones that were extracted from the .tar.gz,
or is there some randomness caused by the filesystem? At least
there's none programmed into the dirent.h readdir() which
scanDirectory() uses.
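
Just to illustrate what "pick a random 128 instead of the first 128"
could look like, plain reservoir sampling over the directory would do.
Again only a sketch from me, not a patch against the real storage.c:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MAXNODES 128
#define NAMELEN  256

/* fill 'picked' with a uniformly random subset of (at most) MAXNODES
   file names from dirname; returns how many were picked, -1 on error */
static int pickRandomHosts(const char *dirname, char picked[][NAMELEN])
{
    DIR *dir = opendir(dirname);
    struct dirent *entry;
    int seen = 0;

    if (dir == NULL)
        return -1;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;
        if (seen < MAXNODES) {
            strncpy(picked[seen], entry->d_name, NAMELEN - 1);
            picked[seen][NAMELEN - 1] = '\0';
        } else {
            /* keep the new entry with probability MAXNODES/(seen+1) */
            int j = rand() % (seen + 1);
            if (j < MAXNODES) {
                strncpy(picked[j], entry->d_name, NAMELEN - 1);
                picked[j][NAMELEN - 1] = '\0';
            }
        }
        seen++;
    }
    closedir(dir);
    return seen < MAXNODES ? seen : MAXNODES;
}

int main(void)
{
    static char picked[MAXNODES][NAMELEN];
    srand((unsigned) time(NULL));
    printf("picked %d host files\n", pickRandomHosts("data/hosts", picked));
    return 0;
}

That way different nodes (and different runs) would at least spread
their 128 slots over the whole seed list instead of everyone hammering
the same first 128 files.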

> > ps. these bugs might cause severe problems to the
> > network in case everyone else too gets the 300 nodes
> > and has her max set to 128. Atleast here the whole
> > credit thing went down the drain and queries were
> > exchanged only with a couple of nodes out of the
> > several that sent HELOs.
> HELOs are NOT equivalent to nodes that want to establish a connection. SKEYs 
> are used to establish connections. I don't quite see how this entire issue 
> has any impact on the credibility system, you may want to elaborate.

Ah. My only evidence is that after I "corrected" the
previous problem, the system also started to write files
to data/credit/, which it somehow didn't do while the number
of files in data/hosts/ exceeded MAXNODES. Even in that case
there was some query traffic between a couple of nodes; my node
just didn't write out credit/ entries for them. I don't think
the respective function even got called, as there was no debug
output from it (the one stating credit and liveness).

Oh well. ;) Take it easy, don't lose hair over my bug
reports. And if I've made a mistake, or have based this on
false assumptions, rtfm is enough of a reply. ;)


--------------------------------------------------------------
OS             : Linux
OS RELEASE     : 2.4.18
HARDWARE       : i586
OpenSSL Version: OpenSSL 0.9.6b [engine] 9 Jul 2001
gcc version    : gcc (GCC) 3.1 20020314 (Red Hat Linux Rawhide 3.1-0.23.1)
gcc version    : Copyright (C) 2002 Free Software Foundation, Inc.
gcc version    : This is free software; see the source for copying conditions.  There is NO
gcc version    : warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
gcc version    : 
Gnu gmake      : 3.79.1
autoconf       : 
automake       : 1.4-p5
--------------------------------------------------------------

Yeah, blame it all on rawhide gcc. ;)





