gzz-dev

Re: [Gzz] hemppah's research problems document


From: Tuomas Lukka
Subject: Re: [Gzz] hemppah's research problems document
Date: Mon, 16 Dec 2002 17:05:09 +0200
User-agent: Mutt/1.4i

On Mon, Dec 16, 2002 at 04:18:26PM +0200, address@hidden wrote:
> Quoting Tuomas Lukka <address@hidden>:
> 
> > > > > 
> > > > > Is this kind of model possible in Gzz p2p ?
> > > > 
> > > > Do you see any reason it would not be?
> > > 
> > > My initial reaction is that this is possible.
> > > 
> > > However, I'm not 100% sure about this, because you and Benja have
> > > sometimes mentioned something about fixed block lengths and I don't
> > > *really* know, in practice, what you mean by it (yes, it's a fixed
> > > size... but), and how it differs from Overnet's scenario (?).
> > 
> > I have no idea what you're talking about ;)
> > 
> 
> Let's put this another way: Is it possible to fragment Storm blocks
> (e.g. each block consists of a number of fixed-size "miniblocks", say A,
> B and C, each with a unique hash, like in Tiger-tree hashing) so that we
> are able to fetch these miniblocks from different sources (for more
> efficient download when fetching *big* blocks)? If the answer is yes, I
> don't see any reason why we could not implement Overnet-like block
> fetching in Gzz.
> 
> Comments ?
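The fragmentation scheme asked about above can be sketched roughly as
follows. This is only an illustration, not Storm's actual format: SHA-256
stands in for the Tiger hashes used in Tiger-tree hashing, and the
miniblock size is a toy value.

```python
import hashlib

MINIBLOCK_SIZE = 4  # tiny, for illustration; a real system might use e.g. 64 KiB

def split_block(data: bytes):
    """Split a block into fixed-size miniblocks; the last may be shorter."""
    return [data[i:i + MINIBLOCK_SIZE] for i in range(0, len(data), MINIBLOCK_SIZE)]

def miniblock_hashes(miniblocks):
    """One hash per miniblock, so each piece can be verified on its own,
    no matter which source it was fetched from."""
    return [hashlib.sha256(m).digest() for m in miniblocks]
```

Given the hash list, each miniblock can be requested from a different peer
and checked on arrival, which is what makes Overnet-style parallel
downloading possible.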

The problem is finding the hashes of the miniblocks from the block id: if
someone gets to do this in a hostile way, you could download a lot of
miniblocks only to discover in the end that you got a lot of garbage,
because one of the miniblocks is wrong.
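That danger goes away if the block id itself commits to the miniblock
hashes, e.g. as the root of a hash tree over them: then the hash list can
be authenticated before fetching anything, and a bad miniblock is rejected
the moment it arrives rather than at the end. A minimal sketch; SHA-256
and the Merkle construction here are illustrative assumptions, not
anything Storm specifies.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes):
    """Combine leaf hashes pairwise until one root remains."""
    level = list(leaf_hashes)
    while len(level) > 1:
        level = [sha(b"".join(level[i:i + 2]))
                 for i in range(0, len(level), 2)]
    return level[0]

def fetch_block(block_id, get_hash_list, get_miniblock):
    # Step 1: authenticate the hash list against the block id,
    # so a hostile node cannot hand us bogus per-miniblock hashes.
    leaf_hashes = get_hash_list()
    if merkle_root(leaf_hashes) != block_id:
        raise ValueError("hash list does not match block id")
    # Step 2: fetch each miniblock (possibly from different peers)
    # and verify it immediately, instead of discovering garbage
    # only after everything has been downloaded.
    pieces = []
    for i, h in enumerate(leaf_hashes):
        piece = get_miniblock(i)
        if sha(piece) != h:
            raise ValueError(f"miniblock {i} is corrupt")
        pieces.append(piece)
    return b"".join(pieces)
```

Here `get_hash_list` and `get_miniblock` are hypothetical callbacks
standing in for whatever network fetch the p2p layer would provide.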

But aside from that, this should be no problem.

Of course, we're probably hoping that Storm blocks are not terribly big.

        Tuomas


