
Re: [Mldonkey-users] Disk space preallocation


From: Markus Hitter
Subject: Re: [Mldonkey-users] Disk space preallocation
Date: Tue, 6 May 2003 13:03:35 +0200


On Tuesday, 06.05.03 at 09:09, Christian Lange wrote:

> I don't know which filesystem OS X uses, but it obviously supports
> sparse files by default.

That's UFS, a not-too-recent version of FreeBSD's UFS which ships with OS X.

> Since this behaviour can lead to heavy fragmentation, mldonkey allocates disk space in 9.5 MB chunks.
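For reference, the difference between the two behaviours is easy to demonstrate. A minimal OCaml sketch using plain Unix library calls (this is not MLDonkey's actual allocation code, and the file names are made up):

  (* Seeking past EOF and writing a single byte creates a hole on
     filesystems that support sparse files. *)
  let () =
    let fd = Unix.openfile "sparse.tmp" [Unix.O_WRONLY; Unix.O_CREAT] 0o644 in
    ignore (Unix.lseek fd (10 * 1024 * 1024) Unix.SEEK_SET);
    ignore (Unix.write fd (Bytes.make 1 '\000') 0 1);
    Unix.close fd;
    (* Writing real zeroes forces the filesystem to allocate every
       block; per-chunk preallocation amounts to doing this chunk by
       chunk as each chunk is started. *)
    let fd = Unix.openfile "prealloc.tmp" [Unix.O_WRONLY; Unix.O_CREAT] 0o644 in
    let block = Bytes.make 65536 '\000' in
    for _ = 1 to (10 * 1024 * 1024) / 65536 do
      ignore (Unix.write fd block 0 65536)
    done;
    Unix.close fd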

Let me guess how my files got corrupted:

1) I put some downloads into the queue which wouldn't fit on disk once fully downloaded.

2) Either by copying files around or as downloads progressed, the disk ran out of space.

3) After I deleted the .ini files and recovered the temp directory, MLDonkey considered these files, truncated by the lack of disk space, to be valid, re-calculated all those md4's and, worse, started sending out the bad chunks as valid[1].

Currently, I get errors like "BAD BAD BAD: number of chunks is different 15/9 for D2FB464884C3D8003A083807B3B9D9DC:77828096 on peer" and "217.126.70.178 is suspicious of level 1; client 86: block 17 corrupted"

While there may be bad packets out on the net, if I get the same "bad" packet five times, I'd rather suspect that my own client's checksumming mechanism isn't working. MLDonkey seems unable to handle bad chunks on its own disk; "Verify chunks" seems to be a no-op.
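For what it's worth, re-checking chunks against stored hashes is straightforward in principle. A sketch of the idea, using the standard Digest module (MD5) as a stand-in for ed2k's md4, and a hypothetical stored_hashes list for the per-chunk digests MLDonkey keeps:

  let chunk_size = 9728000  (* the ed2k chunk size, roughly 9.5 MB *)

  (* Re-read each chunk from disk, re-hash it, and complain about
     every chunk whose digest doesn't match the stored one. *)
  let verify_chunks path stored_hashes =
    let ic = open_in_bin path in
    let size = in_channel_length ic in
    List.iteri
      (fun i expected ->
        let pos = i * chunk_size in
        if pos < size then begin
          seek_in ic pos;
          let actual = Digest.channel ic (min chunk_size (size - pos)) in
          if actual <> expected then Printf.printf "chunk %d corrupted\n" i
        end)
      stored_hashes;
    close_in ic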


I'm willing to add some code to MLDonkey to make the protocol more robust. Some ideas:

- On user request, or if a file collects lots of errors, discard all information about the partially downloaded file (number and md4's of chunks) and re-fetch it from the net.

- If a download gets cancelled, remove all local information about it. For example, I had a hard time finding out how to download a file a second time (because the first copy went bad locally). There is "force_download", but I neither found out how to make it work, nor am I sure whether this command re-fetches all the meta-information about the file, too.

- Ask more than one peer for partial checksums, especially after a bad packet has arrived; our own information about the chunk could be bad as well. (A minimal voting sketch follows this list.)

- If something goes wrong, the download should be halted and the user should get some feedback. Blindly re-downloading bad chunks doesn't seem to be sufficient.

- Don't calculate any chunk information from partially downloaded files on the local disk.

- Check downloads with an external tool before they are committed, e.g. "unzip -t ...". This would help to sort out entirely bad files a little. (A sketch follows below.)
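To illustrate the cross-checking idea above: once several peers have answered, a simple majority vote over their replies would flag the odd one out, which might well be our own record. A sketch only; how peers actually report chunk hashes is glossed over here by representing them as plain strings:

  (* Tally the hash values reported for one chunk and return the most
     frequent one with its count; whoever disagrees is suspect. *)
  let majority (hashes : string list) : (string * int) option =
    let tally = Hashtbl.create 7 in
    List.iter
      (fun h ->
        let n = try Hashtbl.find tally h with Not_found -> 0 in
        Hashtbl.replace tally h (n + 1))
      hashes;
    Hashtbl.fold
      (fun h n best ->
        match best with
        | Some (_, bn) when bn >= n -> best
        | _ -> Some (h, n))
      tally None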
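And the external-tool check could be as simple as a table of test commands keyed by file extension. The table below is made up for illustration:

  let checkers = [ ".zip", "unzip -t"; ".gz", "gzip -t" ]

  (* Return true if the file may be committed: either no test command
     is known for its extension, or the test command succeeded. *)
  let check_before_commit path =
    match List.assoc_opt (Filename.extension path) checkers with
    | None -> true
    | Some cmd ->
        Sys.command (Printf.sprintf "%s %s" cmd (Filename.quote path)) = 0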


Cheers,
Markus


[1] At one point I noticed a single chunk of a file that couldn't be found for several days. After some starts and stops, and probably some other actions on the file system, this chunk was suddenly complete and was sent out to the other peers waiting for it.
- - - - - - - - - - - - - - - - - - -
Dipl. Ing. Markus Hitter
http://www.jump-ing.de/






