From: Duncan
Subject: [Pan-users] Re: The "no-break-space" problem, aka "blank" posts, that shouldn't be
Date: Sat, 12 Dec 2009 19:53:22 +0000 (UTC)
User-agent: Pan/0.133 (House of Butterflies)

walt posted on Sat, 12 Dec 2009 10:48:49 -0800 as excerpted:

> Yes I update every day, stubborn as a mule.  I'm lucky that I've never
> suffered disk corruption, I suppose, but I *don't* use experimental
> filesystems like ext4.  I leave that to the big kids :o)

I used to do that too (though not every day, as I don't /reboot/ 
every day), but something like 3-5 kernels ago, there was a period 
during the merge window when md-raid was doing strange things, and 
another around the same time (maybe the previous or next kernel), a 
similar merge-window period when reiserfs was known to be broken.  I 
run both of those!

As I don't follow lkml (I've wanted to but I understand the volume is 
very high, and unfortunately I'm interested in too many technical details 
in too many areas so I'd probably get frustrated pretty quickly... but 
I've always wanted to... someday...), I'd have no way of knowing about 
such things, no way of knowing the timing for various multi-merge 
sequences (tho with Linus doing git merges, he /usually/ pulls those all 
in one go), etc.  So I've just thought it better not to try anything in 
the merge window... until long enough after it that whatever weird stuff 
like those two things above will hopefully have leaked out... or at least 
I've read about the big merge-window merges.

I suppose eventually linux-next will iron out the worst of that and 
mainline merge-window should be /relatively/ safe to update during, but...

Meanwhile, the de facto way things go, even early "post-merge-window" 
isn't all that safe, because for particularly hairy merge candidates, 
Linus is known to wait until the relative stability of a day or two 
post-window, when everything else has settled a bit, to actually do 
the last merges that technically should only happen during the merge 
window.  Because I want to wait a couple of days after that to see if 
any of you /brave/ guinea pigs are having your filesystems eaten for 
breakfast or something, and rc2 is only a week after rc1, de facto I 
wait until rc2.  But I usually get in at either rc2 or rc3, and go 
from there.  But of 
course that risks issues like this last time, when the tree is so much 
further ahead that a simple revert isn't so simple any more... not to 
mention the bisect itself being several more rounds by then. =:^(
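For what it's worth, the bisect rounds mentioned above go roughly 
like this; a minimal sketch, assuming a local kernel git clone and 
made-up good/bad versions:

```shell
cd ~/src/linux                  # assumed clone location
git bisect start
git bisect bad  v2.6.32-rc2     # first version showing the problem (assumed)
git bisect good v2.6.31         # last known-good version (assumed)
# build, boot, and test each candidate git checks out, then mark it:
#   git bisect good    # or: git bisect bad
# repeat until git names the first bad commit, then clean up:
git bisect reset
```

Each extra development cycle between good and bad adds roughly one 
more build-boot-test round, which is why jumping in later makes the 
bisect that much more painful.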

Meanwhile, as long as the bug isn't so bad it starts scribbling on 
the wrong partition or mdraid, I'm /reasonably/ well covered, as my 
first set of backups is on a second set of mdraids on the same disks, 
kept unassembled (and their partitions unmounted, of course) unless 
I'm either backing up or restoring something.  For awhile, that was 
all I had, but I recently saw that Fry's had 1 TB external USB disks 
on sale for $80, and snagged one, so that's my second set of backups 
now, on an external drive that, while it /is/ usually connected, is 
nearly always physically turned off.  That's what I used when I tore 
down my previous set of RAIDs and redid what was mostly mdraid-6 as 
mdraid-1.
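A minimal sketch of what one backup cycle on a setup like that might 
look like; the array name, member partitions, mount point, and rsync 
paths are all assumptions, and it needs root:

```shell
mdadm --assemble /dev/md10 /dev/sda5 /dev/sdb5   # bring the backup raid up
mount /dev/md10 /mnt/backup
rsync -aHAX --delete /home/ /mnt/backup/home/    # refresh the backup copy
umount /mnt/backup
mdadm --stop /dev/md10      # leave it unassembled between backup runs
```

Keeping the backup array stopped and unmounted between runs is what 
limits the damage if a buggy kernel starts scribbling on whatever is 
assembled and mounted.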

In my scenario (lots of parallel reading during boot, and during the 
initial pan load, among other things) that's actually faster anyway, and 
as I have several of them now, all write-intent-bitmapped, recovery 
in the event of an issue like the dropping-drive bug I was/am dealing 
with is now MUCH faster as well, because (1) it's only the active 
mdraids, and 
(2) the write-intent bitmaps reduce the syncs even there to only a small 
portion of the raid-capacity.
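For reference, a write-intent bitmap can be added to an existing 
array after the fact; a sketch with an assumed device name, run as 
root:

```shell
mdadm --grow --bitmap=internal /dev/md3   # add an internal write-intent bitmap
mdadm --detail /dev/md3 | grep -i bitmap  # confirm it's active
```

After an unclean shutdown or a re-added member, the resync then only 
covers the regions the bitmap marks dirty, rather than walking the 
whole array capacity.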

I still have the kernel git on mdraid-0, since it's easily redownloaded 
(Linus: Real men let the Internet be their backup) so I don't need 
redundancy there, and with dual copies of most of my RAID-1s, on only 300 
gig disks, I'm squeezing for space so the 4X multiplier of the 4-spindle 
raid-0 is nice!  But given the fragmentation constantly checking out 
various git sequences will do to the data, I suspect 4X-parallel raid-1 
(with kernel seek scheduling doing several at once) would be as fast or 
faster than 4X-serial raid-0, for that as well.  The same applies to my 
gentoo package tree data and ccache, also on that mdraid-0.  (ccache 
can't be easily redownloaded, of course, but with a dual-dual-core 
system the rebuild cost is medium-low and entirely transparent.  But since 
the builds themselves are normally on tmpfs and building is more cpu-
intensive than io-intensive, even with ccache, cold-disk-cache-io isn't 
typically a bottleneck there anyway, as it is with git and distro repo 
operations.)
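In /etc/portage terms, the build setup described above might look 
something like this; a hypothetical fragment, with the sizes, paths, 
and job count invented:

```shell
# /etc/fstab line putting the portage build dir on tmpfs:
#   tmpfs  /var/tmp/portage  tmpfs  size=4G,uid=portage,gid=portage  0 0

# /etc/make.conf:
FEATURES="ccache"
CCACHE_DIR="/var/tmp/ccache"    # lives on the mdraid-0 mentioned above
CCACHE_SIZE="2G"
MAKEOPTS="-j4"                  # dual-dual-core: four parallel build jobs
```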

BTW, I don't know if you do kde builds, but I've sure noticed how much 
more parallelizable kde4 is compared to kde3!  Getting off their old 
build system and onto cmake was an extremely good move in /that/ regard, 
that's for SURE!
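The parallelism is visible in the usual out-of-source cmake build 
sequence; a minimal sketch, with the source path a placeholder and 
the job count an assumption:

```shell
mkdir build && cd build
cmake /path/to/kde-module-source   # configure; generates ordinary makefiles
make -j4                           # cmake-generated makefiles parallelize well
```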

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
