Re: Workarounds for advanced RAID features in GRUB Legacy?
From: Leif W
Subject: Re: Workarounds for advanced RAID features in GRUB Legacy?
Date: Sun, 11 Sep 2005 09:01:20 -0400
From: "Tim Woodall" <address@hidden>
Sent: 2005 September 11 Sunday 06:00
> I can't really see the point in mixing raid1 with anything else on a
> pair of disks but even so, I'd have
Oh yeah, certainly. Sorry for the confusion; I should elaborate. Though
it's veering off the GRUB topic, this background info may apply.

Four discs: 2x PATA @ 300GB and 2x SATA @ 400GB. The RAID 0 members are
the 300GB partitions (the whole PATA discs plus a 300GB partition on
each SATA disc), and the RAID 1 mirrors the two RAID 0 sets (so, RAID
0+1). That leaves 100GB partitions on the 2x SATA discs, where I'll
stuff as many OSes as I can and separate data spaces, RAID 1 when I
can. :-)
At the least, I'm looking for two non-RAIDed WinXP installs (one on each
disc). If WinXP can't play ball with the grown-ups, then it can go
without redundancy. ;-) Besides, it has a higher probability of breaking
than my hardware does, so I'd just be mirroring the same broken OS. It
often requires a reformat and reinstall after 6-12 months anyway.
Then I'd like two installs of Linux, both RAID 1, just in case I really
mess one up and need something ready to go in a hurry. RAID 1 will
mirror the user error; a separate, unmounted partition will not. Two
Linux and two Windows swap partitions (one on each disc). And two data
partitions per disc: Linux mirrored (2/3 of the remaining space), and
WinXP just with two separate partitions, so that data (like software
projects) doesn't live and die with the distribution.
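For concreteness, the layout above might be sketched roughly like this.
All device names and the exact striping pairs are my assumptions, not
something stated here; a real setup would use whatever nodes the kernel
assigns:

```
# Hypothetical partition plan (device names are assumptions)
/dev/hda1  300GB  PATA disc 1, whole disc       -> RAID 0 member
/dev/hdc1  300GB  PATA disc 2, whole disc       -> RAID 0 member
/dev/sda1  300GB  SATA disc 1, first partition  -> RAID 0 member
/dev/sdb1  300GB  SATA disc 2, first partition  -> RAID 0 member
/dev/sda2+ 100GB  SATA disc 1, OS/data partitions (some RAID 1)
/dev/sdb2+ 100GB  SATA disc 2, OS/data partitions (some RAID 1)

# /etc/mdadm.conf sketch: two RAID 0 stripes, mirrored as RAID 0+1
ARRAY /dev/md0 level=raid0 num-devices=2 devices=/dev/hda1,/dev/hdc1
ARRAY /dev/md1 level=raid0 num-devices=2 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md2 level=raid1 num-devices=2 devices=/dev/md0,/dev/md1
```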
Now, there may be things I've overlooked; it's a work in progress, a
hobby I've been nudging along for some time now.
> I don't run WinXP but this looks like a limitation in WinXP. If WinXP
> requires you to raid the entire disk then that is what you are going
> to have to do for WinXP.
Well, I'm just basing that on my experience with the stock utilities in
WinXP, and the disc management console didn't give me an option to
select partitions to be part of a RAID array. With the drivers and
utilities for the SiI3112 chipset, WinXP can do only RAID 0 or RAID 1,
only on SATA, and only on the entire disc. And it will happily attempt
to make a RAID 1 array out of the very drive it is running from! It's
the computer equivalent of watching something so mind-bogglingly stupid,
like Jack.*s, that you can only laugh. Actually, I might end up keeping
that OS on another system with very similar hardware.
> I suspect, however, that if you raid the whole disk but then in linux
> treat the end of each disk as non-raid and then run software raid in
> linux it will still all work correctly. (for raid1 at least, although
> you definitely want to check that rebuilding the mirror in windows
> doesn't break the linux raid)
If I had more time and energy I'd be curious to see the results myself.
Probably worth a good laugh at worst if it fails miserably, and at best,
utter astonishment if nothing breaks.
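If anyone does try that experiment, a minimal sketch with Linux
software RAID over the trailing partitions might look like this. The
device and partition names are assumptions on my part (the Windows
BIOS-RAID would be layered over the front of the discs):

```shell
# Sketch only: assumes /dev/sda and /dev/sdb are the SATA discs, with
# /dev/sda3 and /dev/sdb3 as trailing partitions the Windows RAID
# never touches. Build a two-way Linux mirror over them:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Watch the array state (and any resync in progress):
cat /proc/mdstat
```

Then, per Tim's caveat, force a rebuild of the Windows mirror and check
afterwards that /proc/mdstat still shows the Linux array clean.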
> It's not unusual when a disk fails, especially IDE disks, for it to
> crash the machine hard so you are going to have to reboot. And it's
> also not unusual for the bios to crash if there is a failed disk,
> meaning you can't boot until you unplug the failed disk. (Almost all
> my experiences of failed disks have either been bad sectors (which
> hopefully raid1 will compensate for) or failed to start after a
> shutdown (where bios hanging isn't uncommon) - but it's not like I'm
> in a datacentre (which would all
My experience was most likely due to thermal issues. Despite my concern
at the volume (and rotational torque ;) of my case, with its 15 or so
fans and specialized heat sinks, several discs died during some hot
weather with no A/C, and I don't chalk it up to coincidence. The system
didn't crash hard. The discs would start knocking audibly, and access
times crawled. After a while a disc would be recognized by the BIOS but
not the OS, and then no longer recognized by the BIOS at all, as if
nothing were there. Air flow and temperature monitoring are more
important to me now, as are some redundancy and external backups, so I
can at least limp along.
> If you are very lucky the machine will stay up, mdadm will flag the
> disk bad and you will get a message. A bit less lucky and the machine
> will crash but a power cycle will be sufficient. Worst case, you will
> have to unplug the cable from the failed disk in order to get
> everything up again. Once you have the new disk you can install in
> place of the failed disk and rebuild the raid (or have another spare)
Well if that's the way it is, I can understand. So long as it's not
something that can be configured away.
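For reference, the replace-and-rebuild step described above usually
comes down to a short mdadm sequence. This is only a sketch; /dev/md0
and /dev/sdb1 are assumed names, not anything from this thread:

```shell
# Mark the dying disc's member as failed, then remove it from the array:
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# After physically swapping in the new disc and partitioning it
# identically, add the replacement and let the mirror resync:
mdadm /dev/md0 --add /dev/sdb1

# Resync progress shows up here:
cat /proc/mdstat
```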
Thanks again for the commentary.
Leif