Removal of software RAID-1 from disk causes Linux system to enter grub rescue mode

From: rishi narian
Subject: Removal of software RAID-1 from disk causes Linux system to enter grub rescue mode
Date: Mon, 20 Mar 2017 11:06:25 -0500
Hello,

When I removed the RAID-1 array from the Linux system, the system dropped into *grub rescue* mode with an "lvmid/ not found" error. I am unable to recover or locate the GRUB configuration on the existing system, so I am blocked from booting into the OS.

The following is the existing configuration:
address@hidden:~# fdisk -l
Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Device Boot Start End Blocks Id System
/dev/sda1 2048 34613373 17305663 da Non-FS data
/dev/sda2 * 34613374 156248189 60817408 fd Linux raid autodetect
address@hidden:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 16.5G 0 part
└─sda4 8:4 0 58G 0 part
└─md0 9:0 0 58G 0 raid1
├─vg0-swap (dm-0) 252:0 0 1.9G 0 lvm [SWAP]
├─vg0-root (dm-1) 252:1 0 19.6G 0 lvm /
└─vg0-backup (dm-2) 252:2 0 19.6G 0 lvm
sr0 11:0 1 4.1G 0 rom
address@hidden:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid1 sda2[0]
60801024 blocks super 1.2 [2/1] [U_]
unused devices: <none>
address@hidden:~# pvs
PV VG Fmt Attr PSize PFree
/dev/md0 vg0 lvm2 a-- 57.98g 17.04g
address@hidden:~# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
backup vg0 -wi-a---- 19.54g
root vg0 -wi-ao--- 19.54g
swap vg0 -wi-ao--- 1.86g
address@hidden:~# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 3 0 wz--n- 57.98g 17.04g
address@hidden:~# cat /etc/mdadm/mdadm.conf
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=8b007464:369201ca:13634910:1d1d4bbf
name=R000001:0
address@hidden:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Sep 23 02:59:04 2015
Raid Level : raid1
Array Size : 60801024 (57.98 GiB 62.26 GB)
Used Dev Size : 60801024 (57.98 GiB 62.26 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Sat Mar 18 11:49:53 2017
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
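Note that there is no separate /boot partition here: /boot lives on the root LV, inside LVM, inside the md array, so GRUB's core image has to find the volume through its mdraid and lvm support (I assume that is where the lvmid/... name in the later error comes from). I believe a check like the following, run before the removal, would have shown that stack (the output is my assumption, not captured):

grub-probe --target=abstraction /boot
# expected output (assumption): diskfilter mdraid1x lvm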
*Removed the RAID-1 array using a Knoppix live CD.*
1. vgchange -a n vg0                  -> deactivated the volume group
2. mdadm --stop /dev/md0              -> successfully stopped the array
3. mdadm --zero-superblock /dev/sda2  -> zeroed the md array's superblock metadata
After doing this, there are no logical volumes and no RAID-1 array, as expected.
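For completeness, this is roughly how I confirmed it from the live CD (a minimal check, assuming mdadm and the LVM tools are present there):

cat /proc/mdstat            # md0 is no longer listed
mdadm --examine /dev/sda2   # reports no md superblock after zeroing
pvs                         # the PV that was on /dev/md0 is no longer visible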
*After booting the Linux machine, the system went to the grub rescue> prompt with the error: lvmid/xse34fffffffffff is not found.*
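My guess is that, because the array used 1.2 metadata, the LVM physical volume starts at the md "Data Offset" inside sda2 rather than at the start of the partition, so after zeroing the superblock neither GRUB nor LVM finds the PV where they look for it. I was wondering whether something like this could expose the LVM data again from the live CD (a sketch only; DATA_OFFSET_SECTORS is a placeholder for the array's old data offset, which I did not record):

# attach a loop device starting at the old md data offset (all values hypothetical)
losetup --find --show --offset $((DATA_OFFSET_SECTORS * 512)) /dev/sda2
vgscan             # rescan; the PV label should now sit at the start of the loop device
vgchange -ay vg0   # activate the volume group if it is found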
I tried to update GRUB using the Knoppix live CD, but nothing worked out. The commands below were executed on the live CD; in both attempts they failed to update GRUB. I tried two ways:
Option 1 #
> update-initramfs -u                       >> not updated
> mdadm --detail --scan > /tmp/mdadm.conf
> cp /tmp/mdadm.conf /etc/mdadm/mdadm.conf
> update-grub                               >> did not update GRUB
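Two things I noticed here: since md0 no longer exists, mdadm --detail --scan prints nothing, so the copied mdadm.conf ends up empty; and, as far as I understand, update-initramfs and update-grub only act on the running live system unless they are run inside a chroot of the installed root (see the sketch after Option 2). A quick check of the first point:

mdadm --detail --scan   # prints nothing now that the array is gone
cat /tmp/mdadm.conf     # so the copied file is empty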
Option 2 #
> mount /dev/sda2 /mnt -> *not able to mount because this is a Linux RAID member*
> *error: mount: unknown filesystem type 'linux_raid_member'*
So I can't proceed to run the commands below:
sudo mount --bind /dev /mnt/dev &&
sudo mount --bind /dev/pts /mnt/dev/pts &&
sudo mount --bind /proc /mnt/proc &&
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
grub-install /dev/sda2
- The boot record is present on sda2.
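Since / actually lives on the vg0-root logical volume and not directly on sda2, I assume the chroot target would have to be the LV, and grub-install would normally point at the whole disk's MBR rather than at a partition. This is the sequence I had in mind but could not get to, because the volume group is no longer visible (a sketch, assuming vg0 could be activated again):

vgchange -ay vg0
mount /dev/vg0/root /mnt
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda   # target the disk MBR, not the partition
update-grub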
*Could you please help me figure out how to boot properly after removing the software RAID-1 array from the LVM disk (sda2)? Also, how do I reload GRUB properly?*
Thanks,
Rishi