How to set up LVM2 on RAID1.

How to build LVM2 on top of RAID1 on Linux.
Writing it down here since I keep forgetting.

●Adding disks (20120723)

/dev/sda  existing disk 1     2000.3 GB
/dev/sdb  existing disk 2     2000.3 GB

/dev/sdc  newly added disk 1  2000.3 GB
/dev/sdd  newly added disk 2  2000.3 GB
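
It's worth first confirming that the kernel actually sees the two new disks. A quick sketch (any block-device listing will do):

# grep sd /proc/partitions
# fdisk -l 2>/dev/null | grep "^Disk"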



1) Check the current state of the disks
# fdisk -l /dev/sda
-----
Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          64          79      128520   fd  Linux raid autodetect
/dev/sda2              80        6159    48837600   fd  Linux raid autodetect
/dev/sda3            6160      243201  1904039865   fd  Linux raid autodetect
-----


# fdisk -l /dev/sdb
-----
Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          64          79      128520   fd  Linux raid autodetect
/dev/sdb2              80        6159    48837600   fd  Linux raid autodetect
/dev/sdb3            6160      243201  1904039865   fd  Linux raid autodetect
-----

※For reference only.



2) Partition with fdisk.
Partition each new disk with fdisk as shown below.
To allow for 4096-byte sectors, set the starting cylinder to 64.
Set the partition id to fd (Linux raid autodetect).
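
For reference, a sketch of the interactive fdisk session (prompts paraphrased from memory; the exact wording varies between fdisk versions):

-----
# fdisk /dev/sdc

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-243201, default 1): 64
Last cylinder or +size or +sizeM or +sizeK (64-243201, default 243201): 243201
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Command (m for help): w
-----

Do the same for /dev/sdd.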


# fdisk -l /dev/sdc
-----
Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              64      243201  1953005985   fd  Linux raid autodetect
-----


# fdisk -l /dev/sdd
-----
Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              64      243201  1953005985   fd  Linux raid autodetect
-----



3) Create the RAID1 array
# mdadm --create /dev/md3 --level=raid1 --raid-devices=2 /dev/sdc1 /dev/sdd1
-----
mdadm: array /dev/md3 started.
-----
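
Note: mdadm on this system still defaults to the old 0.90 superblock, as the --detail output below confirms. Newer mdadm releases default to 1.2 metadata, so to reproduce this exact layout there you would pass the format explicitly:

# mdadm --create /dev/md3 --metadata=0.90 --level=raid1 --raid-devices=2 /dev/sdc1 /dev/sdd1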


# mdadm --detail /dev/md3
-----
/dev/md3:
        Version : 0.90
  Creation Time : Mon Jul 23 13:45:36 2012
     Raid Level : raid1
     Array Size : 1953005888 (1862.53 GiB 1999.88 GB)
  Used Dev Size : 1953005888 (1862.53 GiB 1999.88 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Mon Jul 23 13:47:55 2012
          State : active, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 1% complete

           UUID : XXXX
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
-----


# cat /proc/mdstat
-----
Personalities : [raid1]
md3 : active raid1 sdd1[1] sdc1[0]
      1953005888 blocks [2/2] [UU]
      [>....................]  resync =  0.1% (2493440/1953005888) finish=208.6min speed=155840K/sec

md0 : active raid1 sda1[1] sdb1[0]
      128448 blocks [2/2] [UU]

md1 : active raid1 sda2[1] sdb2[0]
      48837504 blocks [2/2] [UU]

md2 : active raid1 sda3[1] sdb3[0]
      1904039744 blocks [2/2] [UU]

unused devices: <none>
-----
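
The initial resync runs in the background and the array is already usable. To keep an eye on the progress, something like this works:

# watch -n 60 cat /proc/mdstat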



4) Append the new array to mdadm.conf
# cd /etc
# cat mdadm.conf
-----
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 uuid=XXXX
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=XXXX
ARRAY /dev/md2 level=raid1 num-devices=2 uuid=XXXX
-----


# cp mdadm.conf mdadm.conf-12072301
# mdadm --detail --scan
-----
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=XXXX
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=XXXX
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=XXXX
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=XXXX
-----


# mdadm --detail --scan >> mdadm.conf
# vi mdadm.conf
(The scan appends duplicate entries for md0 to md2 as well; delete those in vi and keep only the new md3 line, restyled to match the existing entries.)
-----
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 uuid=XXXX
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=XXXX
ARRAY /dev/md2 level=raid1 num-devices=2 uuid=XXXX
ARRAY /dev/md3 level=raid1 num-devices=2 uuid=XXXX
-----
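
As an aside, appending only the md3 line avoids the duplicates in the first place; a possible one-liner (this keeps the scan's metadata=0.90 UUID= style rather than the file's uuid= style):

# mdadm --detail --scan | grep /dev/md3 >> /etc/mdadm.conf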



5) LVM2 is used on top, so set that up as well
・Create the kvm2 volume

# pvcreate /dev/md3
  Writing physical volume data to disk "/dev/md3"
  Physical volume "/dev/md3" successfully created


# vgcreate -s 32m VolGroup01-kvm2 /dev/md3
  Volume group "VolGroup01-kvm2" successfully created


# lvcreate -n LogVol00 -l 100%FREE VolGroup01-kvm2
  Logical volume "LogVol00" created


・Check that it was created
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup01-kvm2/LogVol00
  VG Name                VolGroup01-kvm2
  LV UUID                XXXX
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1.82 TB
  Current LE             59601
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                XXXX
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.77 TB
  Current LE             58106
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
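
As a sanity check on the numbers: 59,601 extents × 32 MB per extent = 1,907,232 MB, roughly 1.82 TB, which matches the LV Size reported above.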



6) Build the filesystem with ext3
# mkfs.ext3 /dev/mapper/VolGroup01--kvm2-LogVol00
-----
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
244137984 inodes, 488251392 blocks
24412569 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
14901 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
-----
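
If the periodic checks mentioned at the end of the mkfs output are unwanted on a large data volume like this, they can be disabled as the message suggests (a matter of taste):

# tune2fs -c 0 -i 0 /dev/mapper/VolGroup01--kvm2-LogVol00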


7) Mount temporarily
# mount -t ext3 /dev/mapper/VolGroup01--kvm2-LogVol00 /kvm2


# df -k -T
Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3   1844389860 190922896 1558266096  11% /
/dev/md0      ext3      124387     47709     70256  41% /boot
tmpfs        tmpfs    16465284         0  16465284   0% /dev/shm
/dev/mapper/VolGroup01--kvm2-LogVol00
              ext3   1922360144    200164 1824509704   1% /kvm2



8) Auto-mount at boot
# cat /etc/fstab
-----
/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
/dev/md0                /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/md1                swap                    swap    defaults        0 0
/dev/VolGroup01-kvm2/LogVol00   /kvm2           ext3    defaults        0 0
-----

※Added the /dev/VolGroup01-kvm2/LogVol00 line.
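
Note that /dev/VolGroup01-kvm2/LogVol00 and /dev/mapper/VolGroup01--kvm2-LogVol00 are the same device: device-mapper escapes the hyphen in the VG name by doubling it, which is why the mapper path shows the double dash.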


・Check the mount
# df -k -T
Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3   1844389860 190922896 1558266096  11% /
/dev/md0      ext3      124387     47709     70256  41% /boot
tmpfs        tmpfs    16465284         0  16465284   0% /dev/shm



# mount -a



# df -k -T
Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3   1844389860 190922896 1558266096  11% /
/dev/md0      ext3      124387     47709     70256  41% /boot
tmpfs        tmpfs    16465284         0  16465284   0% /dev/shm
/dev/mapper/VolGroup01--kvm2-LogVol00
              ext3   1922360144    200164 1824509704   1% /kvm2



After that, reboot and make sure the volume gets mounted; if so, all good.
Then just wait patiently for the RAID1 resync to finish.

Done.
