DRBD Installation
v0.7.4 on RedHat 8.0 (Psyche) and v0.7.10 on SuSE 9.2
Below are the steps that I executed to get the DRBD packages installed and running on the node1 server (RedHat 8.0, kernel 2.4.20-43_41-smp) and node2 (SuSE 9.2, kernel 2.6.8-24-15-smp). There is some flexibility in these steps - it doesn't need to be done exactly this way - but this is what worked for me.
There were some packages/rpms that needed to be brought down, and I found it very difficult to keep going back and forth to SuSE's site to get the rpm's via admin13's ftp client. SuSE's site wouldn't work for me, and that was a problem - so a small script was written to help pull these things down. It's called node2:/usr/local/bin/get_rpm_updates.sh, and it pulls rpm's down based on a substring pattern specified on the command line - like a very specific and very poor man's apt-get. It pulls all rpm's containing that substring, so don't run it with something like "rpm" or you will get EVERY rpm available. (By default, it only takes i586 rpm's from SuSE's 9.2 site because they are the only ones posted there that I need right now - there are so few i686 rpm's that they were not worth searching for.)
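The script itself is not reproduced in this document, but a minimal sketch of what such a substring-based puller could look like is below. The mirror URL, wget flags, and destination directory are assumptions, not the actual script contents:

```shell
#!/bin/sh
# Sketch of a substring-based rpm puller (NOT the actual get_rpm_updates.sh;
# the mirror URL and destination directory are assumptions).
MIRROR="ftp://ftp.suse.com/pub/suse/i386/update/9.2/rpm/i586/"
DEST="/tmp/downloaded_rpms_$$"

get_rpms() {
    # Recursively fetch every rpm whose filename contains $1.
    mkdir -p "$DEST" && cd "$DEST" || return 1
    wget -r -l1 -nd -A "*${1}*.rpm" "$MIRROR"
}

# Example: get_rpms drbd   (never run it with just "rpm"!)
echo "would fetch *${1:-drbd}*.rpm from $MIRROR into $DEST"
```
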
Another key piece of all this was to have the environments up to par for running DRBD. For the RedHat server, this required the kernel upgrade from 2.4.18 to 2.4.20, because the DRBD rpm's we were using were compiled for 2.4.20. There is a separate document for the RedHat kernel upgrade process.
On the SuSE side, the entire distribution needs to be updated with the online update feature in yast2. This required an internet connection, and that proved challenging also. I had to load proxy software on admin13 and proxy through there in order to get the required updates, since node2 was not allowed to go out directly and get them. This admin13 proxy software was only up long enough to get the online updates completed and to pull the drbd rpms using the "get_rpm_updates.sh" script mentioned above.
On node2, set the proxy configuration
for the updates and downloads. There are two places this needs to be
done. First is the yast2 update area.
- edit the /etc/sysconfig/proxy file, and configure it to use "http://admin13:8080/" as the proxy.
- run the /sbin/SuSEconfig utility to have that update trigger through all the subcomponents.
- edit the /etc/wgetrc configuration so that wget will use the proxy as well. Use the same value as in the sysconfig area.
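For reference, the two proxy settings end up looking roughly like this (the variable names come from SuSE 9.2's sysconfig and the stock wgetrc; treat the exact lines as a sketch):

```
# /etc/sysconfig/proxy
PROXY_ENABLED="yes"
HTTP_PROXY="http://admin13:8080/"
FTP_PROXY="http://admin13:8080/"

# /etc/wgetrc
http_proxy = http://admin13:8080/
ftp_proxy = http://admin13:8080/
use_proxy = on
```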
NOTE: the yast2 "sysconfig" editor could also have been used, but the gui was too slow for me, so I stuck with the command line for that one. The gui automatically runs SuSEconfig when done, which is a nice feature.
That pretty much covers the setup work, so now we have a foundation on which to install the DRBD software. We are going to replicate the bb and web filesystems from node1 to node2. This is not meant as a cluster; it is really one-way replication only. There is no intent here to replicate backward, so the heartbeat package was not needed.
The partitions that will be replicated from node1 will be /dev/vg01/bb (the Big Brother filesystem) and /dev/vg01/web (the apache filesystem). They were found with a simple "vgdisplay -v" command.
On the node1 server :
- I also had to build a new meta partition for drbd. I chose to create it in the vg01 volume group because the physical disks were fully consumed by the lvm structures, so I had no real choice. I created /dev/vg01/drbd_meta at 393216 kB in size (3 x 128 MB partitions).
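The exact lvcreate invocation wasn't recorded; a sketch of the command, with the size arithmetic spelled out (DRBD 0.7 wants 128 MB of meta space per replicated device):

```shell
#!/bin/sh
# 3 devices x 128 MB each = 393216 kB of meta space total.
META_KB=$((3 * 128 * 1024))
# The command itself is echoed here as a sketch rather than executed:
echo "lvcreate -L ${META_KB}k -n drbd_meta vg01"
# prints: lvcreate -L 393216k -n drbd_meta vg01
```
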
- We need to ensure that the filesystems are the same sizes on both nodes before getting too far. Unfortunately, that means we really need to recreate them to be safe. In testing I have seen cases where just a slight variation in size from one node to the other corrupts the superblock on the filesystem, because the filesystem and its volume/container are not the same size. When replicating, that gets ugly. So the following steps were executed:
- cd /usr/local/bb
- tar -cvf /export/dvd/bb_07172005.tar ./*
- echo "$?" (only continue if this is "0")
- umount /export/bb
- mkfs -V -t ext3 -b 1024 -N 350000 /dev/drbd0 1289208 (we knew the block count was 1289208 from running fsck on the existing filesystem; we specify the number of inodes because the untar step kept filling up the inodes before the space was used)
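The 'echo "$?"' check after the tar step can be generalized; a small run-or-abort wrapper in the same spirit (the helper name is mine, not from any original script):

```shell
#!/bin/sh
# run(): execute one step and abort the whole sequence if it fails,
# replacing the manual 'echo "$?"' check after each command.
run() {
    "$@" || { echo "step failed: $*" >&2; exit 1; }
}

# The real sequence would be the tar/umount/mkfs steps above, e.g.:
#   run tar -cvf /export/dvd/bb_07172005.tar ./*
#   run umount /export/bb
run true
echo "all steps succeeded"
```
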
- Download the drbd rpm's from ATrpms. I downloaded them into the /usr/local/Install/drbd/v0.7.10 directory. Download the following rpm's :
- drbd-0.7.10-8.rh8.0.at.i386.rpm
- drbd-kmdl-2.4.20-43_41.rh8.0.at-0.7.10-8.rh8.0.at.i686.rpm
- cd /usr/local/Install/drbd/v0.7.10
- rpm -Uhv *
- Make device nodes for the drbd software. For whatever reason, on the RedHat server, this was not automatic.
mknod /dev/drbd0 b 147 0
mknod /dev/drbd1 b 147 1
mknod /dev/drbd2 b 147 2
On the node2 server :
- run the /usr/local/bin/get_rpm_updates.sh drbd to get the latest SuSE 9.2 drbd rpms for the software and the kernel module.
- run /usr/local/bin/get_rpm_updates.sh kernel-default because that is the rpm that contains the real kernel module for drbd.
- cd into the directory where the rpms are - I was still using the /tmp/downloaded_rpms_$$ directory, so I went there.
- run the "rpm -Uhv *" command to get these rpms installed or updated.
- I destroyed the existing bb partition (/dev/sdc1) because it was a Reiser filesystem, and because it was a different size.
- I did the same thing with the web partition (/dev/sdc2) for the same reasons.
Now that the software is loaded, it's time to configure it.
- on both nodes, any references to the original bb and web filesystems have to be commented out, because from this point forward we will no longer be using the old mounts to get to them - only the drbd device names can be used now.
- copy the current /etc/fstab to an original file, and build a new fstab file in the /etc directory with new devices names for drbd and with comments for the old volumes in case that is ever needed. (example shown is for node1, the node2 is similar).
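As an illustration of the new fstab entries (the web mount point is a guess on my part; noauto is used because the drbd-force-up.sh script will do the mounting, not boot-time fstab processing):

```
# old lvm volumes, commented out in case they are ever needed
#/dev/vg01/bb   /usr/local/bb   ext3  defaults  1 2
#/dev/vg01/web  /usr/local/web  ext3  defaults  1 2
# new drbd device names
/dev/drbd0      /usr/local/bb   ext3  noauto    0 0
/dev/drbd1      /usr/local/web  ext3  noauto    0 0
```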
- copy the new drbd.conf file into the /etc directory
- copy the new haresources file into the /etc/ha.d directory (not needed yet, but just good practice)
- copy the new drbd-force-up.sh file into the /etc/init.d directory. This is what we will use to mount the filesystems since we are not using the heartbeat rpm.
- create all the sublinks for the drbd-force-up.sh script as follows :
- ln -s /etc/init.d/drbd-force-up.sh /etc/rc0.d/K10drbd-force-up
- ln -s /etc/init.d/drbd-force-up.sh /etc/rc1.d/K10drbd-force-up
- ln -s /etc/init.d/drbd-force-up.sh /etc/rc2.d/K10drbd-force-up
- ln -s /etc/init.d/drbd-force-up.sh /etc/rc3.d/S90drbd-force-up
- ln -s /etc/init.d/drbd-force-up.sh /etc/rc4.d/S90drbd-force-up
- ln -s /etc/init.d/drbd-force-up.sh /etc/rc5.d/S90drbd-force-up
- ln -s /etc/init.d/drbd-force-up.sh /etc/rc6.d/K10drbd-force-up
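The seven links above follow one pattern (K10 kill links in runlevels 0, 1, 2 and 6; S90 start links in 3, 4 and 5), so they can also be generated in a loop - sketched here with the commands echoed rather than executed:

```shell
#!/bin/sh
# Generate the seven runlevel links for drbd-force-up.sh.
for rc in 0 1 2 3 4 5 6; do
    case $rc in
        3|4|5) name=S90drbd-force-up ;;   # start in multi-user runlevels
        *)     name=K10drbd-force-up ;;   # kill on halt/single-user/reboot
    esac
    echo ln -s /etc/init.d/drbd-force-up.sh /etc/rc${rc}.d/$name
done
```
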
- check that the drbd startup scripts are in place - create them if not (on node1 only, since that is the source device)
- ln -s /etc/init.d/drbd /etc/rc0.d/K08drbd
- ln -s /etc/init.d/drbd /etc/rc1.d/K08drbd
- ln -s /etc/init.d/drbd /etc/rc2.d/K08drbd
- ln -s /etc/init.d/drbd /etc/rc3.d/S50drbd
- ln -s /etc/init.d/drbd /etc/rc4.d/S50drbd
- ln -s /etc/init.d/drbd /etc/rc5.d/S50drbd
- ln -s /etc/init.d/drbd /etc/rc6.d/K08drbd
And now it's finally time to bring it all up. Start with the following (on node1) :
- /etc/init.d/drbd start
- drbdsetup /dev/drbd0 primary --do-what-I-say
- /etc/ha.d/resource.d/drbddisk bb start
- mount /dev/drbd0
- drbdsetup /dev/drbd1 primary --do-what-I-say
- /etc/ha.d/resource.d/drbddisk web start
- mount /dev/drbd1
- cd /usr/local/bb
- chown bb:bb .
- tar -xvlf /export/dvd/bb_07172005.tar
Other useful commands :
- drbdsetup /dev/drbd0 primary --do-what-I-say
- drbdsetup /dev/drbd0 primary
- drbdsetup /dev/drbd0 secondary
- drbdsetup /dev/drbd0 state
- drbdsetup /dev/drbd0 cstate
- drbdsetup /dev/drbd0 connect
- drbdsetup /dev/drbd0 on_primary
- cat /proc/drbd
- /etc/init.d/drbd status
- /etc/init.d/drbd reload
- /etc/init.d/drbd restart
- /etc/ha.d/resource.d/drbddisk bb start
- drbdadm attach bb
- drbdadm connect bb
- ls -la /var/lib/drbd
- /var/adm/bin/watch_drbd.sh
- fdisk /dev/sdc (for meta information)
- Author notes
- DRBD status flag meanings
This page last updated by Paul on 07/18/2005