apt-get install mysql-server mysql-client build-essential
Make sure that MySQL is started by heartbeat, not at boot time, on either server.
update-rc.d -f mysql remove
Verify that the partitions we are using are exactly the same on both servers. We will be using sda6
(256MB) and sda7 (94GB). The partition table should look something like this:
fdisk -l
Device Boot Start End Blocks Id System
/dev/sda1 * 1 24 192748+ 83 Linux
/dev/sda2 25 1969 15623212+ 82 Linux swap / Solaris
/dev/sda3 1970 19320 139371907+ 5 Extended
/dev/sda5 1970 6954 40041981 83 Linux
/dev/sda6 6955 6984 240943+ 83 Linux
/dev/sda7 6985 19320 99088888+ 83 Linux
sda6 will be our meta-disk and sda7 will be our distributed replication block device. Verify that sda6
and sda7 are not mounted:
mount -l
/dev/sda5 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
Determine the kernel in use to install headers:
uname -r
apt-get install linux-headers-<<what was returned above>>
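Since the header package name simply follows the kernel release string, the two steps above can be combined with command substitution:

```shell
# Install the headers matching the running kernel (Debian-style package name)
apt-get install linux-headers-$(uname -r)
```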
eth0 will be the public interface on 10.10.10.xxx, while eth1 will be on 192.168.1.xxx, a separate
network (or a crossover cable connecting the two machines). DRBD will run over eth1 on both
machines. On both servers, perform the following:
apt-get install drbd0.7-module-source drbd0.7-utils ipvsadm heartbeat
cd /usr/src/
tar xvfz drbd0.7.tar.gz
cd modules/drbd/drbd
make
make install
mv /etc/drbd.conf /etc/drbd.conf.sample
Edit /etc/drbd.conf and create the following contents:
resource r0 {
  protocol C;
  incon-degr-cmd "halt -f";
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
  }
  disk {
    on-io-error detach;
  }
  net {
  }
  syncer {
    rate 10M;
    group 1;
    al-extents 257;
  }
  on mysqldb01 {                    # the hostname of server 1 (uname -n)
    device    /dev/drbd0;
    disk      /dev/sda7;            # data partition on server 1
    address   192.168.1.10:7788;    # eth1 IP address
    meta-disk /dev/sda6[0];         # 256MB partition for DRBD on server 1
  }
  on mysqldb02 {                    # ** EDIT ** the hostname of server 2 (uname -n)
    device    /dev/drbd0;
    disk      /dev/sda7;            # ** EDIT ** data partition on server 2
    address   192.168.1.11:7788;    # eth1 IP address
    meta-disk /dev/sda6[0];         # 256MB partition for DRBD on server 2
  }
}
Start DRBD on both servers:
/etc/init.d/drbd start
Verify they are running. Both should be in a secondary state.
cat /proc/drbd
version: 0.7.24 (api:79/proto:74)
SVN Revision: 2875 build by root@mysqldb01, 2007-12-14 02:55:51
0: cs:Connected st:Secondary/Secondary ld:Inconsistent
ns:0 nr:0 dw:0 dr:0 al:0 bm:12096 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured
Make server1 the primary and format the device. You want to format the DRBD device, not the sda
partition underneath it. The server you are formatting on must be the primary; you cannot format
the DRBD device while it is in Secondary mode. There is no need to perform the format on the
secondary, as it will be synced as well.
drbdsetup /dev/drbd0 primary --do-what-I-say
mkfs.ext3 /dev/drbd0
drbdadm connect all
Verify that they are now syncing:
watch cat /proc/drbd
version: 0.7.24 (api:79/proto:74)
SVN Revision: 2875 build by root@mysqldb01, 2007-12-14 02:55:51
0: cs:SyncTarget st:Secondary/Primary ld:Inconsistent
ns:0 nr:9660908 dw:9660908 dr:0 al:0 bm:12685 lo:0 pe:0 ua:0 ap:0
[=>..................] sync'ed: 9.5% (87332/96415)M
finish: 1:54:03 speed: 12,968 (10,140) K/sec
1: cs:Unconfigured
Next, set up the MySQL data files on the DRBD device. Stop MySQL on both servers:
/etc/init.d/mysql stop
Move the MySQL data files and test mounting on mysqldb01:
mkdir /mnt/move
mount -t ext3 /dev/drbd0 /mnt/move
mv /var/lib/mysql/* /mnt/move/.
umount /mnt/move
mount -t ext3 /dev/drbd0 /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
Clear the databases from mysqldb02 so we can be sure we're starting from the proper set.
rm -r /var/lib/mysql/*
Start MySQL on mysqldb01 and verify that you can connect:
/etc/init.d/mysql start
mysql -u root -p
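The test drive that follows can be as simple as creating a database and a table; the database and table names here are only illustrative:

```shell
# Create a throwaway database and table, then insert a row we can look for
# after failing over (drbdtest/t1 are example names, not part of the setup)
mysql -u root -p -e "CREATE DATABASE drbdtest;
CREATE TABLE drbdtest.t1 (id INT PRIMARY KEY, note VARCHAR(32));
INSERT INTO drbdtest.t1 VALUES (1, 'written on mysqldb01');"
```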
Test drive MySQL on mysqldb01 a little: create a new database and a table in it. The following is
the basic process for manually switching from the primary server to the secondary; we will use it
to verify that everything is working as expected before proceeding to heartbeat. Stop MySQL on
mysqldb01 and unmount drbd0:
/etc/init.d/mysql stop
umount /var/lib/mysql
drbdadm secondary r0
On mysqldb02, make it the primary, mount the DRBD device, and then start MySQL:
drbdadm primary r0
mount -t ext3 /dev/drbd0 /var/lib/mysql
ls /var/lib/mysql
/etc/init.d/mysql start
On mysqldb02, log in to MySQL and verify that the database and table we created exist. Create some
new records in our test table.
mysql -u root -p
If everything is working as planned, stop MySQL on mysqldb02, unmount the DRBD device, and switch
the services back to mysqldb01 to verify that that works. On mysqldb02:
/etc/init.d/mysql stop
umount /var/lib/mysql
drbdadm secondary r0
On mysqldb01:
drbdadm primary r0
mount -t ext3 /dev/drbd0 /var/lib/mysql
/etc/init.d/mysql start
Test connecting to MySQL as we did before, and verify that any changes you made on mysqldb02 exist
on mysqldb01. This is a good time to set a root password, since the server won't be listening on
localhost any longer:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Edit the /etc/mysql/my.cnf file and change the "bind-address" setting to the IP address
we will use for the heartbeat. Now, since heartbeat will control both DRBD and MySQL,
stop MySQL and unmount the DRBD device:
/etc/init.d/mysql stop
umount /var/lib/mysql
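As an example, assuming the heartbeat virtual IP will be 10.10.10.12 (the address used in haresources later), the my.cnf fragment would look like:

```
# /etc/mysql/my.cnf (fragment) -- 10.10.10.12 is the heartbeat virtual IP
[mysqld]
bind-address = 10.10.10.12
```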
Create entries in /etc/hosts for the two servers that will be in the cluster. Use the IP addresses
bound to the public interface (eth0), which heartbeat will use.
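For example, assuming public eth0 addresses of 10.10.10.10 and 10.10.10.11 (substitute your own):

```
# /etc/hosts entries on both servers -- the addresses are assumptions
10.10.10.10   mysqldb01
10.10.10.11   mysqldb02
```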
Create the /etc/ha.d/ha.cf file on both servers as follows:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
bcast eth0
auto_failback on
node mysqldb01 ## make sure both names are accessible - check /etc/hosts
node mysqldb02
ping 10.10.10.1 ## ETH0, public network
apiauth ipfail gid=haclient uid=hacluster
Create the /etc/ha.d/haresources file. This file is the same on both servers. It lists the primary
server name, tells it to use LVS, sets the heartbeat's virtual IP address, the drbddisk resource
name from /etc/drbd.conf, the DRBD device name and mount point, and finally the service to start.
mysqldb01 LVSSyncDaemonSwap::master IPaddr::10.10.10.12/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext3 mysql
Create authkeys on both servers in /etc/ha.d/authkeys. Heartbeat will complain if you don't
secure the file.
auth 3
3 md5 plaintextpassword
chmod 600 /etc/ha.d/authkeys
Start heartbeat on both servers, starting with mysqldb01:
/etc/init.d/heartbeat start
If you have problems, errors will be logged in /var/log/ha-debug and /var/log/ha-log. You should
now be able to connect to MySQL on the heartbeat's IP address. Shut down heartbeat on mysqldb01,
and you should see that /var/lib/mysql is now mounted on mysqldb02 and that you can still connect
to the database.
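A quick way to check which node currently holds the resources during such a failover test (10.10.10.12 is the virtual IP from haresources):

```shell
ip addr show eth0 | grep 10.10.10.12             # is the virtual IP on this node?
cat /proc/drbd                                   # Primary/Secondary state
mount | grep /var/lib/mysql                      # is the data mounted here?
mysql -h 10.10.10.12 -u root -p -e "SELECT 1;"   # can we still connect?
```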
Upgrading MySQL with DRBD and Heartbeat
After recently running into the dilemma of how to upgrade something that doesn't like to be shut
down, I've resolved it thus: first, upgrade the secondary database server. The MySQL package
upgrade will actually fail, since Debian cannot stop and start a database that isn't running and
whose file storage isn't mounted. Once the secondary has taken over and mounted the data, proceed
to upgrade the primary. The data files will then be upgraded, and the database shouldn't be down
long enough to trigger a heartbeat failover while the packages are upgraded.
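Sketched as commands, assuming a standard Debian package upgrade; `hb_standby` ships with heartbeat, though its install path varies by version:

```shell
# 1. On the node that is currently Secondary: upgrade the packages.
#    The init-script restart will fail here, which is expected.
apt-get install mysql-server mysql-client
# 2. Move the resources off the primary so the upgraded node takes over.
/usr/lib/heartbeat/hb_standby    # path may be /usr/share/heartbeat/hb_standby
# 3. On the former primary (now Secondary): upgrade its packages the same way.
apt-get install mysql-server mysql-client
```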
References:
http://www.howtoforge.com/high_availability_nfs_drbd_heartbeat_p2
http://linuxsutra.blogspot.com/2007/02/howto-mysql-drbd-ha.html
http://marksitblog.blogspot.com/2007/07/mysql-5-high-availability-with-drbd-8.html
http://blog.irwan.name/?p=118