A few notes concerning this.
The tutorial provided on the Linbit site for HA-mysql is totally AWESOME! Highly recommended. It will get you 99% of the way there.
The resource definition for the MySQL server instance on Ubuntu 12.04 varies slightly because AppArmor requires the file names for the PID and socket files to match its profile exactly. Referencing the original Ubuntu configuration, we have this for a resource:
primitive p_db-mysql0 ocf:heartbeat:mysql \
    params binary="/usr/sbin/mysqld" \
        config="/etc/mysql/my.cnf" \
        datadir="/var/lib/mysql" \
        pid="/var/run/mysqld/mysqld.pid" \
        socket="/var/run/mysqld/mysqld.sock" \
        additional_parameters="--bind-address=127.0.0.1" \
    op start interval="0" timeout="120s" \
    op stop interval="0" timeout="120s" \
    op monitor interval="20s" timeout="30s"
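For reference, this works because the stock AppArmor profile that ships with Ubuntu's MySQL package already permits exactly those paths. The relevant lines (roughly; check your own /etc/apparmor.d/usr.sbin.mysqld) look like this:

# excerpt from /etc/apparmor.d/usr.sbin.mysqld on Ubuntu 12.04
  /var/run/mysqld/mysqld.pid w,
  /var/run/mysqld/mysqld.sock w,

Point the pid or socket parameters anywhere else and AppArmor denies the writes, and the resource agent fails to start mysqld.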
Of course, the bind-address listed here is only for testing; it must be changed to the virtual IP that will be assigned to the database resource group. A sketch of that follows.
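Assuming resource names in the style of the Linbit guide and a placeholder address of 192.168.100.50/24 (substitute your own), the virtual IP and the resource group might look like:

primitive p_ip-mysql0 ocf:heartbeat:IPaddr2 \
    params ip="192.168.100.50" cidr_netmask="24" \
    op monitor interval="10s" timeout="20s"
group g_mysql0 p_fs-mysql0 p_ip-mysql0 p_db-mysql0

Here p_fs-mysql0 stands in for whatever filesystem resource backs /var/lib/mysql. Once the group is live, --bind-address should point at 192.168.100.50 instead of 127.0.0.1.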
I chose to have the database files stored on iSCSI, since my iSCSI SAN is HA already. I realize there is still the possibility of a network switch failure leaving the cluster comatose at runtime, but if that happens there will be much larger problems at hand, since both database servers (it's a two-node cluster) are virtual machines. To that end, I must remember to configure them for virtual STONITH.
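Since both nodes are guests, something like the external/libvirt STONITH plugin is probably the right shape for that, assuming the VMs run under KVM/libvirt; the host names and hypervisor URI below are placeholders:

primitive p_stonith-libvirt stonith:external/libvirt \
    params hostlist="db0 db1" \
        hypervisor_uri="qemu+ssh://vmhost.example.com/system" \
    op monitor interval="3600s"
clone cl_stonith-libvirt p_stonith-libvirt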
I'm still not sure virtualized database servers are the best idea; I can think of a few reasons not to love them, but also a few reasons to totally dig them.
Minuses:
- The VMs are subject to the same iSCSI risks as the backing store for the databases right now; dedicated DRBD would be better. In my case this isn't really applicable, because the VMs are themselves on DRBD and hosted via iSCSI, so I'd be doing double duty there.
- A VM migration SHOULDN'T cause any sort of db-cluster failure, but we will have to test to know for certain. Perhaps raising the corosync timeouts would help; see the sketch after this list.
Pluses:
- The standard reason: hardware provisioning! No need to stand up more hard drives to watch die, or draw more power than I'm already using.
- VMs mean easy migration to other places, like a redundant VM cluster, for instance.
- Provisioning additional cluster nodes should be relatively painless.
- The iSCSI backing store will soon be using ZFS, which will be more difficult to do for standalone nodes unless I spend $$ on drives, and ideally hot-swap cages.
- If one of the VMs dies suddenly, we still (hopefully) won't suffer a major database access outage. Ultimately, I'd like to move all internal database use over to this cluster. I'm even tempted to put an LDAP server instance on there; then it can be all things data-access-related.
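On the corosync timeouts mentioned in the minuses above: a minimal sketch of the knob I'd try first, in the totem section of /etc/corosync/corosync.conf. The values here are untested guesses to be validated against an actual live migration, not recommendations:

totem {
        version: 2
        # allow a longer pause before declaring token loss, so a brief
        # live-migration stall isn't mistaken for node death (default: 1000 ms)
        token: 10000
        token_retransmits_before_loss_const: 10
}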
Concerning MariaDB
This appears to be where the future is going, and more than one distro agrees. So I looked at the specs and the ideas behind MariaDB. Satisfied that it was designed to be a literal "drop-in replacement" for MySQL, I immediately transitioned both machines over. Now we will see how well it really works. I had to follow their instructions on adding their repo to my servers. The upgrade was painless, and all that's left now is to set up the virtual IPs and start connecting machines to the database instances.
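For the record, the repo setup on 12.04 boiled down to something like the following; the mirror URL is a placeholder (pick a real one from the MariaDB downloads page), and 5.5 was the drop-in series for MySQL 5.5 at the time:

sudo apt-get install python-software-properties
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
sudo add-apt-repository 'deb http://mirror.example.com/mariadb/repo/5.5/ubuntu precise main'
sudo apt-get update
sudo apt-get install mariadb-server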
Concerning PostgreSQL
My DB cluster also hosts PostgreSQL 9.1. I followed this guide to the letter, and as far as I have tested so far it works quite well: http://wiki.postgresql.org/images/0/07/Ha_postgres.pdf
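For symmetry with the MySQL primitive above, here is a minimal sketch of the equivalent ocf:heartbeat:pgsql resource, assuming Ubuntu's stock 9.1 paths (the linked guide covers the full setup, including the filesystem and IP pieces):

primitive p_db-pgsql ocf:heartbeat:pgsql \
    params pgctl="/usr/lib/postgresql/9.1/bin/pg_ctl" \
        psql="/usr/bin/psql" \
        pgdata="/var/lib/postgresql/9.1/main" \
        config="/etc/postgresql/9.1/main/postgresql.conf" \
    op start interval="0" timeout="120s" \
    op stop interval="0" timeout="120s" \
    op monitor interval="30s" timeout="30s"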