Everything is checked in at https://github.com/murf0/docker-mariadb-galera
I'm running this cluster on Tutum and using their API to do discovery.

In Tutum there is something called stacks, which groups containers much like docker-compose does, and a Weave local network that lets all containers communicate as if they were on the same local network, even when they are on different hosts/networks.
```yaml
MariadbGalera:
  image: 'tutum.co/murf/mariadb-galera:latest'
  autoredeploy: true
  deployment_strategy: high_availability
  environment:
    - 'wsrep_sst_auth=root:<MYSQL_ROOT_PW>'
  expose:
    - '3306'
  restart: on-failure
  roles:
    - global
  sequential_deployment: true
  tags:
    - prod
  target_num_containers: 1
  volumes:
    - /var/lib/mysql
```
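To give an idea of what the discovery step produces: each node ends up with a `gcomm://` cluster address listing its peers. This is a minimal sketch of the string assembly only; in the real entrypoint the peer IPs would come from the Tutum API or a DNS lookup of the service name, and `build_gcomm` is a hypothetical helper, not code from the repo.

```shell
#!/bin/sh
# Sketch: assemble a Galera wsrep_cluster_address from a list of peer IPs.
# build_gcomm is a made-up helper name; the IP source (Tutum API / DNS)
# is assumed, not shown.
build_gcomm() {
    # With no peers found, emit a bare gcomm:// -- this is what
    # bootstraps (seeds) a brand-new cluster on the first node.
    if [ "$#" -eq 0 ]; then
        echo "gcomm://"
        return
    fi
    # Otherwise join the peer IPs with commas.
    addrs=$(printf '%s,' "$@")
    echo "gcomm://${addrs%,}"
}
```

For example, `build_gcomm 10.7.0.2 10.7.0.3` prints `gcomm://10.7.0.2,10.7.0.3`, while `build_gcomm` with no arguments prints `gcomm://` and the node seeds a new cluster.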
When the first node in the cluster starts, no other nodes are seen, so the cluster is seeded from this node; add your data here. Then start the other nodes/containers and they will discover the first node automatically and begin synchronizing. To connect to your cluster, link the container to MariadbGalera and use mysql://mariadbgalera:3306 as the host; mariadbgalera resolves in a round-robin fashion.
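A quick way to confirm the nodes really joined is to ask any node for `wsrep_cluster_size` (e.g. `mysql -h mariadbgalera -e "SHOW STATUS LIKE 'wsrep_cluster_size'"`) and compare it to the number of containers you started. The check below is a sketch that only parses that output; `cluster_size_ok` is a hypothetical helper, not part of the image.

```shell
#!/bin/sh
# Sketch: verify the Galera cluster size reported by a node.
# Reads "Variable_name<TAB>Value" lines (mysql client output) on stdin;
# cluster_size_ok is a made-up helper name.
cluster_size_ok() {
    expected="$1"
    # Pull the value of the wsrep_cluster_size status variable.
    size=$(awk '$1 == "wsrep_cluster_size" { print $2 }')
    [ "$size" = "$expected" ]
}
```

Piping the `SHOW STATUS` output into `cluster_size_ok 3` succeeds only when all three nodes have joined.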
I wanted to do away with having to know where the data was stored: run multiple containers, with the ability to spin up more that join the cluster automatically, so the MySQL data moves automatically to wherever it is needed.
Sometimes Tutum seems to reuse old volumes when redeploying, even when told not to. This wedges the container, causing a failure, yet the container still appears to be up, so it is still served in the DNS replies seen by other linked containers.
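One way to spot such a reused volume before starting mysqld is to inspect Galera's state file: after an unclean shutdown, `grastate.dat` in the data directory records `seqno: -1`, and a node restarted from that state may refuse to join cleanly. This is a sketch under that assumption; `stale_grastate` is a hypothetical helper, not something the image ships.

```shell
#!/bin/sh
# Sketch: detect a reused data volume whose Galera state file marks an
# unclean shutdown. The grastate.dat location and the seqno convention
# follow Galera defaults; stale_grastate is a made-up helper name.
stale_grastate() {
    datadir="${1:-/var/lib/mysql}"
    state="$datadir/grastate.dat"
    # No state file at all means a fresh volume -- nothing to flag.
    [ -f "$state" ] || return 1
    # seqno -1 indicates the node did not shut down cleanly.
    grep -q '^seqno: *-1' "$state"
}
```

An entrypoint could run this check first and either wipe the stale state or refuse to start, instead of coming up half-broken and still answering DNS.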