replication-manager aims to be the best open source database cluster orchestrator. It embeds best practices for configuration, deployment, HA operations, maintenance tasks, monitoring, and troubleshooting: a toolkit built around the best open source solutions. We design features to hide database clustering complexity, keeping Amazon RDS simplicity in mind.
replication-manager also aims to be configurable in a *nix style, multi-tenant, secured via encryption and ACLs, and user-friendly, offering API, command-line, and HTML interfaces around familiar database and proxy software.
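For example, a minimal multi-tenant configuration file could look like the sketch below, with one section per monitored cluster. The host names and credentials are placeholders, and the exact parameter names are given as an illustration only; check them against the documentation of your replication-manager release.

```toml
# Two independent clusters (tenants) monitored by one replication-manager,
# each section naming its database nodes and credentials.
[cluster_marketing]
db-servers-hosts = "db1.example.local:3306,db2.example.local:3306"
db-servers-prefered-master = "db1.example.local:3306"
db-servers-credential = "monitor:secret"
replication-credential = "repl:secret"

[cluster_billing]
db-servers-hosts = "db3.example.local:3306,db4.example.local:3306"
db-servers-credential = "monitor:secret"
replication-credential = "repl:secret"
```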
replication-manager should bring multi-cluster sharding and routing solutions to address the most difficult issue around databases: SCALABILITY.
replication-manager was initially written with the goal of closing the gap between Galera Cluster and MySQL Master HA (MHA). While Galera Cluster is a really good product, it has some shortcomings, such as performance and cluster-wide locking. replication-manager started as a tailored solution built on newer MySQL and MariaDB features such as Global Transaction ID, semi-synchronous replication, and binary log flashback, aiming to provide High Availability and node switchover without compromising performance by too large a factor.
Since day one, replication-manager has also been adapted to manage deployments for its own testing requirements: a full-stack cluster has to be provisioned, tested, and unprovisioned before moving on to other topologies or releases. We have extended this from existing deployments to services managed by external orchestrators. The oldest integration is OpenSVC, with two modes: through the collector API and, more recently, through the direct cluster node VIP API. With the popularity of Kubernetes, we are also porting this work to it. Another integration is currently being worked out on SlapOS, an open source hyperconverged and edge computing infrastructure.
To perform a switchover while preserving data consistency, replication-manager uses an improved workflow similar to that of common MySQL failover tools such as MHA.
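The Go sketch below illustrates the general shape of such a workflow on MariaDB GTID replication. It is a simplified illustration, not replication-manager's actual code: the hosts, credentials, timeout, and candidate choice are placeholder assumptions, and a real tool also has to handle long-running transactions, unreachable nodes, and traffic redirection, which this sketch omits.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL/MariaDB driver
)

// switchover sketches the MHA-style sequence: freeze writes on the old
// leader, let the candidate catch up on GTID, then swap roles.
func switchover(oldLeader, candidate *sql.DB, candidateHost string) error {
	// 1. Reject new writes on the current leader.
	if _, err := oldLeader.Exec("SET GLOBAL read_only = ON"); err != nil {
		return fmt.Errorf("demote old leader: %w", err)
	}

	// 2. Record the old leader's current GTID position.
	var gtid string
	if err := oldLeader.QueryRow("SELECT @@gtid_current_pos").Scan(&gtid); err != nil {
		return fmt.Errorf("read gtid position: %w", err)
	}

	// 3. Block until the candidate has applied every pending transaction
	//    (MASTER_GTID_WAIT returns 0 on success, -1 on timeout).
	var rc int
	if err := candidate.QueryRow("SELECT MASTER_GTID_WAIT(?, 30)", gtid).Scan(&rc); err != nil || rc != 0 {
		return fmt.Errorf("candidate did not catch up (rc=%d): %v", rc, err)
	}

	// 4. Promote the candidate: stop replication on it and open writes.
	for _, stmt := range []string{"STOP SLAVE", "RESET SLAVE ALL", "SET GLOBAL read_only = OFF"} {
		if _, err := candidate.Exec(stmt); err != nil {
			return fmt.Errorf("promote candidate (%s): %w", stmt, err)
		}
	}

	// 5. Repoint the old leader as a replica of the new one (replication
	//    credentials omitted; CHANGE MASTER TO cannot be prepared, so the
	//    host is interpolated for brevity).
	repoint := fmt.Sprintf("CHANGE MASTER TO MASTER_HOST='%s', MASTER_USE_GTID=slave_pos", candidateHost)
	if _, err := oldLeader.Exec(repoint); err != nil {
		return fmt.Errorf("repoint old leader: %w", err)
	}
	_, err := oldLeader.Exec("START SLAVE")
	return err
}

func main() {
	// Placeholder DSNs; adjust hosts and credentials for a real topology.
	oldLeader, err := sql.Open("mysql", "monitor:secret@tcp(db1.example.local:3306)/")
	if err != nil {
		log.Fatal(err)
	}
	candidate, err := sql.Open("mysql", "monitor:secret@tcp(db2.example.local:3306)/")
	if err != nil {
		log.Fatal(err)
	}
	if err := switchover(oldLeader, candidate, "db2.example.local"); err != nil {
		log.Fatal(err)
	}
}
```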
replication-manager is commonly used as an arbitrator alongside a layer that routes database write traffic to a single leader database node.
Depending on the use case, one or many routing strategies can be used, based on feature maturity, security, or performance requirements.
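As an illustration of such a write-routing layer, the Go sketch below forwards client connections to whichever node an arbitrator currently reports as leader. The arbitrator URL, the `/leader` endpoint, the JSON shape, and the ports are hypothetical placeholders, not replication-manager's documented API; in production this role is typically delegated to mature proxy software rather than hand-rolled code.

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net"
	"net/http"
)

// leaderAddr asks the arbitrator which node currently accepts writes.
// The /leader endpoint and response shape are hypothetical placeholders.
func leaderAddr(arbitratorURL string) (string, error) {
	resp, err := http.Get(arbitratorURL + "/leader")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Host string `json:"host"`
		Port string `json:"port"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return net.JoinHostPort(out.Host, out.Port), nil
}

func main() {
	// Applications connect here for writes instead of to a database directly.
	ln, err := net.Listen("tcp", ":3307")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// Resolve the current leader for every new connection so that
			// a switchover is picked up without restarting the router.
			addr, err := leaderAddr("http://127.0.0.1:10001") // placeholder arbitrator
			if err != nil {
				log.Println("no leader known:", err)
				return
			}
			leader, err := net.Dial("tcp", addr)
			if err != nil {
				log.Println("leader unreachable:", err)
				return
			}
			defer leader.Close()
			// Shuttle bytes in both directions until either side closes.
			go io.Copy(leader, c)
			io.Copy(c, leader)
		}(client)
	}
}
```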