The Database Configurator is replication-manager's built-in rules engine for generating, delivering, and tracking database and proxy configuration files. It translates a set of cluster tags and hardware resource settings into ready-to-use my.cnf files, directory structures, and bootstrap scripts — packaged as a config.tar.gz archive that an init container or SSH script can unpack directly into the service file system.
The configurator runs on every cluster. It is the sole source of truth for the configuration files deployed to each monitored database server.
prov-db-tags + prov-db-memory / disk / iops / cores
│
▼
Compliance module (embedded opensvc/moduleset_mariadb.svc.mrm.db.json)
│
│ evaluates tag filters → selects matching rulesets
│ substitutes %%ENV:…%% template variables
│
▼
<datadir>/<cluster>/<host_port>/init/
├── etc/mysql/
│ ├── conf.d/ tag-generated .cnf fragments (symlinked)
│ ├── rc.d/ ordered symlinks to active fragments
│ └── custom.d/ user overlay (01_preserved, 02_delta, 03_agreed)
├── init/
│ └── dbjobs_new maintenance job script
└── data/
└── .system/ InnoDB undo/redo/tmp directory skeleton
config.tar.gz ← packaged from init/ tree, served via HTTP API
Each time tags or resource settings change, replication-manager regenerates all config archives. The init container (container mode) or SSH provisioner (osc mode) unpacks the archive into the live service volume on the next apply or rolling restart.
When a cluster is first connected — or when you request it explicitly — replication-manager reads the live database variables and installed plugins and automatically derives the matching tags. This means you can point replication-manager at an existing, hand-tuned MariaDB server and have it reconstruct the tag set that describes that configuration.
Discovery maps variables to tags in the following ways:
| Variable | Tag derived |
|---|---|
| INNODB_DOUBLEWRITE=OFF | nodoublewrite |
| INNODB_FLUSH_LOG_AT_TRX_COMMIT ≠ 1 | nodurable |
| INNODB_FLUSH_METHOD ≠ O_DIRECT | noodirect |
| LOG_BIN_COMPRESS=ON | compressbinlog |
| INNODB_DEFRAGMENT=ON | autodefrag |
| INNODB_COMPRESSION_DEFAULT=ON | compresstable |
| QUERY_CACHE_SIZE=0 | noquerycache |
| SLOW_QUERY_LOG=ON | slow |
| GENERAL_LOG=ON | general |
| PERFORMANCE_SCHEMA=ON | pfs |
| LOG_OUTPUT=TABLE | logtotable |
| HAVE_SSL=YES | ssl |
| READ_ONLY=ON | readonly |
| SKIP_NAME_RESOLVE=OFF | resolvdns |
| LOCAL_INFILE=ON | localinfile |
| LOG_BIN=OFF | nobinlog |
| LOG_SLAVE_UPDATES=OFF | nologslaveupdates |
| RPL_SEMI_SYNC_MASTER_ENABLED=ON | semisync |
| GTID_STRICT_MODE=ON | gtidstrict |
| TX_ISOLATION=READ-COMMITTED | readcommitted |
| LOWER_CASE_TABLE_NAMES=1 | lowercasetable |
| USER_STAT_TABLES=PREFERABLY_FOR_QUERIES | eits |
| BINLOG_FORMAT=STATEMENT | statement |
| BINLOG_FORMAT=ROW | row |
| JOIN_CACHE_LEVEL=8 | hashjoin |
| JOIN_CACHE_LEVEL=6 | mrrjoin |
| JOIN_CACHE_LEVEL=2 | nestedjoin |
| SQL_MODE=ORACLE | sqlmodeoracle |
| SQL_MODE="" | sqlmodeunstrict |
| Plugin BLACKHOLE installed | blackhole |
| Plugin QUERY_RESPONSE_TIME | userstats |
| Plugin SQL_ERROR_LOG | sqlerror |
| Plugin METADATA_LOCK_INFO | metadatalocks |
| Plugin SERVER_AUDIT | audit |
| Plugin CONNECT | connect |
| Plugin SPIDER | spider |
| Plugin MROONGA | mroonga |
| TOKUDB_CACHE_SIZE variable present | tokudb |
| ROCKSDB_BLOCK_CACHE_SIZE variable present | myrocks |
| S3_PAGECACHE_BUFFER_SIZE variable present | s3 |
| Plugin CRACKLIB_PASSWORD_CHECK | pwdcheckcracklib |
| Plugin SIMPLE_PASSWORD_CHECK | pwdchecksimple |
| wsrep plugin active | wsrep |
Discovery also reads memory values directly from the running server (INNODB_BUFFER_POOL_SIZE, KEY_BUFFER_SIZE, ARIA_PAGECACHE_BUFFER_SIZE, etc.) and sets prov-db-memory to the detected total — so the generated config targets the same memory footprint as the existing server.
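As a back-of-envelope illustration of that memory math, the sketch below sums the main cache sizes into the MB figure that would land in prov-db-memory. The byte values are made-up examples; real discovery reads them from the running server.

```shell
#!/bin/sh
# Hypothetical example: total the main cache sizes (bytes) and convert to
# MB, the unit prov-db-memory uses. Values are illustrative, not live reads.
innodb_buffer_pool_size=$((8 * 1024 * 1024 * 1024))   # 8 GB
key_buffer_size=$((128 * 1024 * 1024))                # 128 MB
aria_pagecache_buffer_size=$((128 * 1024 * 1024))     # 128 MB

total_mb=$(( (innodb_buffer_pool_size + key_buffer_size + aria_pagecache_buffer_size) / 1024 / 1024 ))
echo "$total_mb"   # 8448
```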
The generated archive is served at:
GET /api/clusters/{clusterName}/servers/{host}/{port}/config
An init container that shares the network namespace with the database container fetches this URL at service startup and unpacks it:
# OpenSVC container spec
[container#0002]
type = docker
image = busybox
netns = container#0001
command = sh -c 'wget -qO- http://{env.mrm_api_addr}/api/clusters/{env.mrm_cluster_name}/servers/{env.ip_pod01}/{env.port_pod01}/config | tar xzvf - -C /data'
# Kubernetes init container
initContainers:
  - name: install
    image: busybox
    command:
      - sh
      - -c
      - 'wget -qO- http://replication-manager:10001/api/clusters/my-cluster/servers/db1/3306/config | tar xzf - -C /data'
replication-manager regenerates the archive locally and can push it to database hosts over SSH. The config is unpacked into the server's data directory, and the dbjobs_new script is placed in {datadir}/init/init/dbjobs_new.
By default the config endpoint requires no credentials, so init containers can bootstrap without pre-provisioned tokens. To require JWT authentication (which protects the passwords embedded in the archive):
api-credentials-secure-config = true
When enabled, the bootstrap script fetches a session token first using environment variables (REPLICATION_MANAGER_USER, REPLICATION_MANAGER_PASSWORD, REPLICATION_MANAGER_URL) and then requests the config archive with Bearer auth.
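A minimal sketch of that flow is shown below. The environment variable names come from this documentation; the /api/login endpoint, its JSON response shape, and the function names are assumptions and may differ from the shipped bootstrap script.

```shell
#!/bin/sh
# Sketch of the authenticated fetch flow (endpoint and JSON shape assumed).
set -e

parse_token() {
  # Pull the value of "token" out of a one-line JSON response.
  sed -n 's/.*"token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

get_token() {
  wget -qO- --post-data \
    "{\"username\":\"$REPLICATION_MANAGER_USER\",\"password\":\"$REPLICATION_MANAGER_PASSWORD\"}" \
    "$REPLICATION_MANAGER_URL/api/login" | parse_token
}

fetch_config() {
  wget -qO- --header="Authorization: Bearer $(get_token)" \
    "$REPLICATION_MANAGER_URL/api/clusters/$REPLICATION_MANAGER_CLUSTER_NAME/servers/$REPLICATION_MANAGER_HOST_NAME/$REPLICATION_MANAGER_HOST_PORT/config" \
    | tar xzf - -C /var/lib/mysql
}
```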
A pre-built bootstrap script that handles this flow is served at:
/static/configurator/opensvc/bootstrap
/static/configurator/onpremise/repository/debian/mariadb/bootstrap
/static/configurator/onpremise/repository/redhat/mariadb/bootstrap
The configurator provides two distinct scripts for on-premise (SSH) operation — bootstrap and start — with different behavior around config fetching and data directory initialization.
| Script | When to use | Config fetch | Data directory |
|---|---|---|---|
| bootstrap | First-time provisioning of a new node | Always downloads fresh config | Wipes /var/lib/mysql, copies .system skeleton |
| start | All subsequent restarts | Conditional (see below) | Copies .system with cp -rpn — never overwrites existing files |
Scripts are served by replication-manager at:
/static/configurator/onpremise/repository/debian/mariadb/bootstrap
/static/configurator/onpremise/repository/debian/mariadb/start
/static/configurator/onpremise/repository/redhat/mariadb/start
/static/configurator/onpremise/package/linux/mariadb/start
Both scripts receive all credentials and addressing via injected environment variables (REPLICATION_MANAGER_URL, REPLICATION_MANAGER_USER, REPLICATION_MANAGER_PASSWORD, REPLICATION_MANAGER_CLUSTER_NAME, REPLICATION_MANAGER_HOST_NAME, REPLICATION_MANAGER_HOST_PORT). See Environment Variables in the Maintenance chapter.
The start script does not unconditionally re-fetch the config archive. Before downloading anything it checks:
GET /api/clusters/{clusterName}/servers/{host}/{port}/need-config-fetch
Only if the endpoint reports that a fetch is needed does the script download config.tar.gz, unpack it, apply the .cnf files, and then start the database. This matters in normal restarts: if nothing changed on the replication-manager side since the last start, the server just starts immediately using its local config — no network round-trip to fetch the archive.
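The conditional start can be sketched as follows. The two endpoints are the ones documented here; the function names and the overall shape are illustrative, not the real start script.

```shell
#!/bin/sh
# Sketch of the start script's decision: ask need-config-fetch first,
# download the archive only when the answer is "true".
base="$REPLICATION_MANAGER_URL/api/clusters/$REPLICATION_MANAGER_CLUSTER_NAME/servers/$REPLICATION_MANAGER_HOST_NAME/$REPLICATION_MANAGER_HOST_PORT"

need_config_fetch() {
  wget -qO- "$base/need-config-fetch"
}

start_db() {
  if [ "$(need_config_fetch)" = "true" ]; then
    # Config changed on the replication-manager side: refresh it first.
    wget -qO- "$base/config" | tar xzf - -C /var/lib/mysql
  fi
  # mysqld is then started with whatever config is now on disk
}
```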
The need-config-fetch response is controlled by a per-server cookie that replication-manager manages. The cookie state is driven by:
| prov-db-start-fetch-config | |
|---|---|
| Description | When true (default), replication-manager clears the "no-fetch" cookie on each monitoring tick, so the next start will re-fetch config. Set to false to permanently suppress config fetching on start — the database will always start with whatever config is already on disk. |
| Type | Boolean |
| Default | true |
# Never re-fetch config on start; trust the existing files on disk
prov-db-start-fetch-config = false
Use false when you are managing config files externally (e.g., configuration management tools, manual tuning) and do not want replication-manager to overwrite them on restart.
When the start or bootstrap script copies new .cnf files from the archive, it applies a non-destructive rule for my.cnf:
If my.cnf already exists and does not start with # Generated by Signal18 replication-manager, it is assumed to be a hand-written file. The script renames the new Signal18-generated file to my.cnf.new and keeps the existing my.cnf untouched. If my.cnf starts with # Generated by Signal18 replication-manager, it was placed by a previous run and is replaced normally. To override this and force the Signal18 config to win regardless:
export REPLICATION_MANAGER_FORCE_CONFIG=true
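The rule above can be sketched as a small shell helper. install_mycnf and its arguments are illustrative names, not the real script's internals; only the marker comment and the force variable come from this documentation.

```shell
#!/bin/sh
# Sketch of the non-destructive my.cnf install rule.
SIG='# Generated by Signal18 replication-manager'

install_mycnf() {
  new="$1" dir="$2"
  if [ -f "$dir/my.cnf" ] \
     && ! head -n1 "$dir/my.cnf" | grep -q "^$SIG" \
     && [ "$REPLICATION_MANAGER_FORCE_CONFIG" != "true" ]; then
    # Hand-written my.cnf: keep it, stage the generated file alongside.
    cp "$new" "$dir/my.cnf.new"
  else
    # Missing, previously generated, or forced: install normally.
    cp "$new" "$dir/my.cnf"
  fi
}
```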
The .system Directory

The archive always contains a data/.system/ skeleton. This directory holds all files that must live inside the MySQL datadir but are not user data:
/var/lib/mysql/.system/
├── innodb/
│ ├── undo/ InnoDB undo tablespaces (innodb_undo_directory)
│ └── redo/ InnoDB redo log files (innodb_log_group_home_dir)
├── aria/ Aria engine transaction log (aria_log_dir_path)
├── tmp/ Temporary files (tmpdir)
├── repl/ Replication relay logs and position files
├── logs/
│ ├── error.log MariaDB error log
│ ├── slow-query.log Slow query log
│ ├── server_audit.log Audit plugin log
│ └── sql_errors.log SQL error log plugin
└── tokudb/ TokuDB data files
The generated .cnf fragments point InnoDB, Aria, and log variables at these paths using relative syntax (./.system/...). The configurator rewrites these to absolute paths depending on context:
./.system → /var/lib/mysql/.system (container mode)
./.system → {basedir}/var/lib/mysql/.system (package installs under a basedir)

On bootstrap, the entire /var/lib/mysql is removed and the .system tree is copied in fresh — this is intentional for first-time provisioning.
On start, the .system tree is copied with cp -rpn which means no file is ever overwritten. InnoDB undo and redo logs, replication relay logs, and log files already present on disk are left completely intact. Only missing directories and files from the archive skeleton are created.
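The cp -rpn semantics can be demonstrated in isolation: a file that already exists in the live tree survives the copy, while entries present only in the skeleton are created. Directory and file names below are examples.

```shell
#!/bin/sh
# Demonstrate the no-clobber copy the start script relies on.
workdir=$(mktemp -d)
mkdir -p "$workdir/skel/.system/logs" "$workdir/live/.system/logs"
mkdir -p "$workdir/skel/.system/tmp"                       # only in the skeleton
echo "fresh template" > "$workdir/skel/.system/logs/error.log"
echo "months of real logs" > "$workdir/live/.system/logs/error.log"

# -r recurse, -p preserve attributes, -n never overwrite existing files.
# Some cp versions exit nonzero when a file is skipped, hence the || true.
cp -rpn "$workdir/skel/.system/." "$workdir/live/.system/" || true

cat "$workdir/live/.system/logs/error.log"   # still "months of real logs"
ls -d "$workdir/live/.system/tmp"            # created from the skeleton
```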
To force immediate regeneration of all config archives for a cluster (without waiting for the next monitoring tick):
POST /api/clusters/{clusterName}/actions/regenerate-configs
replication-manager re-evaluates tags and resource settings, writes new init/ trees, and rebuilds all config.tar.gz archives. Servers pick up the updated config on their next init-container launch or SSH apply cycle.
Inside the replication-manager working directory:
{working-dir}/{cluster-name}/{host}_{port}/
├── config.tar.gz ← packed archive served to init containers
├── init/
│ ├── etc/mysql/
│ │ ├── conf.d/ tag-generated .cnf fragments
│ │ ├── rc.d/ ordered symlinks (loaded by main my.cnf)
│ │ ├── custom.d/ user overlays (preserved, delta, agreed)
│ │ └── ssl/ TLS certificates
│ └── init/
│ └── dbjobs_new maintenance job script
├── 01_preserved.cnf server-specific locked variables (see Config Tracking)
├── 02_delta.cnf calculated drift between expected and deployed
├── 03_agreed.cnf manually accepted deviations
└── preserved_variables.cnf cluster-wide preserved variables
etc/mysql/rc.d contains numbered symlinks that control load order. etc/mysql/custom.d is read last, so user overlays always win over tag-generated fragments. This is where the three-layer preserved/delta/agreed files land inside the container.
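That load order can be pictured as the top-level my.cnf chaining the two directories. This is a sketch using standard MariaDB !includedir syntax; the actual generated file may differ.

```
# Illustrative only — the generated my.cnf may use different directives
!includedir /etc/mysql/rc.d
!includedir /etc/mysql/custom.d
```

Because custom.d is included after rc.d, any variable set in a user overlay overrides the same variable from a tag-generated fragment.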
See Config Tracking for a full explanation of 01_preserved.cnf, 02_delta.cnf, and 03_agreed.cnf.
See Tags for the complete tag reference.
See Configuration Guide for all prov-db-* settings.