I have a replica set of two MongoDB nodes running (with an arbiter) and I am now wondering about the best backup and recovery solution. I am aware of the official documentation, but it does not give me enough pros and cons for each option. In particular, our team wants to decide between an Ops Manager solution and mongodump. I am not experienced with either of them, so could you please help me out with more arguments and experiences? These are the arguments I have collected so far; feel free to correct them if they are wrong:
Arguments for mongodump and against OpsManager
- simpler: just one command line for a snapshot and another one for the recovery
- no additional servers or licenses required, unlike the suggested Ops Manager setup
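To illustrate the "just a command line" argument, here is a minimal sketch of a mongodump round trip. Host names, the replica-set name `rs0`, and the backup path are hypothetical; `--oplog` captures oplog entries written during the dump so that `--oplogReplay` can restore to a consistent point in time:

```shell
# Snapshot: dump the whole replica set, including an oplog slice
# for a consistent point-in-time restore.
mongodump --host "rs0/db1.example.com:27017,db2.example.com:27017" \
          --oplog --gzip --out /backup/2016-01-01

# Recovery: restore the dump and replay the captured oplog entries.
mongorestore --host "rs0/db1.example.com:27017,db2.example.com:27017" \
             --oplogReplay --gzip /backup/2016-01-01
```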
Arguments for OpsManager and against mongodump
- a tested solution, provided directly by the team that develops MongoDB
- cleaner and more sustainable, since it can be configured via a GUI and does not require command-line access
- easy logging of each backup run, whereas mongodump requires a custom (Power)Shell script wrapped around it
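That wrapper script does not have to be elaborate, though. A minimal POSIX shell sketch (the log path and the example mongodump invocation are hypothetical, not from any official tooling):

```shell
#!/bin/sh
# Hypothetical logging wrapper: runs a backup command and appends
# one timestamped OK/FAIL line per run to $LOG.
LOG=${LOG:-/var/log/mongodump.log}

run_backup() {
    # usage: run_backup mongodump --oplog --gzip --out /backup/dir
    ts=$(date -u +%FT%TZ)
    if "$@" >>"$LOG" 2>&1; then
        echo "$ts OK   $*" >>"$LOG"
    else
        echo "$ts FAIL $* (exit $?)" >>"$LOG"
    fi
}

# Example (commented out; requires a reachable replica set):
# run_backup mongodump --host "rs0/db1:27017,db2:27017" --oplog --gzip \
#     --out "/backup/$(date +%Y%m%d)"
```

Scheduled from cron, this gives you a one-line-per-run history, which covers the basic auditing need without Ops Manager.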
Related questions on Stackexchange which did not solve my issue:
- Backup with mongoexport or mongodump?
This might not be a full answer (others will have valid opinions too), but I am OK with not getting credit.
Arguments in favor of Ops Manager:
- Continuous backup: you can recover up to the last committed write.
- You can set your retention policy once and do not have to worry about it again.
- You can automatically keep multiple copies of your backups for redundancy.
- Restoring a non-production environment (DEV/QA/INT) takes a matter of minutes.
- mongodump excludes the content of the local database from its output, which is problematic for sharded clusters and replica sets.
- mongodump only captures the documents in the database; it does not include index data. mongorestore or mongod must then rebuild the indexes after restoring the data.
- mongodump can adversely affect the performance of the mongod. If your data is larger than system memory, mongodump will push the working set out of memory.
- mongodump is not suitable for large databases. The official documentation states:
The mongodump and mongorestore utilities work with BSON data dumps, and are useful for creating backups of small deployments. For resilient and non-disruptive backups, use a file system or block-level disk snapshot function, such as the methods described in the MongoDB Backup Methods document.

Because mongodump and mongorestore operate by interacting with a running mongod instance, they can impact the performance of your running database. Not only do the tools create traffic for a running database instance, they also force the database to read all data through memory. When MongoDB reads infrequently used data, it can evict more frequently accessed data, causing a deterioration in performance for the database's regular workload.
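If you do stay with mongodump, one common way to soften the performance impact described above is to dump from a secondary, so the cache eviction hits a node that is not serving your primary workload. A sketch, assuming the same hypothetical replica set (`--readPreference` is a standard mongodump option):

```shell
# Dump from a secondary so the primary's working set is untouched.
mongodump --host "rs0/db1.example.com:27017,db2.example.com:27017" \
          --readPreference secondary --gzip --out /backup/from-secondary
```

This does not change any of the other limitations (no index data, no local database, unsuitable for very large deployments), but it keeps the read pressure off the primary.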