Edit: Results tabulated, thanks for all y'alls input!
Results fitting within the listed categories
Just do it live
- Backup while it is expected to be idle @[email protected] @[email protected] @[email protected]
- @[email protected] suggested adding a real long-ass backup script to run monthly to limit overall downtime

Shut down all database containers
- Shut down all containers -> backup @[email protected]
- Leveraging NixOS impermanence, reboot once a day and backup @[email protected]
Long-ass backup script
- Long-ass backup script leveraging a backup method in series @[email protected] @[email protected]
Mythical database live snapshot command (it seems to be pg_dumpall for Postgres and mysqldump for MySQL, though some MySQL images don't have that command, at least for me)
- Dump Postgres via pg_dumpall on a schedule, backup normally on another schedule @[email protected]
- Dump MySQL via mysqldump and pipe it to restic directly @[email protected]
- Dump Postgres via pg_dumpall -> backup -> delete dump @[email protected] @[email protected]
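For reference, the dump-style options above boil down to cron job fragments roughly like these (the container name "db", the users, and the repo paths are my own placeholders, not from any one commenter):

```shell
# dump -> backup -> delete, per the pg_dumpall bullet above
docker exec db pg_dumpall -U postgres > /backups/pg-all.sql
restic -r /srv/restic-repo backup /backups/pg-all.sql
rm /backups/pg-all.sql

# or pipe mysqldump straight into restic with no temp file,
# per the mysqldump bullet above
docker exec db mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD" \
  | restic -r /srv/restic-repo backup --stdin --stdin-filename mysql-all.sql
```

restic's `--stdin`/`--stdin-filename` is what makes the pipe version work without ever writing the dump to disk.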
Docker image that includes Mythical database live snapshot command (Postgres only)
- Make your own Docker image (https://gitlab.com/trubeck/postgres-backup) and set it to run on a schedule; it includes restic, so it backs itself up @[email protected] (thanks for uploading your scripts!)
- Add the Docker image prodrigestivill/postgres-backup-local and set it to run on a schedule, then back those dumps up on another schedule @[email protected] @[email protected] (also recommended additionally backing up the running database and trying that first during a restore)
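A minimal compose sketch for that image (the service names, credentials, and schedule are placeholders I'm assuming; check the image's README for the full variable list):

```yaml
# assumed service layout; SCHEDULE takes standard cron syntax
pgbackup:
  image: prodrigestivill/postgres-backup-local
  volumes:
    - ./db-bak:/backups        # dumps land here; back this folder up
  environment:
    POSTGRES_HOST: db          # your postgres service's name
    POSTGRES_DB: app
    POSTGRES_USER: app
    POSTGRES_PASSWORD: changeme
    SCHEDULE: "@daily"
    BACKUP_KEEP_DAYS: "7"      # retention handled by the image itself
```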
New categories

Snapshot it (seems to act like a power outage to the database)
- LVM snapshot -> backup that @[email protected]
- ZFS snapshot -> backup that @[email protected] (real-world recovery experience shows the database acts like it's recovering from a power outage, and it works)
- (I assume a btrfs snapshot would also work)
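The snapshot-then-backup flow sketches out like this (the dataset name and repo path are assumptions; ZFS exposes each snapshot as a read-only view under the hidden .zfs directory):

```shell
# take an atomic snapshot of the dataset holding the docker volumes
SNAP="restic-$(date +%F-%H%M)"
zfs snapshot "tank/appdata@$SNAP"

# back up the snapshot's frozen, read-only view instead of the live files
restic -r /srv/restic-repo backup "/tank/appdata/.zfs/snapshot/$SNAP"

# drop the snapshot once the backup finishes
zfs destroy "tank/appdata@$SNAP"
```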
One-liner self-contained command for crontab
- One-liner crontab entry that prunes to maintain 7 backups, dumps Postgres via pg_dumpall, zips it, then rclones them @[email protected]
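I don't have the exact one-liner, but the shape of it would be something like the crontab comment below (times, paths, and the rclone remote are placeholders). The prune step just keeps the 7 newest dumps by date-stamped name, demonstrated here on dummy files:

```shell
# crontab sketch: dump, zip, prune to 7, then rclone (all names assumed)
# 0 3 * * * docker exec db pg_dumpall -U postgres | gzip > /backups/dump-$(date +\%F).sql.gz && ls -1 /backups/dump-*.sql.gz | sort -r | tail -n +8 | xargs -r rm -- && rclone copy /backups remote:pg-backups

# the prune step in isolation, run against dummy files:
mkdir -p /tmp/prune-demo && cd /tmp/prune-demo
for i in $(seq -w 1 10); do touch "dump-2024-01-$i.sql.gz"; done
# sort newest-first by name, keep the first 7, delete the rest
ls -1 dump-*.sql.gz | sort -r | tail -n +8 | xargs -r rm --
ls -1 dump-*.sql.gz | wc -l    # 7 newest-dated files remain
```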
Turns out Borgmatic has database hooks
- Borgmatic with its explicit support for databases via hooks (autorestic has hooks too, but it looks like you have to write the database handling yourself) @[email protected]
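For reference, the borgmatic database hook is just a config stanza, roughly like this (names and connection details are placeholders; newer borgmatic versions move these options out from under `hooks:`, so check the docs for your version):

```yaml
# borgmatic config sketch: dump all Postgres databases before each backup
hooks:
    postgresql_databases:
        - name: all              # dump every database in the cluster
          hostname: localhost    # assumed connection details
          username: postgres
```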
I've searched long and hard on this and haven't really seen a good consensus that makes sense. Search results are really failing me here; queries like "restic backup database" get me garbage.
I've got databases in docker containers in LXC containers, but that shouldn't matter (I think).
(meme about containers in containers)
I've seen:
- Just backup the databases like everything else, they're "transactional" so it's cool
- Some extra docker image to load in with everything else that shuts down the databases in docker so they can be backed up
- Shut down all database containers while the backup happens
- A long-ass backup script that shuts down containers, backs them up, and then moves to the next in the script
- Some mythical mentions of "database should have a command to do a live snapshot, git gud"
None seem turnkey except for the first, but since so many other options exist I have a feeling the first option isn't something you can rest easy with.
I'd like to minimize backup downtime, obviously; what if the backup for whatever reason takes a long time? I'd denial-of-service myself trying to back up my own service.
I'd also like to avoid a "long-ass backup script" because autorestic/borgmatic seem so nice to use. I could, but I'd be sad.
So, what do y'all do to backup docker databases with backup programs like Borg/Restic?
I tried to find this on DDG but also had trouble, so I dug it out of my docker compose.
Use this docker container: prodrigestivill/postgres-backup-local (I have one of these for every docker compose stack/app).
It connects to your Postgres and runs pg_dump on a schedule you set, with retention (you choose how many to save). The output then goes to whatever folder you want.
So I have a main folder called docker-data; this folder is backed up by borgmatic.
Inside I have a folder per app, like authentik.
In that I have folders like data, database, db-bak, etc.
The Postgres data would be in database, and the output of the above dump would be in the db-bak folder.
So if I need to recover something, the first step is to just copy the whole app folder and see if that works. If not, I can grab a database dump and restore it into the database and see if that works. If that fails, I can pull a dump from any of my previous backups until I find one that works.
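If it helps anyone, restoring one of those dumps back into the running container looks roughly like this (the container name, user, and filename are assumptions, not the commenter's exact setup):

```shell
# feed a gzipped plain-SQL pg_dump/pg_dumpall file back into Postgres
gunzip -c db-bak/dump-2024-01-07.sql.gz | docker exec -i db psql -U postgres
```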
I don't shut down or stop the app container to back up the database.
In addition to hourly Borg backups kept for 24 hours, I have ZFS snapshots every 5 minutes kept for an hour, and the pg_dump happens every hour as well. For a homelab this is probably more than sufficient.
Thorough, thanks! I see you and some others are using "asynchronous" backups where the databases backup on a schedule and the backup program does its thing on its own time. That might actually be the best way!