Many moons ago, I deployed a new database instance inside a Docker container alongside a production application. As these things go, it was only after the database had been deployed for a few weeks and filled up with real user data that I realized I'd forgotten to add a physical Docker volume, leaving me with no way to access and back up the data folders inside the container.
As an emergency fix, I cooked up a quick script to query the database once every day, dump all of its data to a file, and then encrypt/upload that backup somewhere offsite. That worked, but it was untenable and prone to failure at best; it would be only a matter of time until the database became too large and the queries became too slow. I had to think of another solution, but without a Docker volume giving access to the files and folders inside, it was unclear if there was anything that could be done.
Thankfully, after a little bit of research and a lot of insight from this answer on StackOverflow, I learned that it's possible to execute an arbitrary command in an existing Docker container, and then copy files out of it.
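The general pattern boils down to just two Docker commands. Here's a dry-run sketch of the idea; the container name and paths are hypothetical placeholders, and the commands are echoed rather than executed so the sketch runs anywhere:

```bash
#!/usr/bin/env bash
# The two Docker primitives the backup approach relies on, as a dry run.
# "my-db-container" and the paths are hypothetical placeholders; each
# command is echoed instead of executed. Drop the echos to run them for real.
CONTAINER="my-db-container"

# docker exec runs an arbitrary command inside a running container:
echo docker exec "$CONTAINER" ls /var/opt/mssql

# docker cp copies files from the container's filesystem to the host:
echo docker cp "$CONTAINER:/var/opt/mssql/data.bak" ./data.bak
```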
Using that knowledge, I wrote a small shell script to attach to the Docker database container, run a SQL backup command using the `sqlcmd` binary, and then copy the backup out onto my host machine where it could be squirreled away somewhere safe. Here's how I wrote the script:
- Make sure the script knows which container and which database (e.g. `master`) is being backed up. My script expects the caller to pass in both of those things, but you can just as easily hardcode them instead.
- Generate an automatic backup file name based on the current date and time. Also make sure the output folder exists on the host machine, or else the `docker cp` command may fail.
- Connect to the target container with `docker exec` and use the `sqlcmd` binary to execute a `BACKUP DATABASE [$dbName]` SQL query. This command cannot move the file outside of the container, so any paths you're using here are relative to the container, not your own file system.
- Use `docker cp` to copy the target file out of the container and onto your file system.
Note: this script is written for Unix systems (or anything with a bash shell, which includes the Windows Subsystem for Linux), but you could just as easily write the same backup script in PowerShell with only a few small changes.
```bash
#!/usr/bin/env bash

CONTAINER_NAME=$1
DATABASE_NAME=$2

if [ -z "$CONTAINER_NAME" ]
then
    echo "Usage: $0 [container name] [database name]"
    exit 1
fi

if [ -z "$DATABASE_NAME" ]
then
    echo "Usage: $0 [container name] [database name]"
    exit 1
fi

# Set bash to exit if any further command fails
set -e
set -o pipefail

# Create a file name for the backup based on the current date and time
# Example: 2019-01-27_13:42:00.master.bak
FILE_NAME=$(date +%Y-%m-%d_%H:%M:%S."$DATABASE_NAME".bak)

# Make sure the backups folder exists on the host file system
mkdir -p "./backups"

# Make sure the backups folder exists inside the container too,
# or else the BACKUP DATABASE command may fail
docker exec "$CONTAINER_NAME" mkdir -p /var/opt/mssql/backups

echo "Backing up database '$DATABASE_NAME' from container '$CONTAINER_NAME'..."

# Create a database backup with sqlcmd
docker exec -it "$CONTAINER_NAME" /opt/mssql-tools/bin/sqlcmd -b -V16 -S localhost -U SA \
    -Q "BACKUP DATABASE [$DATABASE_NAME] TO DISK = N'/var/opt/mssql/backups/$FILE_NAME' WITH NOFORMAT, NOINIT, NAME = '$DATABASE_NAME-full', SKIP, NOREWIND, NOUNLOAD, STATS = 10"

echo ""
echo "Exporting file from container..."

# Copy the created file out of the container to the host filesystem
docker cp "$CONTAINER_NAME:/var/opt/mssql/backups/$FILE_NAME" "./backups/$FILE_NAME"

echo "Backed up database '$DATABASE_NAME' to ./backups/$FILE_NAME"
echo "Done!"
```
This backup script assumes your SQL administrator username is the default "SA". When you run the script, you will be prompted to enter your administrator password. Assuming it's correct, the `sqlcmd` tool will run the backup operation, and the rest of the script will copy that file out onto your host machine.
One important detail: when the script runs `sqlcmd`, it passes the `-b -V16` options (note the capitalization of "V") to tell the executable it should exit with an error code if the query fails. Without those two options, the executable will always return a successful status code, even if the query itself fails; that would make the script think the backup actually succeeded, and it would print out its remaining messages as if it had.
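You can see the difference with a toy example that needs no SQL Server at all, using `true` and `false` to simulate the two exit-code behaviors:

```bash
#!/usr/bin/env bash
# Why exit codes matter: `true` simulates sqlcmd WITHOUT -b (it exits 0 even
# when the query failed, so the failure is silently swallowed), while `false`
# simulates sqlcmd WITH -b after a failed query (non-zero exit, so `set -e`
# or an explicit check can catch it).
if true; then
    RESULT_WITHOUT_B="backup 'succeeded'"    # failure went unnoticed
fi

if ! false; then
    RESULT_WITH_B="backup failure detected"  # script can stop and alert
fi

echo "$RESULT_WITHOUT_B"
echo "$RESULT_WITH_B"
```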
Assuming the query doesn't fail, though, this is what you'd see in your terminal after running the script:
```bash
$ ./backup.sh sql-db-backup-example Blackbox
# Backing up database 'Blackbox' from container 'sql-db-backup-example'...
# Password:
# 10 percent processed.
# 20 percent processed.
# 31 percent processed.
# 40 percent processed.
# 50 percent processed.
# 60 percent processed.
# 71 percent processed.
# 80 percent processed.
# 90 percent processed.
# Processed 456 pages for database 'Blackbox', file 'Blackbox' on file 1.
# 100 percent processed.
# Processed 4 pages for database 'Blackbox', file 'Blackbox_log' on file 1.
# BACKUP DATABASE successfully processed 460 pages in 0.026 seconds (138.070 MB/sec).
# Exporting file from container...
# Backed up database 'Blackbox' to ./backups/2020-01-28_18:14:13.Blackbox.bak
# Done!
```
After that, you'd have the backup file conveniently placed in a `./backups` folder, where you can do whatever you like with it. In my case, I use a backup service called Tarsnap, which bills itself as an "online backup service for the truly paranoid". However, uploading the file to Dropbox, OneDrive, Azure, AWS, etc. works just as well.
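Whatever service you pick, it's worth encrypting the backup before it leaves your machine. One possibility (a sketch only, not how Tarsnap works; the passphrase, file contents, and upload destination are all placeholders) is to passphrase-encrypt the file with openssl first:

```bash
#!/usr/bin/env bash
# Hedged sketch: passphrase-encrypt a backup before shipping it offsite.
# The backup content, passphrase, and upload destination are placeholders;
# in practice, read the passphrase from a secrets store, not the script.
mkdir -p ./backups
BACKUP="./backups/example.bak"
echo "pretend database backup" > "$BACKUP"   # stand-in for a real .bak file

# Symmetric AES encryption with a key derived from the passphrase
openssl enc -aes-256-cbc -pbkdf2 -pass pass:changeme \
    -in "$BACKUP" -out "$BACKUP.enc"

# The .enc file is what you'd upload with scp/rsync/your cloud CLI, e.g.:
# scp "$BACKUP.enc" user@offsite.example.com:/srv/backups/
echo "Encrypted $BACKUP to $BACKUP.enc"
```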
Of course, once you've got backups backing up, you need to think about how you're going to restore from one once the unthinkable inevitably happens. Happily, a restore script turns out to be just as simple as the backup script; it's only a matter of reversing the order of commands and figuring out which file to restore from:
```bash
#!/usr/bin/env bash

CONTAINER_NAME=$1
DATABASE_NAME=$2
BACKUP_NAME=$3

if [ -z "$CONTAINER_NAME" ]
then
    echo "Usage: $0 [container name] [database name] [backup file name]"
    exit 1
fi

if [ -z "$DATABASE_NAME" ]
then
    echo "Usage: $0 [container name] [database name] [backup file name]"
    exit 1
fi

if [ -z "$BACKUP_NAME" ]
then
    echo "Usage: $0 [container name] [database name] [backup file name]"
    exit 1
fi

# Check if the backup file can be found
if [ ! -f "$BACKUP_NAME" ]
then
    echo "Backup file $BACKUP_NAME does not exist."
    exit 1
fi

# Set bash to exit if any command fails
set -e
set -o pipefail

FILE_NAME=$(basename "$BACKUP_NAME")

echo "Copying backup file $BACKUP_NAME to container '$CONTAINER_NAME'. Note: the container should already be running!"

# Copy the file over to a special restore folder in the container, where the sqlcmd binary can access it
docker exec "$CONTAINER_NAME" mkdir -p /var/opt/mssql/restores
docker cp "$BACKUP_NAME" "$CONTAINER_NAME:/var/opt/mssql/restores/$FILE_NAME"

echo "Restoring database '$DATABASE_NAME' in container '$CONTAINER_NAME'..."

# Restore the database with sqlcmd (-b -V16 make sqlcmd exit with an error code if the query fails)
docker exec -it "$CONTAINER_NAME" /opt/mssql-tools/bin/sqlcmd -b -V16 -S localhost -U SA \
    -Q "RESTORE DATABASE [$DATABASE_NAME] FROM DISK = N'/var/opt/mssql/restores/$FILE_NAME' WITH FILE = 1, NOUNLOAD, REPLACE, RECOVERY, STATS = 5"

echo ""
echo "Restored database '$DATABASE_NAME' in container '$CONTAINER_NAME' from file $BACKUP_NAME"
echo "Done!"
```
Note: if you're restoring the master database, your SQL Server instance will need to be in "Single User mode" or else the restore operation will fail. Other databases besides master should restore without issue.
Once again, the script uses the `-b -V16` options (capitalized "V") so that it errors out if the restore query fails. But if it's successful, you'll see a message that looks like this in your terminal:
```bash
$ ./restore.sh sql-db-backup-example Blackbox ./backups/2020-01-25_15:43:35.BlackboxExample.bak
# Copying backup file ./backups/2020-01-25_15:43:35.BlackboxExample.bak to container 'sql-db-backup-example'. Note: the container should already be running!
# Restoring database 'Blackbox' in container 'sql-db-backup-example'...
# Password:
# 6 percent processed.
# 11 percent processed.
# 15 percent processed.
# 20 percent processed.
# 25 percent processed.
# 31 percent processed.
# 36 percent processed.
# 41 percent processed.
# 45 percent processed.
# 50 percent processed.
# 57 percent processed.
# 61 percent processed.
# 66 percent processed.
# 70 percent processed.
# 75 percent processed.
# 82 percent processed.
# 86 percent processed.
# 91 percent processed.
# 95 percent processed.
# 100 percent processed.
# Processed 344 pages for database 'Blackbox', file 'Blackbox' on file 1.
# Processed 7 pages for database 'Blackbox', file 'Blackbox_log' on file 1.
# RESTORE DATABASE successfully processed 351 pages in 0.031 seconds (88.331 MB/sec).
# Restored database 'Blackbox' in container 'sql-db-backup-example' from file ./backups/2020-01-25_15:43:35.Blackbox.bak
# Done!
```
There you have it: two scripts for backing up and restoring a SQL database running inside a Docker container. You can modify these scripts to work with any kind of Docker container -- not just SQL Server containers -- with only a few changes. The same principle applies to backing up e.g. a CouchDB container: connect to the container, run your backup command (for CouchDB, I just scoop up the data folder and put it in a gzipped tarball), then copy the relevant files out to the host machine.
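A CouchDB variant of the same pattern might look like this dry-run sketch; the container name and data path are assumptions about your setup (check your image's docs), and the docker commands are echoed so the sketch runs without Docker:

```bash
#!/usr/bin/env bash
# Dry-run sketch of the same backup pattern for a CouchDB container.
# "couchdb-example" and /opt/couchdb/data are assumptions about your setup;
# the docker commands are echoed rather than executed.
CONTAINER="couchdb-example"
STAMP=$(date +%Y-%m-%d_%H:%M:%S)
ARCHIVE="couchdb-$STAMP.tar.gz"

# Tar up the data directory inside the container...
echo docker exec "$CONTAINER" tar czf "/tmp/$ARCHIVE" /opt/couchdb/data

# ...then copy the tarball out to the host
echo docker cp "$CONTAINER:/tmp/$ARCHIVE" "./backups/$ARCHIVE"
```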