Backing up an SD card on a regular basis?

I know it’s straightforward enough to take the SD card out of my Pi, plug it into another computer and run a backup, but that’s time-consuming (it took over an hour to do a 16GB card) and requires user intervention, which is too much hassle really.

Is anyone running any kind of automated backup? I’d like it to run a couple of times a week so that I’ve always got a fairly recent backup of my card. I appreciate that rebuilding from scratch isn’t exactly difficult but I’d much rather be able to restore a recent backup.



Great question. I have studied this at length and can’t quite get to an answer. Hope someone can help!

Here is a little something.


Each of my Pi systems has a backup script set to run (via cron) twice a week, backing up to the server in the closet. This happens in a few steps:

  1. delete the old backup
  2. create a new compressed (tar) backup with the Pi name and the date in the backup filename on the Pi, skipping a number of things
  3. copy the backup to the server in the closet
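The twice-a-week schedule mentioned above would be driven by a crontab entry; the script path, times, and log file below are assumptions, not the author’s actual setup:

```
# hypothetical crontab entry: run the backup script at 03:00
# on Sundays and Wednesdays (0 = Sunday, 3 = Wednesday)
0 3 * * 0,3 /home/pi/backup.sh >> /var/log/pibackup.log 2>&1
```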

I create the tar backup file on the Pi and then copy it to the server for a number of reasons:

  1. it’s (a little) faster creating the backup on the Pi rather than writing over the network
  2. more important, I get a backup even if the server in the closet isn’t cooperating for some reason.

Note that this approach DOES NOT WORK if you use the Raspberry Pi NOOBS distro – NOOBS does things in interesting (weird to me) ways.

Here’s the script:

# K6RTM 2014/07/17 back up raspberry pi and copy the backup to server volume /sheep/rpibackups
# using the $HOSTNAME for the backup file
# ensure we have a server connection, it's in fstab
sudo mount -a
# put us in the backups directory
cd /backups
# remove old backups (dangerous, I know)
echo "Snuff old backups..."
sudo rm *.gz
# capture our startup time
T="$(date +%s)"
# capture date for date-stamping the backup file
_now=$(date +"%Y-%m-%d")
# build the local and server backup filenames from the hostname and date
_bfile="/backups/${HOSTNAME}backup-$_now.tar.gz"
_sfile="/sheep/rpibackups/${HOSTNAME}backup-$_now.tar.gz"
# start doing the backup
echo "Starting backup to $_bfile..."
sudo tar -zcpf "$_bfile" --directory=/ --exclude=proc --exclude=sys --exclude=dev/pts --exclude=backups --exclude=var/swap --exclude=sheep --exclude=pigs .
# done, let 'em know
T="$(($(date +%s)-T))"
printf "Backed up in %3d minutes.\n" "$((T/60))"
# capture time again for copying to server
T="$(date +%s)"
# copy the backup file to the server
echo "Copying $_bfile to $_sfile"
sudo cp "$_bfile" "$_sfile"
T="$(($(date +%s)-T))"
printf "Copied to server in %3d minutes.\n" "$((T/60))"

The actual backup happens in the tar command – creating a compressed tar (.gz) backup file, excluding a number of things, such as the backup directory and the usual file server volumes (sheep and pigs).

This does a full file-by-file backup. Why do a file-by-file backup on a system where most of the stuff isn’t changing? Isn’t that a waste of time (and space)?

Well, it happens in the middle of the night so I don’t care how long it takes. Doing a file-by-file backup means you’re reading the entire file system, so if there’s a bad spot (which may happen with a failing SD card), the tar backup will probably fail. That tells me to look at the logs.
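That “the backup fails and tells me” behaviour can be made explicit by acting on tar’s exit status; this is a sketch, not the author’s exact script, and the paths are throwaway demo paths:

```shell
#!/bin/sh
# Sketch: act on tar's exit status so a failed read shows up in the logs.
# /tmp/demo-src and /tmp/demo-backup.tar.gz are illustrative assumptions.
mkdir -p /tmp/demo-src
echo "hello" > /tmp/demo-src/file.txt
if tar -zcpf /tmp/demo-backup.tar.gz --directory=/tmp/demo-src .; then
    echo "backup OK"
else
    # a read error (e.g. a bad SD card sector) lands here
    echo "backup FAILED with status $?" >&2
fi
```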

The server in the closet has lots of storage. I tend to keep at least two backups for each Pi: the newest, and an old one. If I’m changing things, I may keep more, so I can roll back just a bit.

–bob k6rtm

Thank both of you guys. I will have to learn some more stuff, but with your code I think I can get there. Should have asked sooner. Thanks again.

This is very useful.

Could you please describe the process of restoring a backup to a blank formatted SD card.

Agreed, this is very useful, thanks. It’s taken a few tweaks to suit my config, but this has created a tar backup on a Windows box on my network that’s about 600MB. I’ve already got a script on that PC which will delete files older than a certain date, so that’s going to work well.
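On a Linux box, a date-based cleanup like the one described can be sketched with find; the 14-day retention window and the directory name here are assumptions:

```shell
#!/bin/sh
# Sketch: delete backup archives older than 14 days.
# The directory and retention period are illustrative assumptions.
BACKUP_DIR="/tmp/demo-backups"
mkdir -p "$BACKUP_DIR"
touch "$BACKUP_DIR/recent.tar.gz"
# backdate one file 20 days to simulate an old backup (GNU touch)
touch -d "20 days ago" "$BACKUP_DIR/old.tar.gz"
# -mtime +14 matches files last modified more than 14 days ago
find "$BACKUP_DIR" -name '*.tar.gz' -mtime +14 -delete
ls "$BACKUP_DIR"
```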

The actual backup took around 12 minutes to create and less than a minute to copy!

If you are running PiAware, what is it that changes week to week? What is wrong with the backup of the card that you made when you first set it up?

I have a few graphs where I would lose some history, but that is it, I think. I’m open to being corrected though.

I should have explained my strategy a bit more. File systems fail in interesting and sometimes subtle ways, particularly if they’re on SD cards. An incremental backup may not catch a mangled link or other (directory structure) damage in a file or directory that hasn’t been modified. Additionally, we tend to mount filesystems with the noatime option to reduce writes.
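For reference, noatime is set per mount; a typical (hypothetical) /etc/fstab entry for a Pi root partition would look like:

```
# hypothetical fstab line mounting the root filesystem with noatime
/dev/mmcblk0p2  /  ext4  defaults,noatime  0  1
```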

Running a full file system backup traverses (and tries to read) the entire file system (well, except for the parts that are deliberately skipped). Think of it as insurance.

My Pi backups tend to weigh in at around 600MB, with a couple of them that have the full UI and desktop at around 1.2GB. They’re going to a dedicated 2TB server partition, so there’s plenty of room.

Responding to another question, I seldom restore to a blank SD card. I keep bare system cards around, with some key directories created, such as /backups.

When I want to restore a volume, I copy the backup .gz file from the server to the /backups directory, such as:

sudo cp /sheep/rpibackups/wombatbackup-2016-12-05.tar.gz /backups/

then the restore is:

cd /
sudo tar -zxvpf /backups/wombatbackup-2016-12-05.tar.gz

followed by a reboot when the restore is complete.
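Before extracting over a live system, it can be worth listing the archive once to confirm it is readable end to end. This sketch builds a throwaway demo archive (the /tmp paths are assumptions); with a real backup you would point tar -ztf at the .gz file in /backups instead:

```shell
#!/bin/sh
# Sketch: verify an archive is readable before restoring from it.
# Demo paths under /tmp are illustrative assumptions.
mkdir -p /tmp/restore-src
echo "data" > /tmp/restore-src/etc.conf
tar -zcf /tmp/restore-demo.tar.gz --directory=/tmp/restore-src .
# -t lists contents without extracting; a truncated or corrupt
# archive makes tar exit non-zero here
if tar -ztf /tmp/restore-demo.tar.gz > /dev/null; then
    echo "archive readable"
fi
```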

–bob k6rtm