I haven’t really changed the script since I wrote it. I’ve got it running on a bunch of RPi systems on my network.
The first time you run it, there aren’t any archives in the directory, so the delete command looking to snuff out old backups will fail – is that what you’re seeing? That delete (sudo rm *.gz) throws an error message, but the tar backup still runs and creates one.
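If that first-run error message bothers you, one way to avoid it is to guard the delete so rm is only called when old archives actually exist. This is just a sketch – the directory name here is a stand-in for /backups, and bash's nullglob option does the work of making *.gz expand to nothing when there are no matches:

```shell
#!/bin/bash
# Guarded delete sketch: only remove old archives if any exist.
# backup_dir is a stand-in for /backups so this can be tried safely.
shopt -s nullglob
backup_dir="${backup_dir:-/tmp/backup-demo}"
mkdir -p "$backup_dir"
old=("$backup_dir"/*.gz)
if [ "${#old[@]}" -gt 0 ]; then
    echo "Snuff old backups..."
    rm "${old[@]}"
else
    echo "No old backups to remove."
fi
```

Either branch prints a message, and rm never sees an unexpanded glob, so the first run stays quiet.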
Looking at the script, I made one change – I put a 30 second delay in between the tar step and copying the archive to the server to let things cool off a bit.
Here’s the script I’m running on my systems now:
$ cat /backups/backup.sh
#!/bin/bash
# K6RTM 2014/07/17 back up raspberry pi and copy the backup to server volume /sheep/rpibackups
# using the $HOSTNAME for the backup file
# changed 2018/03/23 to add 30 second cooldown after tar
# ensure we have a server connection, it's in fstab
sudo mount -a
# put us in the backups directory
cd /backups
# remove old backups (dangerous, I know)
echo "Snuff old backups..."
sudo rm *.gz
# capture our startup time
SECONDS=0
# capture date for date-stamping the backup file
_now=$(date +%Y%m%d)
_bfile="/backups/$HOSTNAME.$_now.tar.gz"
_sfile="/sheep/rpibackups/$HOSTNAME.$_now.tar.gz"
# start doing the backup
echo "Starting backup to $_bfile..."
sudo tar -zcpf "$_bfile" --directory=/ --exclude=proc --exclude=run --exclude=tmp --exclude=sys --exclude=dev/pts --exclude=backups --exclude=var/swap --exclude=sheep --exclude=pigs .
# done, let 'em know
T=$SECONDS
printf "Backed up in %3d minutes.\n" "$((T/60))"
echo "30 second cool down..."
sleep 30
# capture time again for copying to server
SECONDS=0
# copy the backup file to the server
echo "Copying $_bfile to $_sfile"
sudo cp "$_bfile" "$_sfile"
T=$SECONDS
printf "Copied to server in %3d minutes.\n" "$((T/60))"
One thing to watch for – there are kinds of file system damage that fsck won’t detect or fix but that will cause the tar backup to fail.
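Because a damaged file system can produce a damaged archive, it's worth reading the backup back after it's made. A sketch of a read-back check – it builds a small test archive here so it can be tried standalone, but the idea is just to run tar -tzf on $_bfile and look at the exit code:

```shell
#!/bin/bash
# Build a small test archive (stand-in for the real $_bfile).
_bfile="/tmp/verify-demo.tar.gz"
mkdir -p /tmp/verify-demo && echo hi > /tmp/verify-demo/f.txt
tar -zcf "$_bfile" --directory=/tmp verify-demo
# Read-back check: list the archive's contents; tar exits nonzero
# if the archive is truncated or corrupt.
if tar -tzf "$_bfile" > /dev/null 2>&1; then
    echo "Archive $_bfile reads back cleanly."
else
    echo "Archive $_bfile is damaged -- do not trust this backup!"
fi
```

Listing the contents decompresses the whole stream, so it catches truncation and gzip corruption, though of course not files that were already bad on disk when tar read them.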
Let me know what you’re seeing in more detail and I’ll help if I can – for me it works a treat. Since I have a bunch of systems running this script, I have staggered their start times so they aren’t all trying to send things to the server at the same time.
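One way to do that staggering is with a cron entry on each Pi, offset by a few minutes per host. These times are made up for illustration – one line per machine, in each machine's own crontab:

```shell
# /etc/crontab entry on the first Pi (hypothetical times):
#   30 2 * * * root /backups/backup.sh
# On the second Pi, offset by 15 minutes:
#   45 2 * * * root /backups/backup.sh
# On the third:
#   0  3 * * * root /backups/backup.sh
```

With the copy step taking minutes rather than hours, a 15-minute offset per host is usually enough to keep the server from seeing all the transfers at once.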