So I decided to redo my backup scripts, as my existing script file had become huge, complicated, and difficult to follow.
It was a good way for me to learn Bash scripting, however. The idea was to vastly simplify the whole process. After thinking about the best approach, I decided to have each machine back up independently to a network drive, rather than having a single machine do the grunt work by running one script.
So each computer on the network has the drive mounted, and the script file placed into the main user’s crontab. I could have used the root crontab to copy the whole /home/ directory, of course, but each machine only has one real user, so I opted for the per-user approach.
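For reference, the crontab entry might look something like this; the schedule and the script filename here are illustrative, not the exact values from my setup:

# run the backup nightly at 02:00 (time and script name are illustrative)
0 2 * * * /mnt/dlink_nfs/backup-script/home-backup.sh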
The file that runs from crontab is very simple:
#!/bin/bash
source /mnt/dlink_nfs/backup-script/var-dec

# mirror this user's home directory to the network drive
rsync -va --delete-after --delete-excluded --exclude-from="$FOLDER_NFS/backup-script/exclude.lst" /home/"$USER" "$FOLDER_NFS/backup-test/$DIRNAME"
And that’s it. It just uses rsync to copy the home directory to the network drive. The sourced var-dec file holds the shared variable declarations used across all the scripts.
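As a rough sketch, var-dec might look something like the following; the actual values, and the hostname-based directory name, are assumptions on my part, shown only to make the rsync line above concrete:

# /mnt/dlink_nfs/backup-script/var-dec -- shared declarations (values illustrative)
FOLDER_NFS=/mnt/dlink_nfs          # mount point of the network drive
DIRNAME=$(hostname)                # per-machine directory under backup-test (assumed)
s3_bucket=s3://my-backup-bucket    # destination bucket for s3cmd (assumed)
s3_cmd=sync                        # s3cmd sub-command used by the Pi script (assumed)

The exclude.lst file is a standard rsync exclude-from list, one pattern per line; typical entries (again, illustrative) would be things like .cache/ or Downloads/.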
Now, that’s not quite enough for me to be happy with a backup system, so I use a Raspberry Pi to run a second set of scripts from its crontab. Those scripts are responsible for uploading to Amazon S3 and copying to a secondary NAS.
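The Pi’s crontab might contain entries along these lines; the timings and the script filename are assumptions for illustration:

# hourly upload pass, plus a weekly 'clean' pass (times and filename illustrative)
0 * * * * /mnt/dlink_nfs/backup-script/uploads3.sh
0 3 * * 0 /mnt/dlink_nfs/backup-script/uploads3.sh clean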
#!/bin/bash

source /mnt/dlink_nfs/backup-script/var-dec

LOG=/home/pi/uploads3.log          # single log file (the original mixed relative and absolute paths)
script_name=$(basename "$0")       # name of this script, for the duplicate-instance check

echo "Script Started: $(date)" >> "$LOG"

# Bail out if another copy of this script is already running
if pidof -x "$script_name" -o $$ >/dev/null; then
    echo "Another instance of this script is already running"
    echo "Script already running, exiting" >> "$LOG"
    echo "-----------------------" >> "$LOG"
    exit 1
fi

if [[ $1 == 'clean' ]]; then
    echo "clean command passed" >> "$LOG"
    # Mirror to the secondary NAS, pruning files deleted from the primary backup
    rsync -vruO --delete-after "$FOLDER_NFS/backup-test" /mnt/samba/
    echo "Clean completed $(date)" >> "$LOG"
    exit 0
else
    if mountpoint -q /mnt/samba; then
        echo "Samba share mounted, starting rsync" >> "$LOG"
        rsync -vruO "$FOLDER_NFS/backup-test" /mnt/samba/
    fi

    cd "$FOLDER_NFS/backup-test/" || exit 1

    echo "Starting S3 uploads" >> "$LOG"
    shopt -s dotglob    # include hidden directories in the glob
    shopt -s nullglob   # expand to nothing, not a literal '*/', if empty
    array=(*/)          # every per-machine directory under backup-test
    echo "running s3"

    for dir in "${array[@]}"; do
        echo "Currently running S3 on $dir" >> "$LOG"
        dir=${dir%/}    # strip the trailing slash
        # Cap each upload at 30 minutes so one big directory can't stall the rest
        timeout 30m s3cmd $s3_cmd "$dir" "$s3_bucket"
        echo "Completed uploading $dir" >> "$LOG"
    done

    echo "Finished Script: $(date)" >> "$LOG"
    echo "--------------------" >> "$LOG"
fi
That file basically ensures the script isn’t already running, copies the backup to the secondary NAS, then iterates through each directory, uploading each one to S3. I use timeout to limit each upload to 30 minutes to prevent overruns; once the initial upload has completed, this limit can be removed.
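As an aside, an alternative to the pidof check is flock(1), which avoids false positives when some other process happens to share the script’s name. A minimal sketch, not what my script currently uses (the lock file path is illustrative):

exec 200>/tmp/uploads3.lock                                   # open a lock file on descriptor 200
flock -n 200 || { echo "Already running, exiting"; exit 1; }  # fail fast if the lock is held
# ...the rest of the script runs with the lock held; it is released automatically on exit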
You can view the most up-to-date version in the Git repository on my GitHub page: https://github.com/mikethompson/new-backup