Performing push backups – Part 1: rdiff-backup


Backups are a vital part of every computer system, be it a corporate PC network or simply your local workstation. Unfortunately, they are often neglected, although everyone knows how important they are. The “I haven't had any bad incidents yet, but I know I really should… guess what… I'll do it next week” attitude is all too familiar to everybody, myself included.

Performing backups is a tedious process if done wrong. Thus backups need to be done automatically in the background without any user intervention. As soon as someone has to actively do something in order to get his stuff backed up, he will ultimately end up with no backup at all (and probably a bad conscience that is forgotten all too fast).

Not surprisingly, there are a bunch of tools you can use to create your backups, but they are seldom perfect. Often they are simply what I called them: “tools”. You have to know how to use them in order to build a working backup infrastructure on top of them. There are some plug'n'play backup programs out there such as Apple's Time Machine, but if you're like me and want a little more control over your backups, then you have to think a little more about it.

In this little two-part article series (Part 2) I will present two tools I've been playing around with a lot and I'll show you how you can use them to set up your own personal NAS with a spare piece of hardware such as a Raspberry Pi. No need for any expensive special storage system.

The two tools are, in order: rdiff-backup and rsnapshot. Both have their strengths and weaknesses. Let me give you a list of their pros and cons.

The two candidates

rdiff-backup:

Pros:

  • Very small backup sizes since it only stores reverse deltas
  • Has a snapshot file system via rdiff-backup-fs, making it totally transparent to the user
  • Saves all file permissions/attributes separately by default

Cons:

  • Very CPU demanding and therefore pretty slow
  • Without the rdiff-backup-fs FUSE file system, only the last increment is directly accessible via the file system
  • Aborted backups can't be resumed (rdiff-backup will revert to the last successful backup)
  • Increments are a little fragile and easily corrupted
  • Requires the very same rdiff-backup version to be installed on the remote host

rsnapshot:

Pros:

  • Stores snapshots in different directories and hardlinks them, so you don't need any special tools to restore older increments
  • Very robust, aborted or failed backups can be resumed without further consequences
  • A lot faster with less resource consumption due to simpler calculations
  • Actually uses rsync, making it very flexible

Cons:

  • Files that have only changed slightly are copied in full, resulting in larger backup sizes
  • More complex to set up (okay, I admit, it's just rdiff-backup being even simpler)

You see: rdiff-backup has quite a few drawbacks. In addition to those mentioned above, I should also say that rdiff-backup hasn't seen any visible development since 2009, whereas rsnapshot is still being worked on, which may or may not be an advantage (although the latest official release is from 2008).

To say it directly: I don't use rdiff-backup anymore. I experimented with it, but I finally ended up using rsnapshot for reasons I'll explain in the next part. Still, I think there are valid use cases where rdiff-backup is just the right tool for you. For instance, if disk space is a very crucial constraint, you might prefer it over rsnapshot. One thing I also quite liked is that you don't just have files but real timestamped increments. rdiff-backup enables you to restore files based on their backup date, not only the file modification date (which might be wrong). rsnapshot, however, doesn't store this information. The only thing there that lets you guess the backup date is the folder name of the increment.
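
To illustrate, this is what restoring by backup date looks like with rdiff-backup (a sketch; the host name and paths are placeholders):

# List all increments together with their backup dates
rdiff-backup --list-increments backuphost::/bkp/files

# Restore a file as it was 10 days ago
rdiff-backup -r 10D backuphost::/bkp/files/some/file some/file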

Push vs. pull backups

In computer networks whose nodes are always on, you usually see pull backup systems where the backup servers actively “pull” the backups from the machines that are to be backed up. This has one major advantage: simplicity, and therefore reliability and maintainability. You only have one (or a couple of) machines implementing the logic for running backups. All the other machines are simply passive in that regard and can concentrate their resources on their real tasks. That also makes changing things such as the backup destination easier, since you don't need to change the settings on each and every machine but only on the backup storage system. It also gives the backup system the opportunity to coordinate its own resources, because it can decide for itself when it has enough free capacity to process yet another backup in parallel.

But pull backups also have their disadvantages, especially when the machines that should be backed up are not always on. Here it's very difficult for the backup system to get proper backups since some nodes might simply be down when it tries to run a backup. Therefore most consumer-targeted backup systems are “push” backup systems.

Another consideration in favor of push backups is privilege separation. For performing pull backups of non-publicly accessible files and folders the remote backup system needs to log into my machine as root. Although I can still restrict the commands it can run, I don't particularly like the idea of another machine logging into my PC as root in an automatic fashion. When doing push backups, however, I don't need to log in as root anywhere. I still need root privileges on my own machine, but on the remote side I can simply log in as an unprivileged user. Call me paranoid if you don't agree.

To put it in a nutshell: I will concentrate on push backups throughout this series for both rdiff-backup and rsnapshot (which is a little more fiddly in the beginning, but it can be done pretty well).

Getting your hands dirty

Let's start with some basics about how rdiff-backup works. Similar to rsync, it connects to a remote machine and starts a server process there (both tools can of course also perform local backups, but that's not within the scope of this article). File signatures are then exchanged and compared, and only those differences that can't be reconstructed from data already present on the other side are transferred over the wire. This requires rdiff-backup to be installed on both systems because it needs to operate on the native file system without an additional wrapper in between such as SFTP (NFS would work, though). That being said, we can divide our backup model into two parts: the client part (the machine to be backed up) and the server part (the backup storage). Let's start with the server part since the client won't work without it.

From now on I will call the client altair and the server bellatrix (which are simply the hostnames I use in my local network for my main workstation and the NAS).
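
To give you an idea of where we're heading, the most basic push from altair to bellatrix boils down to a single command (a sketch; the restricted altair-janek account and the paths are what we'll set up below):

rdiff-backup /home/janek altair-janek@bellatrix::/bkp/altair-janek/files

Everything after the :: is interpreted as a path on the remote machine, where rdiff-backup logs in via SSH and spawns its server process.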

The server part

I'm using a Raspberry Pi as the NAS, running Arch Linux for ARM, but you can also use any other piece of hardware that can run rdiff-backup, preferably with a Linux system on it.

We could now follow two different approaches: we could either use one directory for each client to be backed up and let the local root push the whole system backup to that folder, or we could let each user back up his own files individually. Although the second approach might look more error-prone and less reliable, it's the way I decided to go because it enables me to keep the global system backup and the backup of my personal data separate. That makes it possible to restore my own data without needing root access, and I can also run manual backups myself, e.g. when I have worked on something important and don't want to wait for the cron daemon to start the next global backup.

That said, we need to find a way to keep separate users for each client and each user on those clients. I therefore decided to use usernames consisting of the client hostname and the client username. For instance, the backup user for the user janek on the client host altair would be altair-janek. Similarly, the global system backup of configuration files and shared resources would go to altair-root (both being plain unprivileged accounts, of course).

The storage (i.e. the backup hard drive or maybe even a RAID and/or LVM system) is mounted at /mnt/storage and contains a folder bkp which is bind mounted to /bkp (mount -o bind /mnt/storage/bkp /bkp). This is where the home directories of the backup users will go.
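
If you prefer to have the bind mount set up automatically at boot, you can also add it to /etc/fstab:

/mnt/storage/bkp  /bkp  none  bind  0  0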

First of all let's create the backup user (we're working on bellatrix):

useradd -b /bkp -m -p '*' -s /bin/sh altair-janek

Next we need to add an SSH key to allow passwordless login to that account (login with a password has already been disabled by setting the password hash to * in the previous step, but you might also want to disable it entirely in your /etc/ssh/sshd_config). After you have created an SSH key on your client, transfer the public part to the server and add it to /bkp/altair-janek/.ssh/authorized_keys.

A word of warning: Don't use your normal SSH key! Create a new one you only use for the backup because it must not be encrypted. Then add a Host entry in ~/.ssh/config to use the newly generated key for your backup host.
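
A minimal sketch of these client-side steps (the key file name id_rsa_backup is just an example):

# On altair: create a dedicated, unencrypted key for backups only
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa_backup

And the matching Host entry in ~/.ssh/config on altair:

Host bellatrix
    User altair-janek
    IdentityFile ~/.ssh/id_rsa_backup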

This already enables our client to log in, but we don't want to give him full shell access, so we need to restrict the commands he can run. He should be able to log in via SSH, but then only push his backup using rdiff-backup. To make this work, modify the public key entry in /bkp/altair-janek/.ssh/authorized_keys as follows:

command="rdiff-backup --server --restrict '/bkp/altair-janek/files'" ssh-rsa AAAAB3NzaC1yc...

This will allow the client to only start the rdiff-backup daemon which restricts it to push files to and pull them from /bkp/altair-janek/files (that folder needs to exist, of course). Additionally you might want to chroot the user into his home directory or at least into /bkp.
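
If you want to go the chroot route, OpenSSH's ChrootDirectory directive is one way to do it. Be aware, though, that OpenSSH insists on the chroot target being root-owned and that the forced rdiff-backup command then runs inside the chroot, so the binary and all its dependencies have to be available in there. Consider this a pointer rather than a recipe:

# /etc/ssh/sshd_config: confine all backup accounts for this client to /bkp
Match User altair-*
    ChrootDirectory /bkp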

The last thing we need to do is to protect the SSH configuration by assigning ownership to root:

chown -R root:root /bkp/altair-janek/.ssh

Do all this for each client user you want to enable backups for.
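
With more than a handful of users this gets repetitive, so a small helper script pays off quickly. The following is only a hypothetical sketch of the steps above (the tarball linked at the end contains the scripts I actually use):

#!/bin/sh
# Hypothetical helper: rb-add-user <client> <user> <pubkey-file>
client=$1; user=$2; pubkey=$3
account="${client}-${user}"

useradd -b /bkp -m -p '*' -s /bin/sh "${account}"
mkdir -p "/bkp/${account}/files" "/bkp/${account}/.ssh"

# Install the key with the forced, restricted rdiff-backup server command
printf 'command="rdiff-backup --server --restrict '\''/bkp/%s/files'\''" %s\n' \
        "${account}" "$(cat "${pubkey}")" >> "/bkp/${account}/.ssh/authorized_keys"

chown "${account}:${account}" "/bkp/${account}/files"
chown -R root:root "/bkp/${account}/.ssh"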

The client part

Now that we have set up our server, we need to perform the actual backup, which is quite simple. The client doesn't need to do anything other than run the proper rdiff-backup command. But we want to do it in a slightly more sophisticated way. We need a script that can be run either as root (in which case it performs a backup of the whole system, including the home directories) or as an unprivileged user (in which case only that user's own home directory is backed up). Additionally, the backup of a home directory, when run as root, should be performed under the account of that user (otherwise we would store the backup twice on the server, and in a location not accessible to him).

To specify which files should be included in a backup I'll use files containing file matching patterns. All files and directories that are not specified inside those files will not be backed up. For the global system backup I use the file /etc/default/rdiff-backup-filelist which contains something like this:

/etc
/usr/etc
/usr/local
- /srv/http/virtual/**
/srv/http
/root

This will include all files in /etc, /usr/etc, /usr/local, /srv/http and /root, but explicitly excludes everything below /srv/http/virtual. For a more detailed description of the format of this file, have a look at the rdiff-backup examples page.

Additionally each user has a .rdiff-backup-filelist file inside his home directory which specifies the files which should be backed up under this user. That file could look like this, for instance:

- **.tmp
- **.swp
- **/.directory
- /home/janek/.cache
/home/janek
/srv/http/virtual/janek

Both files will be read from top to bottom and are “short-circuited”, i.e. the first match that is encountered will be used. Therefore /home/janek will be backed up, but without /home/janek/.cache.

Now we need to actually use these files. I wrote three shell functions for this:

BACKUP_HOST="bellatrix"
BACKUP_ROOT="/bkp"

# Back up selected system files
backup_system() {
        cd /root
        rdiff-backup \
                --exclude-other-filesystems \
                --include-symbolic-links \
                --exclude-special-files \
                --create-full-path \
                --ssh-no-compression \
                --include-globbing-filelist /etc/default/rdiff-backup-filelist \
                --exclude '**' \
                / \
                "$(hostname)-root@${BACKUP_HOST}::${BACKUP_ROOT}/$(hostname)-root/files"
}

# Back up single home directory
# Expects the home directory as parameter
backup_single_home_dir() {
        local home_dir="$1"
        local passwd_entry
        local username
        local backup_cmd
        
        # Don't create a backup if home directory doesn't belong to a "real" user
        passwd_entry=$(grep ":${home_dir}:[^:]*$" /etc/passwd)
        if [ "$passwd_entry" == "" ]; then
                return
        fi
        
        username=$(echo "${passwd_entry}" | cut -d ':' -f 1)
        
        # Don't back up home directory either, if no files are marked for backup
        if [ ! -e "${home_dir}/.rdiff-backup-filelist" ]; then
                return
        fi
        
        # Also don't create a backup if no SSH key exists
        if [ ! -e "${home_dir}/.ssh/id_rsa" ] && [ ! -e "${home_dir}/.ssh/config" ]; then
                return
        fi
        
        cd "${home_dir}"
        
        backup_cmd="rdiff-backup \
                --create-full-path \
                --ssh-no-compression \
                --include-globbing-filelist \"${home_dir}/.rdiff-backup-filelist\" \
                --exclude '**' \
                / \
                \"$(hostname)-${username}@${BACKUP_HOST}::${BACKUP_ROOT}/$(hostname)-${username}/files\""

        # Lower privileges if running as root
        if [ "$(id -u)" -eq 0 ]; then
                su - "${username}" -c "${backup_cmd}"
        elif [ "$(id -u "${username}")" = "$(id -u)" ]; then
                sh -c "${backup_cmd}"
        fi
}

# Back up all home dirs
backup_home_dirs() {
        local home_dir
        
        for home_dir in /home/*; do
                backup_single_home_dir "${home_dir}"
        done
}

All that's left to do is trigger the various backup functions depending on whether the user is root or not:

if [ "$(id -u)" -eq 0 ]; then
        backup_system
        backup_home_dirs
else
        if [ "${HOME}" != "" ]; then
                backup_single_home_dir "${HOME}"
        fi
fi

Now the script backs up all files and folders that are specified in ~/.rdiff-backup-filelist to the corresponding account on the backup host if run as a normal user. If the script is invoked as root, it will first back up all specified system files and then run backups for the home directories under the appropriate user accounts if a ~/.rdiff-backup-filelist file exists.
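
To run this unattended, a simple cron entry is enough. Assuming you saved the script as /usr/local/bin/rb-backup (the path is just an example), a system-wide crontab entry could look like this:

# /etc/cron.d/rdiff-backup: push a full system backup every night at 03:30
30 3 * * * root /usr/local/bin/rb-backup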

I have created a tar archive with a slightly more polished version of the script above (download link at the end of the article). Feel free to use and modify it. The archive also includes a server directory with scripts to automate the creation of backup users, as well as a script rb-remove-old-increments which you can run as a cron job on the server from time to time to clean up old increments. Basically it does nothing more than running

rdiff-backup --force --remove-older-than <time> <folder>

for each home directory. The server scripts also have a global configuration file to avoid hardcoding path names etc. The config file is located in /usr/local/etc/default/rb-server-config. A sample file is included in the tarball.
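
In essence, such a cleanup job is nothing more than a loop over the backup repositories (a sketch; the retention period of 6M, i.e. six months, is an arbitrary example):

for dir in /bkp/*/files; do
        rdiff-backup --force --remove-older-than 6M "${dir}"
done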

Last adjustments: spinning down the hard drive

The backup system is now basically set up, but there is one thing left you might want to do. Especially when you're backing up your local system (which I suppose you are), you'll probably have the hard drive somewhere near you or in a room you use often. Unfortunately, hard drives make noise and consume power, yet you'll only actually use this one a few times a day. So it doesn't need to be running all the time.

Some hard drives have an automatic spin-down timeout configured in their firmware. If not, you can configure one with hdparm -S timeout /dev/device, but oftentimes that doesn't work. In that case you can use another tool: sdparm (probably not installed by default). You can use this little script and run it periodically via cron to spin down your hard drive after a given amount of time:

#!/bin/sh
# Check if disk has been used since last check and spin it down if not

if [ "${1}" == "" ]; then
        echo "Usage: $(basename ${0}) "
        exit
fi

last_state_file="/tmp/storage-state-${1}"

touch $last_state_file
chmod 600 $last_state_file

new_storage_state=$(cat /proc/diskstats | grep "$1")
old_storage_state=$(cat $last_state_file)

if [ "$new_storage_state" = "$old_storage_state" ]; then
        sync
        sdparm --flexible --readonly --command=stop /dev/$1 2>&1 > /dev/null
fi

echo "$new_storage_state" > $last_state_file

Use it like this: spin-down-storage sda. It will spin down the hard drive if it has been idle since you last ran the script.

Another option (which I tend to prefer) is the hd-idle tool, which does basically the same thing, only more elegantly (a short note to my fellow Arch users: I had to modify the systemd unit file as described in the comments on the AUR package page to get it to work). One thing I noticed, though: after switching the hard drive off and on again by hand, you sometimes seem to have to restart hd-idle.

Downloads:


Comments

Thomas Harold wrote:
You should also look at the use of 'autofs' under Linux. It will automatically mount a volume as soon as it is accessed, then dismount it after N seconds of inactivity. We use this for our USB backup drives so that the users can safely swap out the drives without worrying (too much) about the file system being corrupted.
Janek Bevendorff wrote:
Hi Thomas, thanks for your addition. That's indeed mentionable. I don't swap the hard drive on the fly, so I didn't include autofs in this guide, but it surely is useful for certain use cases such as yours. :-)

