Monthly Archive: April 2007
Tools 26 Apr 2007 02:14:02
Incremental Backup
A long time ago when I started wanting good incremental backups for my data, I looked around the net and found many good tools, but none that did exactly what I wanted. Either the process to restore from the backup was slow, or they didn’t support SSH, or any number of other small annoyances.
So I wrote my own wrapper around rsync to do what I wanted: incremental.sh. And now I’m releasing that for anyone else to use. Pretty much as long as the source and target machine can run rsync, they can use the wrapper. Instructions are in the script itself.
It makes use of rsync’s --link-dest option to minimize space usage for increments by hard linking unchanged files to the previous increment’s version. I run it nightly, and see between 100x and 1000x speed improvements over doing a full backup; and even though I keep several days’ worth of changes, the space overhead is limited to what has changed between the newest and oldest increment.
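The core of the trick can be sketched in a few lines of shell. This is not incremental.sh itself, just a minimal illustration of the --link-dest technique; the source address and backup paths are made-up examples:

#!/bin/sh
# Minimal sketch of --link-dest increments (illustrative paths only).
SRC="backup@server::data/"
DEST_BASE="/backups/server"
TODAY="$DEST_BASE/$(date +%Y-%m-%d)"
LAST="$(ls -1d "$DEST_BASE"/????-??-?? 2>/dev/null | tail -n 1)"

# Files unchanged since the previous increment are hard linked to it,
# so each new increment only costs the space of what actually changed.
rsync -a --delete ${LAST:+--link-dest="$LAST"} "$SRC" "$TODAY/"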
Another plus is that restoring any file from any of the increments is as easy as copying it out of that increment’s folder. And a full restore from that point in time is an rsync away. I have done several full restores from backups made with this script, and so far I haven’t found a better format for backups.
One minus is that currently this is only a pull script. I have no push version, nor need of one, as I prefer running a read-only rsync daemon firewalled down to only the IPs that will be pulling data from it.
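For reference, the pull-only side of that can look roughly like the rsyncd.conf below on the machine being backed up; the module name and the allowed IP are placeholders:

# /etc/rsyncd.conf on the source machine (illustrative values).
# Only the backup host's IP may connect, and only to read.
uid = root
gid = root
read only = yes
hosts allow = 192.0.2.10
hosts deny = *

[data]
    path = /
    comment = Pulled nightly by the backup host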
URLs
Tech Guides 25 Apr 2007 23:18:56
Seamless Transition with Rsync and SSH
One way to seamlessly transition services from one machine to another is to set the Time To Live (TTL) of the domains to something very low a few days beforehand, switch their IP to the new machine, and set the TTL back to normal once you can be sure the change has propagated.
While that works, I couldn’t do that; I did not have access to change the TTL of all the domains attached to the machine, and didn’t want to carefully coordinate with all users…it’s bound to mess up somewhere.
So, I turned to my old friends rsync and OpenSSH.
The Transition Process
- Set up and configure the basics of the new machine (see my previous post).
- Create all desired users on the new machine.
- Create the applicable /home skeletons (~/mail/, ~/public_html/, and such).
- Configure services: Apache vhosts, Dovecot maildir, etc.
- On the old machine, disable SSH logins for the users to be transitioned. If you allow FTP (why would you? SSH provides SFTP.), disable that too.
- rsync -avzrltSpP /home and /var/spool/mail from old machine to new machine
- Make sure everything works; you can do this by temporarily changing your own hosts file to point the domains at the new IP.
- rsync -avzrltSpP /home and /var/spool/mail from old machine to new machine again, to make sure there are only minor changes for next step.
- Stop services on the old machine
- rsync -avzrltSpP /home and /var/spool/mail from old machine to new machine again, final time. Data transferred here is the data that will be live in a sec.
- Start services on the new machine
- SSH forward the applicable ports from the old machine to the new machine; in my case ports 25/SMTP, 80/HTTP, 110/POP3, 143/IMAP, 587/Submission. Remember to allow remote hosts to use the forwarded ports (cmdline option -g). Has to be done as root as the ports are below 1024 (a concrete example follows this list).
- Change domains to point to the new IP, and notify owners of the domains you don’t control to do the same.
- Once all domains are over and the IP is propagated, kill the SSH tunnels.
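For concreteness, the rsync passes and the port forwarding might look roughly like the following; the hostname and login user are placeholders, and the forwards have to be started as root on the old machine so it can bind ports below 1024:

# The repeated rsync passes (later runs only transfer what changed):
rsync -avzrltSpP /home/ newserver:/home/
rsync -avzrltSpP /var/spool/mail/ newserver:/var/spool/mail/

# The forwards: -g lets remote hosts use them, -N runs no remote command.
ssh -g -N \
    -L 25:localhost:25 -L 80:localhost:80 -L 110:localhost:110 \
    -L 143:localhost:143 -L 587:localhost:587 \
    forwarder@newserver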
If executed properly in the off-hours, this should cause downtime of only a few minutes. Most of the downtime is the rsync done while services are stopped.
It should be noted there is no real need to transition everything in one go. Mail can be done separately, if handled properly; in my case I had users store mail in ~/mail/ and sites in ~/public_html/, so it was simply easier to rsync the whole of /home over and do all services at once.
Problems and Gotchas
- Due to the nature of SSH tunnels, all requests that pass through them will appear on the target machine as coming from localhost. This may cause problems with some services and scripts.
In my case, Sendmail was the problem: localhost is a trusted sender, so spam blindly tunneled over from the old machine was suddenly being relayed. This is easily avoidable by using Sendmail’s built-in relaying; I simply hadn’t thought about it (a sketch follows this list).
- This method will not work for anything SSL (HTTPS, IMAPS, POP3S, etc), and that is a security feature.
- I did not have the need to transition databases; I already had MySQL and PostgreSQL running on a different machine. Those can and should be transitioned separately before anything else, though, as they have quite different methods of doing so, and will require more downtime than other services.
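For the Sendmail gotcha above, one way to read “built-in relaying” is to route mail for the moved domains through the MTA instead of blindly tunneling port 25; a mailertable sketch with placeholder names could look like this on the old machine:

dnl In sendmail.mc (rebuild sendmail.cf afterwards):
FEATURE(`mailertable')dnl

# /etc/mail/mailertable
# (rebuild the map with: makemap hash /etc/mail/mailertable < /etc/mail/mailertable)
example.com     smtp:[newserver.example.com]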
Tech Guides 15 Apr 2007 17:40:35
How I Prep A Server…
Edited 2008-10-31: Refinements, further optimizations, Fedora 9.
This week I got a fresh machine from ServerBeach to play with, and thought it would be interesting to jot down what I do with a server before I consider it usable. The preinstalled OS is Fedora Core 6.
The order here is not chronological; more a general overview of steps.
Updating existing packages
- Easy step: yum upgrade
Replacement of some default packages
- Uninstalled the httpd package and all dependencies in favour of compiling the Apache HTTP Daemon myself. I never understood why Red Hat decided on the scattered structure with their package, so I install Apache from source to make sure it is all self-contained in /usr/local/apache2. Also for this machine I added the ITK MPM to run each vhost as a separate user (a rough build outline follows this list).
- Replaced the existing version of MySQL with the vendor RPMs.
- Ditto for PostgreSQL.
- Installed PHP from source.
- Installed Subversion from source.
- Replaced the existing version of Webmin with the vendor RPM.
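The Apache build itself is the usual configure-and-make routine; roughly the shape below, with the exact version and the mpm-itk patch step left as placeholders since they change over time:

cd /usr/local/src
tar xzf httpd-2.2.x.tar.gz && cd httpd-2.2.x
# apply the mpm-itk patches here so the itk MPM becomes selectable
./configure --prefix=/usr/local/apache2 --with-mpm=itk --enable-so
make && make install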
Installation of new packages
- Midnight Commander from FC6 repository. Cannot live without this.
- Enabled BIND and set it up to serve as a caching resolver for the machine by forwarding to the existing resolvers. This greatly helps with lookup speeds when doing lots of lookups for the same hostname, as Sendmail and Apache will be doing (a sketch follows this list).
- Enabled Dovecot for IMAP.
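The forwarding resolver is only a few lines of named.conf plus pointing the machine at itself; the forwarder addresses below stand in for the provider’s existing resolvers:

// /etc/named.conf, inside the options block:
options {
    listen-on { 127.0.0.1; };
    forward only;
    forwarders { 192.0.2.53; 198.51.100.53; };
};

# /etc/resolv.conf then just points at the local BIND:
nameserver 127.0.0.1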
File System
- Disable updating of last access time (option noatime in /etc/fstab).
- “tune2fs -o journal_data_writeback” to speed up even more, at the cost of crash recovery.
- “tune2fs -m1” to lower reserved space from 5% to 1%. On non-system partitions I set it to 0%.
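Concretely that means an fstab option plus the tune2fs runs per filesystem; device names and mount points below are examples, and note that tune2fs wants the block device as an argument:

# /etc/fstab: add noatime to the mount options
/dev/sda3   /home   ext3   defaults,noatime   1 2

# Per filesystem, against the block device:
tune2fs -o journal_data_writeback /dev/sda3
tune2fs -m 1 /dev/sda3
tune2fs -m 0 /dev/sda5    # non-system partition example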
PHP
- Installed the Alternative PHP Cache.
- Set up session and upload folders elsewhere than /tmp.
- Set up sessions to use multi-level folders, normally 3 levels deep. This prevents a single folder from becoming unusably huge.
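In php.ini that looks roughly like the lines below (paths are examples). PHP expects the multi-level directory tree to exist already; the mod_files.sh helper in the PHP source tree can create it:

; php.ini (example paths)
session.save_path = "3;/var/lib/php/sessions"
upload_tmp_dir = /var/lib/php/uploads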
Sendmail
- Configured it to only allow users to send mail via authenticated submission (port 587).
- Add DNS based blacklists:
SpamCop
Distributed Sender Blackhole List
Spamhaus SBL + XBL
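A minimal sendmail.mc sketch of the above, assuming the stock DAEMON_OPTIONS and FEATURE macros (rebuild sendmail.cf afterwards; the blacklist zones are as they were at the time of writing):

dnl Relay only for clients that authenticate on the submission port:
DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl

dnl The three DNS based blacklists:
FEATURE(`dnsbl', `bl.spamcop.net')dnl
FEATURE(`dnsbl', `list.dsbl.org')dnl
FEATURE(`dnsbl', `sbl-xbl.spamhaus.org')dnl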
Other stuff
- Edited /etc/sysctl.conf to include:
kernel.shmmax = 536870912
net.ipv4.tcp_fin_timeout = 10
- Configured logrotate to keep 30 days’ worth of logs instead of 4, and to compress rotated logs (examples of these jobs follow the list).
- Enabled logrotate for the root mailbox (I always forget to delete mails in it).
- Enabled logrotate for Apache access and error log.
- Added nightly incremental backup of /home, /etc, and /var/spool/mail to a remote server.
- Added nightly time synchronization to pool.ntp.org.
- Added nightly cleanup of old files in /tmp and other temporary folders.
- Enabled the firewall.
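A few of those items are short enough to show; the retention, paths, and times below mirror the description but are examples rather than the exact files used:

# /etc/logrotate.conf: a month of compressed logs instead of the default 4
daily
rotate 30
compress

# /etc/logrotate.d/rootmail: keep the root mailbox from piling up
/var/spool/mail/root {
    daily
    rotate 30
    compress
    missingok
    copytruncate
}

# System crontab entries for the nightly jobs (roughly):
30 4 * * * root /usr/sbin/ntpdate -s pool.ntp.org
0 5 * * * root /usr/sbin/tmpwatch 240 /tmp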
That should about cover it…