Results tagged “sysadmin”

I have been playing with Linux containers for a while, and finally I decided to take the plunge and migrate one of my servers from OpenVZ to LXC. It worked quite well for testing until I noticed the lack of support for automatic startup or shutdown. The lxc-users mailing list was helpful in providing useful hints, and I found 5 OpenSUSE scripts, but decided they were too complicated for the task at hand.

I wanted a single script, for easy deployment on any Debian box, with the following features:

  • reboot and halt from inside the container should work as expected
  • cleanly shut down or reboot a container from the host (as opposed to lxc-stop, which is equivalent to turning the power off)
  • an init script to start and stop containers on host boot and shutdown

The result is lxc-watchdog.sh. It can start containers (and remember to start them on the next host reboot), reboot or halt containers from the host (using signals to trigger the container's init) and automatically configure your container when you start it for the first time. Here is a quick overview:

root@prod:/srv# ./lxc-watchdog.sh status
koha-240 RUNNING boot /virtual/koha-240
koha-241 STOPPED boot /virtual/koha-241
koha-242 STOPPED      /virtual/koha-242

root@prod:/srv# ./lxc-watchdog.sh start
# start koha-240
'koha-240' is RUNNING
# start koha-241
2010-03-16T23:44:16 koha-241 start
'koha-241' is RUNNING
# skip start koha-242

root@prod:/srv# ./lxc-watchdog.sh status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 STOPPED      /virtual/koha-242

root@prod:/srv# ls -al /var/lib/lxc/*/on_boot
-rw-r--r-- 1 root root 9 2010-03-16 21:40 /var/lib/lxc/koha-240/on_boot
-rw-r--r-- 1 root root 9 2010-03-16 21:40 /var/lib/lxc/koha-241/on_boot
-rw-r--r-- 1 root root 0 2010-03-16 22:58 /var/lib/lxc/koha-242/on_boot

As you can see, I used the file /var/lib/lxc/name/on_boot to record which machines to bring up. When a container is started for the first time, it will have boot enabled (just in case this is a production application which you will reboot in 6 months and then wonder why it doesn't work). You can change the boot status using:
root@prod:/srv# ./lxc-watchdog.sh boot koha-242
# boot koha-242

root@prod:/srv# ./lxc-watchdog.sh status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 STOPPED boot /virtual/koha-242

root@prod:/srv# ./lxc-watchdog.sh disable koha-242
# disable koha-242
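
Under the hood, the boot flag is nothing more than that on_boot file. A minimal sketch of what the start-on-boot logic might look like (this is my own illustration of the idea, not the actual code from lxc-watchdog.sh, and it assumes a non-empty on_boot file means "start this container on host boot"):

#!/bin/sh
# sketch: start every container whose on_boot file is non-empty
for dir in /var/lib/lxc/*/ ; do
        name=`basename $dir`
        if [ -s $dir/on_boot ] ; then
                echo "# start $name"
                lxc-start -n $name -d
        else
                echo "# skip start $name"
        fi
done
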
Installation as the init script /etc/init.d/lxc-watchdog is easy:
root@prod:/srv# ln -s /srv/lxc-watchdog.sh /etc/init.d/lxc-watchdog

root@prod:/srv# update-rc.d lxc-watchdog defaults
update-rc.d: using dependency based boot sequencing
And finally, it can also be used to manually start, halt or reboot containers:
root@prod:/srv# /etc/init.d/lxc-watchdog start koha-242
# start koha-242
2010-03-16T23:47:46 koha-242 start
'koha-242' is RUNNING

root@prod:/srv# /etc/init.d/lxc-watchdog status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 RUNNING      /virtual/koha-242

root@prod:/srv# /etc/init.d/lxc-watchdog restart koha-242
# restart koha-242
2010-03-16T23:48:46 koha-242 kill -SIGINT 24838

root@prod:/srv# /etc/init.d/lxc-watchdog status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 RUNNING      /virtual/koha-242

root@prod:/srv# /etc/init.d/lxc-watchdog stop koha-242
# stop koha-242
2010-03-16T23:49:55 koha-242 stop
2010-03-16T23:49:55 koha-242 kill -SIGPWR 26086
2010-03-16T23:50:11 koha-242 stoped

In fact, you can use halt or reboot if you don't like stop and restart, just to keep one less mapping in your brain when working with it.
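
The clean shutdown and reboot work by signalling the container's init process from the host, as the kill lines in the output above show. The idea boils down to something like this (just a sketch; the real script also looks up init's PID and waits for the container to actually stop):

# $pid is the host-side PID of the container's init process
kill -SIGINT $pid   # same as ctrl-alt-del: init reboots the container
kill -SIGPWR $pid   # power-failure signal: init shuts the container down cleanly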

Log files are created for each container in /tmp/name.log. They include lxc-start output with boot messages and any output that startup scripts might create, which is useful for debugging container installations. Output from the watchdog monitoring /var/run/utmp in the container is also included; it reports the number of tasks (processes) in the container. Here is an example of stopping a container:

root@prod:/srv# tail -5 /tmp/koha-242.log
2010-03-16T23:49:56 koha-242 66 tasks
2010-03-16T23:50:04 koha-242 22 tasks
2010-03-16T23:50:11 koha-242 runlevel 2 0
2010-03-16T23:50:11 koha-242 halt
2010-03-16T23:50:12 koha-242 watchdog exited
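
The watchdog itself is conceptually simple: poll the container's utmp until init switches to runlevel 0 or 6, then stop (or restart) the container from the host. A rough sketch of that loop, assuming a sysvinit container and the standard rootfs layout (the real script does more, such as counting tasks and logging):

#!/bin/sh
# sketch: watch a container's runlevel through its utmp file
name=$1
utmp=/var/lib/lxc/$name/rootfs/var/run/utmp
while sleep 5 ; do
        case "`runlevel $utmp`" in
                *" 0") echo "$name halt"   ; lxc-stop -n $name ; break ;;
                *" 6") echo "$name reboot" ; lxc-stop -n $name ; lxc-start -n $name -d ; break ;;
        esac
done
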
Hopefully this will make your switch to Linux Containers and recent kernels easier...

If you are a system administrator this will sound familiar: you have to quickly fix something, and you know that you should document it somewhere (or keep a backup), but it's so much work. You could install one of the existing source control management tools on each box, but they usually come with huge dependencies, and having all files in a central location would be so useful to correlate configuration changes. To add insult to injury, existing SCMs don't do a good job of tracking just a few files spread across the filesystem.

So, what would the perfect tool for keeping remote files in a central git repository look like?

  • no dependencies on non-standard tools on clients, allowing easy deployment
  • track individual files and ignore the rest
  • central repository, one directory per hostname

I tried to solve this problem several times, writing wrappers around Subversion to handle sparse checkouts and installing Subversion and ssh authentication all over the place. But all this should be simpler... Like this:

  1. add a new client to track:
    dpavlin@klin:~/klin/bak-git$ ./bak-git-server.pl ~/backup/ 10.60.0.92 --install brr
    install on brr
    # lot of output stripped
    
    This will do several steps:
    • create a git repository in ~/backup/ if it doesn't exist already
    • install root ssh authentication to brr using ssh-copy-id
    • install the bak shell helper which uses netcat to connect back to 10.60.0.92
    • install rsync on the client and use it as root over ssh to sync files
  2. Now we can log into brr and start tracking our files:
    dpavlin@brr:~$ bak add /etc/cron.d/tun0 
    dpavlin@brr:~$ bak add /etc/network/interfaces
    dpavlin@brr:~$ bak commit
    dpavlin@brr:~$ bak log
    commit df09dc5e19ef1d47311d701b4c63f0859b0b81c1
    Author: Dobrica Pavlinusic 
    Date:   Thu Feb 18 19:04:21 2010 +0100
    
        brr [commit] /home/dpavlin/
    
     create mode 100644 brr/etc/cron.d/tun0
     create mode 100644 brr/etc/network/interfaces
    
  3. change some configuration and review changes
    dpavlin@brr:~$ bak diff
    diff --git a/brr/etc/network/interfaces b/brr/etc/network/interfaces
    index 806c08e..c52c646 100644
    --- a/brr/etc/network/interfaces
    +++ b/brr/etc/network/interfaces
    @@ -2,8 +2,6 @@
     # and how to activate them. For more information, see interfaces(5).
     
     # The loopback network interface
    -auto lo
    -iface lo inet loopback
     
     # The primary network interface
     #allow-hotplug eth0
    
  4. Oops!! Where did loopback disappear to?
    dpavlin@brr:~$ bak revert /etc/network/interfaces 
    dpavlin@brr:~$ bak diff
    
  5. If we are happy with the changes, we can also commit them:
    dpavlin@brr:~$ bak commit /etc/network/interfaces optional note
    
As you have guessed by now, it's very similar to git usage (except revert, which comes from Subversion) but with easy deployment on clients. It implements a reduced subset of git commands:
  • bak add /path
  • bak commit [/path [message]]
  • bak diff
  • bak status
  • bak log
  • bak - push all local changes to the server (without committing!)
If you need anything more complex, you can use git directly on the ~/backup repository (even to commit changes from multiple hosts in one go).
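
For example, a change which touches the same file on two tracked hosts can be committed in one go from the server side (the file and the second host below are made up for the example):

dpavlin@klin:~/backup$ git add brr/etc/resolv.conf otherhost/etc/resolv.conf
dpavlin@klin:~/backup$ git commit -m "point both hosts at the new resolver"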

The whole solution resembles the FTP protocol, with the data channel using ssh and rsync. File transfer should be encrypted (since we are trying to manage configuration files with sensitive information), and if you want to be really secure, just run the server on 127.0.0.1 and tunnel the port using RemoteForward 9001 localhost:9001 in .ssh/config.
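
One way to set up that tunnel is in ~/.ssh/config on the machine running bak-git-server.pl, so the port gets forwarded to every client you log into; the bak helper on the client then talks to localhost:9001 instead of the server's address (the host name below is just an example):

Host brr
    RemoteForward 9001 localhost:9001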

Interesting title, isn't it? It's just what you need when you have a classroom full of Windows machines and yet you want to share the terminal you are projecting from your Linux laptop onto every student's screen, for easy copy/paste. Can it be done?

So, what do we have?

  • a presentation laptop running xterm (with a white background), projecting its picture onto the whiteboard
  • a Windows PC in front of every student
  • a local network
And what do we want?
  • copy/paste from the session into the student's own editor on their computer
  • the students' session should be view-only
At first, VNC came to mind. But then again, I didn't find an easy way to relay a single session to multiple computers. And I wasn't really sure how well it would work with my copy/paste requirement. And it seemed a bit too complex. After all, it's not a graphical session, just a terminal running a presentation in vim for syntax highlighting...

Windows

The biggest requirement on Windows is minimal deployment under normal user privileges. PuTTY seems like a perfect fit: a single executable with good terminal emulation. No-brainer.

Linux

On my laptop, I used screen with session sharing, but with a read-only twist (giving every student my account on a personal laptop just... doesn't seem sane).

First I created a new student user which will connect to the screen session that I'm sharing. This account has a well-known password, written on the whiteboard together with the IP address of my laptop.

The student user starts screen and attaches to my shared perl session using a shell script configured in /etc/passwd as the default shell:

exec screen -x dpavlin/perl

So, to participate, students just ssh in using PuTTY from their Windows machines. So far, so good...
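
Put together, the whole arrangement is roughly this (the script path, UID and home directory are made up for the example):

# /usr/local/bin/student-shell -- the student user's login shell
#!/bin/sh
exec screen -x dpavlin/perl

# matching /etc/passwd entry
student:x:1001:1001:view-only student:/home/student:/usr/local/bin/student-shell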

To make the session read-only I needed to remove write permission from the student user in screen. This proved to be a challenge in this setup, but not something that reading man screen can't solve:

# screenrc
multiuser on
acladd student
aclumask student-w

After starting the shared session named perl

screen -S perl -c screenrc
everything is ready. If you can think of a simpler way to do this, I would love to hear it :-)

An unexpected consequence of this setup is that I can switch to another screen session on my laptop, which changes the projected session on the whiteboard (to show the table of contents or some random example in the middle of the presentation), while students still see the perl session with the presentation.

Detailed step-by-step instructions on how to set up view-only screen sharing are available in my Sysadmin Cookbook.

For the last few weeks I have been struggling with memory usage on one of the machines which runs several OpenVZ containers. It was eating all available memory in just a few days:

koha-hw-memory-week.png

I was always fond of graphing system counters, and since reboots are never a good thing, something had to be done. One of the first things that jumps out is that weekends are quiet and don't generate 1Gb of additional memory usage. So it had something to do with our workload when the library was open. But what?

Even worse, it all started only two weeks ago!

koha-hw-memory-month.png

An occasional look at ps axvv wasn't really useful in debugging this problem, and I needed more precise information. So I opted for the simplest possible solution: record memory usage using vzps from crontab with the following shell script:

#!/bin/sh
# take one vzps snapshot per run, into /srv/ps-trend/YYYY-MM-DD/HHMM

cd /srv/ps-trend
dir=`date +%Y-%m-%d`
test -d $dir || mkdir $dir
COLUMNS=256 vzps -eo veid,pid,ppid,ni,user,rss,sz,vsz,time,stat,f,command > $dir/`date +%H%M`
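
The script is meant to be triggered from cron (every minute, judging by the 20k per minute of raw data mentioned below); a matching /etc/cron.d entry would be something like this (the script name and location are of course up to you):

# /etc/cron.d/ps-trend
* * * * * root /srv/ps-trend/ps-trend.sh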

After collecting several hours of traces, I decided to group them by container, and by all processes which use more than 64Mb of memory. Sometimes it's amazing how the solution jumps out by itself if you describe your problem well enough to the computer (and draw a graph :-)

ps-hourly.png
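
The grouping itself doesn't need anything fancier than awk. A sketch along these lines sums, per container and per snapshot, the processes using more than 64Mb (column 1 is the container ID and column 6 is RSS in kB in the vzps format used above):

cd /srv/ps-trend
for f in */* ; do
        awk -v t=$f '$6 > 65536 { sum[$1] += $6 }
                END { for (veid in sum) print t, veid, sum[veid] }' $f
done | sort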

After I identified that two Apache instances were eating memory like crazy, I remembered a fellow sysadmin who complained about a threaded Apache installation where some Apache child processes would mysteriously take 256Mb of RAM each. Just some, not all. Of course, I had several of them.

My solution to the problem was also simple:

# sudo apt-get install apache2-mpm-prefork

It seems that the threaded model in current Apache 2 just isn't good for me. Which is strange, because the application is basically a bunch of CGI scripts.

The result is not bad: 1Gb of additional free memory (which will be used for the filesystem cache). Not leaking anymore will also save us from hitting swap, which was so bad that the first reboot was needed. If nothing else, remember that tracing system counters and graphing them is always a good investment of time, because pictures can tell a different story than raw data. Especially if you have a lot of raw data (20k per minute in this case).

It would be nice to turn this into a full monitoring solution. I really like the idea of minimal client deployment for monitoring, so something like ps and curl to push data directly to a graphing server, triggered from crontab (with the possibility to parse e-mail and insert it later, for recovery from network interruptions), might just be the solution I would recommend. Remember, Linux is an operating system. It can do a lot of things by itself :-)
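
The client side of such a scheme could literally be one cron line, for example (the URL and form fields here are purely made up, just to show the shape of it):

# /etc/cron.d/push-ps
* * * * * root ps -eo user,pid,rss,vsz,comm | curl -s -F host=`hostname` -F ps=@- http://graph.example.com/cgi-bin/push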

If you just want light-weight graphs for your machines, the RRD::Simple Monitoring server has a light-weight perl client (a single perl script which submits to a CGI script on the server) which is triggered from cron. This project was the inspiration to give RRD::Simple a try.

I'm working on a Linux version of Sun storage machines, using commodity hardware, OpenVZ and Fuse-ZFS. I do have a working system in my Sysadmin Cookbook, so I might as well write a little bit of documentation about it.

My basic requirements are:

This makes it a self-running system which won't fall over itself, so let's see how it looks:

root@opl:~# zpool status
  pool: opl
 state: ONLINE
 scrub: resilver completed after 1h59m with 0 errors on Wed Jun  3 15:29:50 2009
config:

        NAME        STATE     READ WRITE CKSUM
        opl         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
root@opl:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
opl                            183G  35.7G    21K  /opl
opl/backup                     180G  35.7G    22K  /opl/backup
opl/backup/212052             76.1G  35.7G  8.12G  /opl/backup/212052
opl/backup/212052@2009-05-01  5.69G      -  7.50G  -
opl/backup/212052@2009-05-10  5.69G      -  7.67G  -
opl/backup/212052@2009-05-15  5.57G      -  7.49G  -
opl/backup/212052@2009-05-22  3.54G      -  7.74G  -
opl/backup/212052@2009-05-25  3.99G      -  8.38G  -
opl/backup/212052@2009-05-26  3.99G      -  8.38G  -
...
opl/backup/212052@2009-06-05  3.72G      -  8.09G  -
opl/backup/212052@2009-06-06      0      -  8.12G  -
opl/backup/212056             1.42G  35.7G   674M  /opl/backup/212056
opl/backup/212056@2009-05-30  37.1M      -   688M  -
opl/backup/212056@2009-05-31  47.3M      -   747M  -
opl/backup/212056@2009-06-01  40.9M      -   762M  -
opl/backup/212056@2009-06-02  62.4M      -   787M  -
...
opl/backup/212056@2009-06-05  12.1M      -  1.02G  -
opl/backup/212056@2009-06-06      0      -   674M  -
opl/backup/212226              103G  35.7G  26.8G  /opl/backup/212226
opl/backup/212226@2009-05-05  4.29G      -  26.7G  -
opl/backup/212226@2009-05-10  4.04G      -  26.6G  -
opl/backup/212226@2009-05-15  4.19G      -  26.6G  -
opl/backup/212226@2009-05-22  4.12G      -  26.7G  -
opl/backup/212226@2009-05-23  4.12G      -  26.7G  -
opl/backup/212226@2009-05-24  4.09G      -  26.6G  -
opl/backup/212226@2009-05-25  4.14G      -  26.7G  -
opl/backup/212226@2009-05-26  4.13G      -  26.7G  -
...
opl/backup/212226@2009-06-05  4.20G      -  26.8G  -
opl/backup/212226@2009-06-06      0      -  26.8G  -
opl/clone                      719M  35.7G    25K  /opl/clone
opl/clone/212056-60018         666M  35.7G  1.39G  /opl/clone/212056-60018
opl/clone/212226-60017        53.0M  35.7G  26.7G  /opl/clone/212226-60017
opl/vz                        1.59G  35.7G  43.5K  /opl/vz
opl/vz/private                1.59G  35.7G    22K  /opl/vz/private
opl/vz/private/60014           869M  35.7G   869M  /opl/vz/private/60014
opl/vz/private/60015           488M  35.7G   488M  /opl/vz/private/60015
opl/vz/private/60016           275M  35.7G   275M  /opl/vz/private/60016

There are several conventions here which are useful:
  • the pool is named the same as the machine (borrowing from the Debian way of naming LVM volume groups), which makes it easy to export/import pools on different machines (I did run it with a mirror over nbd for a while)
  • snapshot names are the dates of the snapshots, for easy overview
  • clones (writable snapshots) are named using a combination of the backup and the new container ID (see the example below)
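
Following those conventions, the daily snapshot and a clone for a new container boil down to two zfs commands; the IDs below are taken from the listing above, and the date is whatever today is:

zfs snapshot opl/backup/212056@`date +%Y-%m-%d`
zfs clone opl/backup/212056@2009-06-06 opl/clone/212056-60018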

There are several things which I wouldn't be able to get without zfs:

  • clones can grow as much as they need
  • data is compressed, which effectively increases disk IO throughput as a result
  • the zfs and zpool commands are a really nice and intuitive way to issue commands to the filesystem
  • zpool history is a great idea: writing all filesystem operations to an internal log
  • the ability to resilver (read/write all data on the platters) together with checksums makes it robust against disk errors

So, you think that your network is slow. But how would you test that? You can feel that the speed between different hosts is different, but you need some data to find the problem. Here is my take on this...

First, select a subset of machines to test network speed on and install netpipe-tcp. Then run NPtcp on the target machines, and NPtcp -h hostname -u 1048576 -o /tmp/hostname.np on the machine from which you are testing bandwidth. Several iterations later, you will have a bunch of *.np files ready for analysis.
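
If you have passwordless ssh to the targets, the whole measurement can be wrapped in a small loop (host names are placeholders):

for h in host1 host2 host3 ; do
        ssh $h NPtcp &                          # start the receiver on the target
        sleep 2                                 # give it a moment to start listening
        NPtcp -h $h -u 1048576 -o /tmp/$h.np    # measure block sizes up to 1Mb
done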

You can do it by hand, but this handy perl script will convert the *.np files into a graphviz dot file, which looks like this:
netpipe-grahviz.png
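
The generated dot file itself is nothing more than a list of labelled edges, one per measured pair, roughly like this (host names and numbers are made up):

digraph netpipe {
    "klin" -> "brr"    [label="94 Mbps"];
    "klin" -> "syslog" [label="941 Mbps"];
}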

GraphViz will do its auto-layout magic, and just by looking at the picture you will immediately notice that there is a 100Mbit/s link somewhere in between the machines... Pictures can really replace thousands of words...

For quite some time I have wanted to try PXE booting. After all, I did write a bootp and tftp server for ADSL modems, so how complicated can it be?

I decided to use dnsmasq as the server, and added the following configuration options to dnsmasq:

enable-tftp
tftp-root=/srv/sysadmin-cookbook/recepies/pxe/tftpboot/
dhcp-boot=pxelinux.0

Then, I created the tftpboot directory from the upstream Debian netboot image:
wget -nc ftp.hr.debian.org/debian/dists/lenny/main/installer-i386/current/images/netboot/netboot.tar.gz \
&& mkdir tftpboot && cd tftpboot && tar xvfz ../netboot.tar.gz

It seemed all nice and well, so I decided to try it using an Eee PC 701. And it didn't work. I didn't have any network link, tshark -i eth0 didn't report any network traffic, and everything suggested that the BIOS didn't power up the network card.

I even tried the latest BIOS upgrade, but it didn't help. I was quite sure that the configuration was correct (it's so simple, after all) and tried to boot a ThinkPad. Which worked...

So, I had a PXE environment which worked, just not with the Eee PC. Fortunately, there is an alternative to buggy PXE implementations: gPXE. It comes in a bootable USB version which, to my amazement, worked perfectly on the Eee PC. If you want to know all the gory details about gPXE, watch this video. It's well worth your time...

I rarely use X11 for system administration. There is one tool, however, which has always been invaluable to me: xlax. Sure, there are other solutions, but somehow I got addicted to xlax ever since I was introduced to it by a fellow sysadmin.

Until today, that is. I always keep the source on disk, since it was such a hard thing to find. I ran a quick xmkmf on it, and... it didn't work! However, since xlax now has a home page which explains everything about XTerm*allowSendEvents, I was on the right track.

But, not so fast, grasshopper!
dpavlin@llin:/rest/unix/x11/xlax2.4$ make
gcc -m32 -o xlax -g -O2 -fno-strict-aliasing       xlax.o -lXaw -lXmu -lXt -lSM -lICE -lXpm  -lXext -lX11      
xlax.o: In function `SetupInterface':
/rest/unix/x11/xlax2.4/xlax.c:173: undefined reference to `strlcpy'
collect2: ld returned 1 exit status
make: *** [xlax] Error 1
Argh. I started with Digital UNIX (called OSF/1 back then) and the part of my brain which deals with minor adjustments hasn't died yet, so I decided to do a quick google search for strlcpy, which has a handy link to the strlcpy implementation in OpenBSD. Licensing terms aside, I decided to give it a try.

After applying the following patch:

diff -urw xlax2.4/Imakefile xlax2.4.strlcpy/Imakefile
--- xlax2.4/Imakefile   2008-07-31 22:18:25.000000000 +0200
+++ xlax2.4.strlcpy/Imakefile   2009-04-30 21:32:15.000000000 +0200
@@ -5,8 +5,8 @@
 #            DEFINES = -DDEBUG
             DEPLIBS = XawClientDepLibs
     LOCAL_LIBRARIES = XawClientLibs
-               SRCS = xlax.c
-               OBJS = xlax.o
+               SRCS = xlax.c strlcpy.c
+               OBJS = xlax.o strlcpy.c
 
 ComplexProgramTarget(xlax)

and a quick xmkmf && make got it compiled.

Another trivial change was to implement automatic ssh to each host in mkxlax by adding -e 'ssh $ARGV[$i]' to the system xterm line, so I will have remote terminals opened by default.

And now, back to real work :-)

I'm using Subversion for most of my work, as all of you well know by now. All this hype about git persuaded me to give it another try. I don't really have anything to gain from doing this, since I'm using svk when I need a distributed VCS, but somehow I thought that git might be the right solution to keep all my system configuration, so I can debootstrap a system, check out the configuration, and I'm ready to go.

I could use etckeeper to do some of this stuff, but I really didn't want integration with apt. I just wanted a single (network-connected and backed-up) place. I had already tried this with git on a single machine with a local repository, and it worked pretty well.

This time I tried to use git branches to track different machines. I really want a single repository, so I can merge common changes all around. However, today I got this:

root@syslog:/# git push
Counting objects: 16, done.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (10/10), 1.16 KiB, done.
Total 10 (delta 0), reused 0 (delta 0)
To ssh://backup/srv/backup/
   66a2f9b..8f195f5  syslog -> syslog
 ! [rejected]        koha-dev -> koha-dev (non-fast forward)
 ! [rejected]        master -> master (non-fast forward)
error: failed to push some refs to 'ssh://backup/srv/backup/'
I have no idea why two branches which have nothing to do with the current one would disable the distributed part of git. If you can't read git output, the message above means that I wasn't able to push my changes to the central repository.

This is a huge show stopper for me. Half a day of googling didn't find an answer to this particular git question. This makes my whole setup a big useless overhead.

This is all well and fun, but since this is the second time that git ate my data, I'm falling back to my good old friend Subversion. At least when it breaks, I have error messages which are somewhat useful, and the Subversion book which explains most operations (so I don't have to google for every little bit, like pushing a single branch from one repository to another with git).

Don't get me wrong: git is c00l, we all know that, but it's just immature if you don't want to be a git developer. If you think that I'm just an old-timer who can't join all this new-age DVCS mumble-mumble, read why Google picked Mercurial instead of git as its DVCS. Different story, but helpful to see that git isn't the only solution to every problem.

Now I just need to convert my existing git branches back into Subversion. It seems that git-svn dcommit is the answer, but how to really push four different git branches back into Subversion I still don't know. I will probably just re-add all tracked files to a clean Subversion repository and start all over again.

I have written about data migration from disk to disk before, but moving data off a laptop is really painful (at least for me). This time, I didn't have enough time to move files with a filesystem copy, since it was painfully slow and transferred only 20Gb out of 80Gb of data in two hours. I saw 3MB/s transfers because of seeking in the filesystem, so something had to be done!

So, let's move filesystem images. The limiting factor should be the speed of disk reads. Have in mind that a 100Mbit/s full-duplex network (over a direct cable) will bring you just about 12MB/s, which is much less than a normal 5400 rpm laptop disk can deliver (around 30MB/s of sequential read -- you can check that with hdparm -t).

Since I'm copying data to a RAID5 array, I can assume that any operation on the images will be much quicker than the seek times of the 5400 rpm disk in the laptop.
Let's see which partitions we have on the laptop:

llin:~# mount -t ext3,reiserfs
/dev/mapper/vg-root on / type ext3 (rw,noatime,errors=remount-ro)
/dev/sda1 on /boot type ext3 (rw)
/dev/mapper/vg-rest on /rest type reiserfs (rw,noatime,user_xattr)

First start netcat (on the machine with the RAID array) which will receive the image. I'm using dd_rescue to display useful info while transferring data... If you don't have it, just remove it from the pipe.

root@brr:/backup/llin-2008-01-12# nc -l -p 8888 | dd_rescue - - -y 0 > boot.img

Then start the transfer on the laptop:

llin:~# nc 192.168.2.20 8888 < /dev/sda1

Repeat that for all filesystems like this:

root@brr:/backup/llin-2008-01-12# nc -l -p 8888 | dd_rescue - - -y 0 > root.img
llin:~# nc -w 1 192.168.2.20 8888 < /dev/vg/root

root@brr:/backup/llin-2008-01-12# nc -l -p 8888 | dd_rescue - - -y 0 > rest.img
llin:~# nc -w 1 192.168.2.20 8888 < /dev/vg/rest

While you are waiting for the copy to finish, you can use dstat, atop, vmstat or any similar command to monitor progress, but I came up with this snippet:

root@brr:~# watch "df | egrep ' (/backup|/mnt/llin)' ; echo ; ls -al /backup/llin*/*.img"

which produces something like this:

Every 2.0s: df | egrep ' (/backup|/mnt/llin)' ; echo ; ls -al /bac...  Mon Jan 12 23:30:21 2009

125825276 112508688 13316588 90% /backup
82569904 22477768 60092136 28% /mnt/llin
25803068 24800808 740116 98% /mnt/llin.img/root

-rw-r--r-- 1 root root 45843283968 2009-01-12 23:30 /backup/llin-2008-01-12/rest.img
-rw-r--r-- 1 root root 26843545600 2009-01-12 23:07 /backup/llin-2008-01-12/root.img

I decided to rsync the files, since I already had 20Gb of them in a filesystem. First you have to recover the journal (since we were copying live devices, the journal isn't clean and the loopback device won't allow us to mount the filesystem):

root@brr:/backup# e2fsck llin-2008-01-12/root.img 
e2fsck 1.41.3 (12-Oct-2008)
llin-2008-01-12/root.img: recovering journal
llin-2008-01-12/root.img: clean, 689967/3276800 files, 6303035/6553600 blocks (check in 4 mounts)

root@brr:~# e2fsck /backup/llin-2008-01-12/rest.img

root@brr:~# reiserfsck /backup/llin-2008-01-12/rest.img

If you don't want to wait for reiserfsck to finish the full check, you can abort it once it starts checking the filesystem, because we only care about the first step, which recovers the journal.
Next, we will mount the images (read-only) using loopback:

root@brr:~# mkdir /mnt/llin.img
root@brr:~# mount /backup/llin-2008-01-12/root.img /mnt/llin.img/ -o ro,loop
root@brr:~# mount /backup/llin-2008-01-12/boot.img /mnt/llin.img/boot/ -o ro,loop
root@brr:~# mount /backup/llin-2008-01-12/rest.img /mnt/llin.img/rest/ -o ro,loop

And finally, rsync the rest of the changes to the directory:

root@brr:~# rsync -ravHS /mnt/llin.img/ /mnt/llin

After that, I used the files to start an OpenVZ virtual machine which will replace my laptop for some time (if you know of a good second-hand laptop for sale, drop me an e-mail).