Results tagged “linux”

I must admit that Linux administration is getting better over the years. I was configuring an IPMI serial console on old machines (but with recent Debian), so I decided to find out the optimal way to configure a serial console using systemd.

First, let's inspect IPMI and check its configuration to figure out the baud rate for the serial port:

root@lib10:~# ipmitool sol info 1
Info: SOL parameter 'Payload Channel (7)' not supported - defaulting to 0x01
Set in progress                 : set-complete
Enabled                         : true
Force Encryption                : true
Force Authentication            : false
Privilege Level                 : ADMINISTRATOR
Character Accumulate Level (ms) : 50
Character Send Threshold        : 220
Retry Count                     : 7
Retry Interval (ms)             : 1000
Volatile Bit Rate (kbps)        : 57.6
Non-Volatile Bit Rate (kbps)    : 57.6
Payload Channel                 : 1 (0x01)
Payload Port                    : 623
Notice the 1 after info. This is the serial port which is the SOL console. If you run ipmitool without this parameter, or with zero, you will get an error:
root@alfa:~# ipmitool sol info 0
Error requesting SOL parameter 'Set In Progress (0)': Invalid data field in request
Don't panic! There is an IPMI SOL console, but on ttyS1!
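To actually attach to the SOL console over the network, something like this should work (the BMC hostname and user are placeholders for your setup):

ipmitool -I lanplus -H bmc-hostname -U admin sol activate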

To configure a serial console for the Linux kernel we need to add something like console=ttyS1,57600 to the kernel command-line in grub, and configure the correct serial port and speed for grub itself:

GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=57600 --unit=1 --word=8 --parity=no --stop=1"
All required changes to the default configuration are below:
root@lib10:/etc# git diff
diff --git a/default/grub b/default/grub
index b8a096d..2b855fb 100644
--- a/default/grub
+++ b/default/grub
@@ -6,7 +6,8 @@
 GRUB_DEFAULT=0
 GRUB_TIMEOUT=5
 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
-GRUB_CMDLINE_LINUX_DEFAULT="boot=zfs rpool=lib10 bootfs=lib10/ROOT/debian-1"
+# serial console speed from ipmitool sol info 1
+GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS1,57600 root=ZFS=lib10/ROOT/debian-1"
 GRUB_CMDLINE_LINUX=""

 # Uncomment to enable BadRAM filtering, modify to suit your needs
@@ -16,6 +17,8 @@ GRUB_CMDLINE_LINUX=""

 # Uncomment to disable graphical terminal (grub-pc only)
 #GRUB_TERMINAL=console
+GRUB_TERMINAL=serial
+GRUB_SERIAL_COMMAND="serial --speed=57600 --unit=1 --word=8 --parity=no --stop=1"

 # The resolution used on graphical terminal
 # note that you can use only modes which your graphic card supports via VBE
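Don't forget to regenerate the grub configuration after editing, so the change actually lands in grub.cfg:

update-grub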
So in the end, there is nothing to configure on the systemd side. If you want to know why, read man 8 systemd-getty-generator.
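In short, systemd-getty-generator sees console=ttyS1 on the kernel command line and spawns a getty on it automatically. If you ever need a serial getty without the kernel argument, you can enable one by hand:

systemctl enable serial-getty@ttyS1.service
systemctl start serial-getty@ttyS1.service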

For a long time, I was looking for a PowerPC machine to run Linux on. I was planning to buy a PowerBook if I got a good offer, but that opportunity never really came.

Playstation 3 hardware

Meanwhile, Sony has been reducing the price of the PlayStation 3, up to the point where they no longer include the hypervisor (also known as Other OS support), which is a shame. If I wanted to run Linux on it, I was running out of time. So, I decided to take a leap of faith and buy a 40 GB (CECHGxx) model for ~240 €, which seemed like a good price for a PowerPC at 3.2GHz with 256 MB of RAM and a slowish disk:

dpavlin@ps3:~$ cat /proc/cpuinfo 
processor       : 0
cpu             : Cell Broadband Engine, altivec supported
clock           : 3192.000000MHz
revision        : 16.0 (pvr 0070 1000)

processor       : 1
cpu             : Cell Broadband Engine, altivec supported
clock           : 3192.000000MHz
revision        : 16.0 (pvr 0070 1000)

timebase        : 79800000
platform        : PS3
model           : SonyPS3
dpavlin@ps3:~$ free
             total       used       free     shared    buffers     cached
Mem:        213728      63372     150356          0       2608      25212
-/+ buffers/cache:      35552     178176
Swap:       586332          0     586332
root@ps3:~# hdparm -tT /dev/ps3da

/dev/ps3da:
 Timing cached reads:   1640 MB in  2.00 seconds = 819.90 MB/sec
 Timing buffered disk reads:   82 MB in  3.02 seconds =  27.17 MB/sec
You will not get access to the graphic card, but you can use its 256 MB of VRAM (GDDR3) as a block device. It's also not very fast, though:
root@ps3:~# hdparm -tT /dev/ps3vram 

/dev/ps3vram:
 Timing cached reads:   1682 MB in  2.00 seconds = 841.64 MB/sec
 Timing buffered disk reads:   96 MB in  3.06 seconds =  31.40 MB/sec

root@ps3:~# dd_rescue -m 236M /dev/zero /dev/ps3vram
Summary for /dev/zero -> /dev/ps3vram:
dd_rescue: (info): ipos:    241664.0k, opos:    241664.0k, xferd:    241664.0k
                   errs:      0, errxfer:         0.0k, succxfer:    241664.0k
             +curr.rate:    15911kB/s, avg.rate:    15669kB/s, avg.load:  7.4%

root@ps3:~# dd_rescue -m 236M /dev/ps3vram /dev/zero
Summary for /dev/ps3vram -> /dev/zero:
dd_rescue: (warning): /dev/zero (241664.0k): Invalid argument!    
dd_rescue: (info): ipos:    241664.0k, opos:    241664.0k, xferd:    241664.0k
                   errs:      0, errxfer:         0.0k, succxfer:    241664.0k
             +curr.rate:    32621kB/s, avg.rate:    31898kB/s, avg.load: 14.8%
Normal usage is as swap (with higher priority than the disk), but this basically just saves the disk from spinning, because the disk is faster at writing (which you really care about if you are swapping to media) but slower at reading:
root@ps3:~# dd_rescue -m 236M /dev/zero /tmp/disk.speed.test
Summary for /dev/zero -> /tmp/disk.speed.test:
dd_rescue: (info): ipos:    241664.0k, opos:    241664.0k, xferd:    241664.0k
                   errs:      0, errxfer:         0.0k, succxfer:    241664.0k
             +curr.rate:    18400kB/s, avg.rate:    18392kB/s, avg.load: 19.2%

root@ps3:~# dd_rescue -m 236M /tmp/disk.speed.test /dev/zero
Summary for /tmp/disk.speed.test -> /dev/zero:
dd_rescue: (warning): /dev/zero (241664.0k): Invalid argument!    
dd_rescue: (info): ipos:    241664.0k, opos:    241664.0k, xferd:    241664.0k
                   errs:      0, errxfer:         0.0k, succxfer:    241664.0k
             +curr.rate:    24036kB/s, avg.rate:    22819kB/s, avg.load:  4.7%
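To actually use the VRAM as higher-priority swap, a minimal sketch (the priority value is arbitrary, as long as it's higher than the disk swap's):

mkswap /dev/ps3vram
swapon -p 10 /dev/ps3vram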
The good news is that USB disks are almost as fast as the internal drive:
root@ps3:~# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   1740 MB in  2.00 seconds = 870.84 MB/sec
 Timing buffered disk reads:   72 MB in  3.01 seconds =  23.89 MB/sec
Having said that, it's probably one of the best machines I've seen since my first Alpha :-). It's quiet, all available hardware is supported under Linux, and it provides a great platform for PowerPC development.

Ethernet, on the other hand, is the fastest way to get data to or from the machine:

root@ps3:~# nc -l -p 8888 < /dev/zero

root@t61p:~# nc 192.168.2.103 8888 | dd_rescue -m 1G - /dev/null
Summary for - -> /dev/null:
dd_rescue: (warning): /dev/null (1048576.0k): Invalid argument!    
dd_rescue: (info): ipos:   1048576.0k, opos:   1048576.0k, xferd:   1048576.0k
                   errs:      0, errxfer:         0.0k, succxfer:   1048576.0k
             +curr.rate:        0kB/s, avg.rate:    86905kB/s, avg.load: 12.4%
root@t61p:~# nc -l -p 8889 < /dev/zero

root@ps3:~# nc 192.168.2.61 8889 | dd_rescue -m 1G - /dev/null
Summary for - -> /dev/null:
dd_rescue: (warning): /dev/null (1048576.0k): Invalid argument!    
dd_rescue: (info): ipos:   1048576.0k, opos:   1048576.0k, xferd:   1048576.0k
                   errs:      0, errxfer:         0.0k, succxfer:   1048576.0k
             +curr.rate:        0kB/s, avg.rate:    68914kB/s, avg.load: 21.2%

What about graphics?

Basic support is a frame-buffer in various resolutions over HDMI (I can't really test composite output). ps3-video-mode -m 130 is something you will type often if you are sitting away from your 1920x1200 LCD...

There is sample code to run the RSX from 2007, but it doesn't seem to have had any changes since then. There is also libps3rsx, and the video of it looks promising. It's also from 2007. And it seems you need firmware older than 2.10, which I don't have. The forum thread PS3 Development seems to be dead, but the RSX page on the wiki is a nice overview.

A different approach is to use the 6 available SPU cores for YUV to ARGB conversion with scaling. There is also a patch for console mplayer's -vo to enable movie decoding under Linux using the SPUs. There are Debian packages for xserver-xorg-video-spu, but only for sid, and it doesn't work for me (next step: recompile).

What about cells?

I still don't know. But the following projects also look very interesting:

Debian installation

I tried to install Debian following the instructions for squeeze, but between fighting with the Sony menu (which I didn't like when I first saw it on a PSP) and trying to type on a wireless keyboard with an incredibly short key-repeat delay, I wasn't successful. So I tried Debian Live for PS3, and it worked. An apt-get dist-upgrade and a few hours later, I had 2.6.31.1 from www.kernel.org working.

As you might know by now, I was debugging memory-related problems on one of my systems recently, and concluded that the normal output from Linux commands is more or less inaccurate. If you want to know why, take a look at Matt Mackall's ELC2009 presentation, Visualizing Process Memory, or watch the following video:

Convinced? Then hop over to the smem page, compile the user-land part, and start really tracking your memory usage. Let's compare:

dpavlin@t61p:/rest/cvs/smem$ free
             total       used       free     shared    buffers     cached
Mem:       4081400    3882476     198924          0     142904    2731480
-/+ buffers/cache:    1008092    3073308
Swap:      8209172       7492    8201680

dpavlin@t61p:/rest/cvs/smem$ ./smem -w -t
Area                           Used      Cache   Noncache 
firmware/hardware                 0          0          0 
kernel image                      0          0          0 
kernel dynamic memory       2927016    2845456      81560 
userspace memory             954900     119368     835532 
free memory                  199484     199484          0 
----------------------------------------------------------
                            4081400    3164308     917092 
Just a few quick notes if you didn't watch the whole video carefully:
  • needs kernel 2.6.27 or newer
  • it can work on archived data (from cron in my example usage; see the capture sketch below)
  • userspace cache is backed by files on disk
  • it's a python script which requires matplotlib to create graphs, so it's meant for local reporting
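A minimal capture sketch for the cron case, assuming the smemcap helper that ships with the smem distribution (paths and schedule are just illustrations):

# from cron: snapshot /proc data into a tar archive for later analysis
/usr/bin/smemcap > /var/log/smem/`date +%Y-%m-%d-%H`.tar
# later, run the same report against the archived capture
smem -w -t --source /var/log/smem/2009-06-06-12.tar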

I'm working on a Linux version of Sun storage machines, using commodity hardware, OpenVZ and Fuse-ZFS. I do have a working system in my Sysadmin Cookbook, so I might as well write a little bit of documentation about it.

My basic requirements are:

This makes it a self-running system which won't fall over, so let's see how it looks:

root@opl:~# zpool status
  pool: opl
 state: ONLINE
 scrub: resilver completed after 1h59m with 0 errors on Wed Jun  3 15:29:50 2009
config:

        NAME        STATE     READ WRITE CKSUM
        opl         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
root@opl:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
opl                            183G  35.7G    21K  /opl
opl/backup                     180G  35.7G    22K  /opl/backup
opl/backup/212052             76.1G  35.7G  8.12G  /opl/backup/212052
opl/backup/212052@2009-05-01  5.69G      -  7.50G  -
opl/backup/212052@2009-05-10  5.69G      -  7.67G  -
opl/backup/212052@2009-05-15  5.57G      -  7.49G  -
opl/backup/212052@2009-05-22  3.54G      -  7.74G  -
opl/backup/212052@2009-05-25  3.99G      -  8.38G  -
opl/backup/212052@2009-05-26  3.99G      -  8.38G  -
...
opl/backup/212052@2009-06-05  3.72G      -  8.09G  -
opl/backup/212052@2009-06-06      0      -  8.12G  -
opl/backup/212056             1.42G  35.7G   674M  /opl/backup/212056
opl/backup/212056@2009-05-30  37.1M      -   688M  -
opl/backup/212056@2009-05-31  47.3M      -   747M  -
opl/backup/212056@2009-06-01  40.9M      -   762M  -
opl/backup/212056@2009-06-02  62.4M      -   787M  -
...
opl/backup/212056@2009-06-05  12.1M      -  1.02G  -
opl/backup/212056@2009-06-06      0      -   674M  -
opl/backup/212226              103G  35.7G  26.8G  /opl/backup/212226
opl/backup/212226@2009-05-05  4.29G      -  26.7G  -
opl/backup/212226@2009-05-10  4.04G      -  26.6G  -
opl/backup/212226@2009-05-15  4.19G      -  26.6G  -
opl/backup/212226@2009-05-22  4.12G      -  26.7G  -
opl/backup/212226@2009-05-23  4.12G      -  26.7G  -
opl/backup/212226@2009-05-24  4.09G      -  26.6G  -
opl/backup/212226@2009-05-25  4.14G      -  26.7G  -
opl/backup/212226@2009-05-26  4.13G      -  26.7G  -
...
opl/backup/212226@2009-06-05  4.20G      -  26.8G  -
opl/backup/212226@2009-06-06      0      -  26.8G  -
opl/clone                      719M  35.7G    25K  /opl/clone
opl/clone/212056-60018         666M  35.7G  1.39G  /opl/clone/212056-60018
opl/clone/212226-60017        53.0M  35.7G  26.7G  /opl/clone/212226-60017
opl/vz                        1.59G  35.7G  43.5K  /opl/vz
opl/vz/private                1.59G  35.7G    22K  /opl/vz/private
opl/vz/private/60014           869M  35.7G   869M  /opl/vz/private/60014
opl/vz/private/60015           488M  35.7G   488M  /opl/vz/private/60015
opl/vz/private/60016           275M  35.7G   275M  /opl/vz/private/60016
There are several conventions here which are useful:
  • the pool is named the same as the machine (borrowing from the Debian way of naming LVM volume groups), which makes it easy to export/import pools on different machines (I did run it with a mirror over nbd for a while)
  • snapshot names are the dates of the snapshots, for an easy overview (see the one-liner below)
  • clones (writable snapshots) are named using a combination of the backup and the new container ID
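For illustration, creating a date-named snapshot is a one-liner (the dataset name is taken from the listing above):

zfs snapshot opl/backup/212052@`date +%Y-%m-%d`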

There are several things which I wouldn't be able to get without zfs:

  • clones can grow as much as they need
  • data is compressed, which increases effective disk IO as a result
  • the zfs and zpool commands are a really nice and intuitive way to drive the filesystem
  • zpool history is a great idea: all filesystem operations get written to an internal log
  • the ability to resilver (read/write all data on the platters), together with checksums, makes it robust against disk errors (both shown below)
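The last two are each a single command against the pool above:

# read and verify every block, repairing from the mirror as needed
zpool scrub opl
# show the internal log of all operations ever issued on this pool
zpool history opl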

As you might have guessed by now, I have been playing with file-systems for a backup appliance. So, against my better judgment, I decided to try btrfs to see how ready it is to replace the zfs-fuse configuration with a real in-kernel file-system (zfs-fuse is not slow, because disks are much slower than any piece of software).

So far, I found the following annoyances in btrfs:

  1. snapshots can't be removed (I'm doing incremental-forever backups, so this is not a show-stopper)
    You can remove all the files in a snapshot directory, but not the directory itself. I would guess that removing the files would actually increase disk usage, because it's a copy-on-write filesystem, but I didn't test that.
  2. there is no indication which directory is a snapshot (if you didn't write down in a log which one is a snapshot, you are out of luck)
  3. it seeks quite a lot (there is 40-70% wait time in vmstat while running rsync, which I guess is seeking, because there are no block input/output operations at the same time)
  4. it will oops your (Debian 2.6.29-2-686) kernel:
    Message from syslogd@klin at May 16 00:42:31 ...
     kernel:[ 4057.994566]  [<c0119e0f>] kmap_atomic_prot+0xbd/0xdd
    Message from syslogd@klin at May 16 00:42:31 ...
     kernel:[ 4057.994576]  [<c0119d30>] kunmap_atomic+0x58/0x7a
    Message from syslogd@klin at May 16 00:42:31 ...
     kernel:[ 4057.994586]  [<f83a61a2>] btrfs_cow_block+0x134/0x13d [btrfs]
    Message from syslogd@klin at May 16 00:42:31 ...
     kernel:[ 4057.994608]  [<f83a8b4b>] btrfs_search_slot+0x1f0/0x622 [btrfs]
    ./pull-snapshot-backup.sh: line 8:  4316 Segmentation fault      rsync -ravHC --numeric-ids --delete $from:/mnt/vz-backup/private/$1/ /$pool/$1/
    
    dmesg-btrfs-bug.txt

After that, I concluded that the warning about the alpha state of btrfs is there for a reason. I didn't fully appreciate Theodore Ts'o's warning about the development status of btrfs until I got a kernel oops.

My point of view

First, let me explain my position. I worked for quite a few years in a big corporation, and followed EMC storage systems (one from the end of the last century, and the improvement that the Clarion brought to our production SAP deployment). I even visited the EMC factory in Cork, Ireland, and it was a very eye-opening experience. They claim that 95% of customers who visit the factory buy EMC storage, and I believe them (we did upgrade to the Clarion, btw).

In my Linux-based deployments on HP, Compaq and IBM hardware, I did various crazy RAID configurations (RAID5 across disks on one controller, and then a stripe across the other controller, for example). Those were the easy parts: you got a RAID controller with a DRAM cache (~256 MB) and some kind of battery backup, which greatly improved write performance.

Later on, in CARNet, we had HP EVA storage which proved quite flaky. I heard from a friend that one enterprise deployment uses them only for testing. And you know, it's just a shelf of disks with redundant controllers and a fiber interface...

In the meantime, on the Linux software RAID front, I used the md implementation of RAID1 and RAID5, back in the days when Linux distributions couldn't handle that.

Solid state drives

However, solid state drives changed a lot of that. I still haven't had the pleasure of using Intel SSDs, which are supposed to be good, but USB sticks are also flash storage, albeit with quirky characteristics.

This particular one is reported by Linux as ID 0951:1603 Kingston Technology Data Traveler 1GB/2GB Pen Drive, but is in fact an 8 GB model which seems to have 128 MB of memory that is writable at about 6 MB/s; after that, write speed drops to 45 KB/s.
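A rough way to see that knee for yourself (this overwrites the stick; /dev/sdX is a placeholder for your device):

dd if=/dev/zero of=/dev/sdX bs=1M count=256 oflag=direct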

On the other hand, there is the ZFS on FUSE project, which enables some really interesting applications of Sun's (and now Oracle's) file-system. I do have to mention Sun at this point. Ever since I heard about Oracle's acquisition of Sun, I have wondered what will happen with ZFS. I even suspect that ZFS is the main reason why Oracle bought Sun. Let me explain...

Sunshine Oracle

If you look at the database market (where Oracle is), the only interesting improvement left for relational databases is to make them extremely fast. And that revolution is already here. Don MacAskill from SmugMug makes a compelling case about the performance of SSD storage. If you don't believe words, watch this video from 24:50 to see the solution to the MySQL storage performance problem: hardware! Sun's hardware. Do you think Oracle didn't notice that?

Enterprise storage cheaply

Did you watch the video? I really don't agree that it's the hardware. Come on! It's Opteron boxes with custom-built SSDs optimized for write speed. SSDs with super-capacitors instead of the batteries in old RAID controllers.

But, to make it really fun, I will try to re-create at least some of those abilities using commodity hardware in my university environment. I have Dell OptiPlex boxes which come loaded with enough goodies to put together a commodity storage cluster:

  • Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33GHz
  • 3 GB RAM
  • 2 SATA disks with ~80 MB/s of read/write performance
  • multi-card reader and 8 USB slots
  • fake software RAID on the Intel chipset (supported by dmraid, but even its documentation suggests not to use it)

ZFS

Why ZFS? Isn't btrfs the way to go? For this particular application, I don't think so. Let me list the features of ZFS which excite me (a few of them are sketched as commands after the list):

  • ability to store the log on a separate (mirrored) device (SSD, or USB sticks if that helps)
  • scrub: read all bytes on disk and rewrite them (beats smartctl -t long because it also re-allocates bad blocks; I've seen a scrub run at 80 MB/s)
  • balancing of IO over devices (I will use this over nbd to split a mirror between machines for fail-over)
  • an arbitrary number of copies (nice for bigger clusters of storage machines)
  • nice snapshots which display their size and can be cloned into writable ones
  • snapshot send/receive for making off-site backup copies
  • L2ARC - balance the read and write cache over SSD devices with different characteristics (USB sticks have fast read and slow write, so they might be a good fit)
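A sketch of how some of those map onto commands, with placeholder device names standing in for SSD/USB partitions:

# attach a mirrored log device to the pool
zpool add opl log mirror /dev/sdc1 /dev/sdd1
# keep two copies of every block in this dataset
zfs set copies=2 opl/backup
# add an L2ARC cache device
zpool add opl cache /dev/sde1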

You might think of it as git with POSIX file-system semantics.

But it's in user space, you say; it must be slow! It isn't. Really. Linux user-space is much faster than disk speed, and having a separate process is nice for monitoring purposes. File-system overhead gets counted as user time, not system time, so system time is a clear indicator of driver (hardware) activity rather than file-system overhead.

I have most parts of this setup ready, and I'm using it to back up OpenVZ containers. So, I'm running an OpenVZ kernel, and I can even make virtual machines from backup snapshots to recover to some point in time. After I finish this setup, expect a detailed guide (it will probably be part of my upcoming virtualization workshop, as an alternative to LVM).
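The recovery flow is roughly this sketch (the clone name follows the convention above, and an OpenVZ config for container 60018, with VE_PRIVATE pointing at the clone, is assumed to exist):

zfs clone opl/backup/212056@2009-05-30 opl/clone/212056-60018
vzctl start 60018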

I just read about the LinuxDNA project to compile the kernel with Intel's ICC compiler. For a start, it's stuck at 2.6.22, so forget about recent hardware. Oh, and did I mention that binary drivers don't work?

But, it has a 40% speed improvement. I have said once or twice that Itanium never really worked with Linux because of the poor gcc compiler. I really hope to see the end of the x86 architecture, if Itanium history has anything to teach us. :-)

Still reading?

O.K., let me elaborate a bit: a 40% speed increase available only with a proprietary compiler is just enough for me not to buy another x86 processor, if I had any realistic alternative.

With low-power ARM solutions like the SheevaPlug and AMD netbooks, this might be possible in the future, but not right now.

What is my problem with ICC? Did you know that you have to patch the ICC binary to make it support AMD processors? I did, although that page has since vanished from the Internet.
Do you really want to compile your kernel with a compiler like that?
If you are still thinking something like: hell, yes! 40% faster! (imagine a dark chorus going geeeentooooo in the background while you read this), please explain to me how you intend to compile a binary for any x86 processor, not just Intel, for example for a LiveCD?

I wanted to automatically configure a second monitor plugged into my laptop to sit above the internal LCD, and I was prepared to allocate a few days for this task. But in recent X.org servers this has gotten much better, so it's enough to just add the following to .xinitrc:

xrandr --output DVI-0 --auto --above LVDS
xrandr --output VGA-0 --above LVDS

There is also grandr (with a package in Debian) if you want point-and-click functionality.

Don't work on a single monitor; having dual monitors again is such a productivity boost :-)

Update: this works only on cards which report more than one output connected in xrandr -q:

Screen 0: minimum 320 x 200, current 1920 x 2250, maximum 1920 x 2250
VGA-0 disconnected (normal left inverted right x axis y axis)
LVDS connected 1400x1050+0+1200 (normal left inverted right x axis y axis) 305mm x 228mm
DVI-0 connected 1920x1200+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
In my case, it's an ATI Technologies Inc M52 [Mobility Radeon X1300] with the radeon driver. To make this work, I also needed to create a virtual desktop large enough to accommodate both screens, by adding Virtual to /etc/X11/xorg.conf:
Section "Screen"
        Identifier "MyScreen"
        Device     "MyCard"
        DefaultDepth     24
        SubSection "Display"
                Virtual 1920 2250
        EndSubSection
EndSection
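Since this depends on the second output being reported as connected, the .xinitrc snippet can guard itself with something like this sketch (the output names match my card above; yours may differ):

if xrandr -q | grep -q '^DVI-0 connected' ; then
        xrandr --output DVI-0 --auto --above LVDS
fi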

It has been a week since a borrowed OLPC entered my family of computers. I have a Thinkpad T60 with an Atheros AR5212 (which works with the ath5k driver from 2.6.25, nice work!) and an Eee PC with Atheros (which works with a special madwifi patch).

Since 802.11s just landed in the upstream kernel git, I was eager to take a look at this mesh network thing. Oh, how ignorant I was. OLPC uses an 802.11s protocol which is different from the official implementation of 802.11s, and with good reason: they use the embedded processor in the wifi card to do the mesh protocol for them (saving power and enabling the mesh to work while the laptop is suspended). I could have installed olsr on the OLPC, but I'm really trying to have a bigger mesh which is compatible with unmodified OLPCs.

Because my time is limited, I would like to work in user-land if at all possible, and since wpa_supplicant can work on unmodified kernels, it would be nice to have that level of support for the OLPC mesh also. After a lot of browsing (and reading a few really great wifi hacking sites), I concluded that the only hope is radiotap, which is more-or-less supported on every pcmcia wifi card that I have (a prism-based 802.11b card and an rt2500). I also found the simplest possible code which uses radiotap to start with.
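The first step will be capturing raw frames with radiotap headers. A minimal sketch, assuming a mac80211-based driver and the iw and tcpdump tools (interface names are placeholders):

# create a monitor-mode interface next to the managed one
iw dev wlan0 interface add mon0 type monitor
ip link set mon0 up
# capture with radiotap link-layer headers for later analysis
tcpdump -i mon0 -y IEEE802_11_RADIO -w olpc-mesh.pcap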

Now, I would just need another OLPC to save some network traces and start experimenting :-)

Aside from that, I switched totally to the OLPC for this week, and amazingly enough, I didn't miss my Eee PC one tiny bit. Although it's a bit slower than the Eee, the OLPC screen is bigger (and better in black-and-white mode in sunlight), which helps a lot with web pages. Browser performance is amazing, so I have little doubt that we will be able to support most web sites on the OLPC without much problem. OTOH, I did notice a couple of excessive round-trips on one of my web sites while surfing on it, but that's for the best anyway :-)

Update: According to a message on the libertas-dev mailing list, there is an effort to use the kernel's 802.11s implementation, which makes my effort in supporting the OLPC variant obsolete.

GNU fdisk broken?

I have been backing up the whole disk image from an Eee PC, and mounting it using a loop device to access the partitions in it. However, I have problems with GNU fdisk, which reports the 4 GB image as:

Disk /backup/eee/hda: 3 GB, 3997486080 bytes
255 heads, 63 sectors/track, 486 cylinders, total 7807590 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/backup/eee/hda1 63 4803435 2409718 83 Linux
/backup/eee/hda2 4819563 7759395 1469947 83 Linux
/backup/eee/hda3 7775523 7775460 0 c FAT32 LBA
/backup/eee/hda4 7791588 7791525 0 ef EFI FAT

For a start, the disk size is wrong:

$ ls -al hda
-rwxrwxrwx 1 dpavlin root 4001292288 2008-01-20 00:59 hda

And then, even more wrong: the offsets of the partitions. When the same image is examined using fdisk from util-linux, sectors are reported like this:

Disk hda: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders, total 0 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x332b332a

Device Boot Start End Blocks Id System
hda1 63 4819499 2409718+ 83 Linux
hda2 4819500 7775459 1477980 83 Linux
hda3 7775460 7791524 8032+ c W95 FAT32 (LBA)
hda4 7791525 7807589 8032+ ef EFI (FAT-12/16/32)

And this is correct (let's ignore the size for now). I can verify this by mounting the second file system as:

sudo mount hda 1 -o loop,offset=`expr 4819500 \* 512`

This seems to be an off-by-one error. There is a bug reported against the Debian package which seems related, but then again, in my case I'm examining the same disk image.