Results tagged “virtualization”

Linux containers have network configuration similar to plain Linux. In fact, it is plain Linux network configuration! The user-space lxc tools hide some of the steps from you, while still allowing all the reconfigurability you would need. So, let's have a look at how to configure two distinct network interfaces, and at some options you really want to set...

Let's take a look at the configuration for a container which uses a dual-homed network, one internal interface (on the br0 bridge) and one external (on the br1 bridge):
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 10.60.0.84/23
lxc.network.mtu = 1500
lxc.network.hwaddr = AC:DE:48:00:00:54
lxc.network.veth.pair = veth84

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.name = eth1
lxc.network.mtu = 1500
lxc.network.ipv4 = 193.198.212.253/22
lxc.network.hwaddr = AC:DE:48:00:D4:FD
lxc.network.veth.pair = veth212253
The names of some options are somewhat confusing, so let's take a look at what lxc-start does for us (a rough manual equivalent is sketched after this list):
  • create an interface on the host named lxc.network.veth.pair with MAC address lxc.network.hwaddr (both parameters are optional, but if you don't specify them you will get randomly generated values which aren't very useful for debugging or monitoring)
  • join that interface to the bridge lxc.network.link
  • inside the container:
    • the interface is named lxc.network.name
    • the IP address is set from lxc.network.ipv4 (this allows configuring the container's IP address from the outside, with only the default route configured inside the container)
    • the interface status is set according to lxc.network.flags
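Roughly, the manual equivalent on the host side would look something like this (a sketch only; lxc-start performs the namespace move and the in-container setup for us):
ip link add name veth84 type veth peer name veth84_peer   # host end = lxc.network.veth.pair
ip link set veth84 up
brctl addif br0 veth84                                     # bridge = lxc.network.link
# lxc-start then moves veth84_peer into the container's network namespace,
# renames it to eth0 (lxc.network.name), sets the MAC from lxc.network.hwaddr,
# assigns 10.60.0.84/23 from lxc.network.ipv4 and brings it up (lxc.network.flags)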

You will notice that I'm using the AC:DE:48 prefix for my MAC addresses which is, to the best of my knowledge, a range reserved for private use and used all over IEEE docs (something like a private IP range, but for MAC addresses). I also have a habit of naming interfaces with the last octet of the IP address for internal ones and the last two octets for external ones, and of using the same notation for the MAC addresses. Our brain is good at spotting patterns, and if you can read hex this seems just natural...
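For example, the MAC suffixes in the configuration above map directly to the IP octets in hex:
printf '%02X:%02X\n' 212 253   # -> D4:FD (last two octets of 193.198.212.253)
printf '%02X\n' 84             # -> 54    (last octet of 10.60.0.84)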

I have been playing with Linux containers for a while, and I finally decided to take the plunge and migrate one of my servers from OpenVZ to lxc. It worked quite well for testing until I noticed the lack of support for automatic startup and shutdown. The lxc-users mailing list was helpful in providing useful hints, and I found five OpenSUSE scripts but decided they were too complicated for the task at hand.

I wanted a single script, for easy deployment on any Debian box, with the following features:

  • reboot and halt from inside the container should work as expected
  • cleanly shut down or reboot a container from the host (as opposed to lxc-stop, which is equivalent to turning the power off)
  • an init script to start and stop containers on host boot and shutdown

The result is lxc-watchdog.sh. It can start containers (and remember to start them on the next host reboot), reboot or halt containers from the host (using signals to trigger the container's init) and automatically configure your container when you start it for the first time. Here is a quick overview:

root@prod:/srv# ./lxc-watchdog.sh status
koha-240 RUNNING boot /virtual/koha-240
koha-241 STOPPED boot /virtual/koha-241
koha-242 STOPPED      /virtual/koha-242

root@prod:/srv# ./lxc-watchdog.sh start
# start koha-240
'koha-240' is RUNNING
# start koha-241
2010-03-16T23:44:16 koha-241 start
'koha-241' is RUNNING
# skip start koha-242

root@prod:/srv# ./lxc-watchdog.sh status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 STOPPED      /virtual/koha-242

root@prod:/srv# ls -al /var/lib/lxc/*/on_boot
-rw-r--r-- 1 root root 9 2010-03-16 21:40 /var/lib/lxc/koha-240/on_boot
-rw-r--r-- 1 root root 9 2010-03-16 21:40 /var/lib/lxc/koha-241/on_boot
-rw-r--r-- 1 root root 0 2010-03-16 22:58 /var/lib/lxc/koha-242/on_boot
As you can see, I use the file /var/lib/lxc/name/on_boot to record which machines to bring up. When a container is started for the first time, it will have boot enabled (just in case this is a production application which you will reboot in 6 months and then wonder why it doesn't work). You can change the boot status using:
root@prod:/srv# ./lxc-watchdog.sh boot koha-242
# boot koha-242

root@prod:/srv# ./lxc-watchdog.sh status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 STOPPED boot /virtual/koha-242

root@prod:/srv# ./lxc-watchdog.sh disable koha-242
# disable koha-242
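At host boot, the init script then only has to walk those on_boot files; here is a minimal sketch of that idea (assuming, as the listing above suggests, that a non-empty file marks a container for starting):
# hypothetical sketch of the boot-time loop
for f in /var/lib/lxc/*/on_boot ; do
    name=$(basename $(dirname "$f"))
    # non-empty on_boot -> start this container, logging its console output
    [ -s "$f" ] && lxc-start -n "$name" > /tmp/"$name".log 2>&1 &
done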
Installation as an init script, /etc/init.d/lxc-watchdog, is easy:
root@prod:/srv# ln -s /srv/lxc-watchdog.sh /etc/init.d/lxc-watchdog

root@prod:/srv# update-rc.d lxc-watchdog defaults
update-rc.d: using dependency based boot sequencing
And finally, it can also be used to manually start, halt or reboot containers:
root@prod:/srv# /etc/init.d/lxc-watchdog start koha-242
# start koha-242
2010-03-16T23:47:46 koha-242 start
'koha-242' is RUNNING

root@prod:/srv# /etc/init.d/lxc-watchdog status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 RUNNING      /virtual/koha-242

root@prod:/srv# /etc/init.d/lxc-watchdog restart koha-242
# restart koha-242
2010-03-16T23:48:46 koha-242 kill -SIGINT 24838

root@prod:/srv# /etc/init.d/lxc-watchdog status
koha-240 RUNNING boot /virtual/koha-240
koha-241 RUNNING boot /virtual/koha-241
koha-242 RUNNING      /virtual/koha-242

root@prod:/srv# /etc/init.d/lxc-watchdog stop koha-242
# stop koha-242
2010-03-16T23:49:55 koha-242 stop
2010-03-16T23:49:55 koha-242 kill -SIGPWR 26086
2010-03-16T23:50:11 koha-242 stoped
In fact, you can use halt or reboot instead of stop and restart if you prefer, just to keep one less mapping in your brain when working with it.
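As the log lines above show, stopping and rebooting boil down to signalling the container's init process from the host. A minimal sketch of that idea (assuming the cgroup hierarchy is mounted on /cgroup, so the first entry in the container's tasks file is its init):
pid=$(head -1 /cgroup/koha-242/tasks)   # container's init PID, as seen from the host
kill -INT "$pid"    # like ctrl-alt-del: init reboots the container
kill -PWR "$pid"    # like a power failure: init shuts the container down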

Log files are created for each container in /tmp/name.log. They include lxc-start output with boot messages and any output that startup scripts might create, which is useful for debugging container installations. Output from the watchdog monitoring /var/run/utmp in the container is also included; it reports the number of tasks (processes) in the container. Here is an example of stopping a container:

root@prod:/srv# tail -5 /tmp/koha-242.log
2010-03-16T23:49:56 koha-242 66 tasks
2010-03-16T23:50:04 koha-242 22 tasks
2010-03-16T23:50:11 koha-242 runlevel 2 0
2010-03-16T23:50:11 koha-242 halt
2010-03-16T23:50:12 koha-242 watchdog exited
Hopefully this will make your switch to Linux Containers and recent kernels easier...

For the last few weeks I have been struggling with memory usage on one of the machines which runs several OpenVZ containers. It was eating up all memory in just a few days:

koha-hw-memory-week.png

I have always been fond of graphing system counters, and since reboots are never a good thing, something had to be done. One of the first things that jumps out is that weekends are quiet and don't generate 1Gb of additional memory usage. So it had something to do with our workload while the library was open. But what?

Even worse, it all started only two weeks ago!

koha-hw-memory-month.png

The occasional look at ps axvv wasn't really useful for debugging this problem, and I needed more precise information. So, I opted for the simplest possible solution: recording memory usage using vzps from crontab with the following shell script:

#!/bin/sh
# dump a snapshot of per-container memory usage using vzps

cd /srv/ps-trend
dir=`date +%Y-%m-%d`
test -d $dir || mkdir $dir
COLUMNS=256 vzps -eo veid,pid,ppid,ni,user,rss,sz,vsz,time,stat,f,command > $dir/`date +%H%M`
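The script is triggered from crontab once a minute; assuming it is saved as /srv/ps-trend/ps-trend.sh (the name is mine), the entry in root's crontab could be as simple as:
* * * * * /srv/ps-trend/ps-trend.sh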

After collecting several hours of traces, I decided to group them by container and by all processes which have more than 64Mb of memory usage. Sometimes it's amazing how the solution jumps out by itself if you describe your problem well enough to the computer (and draw a graph :-)
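The grouping itself doesn't need anything fancier than awk over the collected snapshots; a rough sketch (RSS is column 6 of the vzps output and is reported in KiB, so 65536 is the 64Mb threshold):
for f in /srv/ps-trend/*/* ; do
    awk -v snap="$f" 'NR > 1 && $6 > 65536 { mem[$1] += $6 }
        END { for (ve in mem) print snap, ve, int(mem[ve] / 1024) "MB" }' "$f"
done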

ps-hourly.png

After I identified that two Apache instances were eating memory like crazy, I remembered a fellow sysadmin who complained about a threaded Apache installation where some Apache child processes would mysteriously take 256Mb of RAM each. Just some, not all. Of course, I had several of them.

My solution to the problem was also simple:

# sudo apt-get install apache2-mpm-prefork

It seems that the threaded model in current Apache 2 just isn't good for me, which is strange because the application is basically a bunch of CGI scripts.
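If you are not sure which MPM an Apache instance is currently running, apache2ctl will tell you:
apache2ctl -V | grep -i mpm    # reports something like 'Server MPM: Prefork' or 'Worker'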

The result is not bad: 1Gb of additional free memory (which will be used for file-system cache). Not leaking anymore will also save us from hitting swap, which was so bad that the first reboot was needed. If nothing else, remember that tracing system counters and graphing them is always a good investment of time, because pictures can tell a different story than raw data, especially if you have a lot of raw data (20k per minute in this case).

It would be nice to turn this into a full monitoring solution. I really like the idea of minimal client deployment for monitoring, so something like ps and curl pushing data directly to a graphing server, triggered from crontab (with the possibility of parsing e-mail and inserting it later, to recover from network interruptions), might just be a solution I would recommend. Remember, Linux is an operating system. It can do a lot of things by itself :-)
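A minimal push client along those lines could be as simple as this (the collection URL is made up for illustration; any CGI script that stores the POSTed snapshot would do):
#!/bin/sh
# hypothetical push client: snapshot process memory and POST it to a central server
ps axo pid,ppid,rss,vsz,comm | \
    curl -s -F "host=$(hostname)" -F "ps=<-" http://monitor.example.com/cgi-bin/collect.cgi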

If you just want light-weight graphs for your machines, the RRD::Simple Monitoring server has a light-weight perl client (a single perl script which submits to a CGI script on the server) which is triggered from cron. This project was the inspiration to give RRD::Simple a try.

It seems that I wasn't the first one to have the idea of sharing a MySQL installation between OpenVZ containers. However, a simple hardlink didn't work for me:

root@koha-hw:~# ln /vz/root/212052/var/run/mysqld/mysqld.sock \
     /vz/root/212056/var/run/mysqld/
ln: creating hard link `/vz/root/212056/var/run/mysqld/mysqld.sock' to
    `/vz/root/212052/var/run/mysqld/mysqld.sock': Invalid cross-device link
So I tried something a little bit different:
root@koha-hw:~# mount --bind /vz/root/212052/var/run/mysqld/ \
     /vz/root/212056/var/run/mysqld/
Which worked for me on an old Debian etch kernel:
root@koha-hw:~# uname -a
Linux koha-hw 2.6.18-openvz-18-53.5d1-amd64 #1 SMP Sat Mar 22 23:58:13 MSK 2008 x86_64 GNU/Linux
I have the wrong permissions, but the socket is world-writable anyway... There is also a related bug for kernels 2.6.24 and 2.6.26.
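To make this survive container restarts, one option might be OpenVZ's per-VE mount hook, which is executed whenever the VE is mounted (a sketch, assuming VE 212052 is already running when 212056 starts):
#!/bin/bash
# /etc/vz/conf/212056.mount -- bind 212052's mysqld socket directory into 212056
. /etc/vz/vz.conf
. ${VE_CONFFILE}
mount --bind /vz/root/212052/var/run/mysqld ${VE_ROOT}/var/run/mysqld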

VRač - virtualno računalo

Really funny name for a post, isn't it? It should mean something to people who understand Croatian: it's a virtual computer. This fun toy began its existence as an Orao emulator. In the process, I wrote 6502 and Z80 emulators (actually only perl XS bindings for existing CPU emulators) and implemented Orao, Galaksija and Galeb using them.

The various machines are in different stages of usability. Orao emulation is working (without tape support), Galaksija is too slow to be useful (and also doesn't have tape) and Galeb doesn't have keyboard or tape support.

I would be very grateful for hardware information about those machines if you can spare some time to write it up. I somewhat hope that the source code of these emulators might serve as a historic reference for how these machines were constructed.

Lately, I have also found the mess project, but soon after that I noticed that I would need about six times as much time to implement changes in it. C is just not my language, so really the only hope for preservation of those computers is good documentation (which is something the mess community is also striving for).

On the other hand, the CPU emulation library currently isn't license compatible with the GPL, so I probably won't push this to CPAN any time soon, but I would love to find another implementation (PowerPC anyone?) and basically create a thin layer of perl code which can express different architectures.

Having written all that, now it's time to show you how to compile the damn thing!

svn co svn://svn.rot13.org/VRac/
cd VRac
perl Makefile.PL
make
make orao

And you will have a working Orao emulator in a second. You can also try make galaksija or make galeb.


This post was originally written on 2007-09-05 when I wanted to announce this project after playing with it over the summer. Since I never finished emulation of Galaksija or Galeb, I left it in draft mode for too long. Now that the first Croatian Perl Workshop is coming up I will talk about this project, so it's only fair to announce it, so you can have a preview of what my lecture will be about.

vz-tools

For quite some time, I have been a happy user of OpenVZ virtualization. And for almost the same amount of time, I have been writing vz-tools, a set of a few utilities to make working with OpenVZ a joy:

  • vz-optimize.pl was written first; it tries to intelligently increase beancounter limits for a VE until the VE has the resources it requires.
    The best usage is to run some kind of stress-test on your VE, and then re-run vz-optimize.pl afterwards to allocate enough resources for the VE.
  • vz-create.pl is the second useful tool; it will create a new VE from an upstream Debian mirror and tweak it a bit (the way I like it)
  • vz-clone.pl is the latest addition to the set of tools; it allows creating a temporary clone of a virtual machine for testing (of upgrades?) or QA

While those tools work well for me, they don't have much error checking, and most of the parameters in them are actually hard-coded. Since I am giving a presentation about virtualization on Linux at HrOUG this year, I will probably improve these tools a bit more.

However, I'm not quite sure if these tools would be of general use to somebody else. If they are, please drop me a note, and I would love to improve them a bit more...