This week I learned a valuable lesson: if your RAID controller is using a MIPS CPU from 2008, you can't really expect it to be faster than a more modern Intel CPU when doing RAID 10 across your disks.

It all started with the failure of the SSD in our bcache setup, which sits on top of a MegaRAID RAID10 array. Since this required me to take one of the Ganeti nodes down, it was also a good opportunity to add one more disk (we were running 6 disks and one SSD) and switch to software md RAID10 so we could use all 7 disks in RAID10. In the process, I did some benchmarking and was shocked by the results.

First, let's look at the original MegaRAID configuration:

# lsblk --scsi -m
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN NAME   SIZE OWNER GROUP MODE
sda  0:0:6:0    disk ATA      WDC WD1002FBYS-1 0C12      sda  931.5G root  disk  brw-rw----
sdb  0:0:7:0    disk ATA      INTEL SSDSC2BW24 DC32      sdb  223.6G root  disk  brw-rw----
sdc  0:2:0:0    disk DELL     PERC H310        2.12      sdc    2.7T root  disk  brw-rw----
# hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   13920 MB in  1.99 seconds = 6981.76 MB/sec
 Timing buffered disk reads: 1356 MB in  3.00 seconds = 451.81 MB/sec
Now let's compare that with the disks exposed as JBOD on the same controller and assembled into an md array:
# hdparm -Tt /dev/md0
/dev/md0:
 Timing cached reads:   13826 MB in  1.99 seconds = 6935.19 MB/sec
 Timing buffered disk reads: 1888 MB in  3.01 seconds = 628.05 MB/sec
So, the TL;DR is that you are throwing away disk performance if you are using hardware RAID. You didn't expect that? Neither did I. In retrospect it's logical that a newish Intel CPU can process data much faster than the slowish MIPS on the RAID controller, but on the other hand the only difference is the RAID overhead, because the same controller is still handling the disks in the software RAID setup.
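The conversion itself boils down to building an md RAID10 across the JBOD disks. A minimal sketch follows; the device names, array name and JBOD step here are assumptions for illustration, the linked write-up below has the exact commands that were actually used:

```shell
# Sketch only: /dev/sd[b-h] and /dev/md0 are assumed names.
# First expose the disks as JBOD (pass-through) on the controller,
# then build the array; md RAID10 happily takes an odd disk count.
mdadm --create /dev/md0 --level=10 --raid-devices=7 /dev/sd[b-h]

# persist the array definition and re-run the benchmark
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
hdparm -tT /dev/md0
```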

I also wrote a document with a lot of console output and the commands to type if you want to do the same: https://github.com/ffzg/gnt-info/blob/master/doc/megaraid-to-md.txt

Today I was explaining the xclip utility and dug up this useful snippet, which I wrote back in 2011, that allows you to edit a browser textarea in a terminal using vi, with syntax highlighting.

#!/bin/sh

# workflow:
# 1. focus textarea in browser you want to edit
# 2. press ctrl+a then ctrl+c
# 3. switch to terminal and start this script with an optional extension for highlighting: xclip-vi html
# 4. edit file in vi, and save it
# 5. switch back to browser, and press ctrl+v in already selected textarea

ext=$1
test -z "$ext" && ext=txt

xclip -out > "/tmp/$$.$ext" && vi "/tmp/$$.$ext" && xclip -in -selection clipboard < "/tmp/$$.$ext"

xclip-vi.sh github gist

These days, my work seems more like archeology than implementing the newest and coolest web technologies, but someone has to keep an eye on old web servers which are still useful to people.

In this case, it was an upgrade across two major Debian releases, which of course resulted in some breakage, as you would expect. Most of it was easy to fix, but we had one site which had only pyc files, and python2.6 with the correct modules wasn't supported on the current distribution.

The first idea was to use a Python de-compiler to regenerate the source files, but this only got me to different Python errors which I didn't know how to fix, so this was a dead end.

I did have a backup of the machine from before the upgrade on a zfs pool, so the next logical idea was to run the minimal set of old binaries needed to keep the site up. I decided to do it in a chroot, so I could easily share the mysql socket from the current installation with the pre-upgrade squeeze. Let's see what was involved in making this happen.

1. make writable copy of filesystem

Our backups are on a zfs pool, so we cloned all three disks into the same layout in which they are mounted on the filesystem:

2018-06-08.10:07:53 zfs clone lib15/backup/pauk.ffzg.hr/0@2018-05-01 lib15/export/pauk.ffzg.hr
2018-06-08.10:08:19 zfs clone lib15/backup/pauk.ffzg.hr/1@2018-05-01 lib15/export/pauk.ffzg.hr/home
2018-06-08.10:08:38 zfs clone lib15/backup/pauk.ffzg.hr/2@2018-05-01 lib15/export/pauk.ffzg.hr/srv

2. nfs export it to upgraded server

Take note of the crossmnt parameter; it allows us to mount all descendant filesystems automatically.

root@lib15:~# grep pauk /etc/exports 
/lib15/export/pauk.ffzg.hr 10.21.0.2(rw,fsid=2,no_subtree_check,no_root_squash,crossmnt)
root@lib15:~# exportfs -va
exporting 10.21.0.2:/lib15/export/pauk.ffzg.hr

3. mount nfs export

root@pauk:~# grep squeeze /etc/fstab
10.21.0.215:/lib15/export/pauk.ffzg.hr  /mnt/squeeze    nfs     defaults,rw     0 0

4. start chroot with old version of service

To make this work, I first edited the apache configuration in the chroot to start it on a different port, and configured the front-end server to redirect the site to that port.

root@pauk:/home/dpavlin# cat chroot-squeeze.sh 

#!/bin/sh

to=/mnt/squeeze

mount --bind /dev $to/dev
mount --bind /sys $to/sys
mount --bind /proc $to/proc

mount --bind /var/run/mysqld/ $to/var/run/mysqld/

/usr/sbin/chroot $to /etc/init.d/apache2 start
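When the resurrected site is finally retired, the bind mounts need to be undone in reverse order. A hypothetical companion script (not from the original setup, but the obvious mirror image of the one above):

```shell
#!/bin/sh
# Hypothetical teardown: stop apache inside the chroot
# and unmount the binds in reverse order.

to=/mnt/squeeze

/usr/sbin/chroot $to /etc/init.d/apache2 stop

umount $to/var/run/mysqld
umount $to/proc
umount $to/sys
umount $to/dev
```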

5. redirect apache traffic to server in chroot

You will have to insert something like this into the current apache configuration for the vhost:

<Proxy http://127.0.0.1:18080/*>
    Order allow,deny
    Allow from all
</Proxy>
<Location /german2/>
    RequestHeader set Host "www.ffzg.unizg.hr"
    ProxyPreserveHost On
    ProxyPass        http://127.0.0.1:18080/german/
    ProxyPassReverse http://127.0.0.1:18080/german/
</Location>

You might scream at me that in the days of containers, systemd and other better solutions chroot is obsolete. In my opinion, it's just the right tool for the job from time to time, provided you have a backup :-)

This year at DORS/CLUC 2018 I decided to talk about the device tree in the Linux kernel, so here are the blurb, presentation and video of that lecture.

You have one of those fruity *Pi arm boards and a cheap sensor from China? Some buttons and LEDs? Do I really need to learn a whole new scripting language and a few web technologies just to read a temperature, blink an LED or toggle a relay?
No, because your Linux kernel already has drivers for them, and all you need is the device tree and cat.
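To make the "device tree and cat" claim concrete, here is a hypothetical example; the exact sysfs paths depend on your board, overlay and device tree nodes, so treat these as common locations rather than anything from the talk itself:

```shell
# Hypothetical paths -- adjust to your board and overlay.

# 1-Wire temperature sensor (e.g. DS18B20) once its overlay is enabled:
cat /sys/bus/w1/devices/28-*/w1_slave

# LED described by a gpio-leds node in the device tree:
echo 1 > /sys/class/leds/myled/brightness   # turn it on
echo 0 > /sys/class/leds/myled/brightness   # turn it off
```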

For the last few years, one of the first tools we install on each new server has been etckeeper. It has saved us a couple of times, and it provides nice documentation about changes on the system.

However, git can take a lot of space if you have huge files which change frequently (at least daily, since etckeeper has a daily cron job to commit that day's changes). In our case, bind stores its jnl files in /etc/bind, which results in about 500 KB of changes each day for the 11 zones we have defined.

You might say that doesn't seem so bad, but in four months we managed to grow the repository from 300 MB to 11 GB. Yes, this is not a mistake, it's 11000 MB, an increase of 36 times! The solution is to run git gc, which will in turn repack and compress the files. But this is where the problems start -- git needs a lot of RAM to do the gc, and since this machine has only 1 GB of RAM, that is not enough to run git gc without running out of memory.
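Before rewriting history, it helps to confirm which files are actually responsible for the bloat. A sketch using standard git plumbing (the paths and sizes will obviously differ in your repo):

```shell
# Overall object statistics for the repository:
git count-objects -vH

# The 20 largest objects in history, with the paths they belong to:
git rev-list --objects --all \
    | git cat-file --batch-check='%(objectsize) %(rest)' \
    | sort -n | tail -20
```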

The last few times, I transferred the git repository to another machine, ran git gc there and then transferred it back (resulting in a nice decrease from 11 GB back to 300 MB); however, this is not an ideal solution. So, let's remove the bind jnl files from etckeeper...

Let's start with our 11 GB git repo and copy it to another machine which has the 64 GB of RAM needed for this operation.

root@dns01:/etc# du -ks .git
11304708        .git

root@dns01:~# rsync -ravP /etc/.git build.ffzg.hr:/srv/dns01/etc/
Now we will re-create the local files, because we need to find out which jnl files are in use so we can remove them from the repo.
root@build:/srv/dns01/etc# git reset --hard

ls bind/*.jnl | xargs -i git filter-branch -f --index-filter 'git rm --cached --ignore-unmatch {}'

echo 'bind/*.jnl' >> .gitignore
git commit -m 'ignore bind jnl files' .gitignore
Now we can finally shrink our 11 GB repo!
root@build:/srv/dns01/etc# du -kcs .git
11427196        .git

root@build:/srv/dns01/etc# git gc
Counting objects: 38117, done.
Delta compression using up to 18 threads.
Compressing objects: 100% (27385/27385), done.
Writing objects: 100% (38117/38117), done.
Total 38117 (delta 27643), reused 12846 (delta 10285)
Removing duplicate objects: 100% (256/256), done.

root@build:/srv/dns01/etc# du -ks .git
414224  .git

# and now we can copy it back...

root@dns01:/etc# rsync -ravP build.ffzg.hr:/srv/dns01/etc/.git .
Just as a side note: if you want to run git gc --aggressive on the same repo, it won't finish with 60 GB of RAM and 100 GB of swap, which means it needs more than 150 GB of memory.

So, if you are storing modestly sized files which change a lot, keep in mind that you might need more RAM to run git gc (and get disk usage under control) than you actually have.
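If moving the repo to a bigger machine is not an option, git's standard pack.* config knobs can cap the memory appetite of gc at the cost of speed and compression. A sketch, run inside the repository; the values are guesses, not something taken from this incident:

```shell
# Cap git's memory use before running gc on a small box.
# These are standard git config keys; values are illustrative.
git config pack.threads 1              # one delta window instead of one per core
git config pack.windowMemory 100m      # cap memory used for delta search
git config pack.deltaCacheSize 100m    # cap the delta result cache
git gc
```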

Last week I had the pleasure of presenting at two conferences in two different cities: DORS/CLUC 2016 and Osijek Mini Maker Faire, on the topic of cheap hardware from China which can be improved with a little bit of software or hardware hacking. It was well received, and I hope that you will find a tool or two in it which fills a need of yours.

I hope to see more hacks of STM8-based devices, since we have the sdcc compiler with stm8 support, a cheap SWIM programmer in the form of the ST-Link v2 (Chinese clones, which are also useful as ARM SWD programmers), and the STM8 has features comparable to 8-bit AVR micro-controllers while being cheaper.

Every few years we have to renew our SSL certificates, and there is always something which can go wrong. So I decided to reproduce the exact steps here so that Google can find them for the next unfortunate soul who has the same problem.

Let's examine the old LDAP configuration:

deenes:/etc/ldap/slapd.d# grep ssl cn\=config.ldif 
olcTLSCACertificateFile: /etc/ssl/certs/chain-101-mudrac.ffzg.hr.pem
olcTLSCertificateFile: /etc/ssl/certs/cert-chain-101-mudrac.ffzg.hr.pem
olcTLSCertificateKeyFile: /etc/ssl/private/mudrac.ffzg.hr.gnutls.key
We need to convert the OpenSSL key into a format which GnuTLS understands:
deenes:/etc/ssl/private# certtool -k < star_ffzg_hr.key > /tmp/star_ffzg_hr.gnutls.key
Then we need to create a file which includes our certificate and the required chain:
deenes:/etc/ldap/slapd.d# cat /etc/ssl/certs/star_ffzg_hr.crt /etc/ssl/certs/DigiCertCA.crt > /etc/ssl/certs/chain-star_ffzg_hr.crt
All is not over yet. OpenLDAP doesn't run under root privileges, so we have to make sure that its user is in the ssl-cert group and that our certificates have the correct permissions:
deenes:/etc/ldap/slapd.d# id openldap
uid=109(openldap) gid=112(openldap) groups=112(openldap),104(ssl-cert)

deenes:/etc/ldap/slapd.d# chgrp ssl-cert \
/etc/ssl/certs/DigiCertCA.crt \
/etc/ssl/certs/star_ffzg_hr.crt \
/etc/ssl/certs/chain-star_ffzg_hr.crt \
/etc/ssl/private/star_ffzg_hr.gnutls.key

deenes:/etc/ldap/slapd.d# chmod 440 \
/etc/ssl/certs/DigiCertCA.crt \
/etc/ssl/certs/star_ffzg_hr.crt \
/etc/ssl/certs/chain-star_ffzg_hr.crt \
/etc/ssl/private/star_ffzg_hr.gnutls.key

deenes:/etc/ldap/slapd.d# ls -al \
/etc/ssl/certs/DigiCertCA.crt \
/etc/ssl/certs/star_ffzg_hr.crt \
/etc/ssl/certs/chain-star_ffzg_hr.crt \
/etc/ssl/private/star_ffzg_hr.gnutls.key
-r--r----- 1 root ssl-cert 3764 Jan 19 09:45 /etc/ssl/certs/chain-star_ffzg_hr.crt
-r--r----- 1 root ssl-cert 1818 Jan 17 16:13 /etc/ssl/certs/DigiCertCA.crt
-r--r----- 1 root ssl-cert 1946 Jan 17 16:13 /etc/ssl/certs/star_ffzg_hr.crt
-r--r----- 1 root ssl-cert 5558 Jan 19 09:23 /etc/ssl/private/star_ffzg_hr.gnutls.key
Finally, we can modify the LDAP configuration to use the new files:
deenes:/etc/ldap/slapd.d# grep ssl cn\=config.ldif 
olcTLSCACertificateFile: /etc/ssl/certs/DigiCertCA.crt
olcTLSCertificateFile: /etc/ssl/certs/chain-star_ffzg_hr.crt
olcTLSCertificateKeyFile: /etc/ssl/private/star_ffzg_hr.gnutls.key
We are done; restart slapd and enjoy your new certificates!
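To check that slapd really serves the new chain, you can peek at the TLS handshake from the outside. A sketch; the hostname here is an example, substitute your own server:

```shell
# Show the certificate chain the server presents on the ldaps port:
openssl s_client -connect ldap.example.com:636 -showcerts </dev/null

# Or exercise STARTTLS on port 389 (-ZZ makes TLS mandatory):
ldapsearch -H ldap://ldap.example.com -ZZ -x -b '' -s base
```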

When I started playing with Raspberry Pi, I was a novice in electronics (and I should probably note that I'm still one :-).

But since then I have learned a few things, and along the way I also figured out that the Raspberry Pi is a great little device which can be used as a 3.3V programmer for AVR, JTAG, SWD or CC111x devices (and probably more).
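For the AVR case, avrdude's linuxgpio programmer type bit-bangs ISP directly over the Pi's GPIO header. A hypothetical sketch; the programmer id, GPIO numbers and hex file name are assumptions that have to match your own wiring and avrdude.conf:

```shell
# Hypothetical avrdude.conf stanza (GPIO numbers must match your wiring):
#
#   programmer
#     id    = "pi_isp";
#     desc  = "Raspberry Pi GPIO bitbang";
#     type  = "linuxgpio";
#     reset = 25;
#     sck   = 11;
#     mosi  = 10;
#     miso  = 9;
#   ;

# Flash an Intel-hex image to an ATmega328P through the Pi header:
avrdude -p atmega328p -c pi_isp -U flash:w:blink.hex:i
```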

I collected all my experiences in the presentation embedded below, which I had the pleasure of presenting at the FSec conference this year. I hope you will find it useful.