These days, my work seems more like archeology than implementing the newest and coolest web technologies, but someone has to keep an eye on old web servers which are still useful to people.

In this case, it was an upgrade across two major Debian releases, which of course resulted in some amount of breakage, as you would expect. Most of the breakage was easy to fix, but we had one site which had only pyc files, and python2.6 with the correct modules wasn't supported on the current distribution.

My first idea was to use a Python decompiler to re-generate the source files, but this only got me to different Python errors which I didn't know how to fix. So, this was a dead end.

I did have a backup of the machine from before the upgrade on a ZFS pool, so the next logical idea was to run the minimal amount of old binaries needed to keep the site up. I decided to do it in a chroot so I could easily share the mysql socket from the current installation into the pre-upgrade squeeze. Let's see what was involved in making this happen.

1. make a writable copy of the filesystem

Our backups are on a ZFS pool, so we cloned all three disks into the same layout in which they are mounted on the filesystem:

2018-06-08.10:07:53 zfs clone lib15/backup/pauk.ffzg.hr/0@2018-05-01 lib15/export/pauk.ffzg.hr
2018-06-08.10:08:19 zfs clone lib15/backup/pauk.ffzg.hr/1@2018-05-01 lib15/export/pauk.ffzg.hr/home
2018-06-08.10:08:38 zfs clone lib15/backup/pauk.ffzg.hr/2@2018-05-01 lib15/export/pauk.ffzg.hr/srv
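
A quick listing confirms that the clones are in place (the -r flag shows the filesystem and everything below it):

root@lib15:~# zfs list -r lib15/export/pauk.ffzg.hr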

2. nfs export it to the upgraded server

Take note of the crossmnt parameter; it will allow us to mount all descendant filesystems automatically.

root@lib15:~# grep pauk /etc/exports 
/lib15/export/pauk.ffzg.hr 10.21.0.2(rw,fsid=2,no_subtree_check,no_root_squash,crossmnt)
root@lib15:~# exportfs -va
exporting 10.21.0.2:/lib15/export/pauk.ffzg.hr

3. mount the nfs export

root@pauk:~# grep squeeze /etc/fstab
10.21.0.215:/lib15/export/pauk.ffzg.hr  /mnt/squeeze    nfs     defaults,rw     0 0
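
With the fstab entry in place, activating it is just:

root@pauk:~# mount /mnt/squeeze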

4. start a chroot with the old version of the service

To make this work, I first edited the apache configuration in the chroot to start it on a different port, and configured the front-end server to redirect the site to the new port.

root@pauk:/home/dpavlin# cat chroot-squeeze.sh 

to=/mnt/squeeze

mount --bind /dev $to/dev
mount --bind /sys $to/sys
mount --bind /proc $to/proc

mount --bind /var/run/mysqld/ /mnt/squeeze/var/run/mysqld/

/usr/sbin/chroot $to /etc/init.d/apache2 start
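
For completeness, a matching teardown might look like this sketch (assuming nothing inside the chroot still holds the mounts):

to=/mnt/squeeze

/usr/sbin/chroot $to /etc/init.d/apache2 stop

umount $to/var/run/mysqld
umount $to/proc
umount $to/sys
umount $to/dev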

5. redirect apache traffic to the server in the chroot

You will have to insert something like this into the current apache configuration for the vhost:

<Proxy http://127.0.0.1:18080/*>
    Order allow,deny
    Allow from all
</Proxy>
<Location /german2/>
    RequestHeader set Host "www.ffzg.unizg.hr"
    ProxyPreserveHost On
    ProxyPass        http://127.0.0.1:18080/german/
    ProxyPassReverse http://127.0.0.1:18080/german/
</Location>

You might scream at me that in the days of containers, systemd and other better solutions, chroot is obsolete. In my opinion, it's just the right tool for the job from time to time, if you have a backup :-)

This year at DORS/CLUC 2018 I decided to talk about the device tree in the Linux kernel, so here are the blurb, presentation and video of that lecture.

You have one of those fruity *Pi arm boards and a cheap sensor from China? Some buttons and LEDs? Do I really need to learn a whole new scripting language and a few web technologies to read my temperature, blink a LED or toggle a relay?
No, because your Linux kernel already has drivers for them, and all you need is the device tree and cat.
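
The spirit of the talk fits in two commands (a minimal sketch for a Raspberry Pi with an i2c lm75-type temperature sensor; the overlay name and sysfs path depend on your board and sensor):

# bind the kernel driver to the sensor through a device tree overlay
echo 'dtoverlay=i2c-sensor,lm75' >> /boot/config.txt

# after a reboot, the driver exposes readings in sysfs, in millidegrees C
cat /sys/class/hwmon/hwmon0/temp1_input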

For the last few years, one of the first tools which we install on each new server is etckeeper. It has saved us a couple of times, and provides nice documentation about changes on the system.

However, git can take a lot of space if you have huge files which change frequently (at least daily, since etckeeper has a daily cron job to commit changes done that day). In our case, we have bind which stores jnl files in /etc/bind, which results in about 500 Kb of changes each day for the 11 zones we have defined.

You might say that this doesn't seem so bad, but in four months we managed to increase the size of the repository from 300 Mb to 11 Gb. Yes, this is not a mistake, it's 11000 Mb, which is an increase of 36 times! The solution for this is to use git gc, which will in turn call git-pack to compress files. But this is where the problems start -- git needs a lot of RAM to do gc. Since this machine has only 1 Gb of RAM, that's not enough to run git gc without running out of memory.
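
As a side note, git's memory appetite during repack can in theory be capped with a few pack settings (a sketch; the values are guesses for a 1 Gb machine, and repacking becomes much slower):

git config pack.threads 1
git config pack.windowMemory 64m
git config pack.deltaCacheSize 64m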

The last few times, I transferred the git repository to another machine, ran git gc there and then transferred it back (resulting in a nice decrease from 11 Gb back to 300 Mb); however, this is not an ideal solution. So, let's remove the bind jnl files from etckeeper...

Let's start with our 11 Gb git repo and copy it to another machine which has the 64 Gb of RAM needed for this operation.

root@dns01:/etc# du -ks .git
11304708        .git

root@dns01:~# rsync -ravP /etc/.git build.ffzg.hr:/srv/dns01/etc/
Now we will re-create the local files, because we need to find out which jnl files are used so we can remove them from the repo.
root@build:/srv/dns01/etc# git reset --hard

ls bind/*.jnl | xargs -i git filter-branch -f --index-filter 'git rm --cached --ignore-unmatch {}'

echo 'bind/*.jnl' >> .gitignore
git commit -m 'ignore bind jnl files' .gitignore
Now we can finally shrink our 11 Gb repo!
root@build:/srv/dns01/etc# du -kcs .git
11427196        .git

root@build:/srv/dns01/etc# git gc
Counting objects: 38117, done.
Delta compression using up to 18 threads.
Compressing objects: 100% (27385/27385), done.
Writing objects: 100% (38117/38117), done.
Total 38117 (delta 27643), reused 12846 (delta 10285)
Removing duplicate objects: 100% (256/256), done.

root@build:/srv/dns01/etc# du -ks .git
414224  .git

# and now we can copy it back...

root@dns01:/etc# rsync -ravP build.ffzg.hr:/srv/dns01/etc/.git .
Just as a side note, if you want to run git gc --aggressive on the same repo, it won't finish with 60 Gb of RAM and 100 Gb of swap, which means that it needs more than 150 Gb of RAM.

So, if you are storing modestly sized files which change a lot, keep in mind that you might need more RAM to run git gc (and get disk usage under control) than you have.

Last week I had the pleasure to present at two conferences in two different cities: DORS/CLUC 2016 and Osijek Mini Maker Faire, on the topic of cheap hardware from China which can be improved with a little bit of software or hardware hacking. It was well received, and I hope that you will find a tool or two in it which fills your needs.

I hope to see more hacks of STM8-based devices, since we have the sdcc compiler with support for stm8, a cheap SWIM programmer in the form of ST-Link v2 (Chinese clones, which are also useful as ARM SWD programmers), and the STM8 has features comparable to 8-bit AVR micro-controllers but is cheaper.
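
Getting started is short enough to show here (a sketch; the file and part names are examples, not from any particular project):

# compile for stm8 with sdcc, producing Intel hex output
sdcc -mstm8 --out-fmt-ihx blink.c

# flash over SWIM using an ST-Link v2 clone and stm8flash
stm8flash -c stlinkv2 -p stm8s103f3 -w blink.ihx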

Every few years we have to renew SSL certificates. And there is always something which can go wrong. So I decided to reproduce the exact steps here, so that Google can find them for the next unfortunate soul who has the same problem.

Let's examine the old LDAP configuration:

deenes:/etc/ldap/slapd.d# grep ssl cn\=config.ldif 
olcTLSCACertificateFile: /etc/ssl/certs/chain-101-mudrac.ffzg.hr.pem
olcTLSCertificateFile: /etc/ssl/certs/cert-chain-101-mudrac.ffzg.hr.pem
olcTLSCertificateKeyFile: /etc/ssl/private/mudrac.ffzg.hr.gnutls.key
We need to convert the OpenSSL key into a format which GnuTLS understands:
deenes:/etc/ssl/private# certtool -k < star_ffzg_hr.key > /tmp/star_ffzg_hr.gnutls.key
Then we need to create a file which includes our certificate and the required chain:
deenes:/etc/ldap/slapd.d# cat /etc/ssl/certs/star_ffzg_hr.crt /etc/ssl/certs/DigiCertCA.crt > /etc/ssl/certs/chain-star_ffzg_hr.crt
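
Before pointing slapd at the new files, it's worth a quick sanity check that the chain actually verifies (this step is mine, not part of the original procedure):

deenes:/etc/ssl/certs# openssl verify -CAfile DigiCertCA.crt star_ffzg_hr.crt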
All is not over yet. OpenLDAP doesn't run under root privileges, so we have to make sure that its user is in the ssl-cert group and that our certificates have the correct permissions:
deenes:/etc/ldap/slapd.d# id openldap
uid=109(openldap) gid=112(openldap) groups=112(openldap),104(ssl-cert)

deenes:/etc/ldap/slapd.d# chgrp ssl-cert \
/etc/ssl/certs/DigiCertCA.crt \
/etc/ssl/certs/star_ffzg_hr.crt \
/etc/ssl/certs/chain-star_ffzg_hr.crt \
/etc/ssl/private/star_ffzg_hr.gnutls.key

deenes:/etc/ldap/slapd.d# chmod 440 \
/etc/ssl/certs/DigiCertCA.crt \
/etc/ssl/certs/star_ffzg_hr.crt \
/etc/ssl/certs/chain-star_ffzg_hr.crt \
/etc/ssl/private/star_ffzg_hr.gnutls.key

deenes:/etc/ldap/slapd.d# ls -al \
/etc/ssl/certs/DigiCertCA.crt \
/etc/ssl/certs/star_ffzg_hr.crt \
/etc/ssl/certs/chain-star_ffzg_hr.crt \
/etc/ssl/private/star_ffzg_hr.gnutls.key
-r--r----- 1 root ssl-cert 3764 Jan 19 09:45 /etc/ssl/certs/chain-star_ffzg_hr.crt
-r--r----- 1 root ssl-cert 1818 Jan 17 16:13 /etc/ssl/certs/DigiCertCA.crt
-r--r----- 1 root ssl-cert 1946 Jan 17 16:13 /etc/ssl/certs/star_ffzg_hr.crt
-r--r----- 1 root ssl-cert 5558 Jan 19 09:23 /etc/ssl/private/star_ffzg_hr.gnutls.key
Finally, we can modify the LDAP configuration to use the new files:
deenes:/etc/ldap/slapd.d# grep ssl cn\=config.ldif 
olcTLSCACertificateFile: /etc/ssl/certs/DigiCertCA.crt
olcTLSCertificateFile: /etc/ssl/certs/chain-star_ffzg_hr.crt
olcTLSCertificateKeyFile: /etc/ssl/private/star_ffzg_hr.gnutls.key
We are done; restart slapd and enjoy your new certificates!
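
Before walking away, it's also worth verifying the TLS handshake from a client (a quick check, assuming slapd listens on the usual ports; -ZZ forces StartTLS and fails loudly if anything is wrong):

gnutls-cli --port 636 mudrac.ffzg.hr < /dev/null
ldapsearch -H ldap://mudrac.ffzg.hr -ZZ -x -b '' -s base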

When I started playing with Raspberry Pi, I was a novice in electronics (and I should probably note that I'm still one :-).

But since then I have learned a few things, and along that journey I also figured out that the Raspberry Pi is a great little device which can be used as a 3.3V programmer for AVR, JTAG, SWD or CC111x devices (and probably more).
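
For example, avrdude can bit-bang the AVR ISP protocol directly over the Pi's GPIO header (a sketch, assuming an avrdude build with linuxgpio support; the pin numbers are illustrative and have to match your wiring):

# fragment for /etc/avrdude.conf -- pins are whatever you wired
programmer
  id    = "linuxgpio";
  type  = "linuxgpio";
  reset = 25;
  sck   = 11;
  mosi  = 10;
  miso  = 9;
;

# after that, flashing is the usual avrdude invocation
avrdude -c linuxgpio -p atmega328p -U flash:w:blink.hex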

I collected all my experiences in the presentation embedded below, which I had the pleasure to present at the FSec conference this year. I hope you will find it useful.

We have been running a Ganeti cluster at our institution for more than a year now. We did two cycles of machine upgrades during that time, and so far we have been very pleased with the capabilities of this cloud platform. However, last week we had a problem with our instances -- two of them got owned and started generating a DoS attack against external resources. From our side, it seemed at first like our upstream link was saturated, and we needed a way to figure out why.

[screenshot: gnt-info.png]

At first, it seemed like this would be easy to do. Using dstat, I found that we were generating over 3 Gb/s of traffic to the outside world every few seconds. We have a 1 Gb/s upstream link, but the bonded interfaces on our ganeti nodes can handle 3 Gb/s of traffic, so for a start we were saturating our own link.

But which instance was doing that? I had to run dstat on every node in our cluster until I found the two nodes with instances which were overloading our link. Using iftop I was able to get the hostname and IP address of the instances which I wanted to shut down. However, this is where the problems started. We didn't have DNS entries for them, and although I had the IP and MAC address of the instances, I didn't have an easy way to figure out which instance had that MAC.

Then I figured out that I can get the MAC from kvm itself, using ps. Once I found the instances, it was easy to stop them and examine what had happened.
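
The kvm command line contains each NIC's MAC, so something like this one-liner does the job (a sketch; the parameter is spelled macaddr= on older kvm versions and mac= on newer ones):

# list kvm processes together with the MAC addresses of their NICs
ps axww | grep '[k]vm' | grep -o 'mac[a-z]*=[0-9a-fA-F:]*'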

But this got me thinking. Every time I have a troubleshooting problem with ganeti, I basically use more or less the same command-line tools to figure out what is going on. But I didn't have a tool which would display some basic stats about instances, including MAC addresses and network traffic (which in our configuration flows over tap devices added to bridges). So I wrote gnt-info, which presents a nice overview of the instances in your ganeti cluster, and which you can grep to drill down into a particular instance or host.

We all read hackaday, and when I read the Five Dollar RF Controlled Light Sockets post I decided that I had to buy some. However, if you read the original Cheap Arduino Controlled Light Sockets - Reverse Engineering RF post, and especially its comments, you will soon figure out that ordering the same-looking product from China might bring you something similar but with different internals.

In my case, all four light sockets turned on or off with any button press on the remote, which was a shame. When I opened the remote and a socket, I had another bad surprise. My version didn't have any SPI eeprom, just two chips: ST F081 FB 445 in the remote and ST ED08 AFB422 in the light bulb (in the picture, hidden below the receiver board).

[photos: remote.jpg, socket-top.jpg, socket-bottom.jpg]

But I had already acquired two sets, so I wanted to see what I could do with them. Since I couldn't read an eeprom to figure out the code, I decided to use rtl-sdr to sniff the radio signals and try to command the sockets using a cheap 315 MHz Arduino transmitter module.

I used gqrx to sniff the radio signals, and I was not pleased. The remote drifted all over the place, mostly around 316 MHz, and it took some trial and error to capture the signals generated when buttons are pressed. However, I verified that it sends the same signal multiple times no matter which key I press (which would explain why four pins on the remote are soldered together).
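
If you prefer something reproducible over clicking around in gqrx, a raw capture for later inspection can also be grabbed from the command line (a sketch; the frequency matches the drift observed above, and the sample rate is just a common choice):

rtl_sdr -f 316000000 -s 2048000 capture.cu8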

After a while I had two traces (since I have two sets of light sockets) and could decode the binary data which is sent, as shown in the following picture:

[screenshot: signals.png -- captured signal traces]

Now I knew that one set was transmitting 1000100110110000000000010 and the other one 1011001001011111000000010. From looking at the timing in audacity, it seemed that each bit is encoded as a short-long or long-short sequence, where the short pulse is about a third of the long one, and one bit takes about 1200 µs. I cheated here a little and stuck a scope onto the transmit trace on the remote to verify the pulse lengths, just to be sure.

So as the next step I wrote a simple Arduino sketch to try it out:

#define TX_PIN 7   // data pin of the 315 MHz transmitter module
#define LED_PIN 13 // on-board LED, blinks while transmitting

// 25-bit code captured from the remote; a '0' is sent as short-long
// and a '1' as long-short (short ~300 us, long ~900 us)
char *code = "1000100110110000000000010";
//char *code = "1011001001011111000000010";

void setup() {
  pinMode(LED_PIN, OUTPUT);
  pinMode(TX_PIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_PIN, HIGH);

  for (int i = 0; i < strlen(code); i++) {
    // default to '0' timing: 300 us high, 900 us low
    int i1 = 300;
    int i2 = 900;
    if (code[i] == '1') {
      // '1' is the inverse: 900 us high, 300 us low
      i1 = 900;
      i2 = 300;
    }
    digitalWrite(TX_PIN, HIGH);
    delayMicroseconds(i1);
    digitalWrite(TX_PIN, LOW);
    delayMicroseconds(i2);
  }

  digitalWrite(LED_PIN, LOW);
  delay(3000); // pause before repeating the whole code
}
So, I compiled it, uploaded it to the Arduino and... nothing happened. Back to the drawing board, I guess.

When I was looking at gqrx, I could see that the signal is sent for as long as I hold a button, up to 10 seconds. From previous experience I know that these cheap receivers need some time to tune into the frequency, so the next logical step was to send the same signal multiple times. And guess what: when I sent the same signal twice with a 2000 ms delay between transmissions, everything started to work.
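
Something like this minimal change did the trick (a sketch; sendCode() is just the transmit loop from the sketch above factored out into a function, and the name is mine, not from the original code):

void loop() {
  sendCode();   // first transmission gives the receiver time to tune in
  delay(2000);
  sendCode();   // the repeated one is the one which actually gets decoded
  delay(3000);
}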

Again, somewhat. The light socket in the far corner of the hall seemed to have problems receiving the signal, which would put the two light sockets in the hall into opposite states: one would be on and the other off. This was fun, and could be fixed with a simple antenna on the Arduino module (since currently it doesn't have one), but I will conclude that your IoT device should send different codes for the on and off states, so something like this can't happen to you.

Then I got carried away and added commands to change all the parameters, to experiment with how sensitive the receiver is. You can find the full code at https://git.rot13.org/?p=Arduino;a=blob;f=light_sockets/light_sockets.ino With these experiments I found out that you don't have to be precise with the timings (so my oscilloscope step was really not needed). The receiver works with 500 µs low and 1100 µs high (for a total of 1600 µs per bit) on the high end, down to 200 µs low and 800 µs high (for a total of 1000 µs per bit).

I suspect that the chips are some kind of 26 bit remote encoder/decoder pair, but I can't find any trace of a datasheet on the Internet. This is a shame, because I suspect that it's possible to program the light sockets to respond to any code, and in theory to address each of them individually (which was my goal in the beginning). However, the poor construction quality and the same code for the on and off states (combined with poor reception) make me wonder if this project is worth additional time.