Our three-day annual conference, DORS/CLUC 2014, happened again this year. This time the dates shifted a few weeks later, which resulted in fewer students showing up because of exams, so it was a somewhat different experience than in years before. For a few years now we haven't been at the University of Zagreb FER location, which also changed the conference a bit. Having said that, even after the move from FER we still used to get a bus of students from my own faculty, FOI in Varaždin, and they were missing this year.
It was still a full conference, in a new location: the nice new conference hall of the Croatian Chamber of Economy (on the 2nd floor, which is not ideal for breaks in fresh air, a must when you stay for 11 hours each day, mind you). The wifi was stable but didn't allow UDP traffic, so neither mosh nor n2n worked for me.
It was also in a very different format, and I would love to know whether it worked for people or not. Instead of charging for workshops, they were included in the conference price, and as every year, if you were interested in a topic, nobody would turn you away from a workshop because of space :-) This also meant that workshops were three-hour slots at the end of the day, after 7 hours of lectures. When the conference started, we were afraid how we would accommodate all those people at the workshops, but sense prevailed and about 20 or so people stayed for the workshop each day.
Parallella and Epiphany 16-core mesh CPU
I gave a 5-minute lightning talk about Parallella, and hopefully managed to explain that there is now an interesting dual-core ARM board with DSP-like capabilities, backed by OpenCL and an FPGA. This is a unique combination of processing power, and it would be interesting to see which part of this machine can run OpenVPN encryption best, for example, because it has a 1 Gbit/s ethernet interface.
ZFS workshop, updated to 0.6.3
ZFS on Linux had its 0.6.3 release just in time, and I presented a two-and-a-half-hour workshop about ZFS, for which 10-20 people stayed after 7 hours of presentations. I'm afraid I somewhat failed to show enough on the command line, because I was typing too little. I did manage to show what you get if you re-purpose several-year-old hardware for ZFS storage: something along the lines of 2004-vintage hardware with 8 SCSI disks.
I managed to create a raid-10-like setup, but with all the benefits of ZFS, fill it up and scrub it during the workshop.
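For the record, the pool consists of four two-way mirrors, so its creation was something along these lines (a sketch reconstructed from the status output below, not necessarily the exact command used):

zpool create workshop \
    mirror /dev/disk/by-id/scsi-SFUJITSU_MAS3735NC_A107P4B02KAT /dev/disk/by-id/scsi-SFUJITSU_MAS3735NC_A107P4B02KBB \
    mirror /dev/disk/by-id/scsi-SFUJITSU_MAS3735NC_A107P4B02KCK /dev/disk/by-id/scsi-SFUJITSU_MAS3735NC_A107P4B02KDD \
    mirror /dev/disk/by-id/scsi-SFUJITSU_MAS3735NC_A107P4B02L4S /dev/disk/by-id/scsi-SFUJITSU_MAS3735NC_A107P4B02L4U \
    mirror /dev/disk/by-id/scsi-SFUJITSU_MAW3073NC_DAL3P6C04079 /dev/disk/by-id/scsi-SFUJITSU_MAW3073NC_DAL3P6C040BM
zpool scrub workshop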
root@debian:/workshop# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
workshop              268G    28K   268G  /workshop
workshop/test1        280K    28K   144K  /workshop/test1
workshop/test1/sub1   136K    28K   136K  /workshop/test1/sub1

root@debian:/workshop# zpool status
  pool: workshop
 state: ONLINE
  scan: scrub repaired 0 in 0h44m with 0 errors on Tue Jun 17 17:30:38 2014
config:

        NAME                                      STATE     READ WRITE CKSUM
        workshop                                  ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            scsi-SFUJITSU_MAS3735NC_A107P4B02KAT  ONLINE       0     0     0
            scsi-SFUJITSU_MAS3735NC_A107P4B02KBB  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            scsi-SFUJITSU_MAS3735NC_A107P4B02KCK  ONLINE       0     0     0
            scsi-SFUJITSU_MAS3735NC_A107P4B02KDD  ONLINE       0     0     0
          mirror-2                                ONLINE       0     0     0
            scsi-SFUJITSU_MAS3735NC_A107P4B02L4S  ONLINE       0     0     0
            scsi-SFUJITSU_MAS3735NC_A107P4B02L4U  ONLINE       0     0     0
          mirror-3                                ONLINE       0     0     0
            scsi-SFUJITSU_MAW3073NC_DAL3P6C04079  ONLINE       0     0     0
            scsi-SFUJITSU_MAW3073NC_DAL3P6C040BM  ONLINE       0     0     0

errors: No known data errors

I think it might be a good idea to pxeboot this machine on demand (for long-term archival storage) and copy snapshots to it on a weekly basis, for example. Think of it as a tape alternative (quite small, 300G) but with rather fast random IO. The idea was to use this setup as a ganeti-backup target, but the dump format of the ext file-system forced us to use zfs volumes to restore backups on another RAIDZ1 4*1.5T SATA pool, and it was very slow.
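A weekly copy like that could be as simple as an incremental zfs send over ssh; a minimal sketch, with hypothetical dataset and host names:

# on the source machine (tank/data and workshop-box are hypothetical names)
zfs snapshot tank/data@weekly-2014-06-17
zfs send -i @weekly-2014-06-10 tank/data@weekly-2014-06-17 | \
    ssh workshop-box zfs receive -F workshop/backup/data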
In its current state, it can receive zfs snapshots at 30-40 MB/s, and it's using a single core for ssh, which is the bottleneck. More benchmarks have to be done on this machine to see whether it's worth the electricity it's using...
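A common workaround for the ssh cipher bottleneck on a trusted LAN, which we haven't benchmarked here so treat it as an assumption, is to stream over plain TCP through mbuffer instead:

# receiver (this machine), listening on a TCP port:
mbuffer -I 9090 -s 128k -m 1G | zfs receive -F workshop/backup/data
# sender:
zfs send tank/data@weekly-2014-06-17 | mbuffer -s 128k -m 1G -O workshop-box:9090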
Ganeti - our own cloud
Another interesting part of last year's infrastructure work for me was with Luka Blašković. We migrated all servers from the faculty and the library to two Ganeti groups. We are running a cluster of reasonable size (10+ nodes, 70+ instances). Everything we did runs on legacy hardware which is now much better utilized. Some machines had never been backed up or had their firmware upgraded, so this was their first such maintenance in the last 10 years. Now we can move VM instances to another machine, and we are much more confident that services will stay running: live migration covers scheduled maintenance, and restarting instances on another node covers hardware failures.
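In practice that boils down to commands like these (instance and node names are hypothetical):

gnt-instance migrate -f web1.example.org    # live-migrate a single instance to its secondary node
gnt-node migrate -f node3.example.org       # move all primary instances away before node maintenance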
For the workshop, we decided to chew a bit more than we could swallow. We spun up KVM instances on our ganeti cluster, went through the installation of a workshop ganeti on them and joined them into a new cluster. This went fairly well, but when we started configuring xen to spawn new instances (ganeti xen on top of ganeti kvm), we ran into some problems with memory limits, which we managed to fix before the end of the workshop.
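For the curious, the bootstrap we walked through looks roughly like this (a sketch with hypothetical names and a minimal set of options, not our exact commands):

# on the future master node:
gnt-cluster init --enabled-hypervisors=xen-pvm --master-netdev=eth0 cluster.example.org
# join the other workshop nodes:
gnt-node add node2.example.org
# spawn a test instance; -B memory=... is where our xen memory limit problems showed up
gnt-instance add -t plain -o debootstrap+default -s 5G -B memory=512M test1.example.org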
In our defense, we really believe the workshop was more interesting this way, probably because people didn't want to leave (the few brave ones who were with us all the way to the end, that is). When you try to deploy something as complex as Ganeti you will run into problems, so seeing the troubleshooting methods used is usually as helpful as the solution itself.
All in all, it was an interesting and very involved three days. Hope to see you all again next year.