LSI MegaRAID slow compared to md

This week I learned a valuable lesson: if you are using a MIPS CPU from 2008 in your RAID controller, you can't really expect it to be faster than a much more modern Intel CPU when doing RAID 10 on disks.

It all started with the failure of the SSD in our bcache setup, which sits on top of a MegaRAID RAID10 array. Since this required me to take one of our Ganeti nodes down, it was also a good opportunity to add one more disk (we were running 6 disks and one SSD) and switch to software md RAID10, so we could use all 7 disks in the RAID10. In the process I did some benchmarking and was shocked by the results.
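
For reference, the md side of the migration boils down to a single mdadm invocation once the controller exposes the disks directly. This is only a sketch, assuming the seven spinning disks end up as /dev/sda through /dev/sdg after the switch to JBOD (device names on your system will differ):

# mdadm --create /dev/md0 --level=10 --raid-devices=7 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

Linux md RAID10 with its default near-2 layout happily accepts an odd number of devices, which is what makes a 7-disk RAID10 possible in the first place.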

First, let's see the original MegaRAID configuration:

# lsblk --scsi -m
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN NAME   SIZE OWNER GROUP MODE
sda  0:0:6:0    disk ATA      WDC WD1002FBYS-1 0C12      sda  931.5G root  disk  brw-rw----
sdb  0:0:7:0    disk ATA      INTEL SSDSC2BW24 DC32      sdb  223.6G root  disk  brw-rw----
sdc  0:2:0:0    disk DELL     PERC H310        2.12      sdc    2.7T root  disk  brw-rw----
# hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   13920 MB in  1.99 seconds = 6981.76 MB/sec
 Timing buffered disk reads: 1356 MB in  3.00 seconds = 451.81 MB/sec
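
lsblk only shows the single virtual disk which the PERC H310 exports, so to see how the hardware RAID10 is actually laid out you have to ask the controller itself. A sketch using MegaCli (the binary might be installed as MegaCli64 or megacli, and on Dell boxes you might be using perccli with slightly different syntax):

# megacli -LDInfo -Lall -aALL
# megacli -PDList -aALL

The first command lists the logical (virtual) drives with their RAID level and stripe size, the second the physical disks behind them.
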
Now let's compare those numbers with the same disks exposed as JBOD on the controller and assembled into an md array:
# hdparm -Tt /dev/md0
/dev/md0:
 Timing cached reads:   13826 MB in  1.99 seconds = 6935.19 MB/sec
 Timing buffered disk reads: 1888 MB in  3.01 seconds = 628.05 MB/sec
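
One caveat: numbers like these are only meaningful once the initial resync has finished, since a running resync competes with the benchmark for disk bandwidth. A minimal check before measuring:

# cat /proc/mdstat
# mdadm --detail /dev/md0
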
So, the TL;DR is that you are throwing away disk performance if you are using hardware RAID. You didn't expect that? Neither did I. In retrospect it's logical that a newish Intel CPU can process data much faster than the slowish MIPS on the RAID controller, but on the other hand the only real difference is the RAID overhead, because the same controller is still handling the disks in the software RAID setup.

I also wrote a document with a lot of console output and the commands to type if you want to do the same: https://github.com/ffzg/gnt-info/blob/master/doc/megaraid-to-md.txt