An Adventure in the Homelab

For nearly 2 years now I've run some kind of homelab in my house. In that time it's survived a house move and been greatly expanded upon. Right now I'm in the process of making some radical modifications and downsizing somewhat, so I thought I'd outline the history of the lab, its uses, and plans for the future.

It began with the purchase of a Dell R710, a favourite of home server enthusiasts everywhere, crammed onto a desk in a spare bedroom. My specific model comprises two hexa-core Intel Xeon X5660 CPUs running at 2.8GHz, 48GB of RAM, and an H700 RAID controller for the six 3.5" drive bays. The use case here was to spin up unRAID and run a number of containers, as well as access the server as a NAS, but in order to do this I had to get the RAID controller to play nice.

unRAID needs direct access to each drive, something the H700 can't provide as it doesn't support IT (initiator target) mode - only IR (integrated RAID) mode. So I had to replace it. During my search for a replacement I learned that Dell servers can be a little picky about what RAID cards they'll accept, but I ultimately settled on an H330, which I then flashed into IT mode using a guide I found online.

My goal was to use it primarily as a NAS, so I spun up unRAID and over the course of the next few months filled those six bays with hard drives and subsequently filled the drives themselves, prompting my next expansion: a Dell Xyratex Compellent HB-1235. This handy DAS gave me space for an additional twelve 3.5" drives, which I linked up to the R710 using a Dell H200E HBA and a couple of SFF-8088 SAS cables.

The final upgrades to my lab at this stage were an APC 3000 UPS, to allow for tidy shutdowns in the event of a power failure, and (as I wanted to try out some hardware transcoding in Plex/Emby) an HP-branded Nvidia Quadro P2000, which, with the help of a PCIe riser cable and the removal of the rear IO bracket, fit snugly into the R710 case.

This setup was pretty spiffy. I was hovering around 60TB of capacity with two drives for parity, and everything I wanted to run could run using the container functionality in unRAID. But upon moving house in late 2019 I saw the opportunity to take the lab to the next extreme.

The new house gave me a whole loft to play with, so the first move was to get solid network infrastructure in place. I went for Ubiquiti hardware for this, snagging a Cloud Key Gen2, a Security Gateway Pro 4, a 24-port PoE switch, a 16-port PoE switch, and a couple of AP Pros.

I then put the R710 into retirement and spun up a SuperMicro CSE-826 with two hexa-core Intel Xeon E5-2630 v2s at 2.6GHz, 32GB of RAM, and two LSI SAS3801E-S controllers, along with a SuperMicro CSE-846 with a deca-core Intel Xeon E5-2680 at 2.8GHz and 256GB of RAM.

Now, if that sounds like a hell of a lot of hardware - you'd be right. My plan was to use the CSE-846 as my new file server, use the CSE-826 as my container server, and attach the 826's backplane to the Dell H200E now installed in the CSE-846. That's exactly what I did, and while it's not the most orthodox solution, it works. The Quadro P2000 also found its way into the CSE-826 for direct access by the Emby Docker container. With 24 3.5" bays provided by the CSE-846, and 8 by the CSE-826, I now had room to fit a full FreeNAS array comprising three RAIDz2 vdevs of seven 6TB drives, seven 8TB drives, and seven 10TB drives, totalling a whopping 98TB of usable space and allowing for two drive failures per vdev before complete data loss.
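
For the curious, here's a quick back-of-the-envelope sketch of where that usable figure comes from. This is just illustrative arithmetic based on the layout above; the ~98TB the pool actually reports is lower still once ZFS allocation overhead and reservations are taken into account.

```python
# Rough capacity maths for the pool described above.
TIB_PER_TB = 1000**4 / 1024**4  # drives are sold in TB, ZFS reports TiB (~0.909)

vdevs = [
    (7, 6),   # seven 6TB drives
    (7, 8),   # seven 8TB drives
    (7, 10),  # seven 10TB drives
]

raw_tb = sum(drives * size for drives, size in vdevs)
data_tb = sum((drives - 2) * size for drives, size in vdevs)  # RAIDz2: two parity drives per vdev

print(f"raw capacity:  {raw_tb} TB")                      # 168 TB
print(f"after parity:  {data_tb} TB")                     # 120 TB
print(f"after parity:  {data_tb * TIB_PER_TB:.0f} TiB")   # ~109 TiB, before ZFS overhead
```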

As you might imagine, this setup is not without its issues, which I'll list below:

  • I'm using out-of-date hardware. While it all works and probably will for some time, there's just something nice about having up-to-date tech.

  • FreeNAS on the CSE-846 does not need an E5-2680, demonstrated by CPU usage never exceeding 30%.

  • FreeNAS does not need 256GB of RAM for a 100TB array. While nice to have, almost all of it simply goes to caching, which feels like overkill.

  • My container server does not need 2 E5-2630s, again demonstrated by the incredibly low CPU utilisation.

  • Feeding one server's backplane into another server's expansion card feels slightly wrong. Should I have made use of the Xyratex for additional bays? Probably. But that just adds to...

  • The mammoth power usage.

  • The noise generated by running two rack servers. Yes, I could swap out the case fans for some quieter ones, but that still leaves the incredibly loud PSU fans, which I don't really want to touch - digging around in a PSU is where I draw the line.

  • FreeNAS spitting out SMART errors and kicking out perfectly good drives. This has happened three times now, and each time the drive has been absolutely fine, passing a short, conveyance, and extended test (see the sketch after this list) before being happily reused. It's not localised to any particular bay. My only theory is that the HBA is dying a slow death or FreeNAS is doing stupid things.

  • I need more space. I have 4TB of space remaining, meaning a purchase of another seven drives, which will have to go in the CSE-826 chassis and use the nasty backplane-into-HBA setup. And after that all my bays will be full and I'll have to look into getting the Xyratex involved, adding to the noise, power usage, and...

  • The multiple points of failure. Two backplanes, two HBAs, and potentially a DAS added to the mix. Not to mention I set up FreeNAS on a USB stick (as was recommended at the time), which I don't trust all that much. If the FreeNAS box goes down, that's it - no file access.
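
For reference, here's a minimal sketch of the re-test routine I put ejected drives through before trusting them again. It assumes smartmontools is installed and the script runs as root; the device node is a hypothetical example, and the wait times are just rough ballparks.

```python
import subprocess
import time

DEVICE = "/dev/da5"  # hypothetical: whichever drive FreeNAS just kicked out

def run_test(test_type: str, wait_minutes: int) -> None:
    """Kick off a SMART self-test and wait roughly long enough for it to finish."""
    subprocess.run(["smartctl", "-t", test_type, DEVICE], check=True)
    time.sleep(wait_minutes * 60)

# The same short -> conveyance -> extended sequence mentioned in the list above.
run_test("short", 5)
run_test("conveyance", 10)
run_test("long", 12 * 60)  # an extended test on a large drive can take the best part of a day

# Print the self-test log so I can confirm all three passed before reusing the drive.
subprocess.run(["smartctl", "-l", "selftest", DEVICE], check=True)
```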

It's for all these reasons that I've been looking for a new solution - and I think I've got one: a cluster of single board computers. Of course, for this I'll need to devise a way to safely migrate all the existing data, figure out a case solution, and work out whether the whole thing is even viable - but I'm hopeful. I'll be updating this post with the continuation of the story once I've written it. Take it easy!