The journey to a 10GbE home network
My first memorable experience with a well-designed and functional network was in 2009 at the African University of Science and Technology (Abuja). My friend, @bigbrovar, designed and built the network, and was its sole administrator.
By "very functional", I mean the breadth of services running within the network that needed to be kept alive. The university assigned laptops to students. These laptops ran a custom Linux distro (also put together by my friend) and, at some point, loaded the user's home directory off the network, making it possible for students to log in to their desktop profile on any school-issued computer they could find. There was also a caching setup in place such that any HTTP download was fetched once over the internet and served from the local network thereafter, dramatically cutting down bandwidth requirements for system updates and the like. I used to run Ubuntu on my primary laptop at the time and happily stopped by the school on weekends to keep my packages up to date.
My first home network #
In 2013, as the new wireless standard (802.11ac) started making it into consumer devices and routers, I decided to get serious about setting up a home network. Before then, ISP-supplied wireless routers stuck at 802.11n made it frustrating to do anything serious on the network beyond basic internet connectivity. File transfers? Time Machine backups? Just forget it unless painfully slow transfers are a kink. The promise of moving data at speeds of up to 1.3Gbps led me to get my first Linksys router (EA6900) and hook it up to my ISP's modem.
Over time, I added utility to my shiny new home network. I got my first NAS - a self-contained single-drive Western Digital device - and ran Time Machine backups to it over the network. Wireless connectivity often topped out at about 700Mbps, so I found myself using a cable for the most part when I wanted to perform transfers at full speed. I also set up Plex on a spare laptop and had it index my media collection stored on the NAS. This was a small house, and my TV wasn't smart, so I connected the laptop directly to the TV and got a small Logitech keyboard to use as a remote. My basic home entertainment setup was good to go!
Doing more with a bigger space #
In January 2017, I moved to a new apartment. This one was slightly larger than the previous one - two bedrooms instead of one. The layout was also such that the spot where I intended to set up my desk wasn't necessarily a good place to situate my wireless router, so I had to run a Cat6 cable from my bedroom to the living room (via the ceiling) where I placed the router. IPNX fiber-to-the-home had also become a thing, and I got on a 50Mbps plan. At my desk, the cable from the living room terminated at a small 8-port switch, from which I could connect my NAS, my desktop, and any other devices that needed to be hardwired to the network for guaranteed availability.
The service I got from my ISP guaranteed the quoted speed in both directions (uplink/downlink) and assigned me a dedicated IP address. This encouraged me to take things up a notch and make my Plex server accessible from outside my home. I also decided to set up a bigger NAS. The previous one had a capacity of 4TB; I got a Synology DS916+ enclosure, which offered four hard drive bays, and four 8TB HGST drives to go with it. I eventually moved routing responsibilities away from my Linksys router to a single-board computer on which I had set up pfSense. This was an attempt to own my network stack further and be able to do stuff like setting up Squid to serve as a caching and forwarding proxy. Remember my experience with this at my friend's school, as described at the start of this article? That's what I was trying to replicate. Unfortunately, most websites at this time were beginning to adopt HTTPS as the default mode of transport, which meant transparent caching and forwarding required some extra man-in-the-middle hacks. This wasn't exactly "transparent" to a new device joining the network, as it required manual intervention to suppress the certificate errors that would unfailingly show up. Oh well.
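For the curious, a transparent caching setup along those lines comes down to a handful of Squid directives. The sketch below is illustrative only - the paths, ports, and sizes are placeholders, and the `cert=` line is precisely the man-in-the-middle piece: a local CA certificate that every client on the network would have to be made to trust, which is why the setup stops being "transparent":

```
# Intercept plain HTTP traffic redirected here by the firewall
http_port 3128 intercept

# Intercepting HTTPS requires ssl_bump with a locally trusted CA cert -
# clients that don't trust it will see certificate errors
https_port 3129 intercept ssl-bump cert=/etc/squid/myCA.pem
ssl_bump bump all

# On-disk cache; keep large objects (e.g. OS package downloads)
cache_dir ufs /var/spool/squid 10000 16 256
maximum_object_size 512 MB
refresh_pattern -i \.deb$ 129600 100% 129600
```

Without the `https_port`/`ssl_bump` lines, this caches HTTP cleanly but does nothing for HTTPS traffic - which, by then, was most of the web.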
Considering how well IPNX worked for me at home, I quickly figured it made sense to adopt the same setup at work. Stable and reliable internet connectivity isn't something I joke with, and we ultimately relegated our existing Tizeti setup to the background. Given my tendency to be hands-on (a trait shared by my engineering colleagues at the time), we set out to design what the office network should look like. The size of the space meant that the wireless setup wasn't going to be as simple as dropping a wireless router in the middle and hoping for the best. This was how I got introduced to the world of setting up UniFi access points.
I know you are here to read about my eventual 10GbE home network setup, but let me make a slight detour and walk you through another opportunity I had at work to design and set up our office network at a new location. This was also my first practical foray into 10GbE networking: the location was a large compound with two buildings, and I decided on a layout that connected both via a 10Gbps link over fiber. There was a need to blanket the entire premises with WiFi and video surveillance. For the latter, I was convinced we could get a better setup than the one a third-party vendor had installed at our previous office location. I mean, it was 2019, and a custom IP camera setup has the potential to provide better flexibility and overall better ROI than the basic CCTV setup most Lagos installers are famous for. As part of fact-finding, I ordered two Reolink cameras to test with Synology Surveillance Station on my NAS at home. The setup was quite straightforward, so we went ahead and purchased enough to install around the premises.
My 10GbE home network #
After I secured the lease to my new home in mid-2021, I started mapping out what the network layout would look like. The guiding principle was future-proofing. This meant I had to run cables from the location I intended to use as my network hub to everywhere I might need a stationary device hardwired to the network. All wall points would also be capable of 10GbE speeds. The future may be wireless, but bleeding-edge high-bandwidth connectivity will always be wired, and I would like to experiment with or leverage devices that require it in my home.
One of the random challenges I ran into while setting up the network was how unnecessarily expensive 10GbE networking equipment was. When I was planning the setup, Ubiquiti didn't have reasonable options for 10GbE switches, and I was stuck with a 6-port PoE one, which only had four RJ45 ports alongside two SFP ports. I got four of these, since my plan required 15 10GbE-capable wall points. Thankfully, by October 2021, Ubiquiti released a single 24-port 10GbE switch. I wasted no time purchasing that instead and gave away three of the smaller switches I had bought previously.
My early shopping list looked like this:
- a 500ft roll of Cat7 cable (to run from my patch panel to wall plates)
- a 1000ft roll of Cat6 cable (to run from my patch panel to the various spots I intended to place access points)
- access points: U6-Pro/Lite for indoors, and U6-Mesh for outdoors
- a 24-port PoE switch (12 ports @ 1GbE, 12 ports @ 2.5GbE)
- a 24-port 10GbE switch
- a 48-port patch panel
- a 48U server rack (I didn't want to constrain myself by what I thought I needed at the time. Again, future-proofing, haha)
- a lot of RJ45 connectors and strain relief boots
- a direct attach copper (DAC) cable to connect both switches via their SFP+ ports
- a 6-port Intel-based single-board computer to use as a firewall/router
- an Intel NUC to host my UniFi controller and other "essential" software such as Pi-hole and Home Assistant in virtual machines
My choice of U6-Lite access points was influenced by wanting wireless devices to be within a reasonably short distance of the closest access point at all times, without having to deal with signal interference issues. My primary internet connection offers 1Gbps of bandwidth, which means the TX rate wifi-connected devices can maintain will be their bottleneck. The idea was to maximize that as much as I reasonably could. I eventually had to swap three of these access points out for Pros, to address three dead zones I discovered.
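The bottleneck reasoning above can be made concrete with some back-of-the-envelope arithmetic. The 55% efficiency factor below is an assumption (a common rule of thumb: real-world Wi-Fi throughput lands well below the advertised PHY/TX rate once protocol overhead and airtime contention are accounted for), and the numbers are illustrative:

```python
# Rough estimate of where the bottleneck sits for a wifi client
# on a network with a 1Gbps WAN uplink.

# Assumption: effective throughput is ~55% of the negotiated PHY (TX) rate.
WIFI_EFFICIENCY = 0.55

def effective_throughput_mbps(phy_rate_mbps: float) -> float:
    """Approximate usable throughput for a given Wi-Fi PHY rate."""
    return phy_rate_mbps * WIFI_EFFICIENCY

def bottleneck_mbps(phy_rate_mbps: float, wan_mbps: float = 1000) -> float:
    """A client's usable internet speed is the slower of its Wi-Fi
    link and the WAN uplink."""
    return min(effective_throughput_mbps(phy_rate_mbps), wan_mbps)

# A client linked at 1200 Mbps (e.g. Wi-Fi 6, 2x2 @ 80 MHz):
print(bottleneck_mbps(1200))  # ~660 Mbps - Wi-Fi, not the 1Gbps WAN, limits it
```

This is why maximizing the TX rate (short distance to the nearest AP, minimal interference) was the goal: at these link rates, the wireless hop, not the WAN, is what a client actually feels.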
I hate power outages and would hate for my network equipment to be subject to unexpected power failures, so I put multiple redundancies in place on that front. I had the electrical wiring of my home redone to split the space across two distribution boards, with one of them prioritized and configured to be the only one powered in the event of a power outage that sees my bank of lithium-ion batteries drained to 50%. The spaces covered by the priority distribution board include the workshop (the base of my home network). In addition, a rack-mounted APC UPS can keep the devices in the rack powered for well over an hour in the event of an inverter shutdown.
I am still making updates to the network. I recently added Starlink to provide upstream redundancy, and I am putting together components to build a new NAS, which, alongside the desktop I built over the holidays, will take full advantage of the 10GbE network.
Thanks to @bigbrovar, @timmmms_, @twisted_myk, @eeyitemi, @tomiadesina_ and @couth__ for reading drafts of this.