
  • Right now I have four eight terabyte SSDs installed in my 2.5 gigabit NAS.

  • And with this spare, that's forty terabytes of flash storage.

  • The NAS runs great, don't get me wrong, but 2.5 gigabits?

  • That's leaving a lotta performance on the table.

  • I've been holding these SSDs back for too long.

  • And no, I'm not gonna install 'em in a Raspberry Pi. I already did that and found it couldn't keep up; it could barely do one gigabit!

  • Today, I'm gonna scrape together the cheapest-but-fastest storage server in my homelab using used server parts.

  • And I'm hopefully gonna be able to use it to edit 4K ProRes footage anywhere on my network.

  • Right now I store all my video projects on my Mac's internal SSD.

  • And that's great, on that computer...

  • but it's not a NAS.

  • Video editing on my other Mac isn't perfect using macOS's built-in file sharing.

  • I mean, it works... but I have a bonafide homelab now.

  • And I wanna unleash these SSDs.

  • So I'm gonna build a server with all this: a Supermicro ITX server motherboard, with an Intel Xeon server CPU.

  • I got that for $270 on eBay.

  • A hundred and twenty eight gigs of ECC RAM I bought used for $230 from a local eWaste recycler.

  • Five Samsung QVO 8 terabyte SSDs I bought new for, eh... almost $4,000.

  • And this prototype mini ITX rackmount case MyElectronics sent for me to test.

  • But I'm missing one part, and that brings me to this video's sponsor!

  • Micro Center!

  • I normally come here to pick up Raspberry Pi stuff in the maker section.

  • And of course Micro Center is the best place to buy PCs, Macs, and custom build parts.

  • But did you know they even sell servers and server parts?!

  • I need a boot SSD for my server, and whaddya know, they sell Supermicro-branded server SSDs, and they even have ECC RAM!

  • And if I needed a whole prebuilt rack server?

  • I could just grab one and...

  • [employee] Hey bud, you break it, you buy it.

  • Right now, Micro Center's offering new customers $25 off any AMD or Intel CPU, in-store only.

  • Or, even for existing customers, you can submit your build to Micro Center's build showcase and get a coupon for your next visit.

  • Limit one per customer, and you can find details for both offers in the description below.

  • Now, do I really need a brand-new Supermicro SSD?

  • No, not really.

  • I coulda used any decent SSD, or heck, even a SuperDOM if I wanted to go full-on server.

  • But the drive does have a nice warranty, plus, it's a good hook for the sponsor.

  • And don't tell Micro Center, but I'd still recommend 'em even if they didn't sponsor.

  • I just wish they had more locations!

  • But now that we have all the parts, it's time for a build montage!

  • [incredible 80s vibes]

  • All right, everything fit together nicely, and I'm really digging this prototype 2U case

  • from MyElectronics.

  • They sent me their blue version when I rackmounted the Pi Cluster for my remote farm server build, but there were a few things I didn't like about it.

  • This new black version has more ventilation, and a LOTTA keystone jacks, which is nice because I can expose different ports like USB, HDMI, or SATA directly out the front or back.

  • They aren't selling it quite yet, but they do have a page for it on their website, and they said it'll be around two hundred bucks, give or take.

  • Now, I mentioned earlier my goal is to build a cheap-but-fast NAS, using all flash storage.

  • Cheap is kinda the wrong term when this entire server, even with a bunch of used parts, comes in around five thousand bucks.

  • But that's a heck of a lot cheaper than enterprise flash servers, like LTT's Whonnock servers.

  • But the big question is: can this thing compete?

  • Can it saturate my 10 gig network?

  • More importantly...

  • will it even POST?

  • I plugged in a TrueNAS install USB and powered it up, and it did boot.

  • I had to mess with some BIOS settings, but I got TrueNAS installed... and then it wouldn't actually boot off the NVMe drive.

  • This motherboard came out around 2015.

  • Back then, NVMe was still pretty new.

  • I spent hours debugging boot problems, and finally decided to drop in a SATA M.2 drive from Cytron.

  • They sent me this thing months ago and I told 'em I'd test it out at some point... but, well, better now than never I guess!

  • The SATA drive booted without a problem, and I had TrueNAS running.

  • But how did I choose TrueNAS?

  • Well, before I set this thing up, I wanted to talk to an expert, to make sure I was getting exactly what I needed from this server.

  • I told Wendell over at Level1Techs I wanted speed more than capacity, and I was gonna use this as an edit NAS.

  • I had already bought the hardware at this point, but he mentioned the faster you want your storage, the more you have to worry about CPU clock speeds and memory bandwidth.

  • The little Xeon D isn't a slouch, but it's definitely an older CPU.

  • But Wendell said it should do fine for 10 gigs.

  • So I also asked him about what distro I should use.

  • I'm more familiar with Linux, but I know a lotta people build storage servers with TrueNAS Core, which is based on FreeBSD.

  • He seemed to gravitate towards TrueNAS for this build.

  • [Wendell] What TrueNAS gives you is a lot of out-of-the-box tuning that is generally very good across a wide variety of hardware for relatively high-performance, low-latency kernel tunables, memory, it's sort of tuned for that storage, interrupt-heavy, probably between interrupt from IO and interrupt from storage kind of a thing.

  • [Jeff] So TrueNAS it is!

  • But I also wanted to know his thoughts on what kind of drive layout I should use.

  • I could do a RAIDZ2 with all five drives, but I didn't know if that would eat up too much CPU power or drive bandwidth.

  • [Wendell] I mean, you could try five-drive RAIDZ2 and see if that meets your criteria.

  • It probably will.

  • RAIDZ will have more overhead, and the relatively anemic processor and memory bandwidth will be a little bit more of a challenge.

  • A striped mirror with four drives and a spare would be dramatically faster, with dramatically less overhead.

  • So I would say that you could probably do a striped mirror and get much closer to that 10 gig saturation.

  • [Jeff] To make things simple, I'm gonna stick with a striped mirror, and keep the fifth SSD running as a hot spare.

  • And if you like this kinda stuff, you should definitely subscribe to Level1Techs.

  • And of course in the middle of that call, this happened:

  • [Jeff] I'm gonna be deploying the server... woah, almost dropped the server.

  • Pulling a Linus.

  • [Wendell] No, that's the other Linus channel.

  • Yeah.

  • Dropping things.

  • All right, so after the professional consult, it was time to finally set up the storage.

  • In the TrueNAS UI, I set up a storage pool called edit.

  • To set it up, I picked the first two SSDs and added them to a data vdev, then I repeated that layout again so it would have two vdevs, both of them set up as mirrors.

  • Then ZFS will stripe them together, making basically a RAID-10-style layout.

  • Creating it is pretty quick.
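
(For reference, here's a rough sketch of what that layout boils down to at the command line, with hypothetical device names; the actual setup was done through the TrueNAS UI.)

```python
# Rough sketch (not the exact TrueNAS steps): build a pool named "edit" from
# two mirrored pairs plus a hot spare -- the ZFS equivalent of RAID 10 + spare.
# Device names are hypothetical placeholders; check your own system first.
import subprocess

devices = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # four data SSDs
spare = "/dev/sde"                                          # fifth SSD as hot spare

cmd = [
    "zpool", "create", "edit",
    "mirror", devices[0], devices[1],  # first mirror vdev
    "mirror", devices[2], devices[3],  # second mirror vdev, striped with the first
    "spare", spare,                    # hot spare ZFS can swap in on a failure
]
print("Would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually create the pool (destructive!)
```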

  • The next thing I did was add a scheduled SMART test, so TrueNAS would check over the health of the drives and notify me if anything looked flaky.

  • I let it run weekly, and then I also added a snapshot task, to run daily.

  • That will create a daily snapshot, but let me remind you as always, this is NOT a backup!

  • It's just a convenient way to go back to a point in time if you mess something up in this storage array.

  • You still need separate backups.
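
(Under the hood, a daily snapshot task amounts to something like the sketch below; the dataset name and naming scheme are just assumptions for illustration, since TrueNAS handles the scheduling and retention itself.)

```python
# Minimal sketch of a daily snapshot: a dated, recursive snapshot of the pool.
# The dataset name and naming scheme here are illustrative assumptions.
import subprocess
from datetime import date

dataset = "edit"                                 # assumed pool/dataset name
snap = f"{dataset}@daily-{date.today():%Y%m%d}"  # e.g. edit@daily-20240101

subprocess.run(["zfs", "snapshot", "-r", snap], check=True)
print(f"Created {snap} (remember: snapshots are not backups)")
```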

  • TrueNAS already set up a scrub task, which checks the data integrity on the array.

  • I was happy with the defaults, so I moved on to setting up a share.

  • First, I created a user account so I could access the share, then under Sharing, I added a Samba share and enabled the Samba service.

  • So I had my pool running, basically RAID 10 but with ZFS, but I ran into some problems.

  • First, I was getting spotty write performance.

  • It would write for about 30 seconds, then nothing for a while, then write again, then nothing.

  • It averaged less than 300 megabytes per second.

  • So I set the block size to 1 megabyte since I'm mostly dealing with videos, and turned off sync.

  • That's normally a bad idea, but in my case I'll have a separate copy, and it'll be running on a UPS, so I can accept that risk.

  • Turning off sync lets ZFS tell my computer a file's written to disk before it actually completes the write.

  • It just stores it in RAM for a little bit while the drive catches up.
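
(Those two tweaks map to the ZFS recordsize and sync properties; here's a hedged sketch of doing the same thing from a shell instead of the TrueNAS UI, with the dataset name assumed.)

```python
# Sketch of the two tweaks outside the TrueNAS UI: bump the dataset record size
# (what the UI calls block size) to 1M for large video files, and disable sync
# writes. sync=disabled trades crash safety for speed, so only accept it if you
# can afford to lose the last few seconds of writes after a power cut.
import subprocess

dataset = "edit"  # assumed dataset name

subprocess.run(["zfs", "set", "recordsize=1M", dataset], check=True)
subprocess.run(["zfs", "set", "sync=disabled", dataset], check=True)

# Confirm the properties took effect
subprocess.run(["zfs", "get", "recordsize,sync", dataset], check=True)
```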

  • That helped, but something was still weird.

  • Initially I'd get a gigabyte per second on my network copies, but then it would limp along at a slow but steady 320 megabytes per second.

  • So maybe a cache was filling up, or maybe the disks got slow for some reason.

  • I'm not sure.

  • After debugging things a while, I called up Wendell again to help with ZFS, AND I brought in Patrick from Serve the Home to figure out what was going on with the server hardware.

  • I eventually threw an HBA in the PCI Express slot, since I know its bandwidth goes straight through to the Xeon CPU, and what do you know?

  • I was writing files at a gigabyte per second all day long.

  • But I don't wanna run an HBA, because it would eat up about 7 more watts, and I'd need to hack together a cooling solution for it.

  • So I pulled it back out and plugged the drives back into the motherboard, and wouldn't you know?

  • They're getting 700 megabytes per second now.

  • That's more than double what I was getting before.

  • So I tried over and over to replicate my earlier setup, where it only got like 320 megabytes per second, but I couldn't.

  • And heck, 700 megabytes per second?

  • That's good enough for me.

  • For now at least.

  • Read speeds are over a gigabyte per second, and latency?

  • Boy, latency is amazing on here.

  • I'm able to edit 4K footage in Final Cut Pro without any problem.

  • Well, there was one problem.

  • I had to go into the Samba settings and enable this extension to make Final Cut happy.

  • But once I did that, I could edit off the NAS almost like it was local.
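
(The toggle in question is most likely Samba's Apple SMB2/3 protocol extensions, provided by the vfs_fruit module; that's an assumption on my part, but it's the usual fix for macOS clients like Final Cut. A quick way to check whether it's active is to dump the effective Samba config:)

```python
# Sanity check, not a setup script: dump the effective Samba config with
# testparm and see whether the macOS-oriented "fruit" settings show up.
# Assumes this runs on the NAS itself, with Samba installed.
import subprocess

result = subprocess.run(["testparm", "-s"], capture_output=True, text=True)
fruit_lines = [line.strip() for line in result.stdout.splitlines()
               if "fruit" in line.lower()]

if fruit_lines:
    print("Apple SMB extensions appear to be enabled:")
    print("\n".join(fruit_lines))
else:
    print("No 'fruit' settings found; macOS-specific extensions may be off.")
```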

  • So the first video I edited on the NAS is this Starlink video.

  • It was a little simpler, without a ton of camera angles or complex effects.

  • But it worked great, and I'll probably be tweaking things a bit in the coming weeks.

  • The most important thing is getting nightly backups set up.

  • That way even if ZFS fails or multiple drives go bad, I'll still have all the edits I was working on.

  • Someday I might even look into hourly remote snapshots too, but for now, I've solved my problem, and I have a LOT more fast, low-latency storage on a real NAS so I'm not tied down to one main editing workstation.

  • I also spent some more time getting that NVMe drive to finally boot, so I swapped it back in and set aside the MakerDisk for now.

  • This new NAS gives me 700 megabyte per second writes and 1.1 gigabytes per second reads.

  • And it upgrades my SSD storage from the 4 terabytes I had on my Mac Studio to fourteen terabytes, even with three of those drives used for redundancy.
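
(A quick back-of-the-envelope on that fourteen figure, assuming the drives are 8 TB decimal as marketed and the pool is reported in binary TiB:)

```python
# Back-of-the-envelope capacity check. Assumptions: each SSD is 8 TB decimal
# as marketed, and usable space is reported in binary TiB before ZFS overhead.
drive_tb = 8                                   # decimal terabytes per SSD
data_vdevs = 2                                 # two mirrored pairs, striped
usable_bytes = data_vdevs * drive_tb * 10**12  # mirror partners add no capacity

usable_tib = usable_bytes / 2**40
print(f"Usable capacity: ~{usable_tib:.1f} TiB")  # ~14.6 TiB
# The other three drives (two mirror partners plus the hot spare) buy
# redundancy, not capacity.
```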

  • I learned a LOT about ZFS tuning, and about making the most of used server hardware.

  • I also learned about how to manage IPMI and BIOS upgrades with FreeDOS, and how to put quiet Noctua fans in a server and not have the fans go haywire.

  • I put up blog posts with guides for both of those things, linked below.

  • This server uses about 42 watts of power at idle and 70 watts under load, so it's pretty energy efficient, even though it's running slightly older hardware.

  • Will I upgrade to something more elaborate in the future?

  • Who knows!

  • Right now, this thing is gonna give me enough overhead without eating up as much power as the petabyte Storinator.

  • All that's left is adding a label and installing it in my rack.

  • Until next time, I'm Jeff Geerling.
