For this new year, I’d like to learn the skills necessary to self host. Specifically, I would like to eventually be able to self host Nextcloud, Jellyfin and possibly my email server too.
I have a basic understanding of Python and Kotlin. Now I’m in the process of learning Linux through a virtual machine, because I know Linux is better suited for self hosting.
Should I stick with Python? Or is JavaScript (or maybe Ruby) better suited for that purpose? I’m more than happy to learn a new language, but I’m unsure which is better suited.
And if you could start again in your self hosting journey, what would you do differently? :)
EDIT: I wasn’t expecting all these wonderful replies. You’re all very kind people to share so much with me :)
The consensus seems to be that hosting your own email server might be a lot, so I might leave that as a future project. But for Nextcloud and Jellyfin I saw a lot of great tips! I forgot to mention that ideally I would like to have Nextcloud available for multiple users (i.e. family members), so learning some basic networking/firewalling does seem like the bare minimum.
I also promise that I will carefully read the manuals!
Programming knowledge is largely irrelevant; to gain real benefit from it you’d have to be a generalist software engineer with a decade-plus of experience who has seen it all. Then, yes, you can read any code and any stack trace and figure out the intent of the system’s developers and what is undocumented or incorrectly documented.
Focusing on one particular language is the right and the wrong answer at the same time. Wrong in the sense that you’ll have to pick up other languages along your journey anyway, and right because you need to achieve mastery in one of them to get to more advanced programming topics. Pick a language that you have fun using and don’t care about anything else.
As for what to learn for self-hosting… Linux (pick a distro, let’s say Ubuntu LTS without a GUI, SSH into it and get comfortable with it; that includes installation, filesystems, and RAID setups), networking, HTTP/S (that’s the main thing you’ll interact with as a self-hoster, and knowing the various nuances of reverse proxying is a must), firewalling, basics of security and hardening, Docker, monitoring, backups.
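If it helps to see what that first step looks like in practice, here is a rough sketch for a fresh Ubuntu Server LTS install (no GUI); it is a starting point, not a hardening guide:

```bash
# Update, then make sure you can reach the box over SSH.
sudo apt update && sudo apt upgrade -y
sudo apt install -y openssh-server
sudo systemctl enable --now ssh

# Minimal firewall: block inbound by default, allow SSH.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
sudo ufw status verbose
```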
Patience, most of all.
Also, backups and notes. The solution you use to host might take care of the backups. For example, I use Unraid, so if any drive fails the system can emulate the data on that drive until I can get it shut down to replace it, and then rebuild the data onto the new drive.
As for notes, those are important so that you can always know what you’ve done, and what you need to do. That way, if you ever have to do it again, say if you’re setting up another server or replacing one that failed, you know the steps you took to get it set up exactly how you like. It’s also handy because you’ll be doing things like assigning services to ports, and you’ll probably at some point want to know what services are on what ports without going through and checking each one. Things like that are handy things to stick in notes.
Other than that, you don’t need a lot of skills to set something like a home server up. You just need to read the documentation for each service you’re planning to use, and get familiar with how it works.
Unraid is not a backup. Parity is good, but if your data is damaged for some other reason (accidental deletion, corruption) or you lose the entire device, you can’t restore it. Dedicated backups are a must for anything serious!
Unraid absolutely is a backup. That’s the whole point of the OS. And furthermore, the backup can be backed up at any time and stored on another device, allowing you to restore the entire OS and its configuration. And by “lose the entire device”, I’m assuming you mean the OS is corrupted. At that point, you simply burn a new USB and reconnect the drives, or move them to any other system running Unraid.
Docker, really. If something goes bad, trash the container and start again without losing your actual data.
Mostly Docker.
Portainer and plugging Docker Compose YAML into Portainer stacks makes Docker stupid-simple. (Personally speaking as a stupid person that does this.)
Cloudflare tunnels for stuff people other than you might want to access.
Tailscale if it’s only you.
Reverse proxy & port forwarding for sharing media over Jellyfin without violating the Cloudflare Tunnel ToS.
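To make the reverse proxy idea concrete, here is a rough sketch of an NGINX site for Jellyfin, assuming Jellyfin’s default port 8096 and a placeholder domain; you would still want TLS (e.g. via certbot) before exposing it:

```bash
# Hypothetical NGINX site config for Jellyfin (plain HTTP shown for brevity).
sudo tee /etc/nginx/sites-available/jellyfin >/dev/null <<'EOF'
server {
    listen 80;
    server_name jellyfin.example.com;   # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket support for the web client
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/jellyfin /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```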
Dokploy is a pretty easy web gui and is itself a docker container.
Makes it dead simple to manage multiple containers and domains. (Not for power users that need kubernetes level flexibility)
You don’t need to be a programmer to selfhost.
The most important “skills” to have if you want to selfhost imo are:
- Basic networking knowledge
- Basic Linux knowledge
- Basic Docker/Docker Compose knowledge
But I’d say to not get lost in the papers and just jump right in. Imo, the best way to learn how to selfhost is to just… Do it. Most everything is free and fairly well documented
Perseverance
Totally agree! I’m not a programmer and I have several services running on my home server. I’m just curious and have used Linux for a decade as a normal user. With just those three basic skills you’re good to go.
Where’d you learn Docker basics? I pretty much have no clue what’s going on every time I try to even start.
https://docker-curriculum.com/
Best resource I found so far. I tried docker’s tutorial but it was not good at all.
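If it helps demystify things, the day-to-day Docker loop is only a handful of commands. A quick sketch using the stock nginx image as a stand-in for whatever you actually want to run:

```bash
docker run --rm hello-world                  # sanity-check the install
docker run -d --name web -p 8080:80 nginx    # start a container, host port 8080 -> container port 80
docker ps                                    # list running containers
docker logs -f web                           # follow its logs
docker exec -it web sh                       # get a shell inside it
docker stop web && docker rm web             # stop and remove it
```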
- Learn how to properly back up your data in case you nuke something you shouldn’t
And regularly check them. I just found out the hard way this last week that my backups haven’t been running for a few weeks …
Yep.
I have friends in the SMB space, one thing they do is a regular backup verification (quarterly). At that frequency, restoring even a few files (especially to a new VM), is very indicative, especially if it’s a large dataset (e.g. Quickbooks).
In Enterprise, we do all sorts of validation, depending on the system. Some is performed as part of Data Center operations, some is by IT (those are separate things), some by Business Unit management and their IT counterparts.
Unfortunately, that wouldn’t have done anything. Because I did that in December and they stopped running like 2 weeks after my verification. I would have caught it on my next scheduled validation, but that doesn’t help me now 😕
I mean, it still helps right? It limits your losses to X weeks instead of X months or, I hate to say it, X years.
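On that note, if your backup tool supports it, scripting the check plus a tiny test restore makes the verification painless. A sketch using restic; the repository path, password file, and restored file are placeholders, and this assumes restic is what you back up with:

```bash
#!/usr/bin/env bash
# Hypothetical periodic backup verification; adjust to your own tooling.
set -euo pipefail

export RESTIC_REPOSITORY=/mnt/backup/restic-repo      # placeholder repo location
export RESTIC_PASSWORD_FILE=/root/.restic-password

restic snapshots        # did backups actually run recently?
restic check            # verify repository integrity

# Restore one known file somewhere harmless and eyeball it.
restic restore latest --target /tmp/restore-test --include /home/you/Documents/important.txt
ls -l /tmp/restore-test
```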
the patience to read lots of documentation.
And maybe patience to power through a lack of documentation.
This, 1000%. Eventually you’re gonna run into a problem/situation that doesn’t have much documentation. Powering through it step by step, logically, can test the best of us. You can spend 56 hours in a day on one problem, give up, and then figure it out in 10 minutes the next morning. It’s a marathon, not a sprint.
If you want to program something, the closest you’re gonna get to programming is Ansible and Bash scripts.
You might want to get self-hosting hardware like a Synology or the like if you’re not ready to dig in.
Otherwise here’s some things you need to know:
- Docker
  - Easy, consistent deployment of services in their own environments. Think a VM, but with almost no overhead.
- Docker Compose
  - Run Docker containers with consistent configuration kept in files.
  - Connect various containers to each other on the same or different networks.
  - Get multiple containers to start together and talk to each other.
- Systemd
  - Manage any service on Linux. If anything needs to start on boot, restart when it crashes, or start on a timer, you want Systemd.
  - You can manage your Docker Compose containers’ lifecycle via Systemd (a unit-file sketch appears a little further below).
- NGINX/Apache/Caddy
  - A web server for reverse proxying. You’ll probably need one at some point, especially if you want HTTPS. Your services get hidden behind it.
- ZFS
  - Reliable redundant storage. You’ll need storage. Use ZFS with 2-disk redundancy (a pool-creation sketch follows this list).
  - Supports automatic snapshots for recovering from oopsies, e.g. you deleted something or some software shat on your data.
  - Can use recertified disks from serverpartsdeals.
  - Can use USB disks or a USB box with multiple disks. If you end up going the USB route, ask me for tested hardware.
- Backup system
  - Something to do backups. There are many options.
- Ansible
  - If you want to write code that describes your services and makes them happen, you want Ansible. You write code (well, YAML) and Ansible installs things, writes config files, sets up Systemd services, and restarts things. It can be convenient, especially if you have a lot of stuff and want to be able to see all of your infrastructure as code in one place and version it.
- Prometheus
  - Monitoring your stuff. Is my backup service running? If not, send me an email.
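For the ZFS item, creating a small redundant pool and taking a snapshot looks roughly like this; a sketch only, with placeholder disk IDs and dataset names:

```bash
# 4-disk RAIDZ2 pool (any 2 disks can fail). Use /dev/disk/by-id paths, not /dev/sdX.
sudo zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

sudo zfs create tank/media                    # dataset for e.g. Jellyfin media
sudo zfs set compression=lz4 tank             # cheap and generally worth it
sudo zfs snapshot tank/media@before-reorg     # manual snapshot for oopsies
sudo zpool status                             # pool health / scrub results
# Tools like sanoid or zfs-auto-snapshot can take the snapshots automatically.
```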
Oh and use Debian or Ubuntu LTS.
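And for the Systemd + Docker Compose point, a minimal unit could look like the sketch below; the stack name and directory are placeholders, and it assumes the docker compose plugin lives at /usr/bin/docker:

```bash
# Hypothetical unit that brings a compose stack up at boot.
sudo tee /etc/systemd/system/jellyfin-stack.service >/dev/null <<'EOF'
[Unit]
Description=Jellyfin docker compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/stacks/jellyfin
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now jellyfin-stack.service
```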
Great summary!
Why Debian or Ubuntu? (I have my own thoughts, but it would be useful to show even high-level reasons why they’re preferred).
Re: Backup - Backblaze has a great writeup on backup approach today. I’m a fan of cloud being part of the mix (I use a combo of local replication and cloud, to mitigate different risks). Getting people to include backup from the start will help them long-term, so great you included it!
Predictable cadence, stable operation, timely updates, and a huge community and therefore documentation. You can get up to 5 years from an LTS release of Debian or Ubuntu. With Ubuntu LTS and Ubuntu Pro (free) you could theoretically run a machine without upgrading for 10 years. If you run workloads in containers, it doesn’t matter how old the host OS is. As long as it’s still getting security patches, you can keep on trucking.
Damn, 5 years from LTS? That’s impressive
Ansible is nice but I’ll repeat (as I said in another thread) it’s kind of advanced and gives a much better return on investment if you manage several hosts, plan to switch hosts regularly, or plan to do regular rebuilds of the environment.
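To show the flavour of it, a trivial playbook might look like this; the host alias, paths, and tasks are placeholders, sketching the idea rather than a recommended layout:

```bash
# site.yml, a hypothetical minimal playbook
cat > site.yml <<'EOF'
- hosts: homeserver            # placeholder host alias
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Create the stack directory
      ansible.builtin.file:
        path: /opt/stacks/jellyfin
        state: directory

    - name: Deploy the compose file
      ansible.builtin.copy:
        src: files/jellyfin-compose.yml
        dest: /opt/stacks/jellyfin/docker-compose.yml
EOF

ansible-playbook -i 'homeserver,' site.yml
```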
> If you end up going the USB route, ask me for tested hardware.
Send these my way chief
As briefly as possible:
- Host side
  - If you use Intel, all is well.
  - If you use AMD…
    - Prior to AM5
      - Use an ASMedia PCIe USB card (StarTech, Sonnet)
      - X570 is especially bad, though I’ve had some success with B350 when using the chipset ports. The CPU ports are all bad. Small form factor PCs often only expose CPU USB ports. They work with a single disk per port, but if you peg a port with a multi-disk box, they crap out regularly.
    - Post AM5
      - Have only tested USB4 on X870 and it’s solid.
- Client side
  - WD Elements / MyBook
    - If you get disconnects under load and you’re not on a shit AMD USB host, the USB-SATA controller is overheating. Open them up and adhere a heatsink to it, and drill a hole in the case above it for better ventilation. The disconnections will stop. If you don’t want to deal with any of that, buy the item below.
  - OWC Mercury Elite Pro Quad
    - Well built, solid controllers, no issues over a year of testing. I have two hosting an 8-disk RAIDz2 and two hosting a 5-disk RAIDz2.
  - Terramaster
    - A friend bought a 6-bay and tore it down for me. It has the same controllers as the OWC in a similar topology. If it’s cheaper it might be OK. I can vouch for the OWC, though.
  - Cables
    - Get name-brand cables, ideally higher spec than what you’d need! They aren’t important for a single USB disk, but running a 4-disk box can max out the port bandwidth. If the cable can’t handle it… errors. Casually transmitting 10 Gbps over easily detachable cables and ports isn’t trivial.
Much appreciated 🙏
Gnarly stuff with the WD’s huh? Unfortunately I think that’s what I’ll end up having to put up with since I can’t really find the other options for a decent price around here.
Funny enough, I was half-considering just using a bunch of WD Elements. You think the MyBooks might fare any better?
I used a mix of Elements and MyBooks for years. When I opened them up to add heatsinks, I didn’t see any significant differences between them. They use ASMedia or JMicron controllers, mostly ASMedia. The overheating issue depends on ambient temperature and load; I’ve had one machine in a basement never experience it. Either way, the solution is pretty straightforward and cheap. Once heatsinked, I haven’t had a problem.
The cables they come with are good.
Persistence and reading comprehension.
There’s no need to learn Python or any programming language to self host stuff, you just need to be able to follow blog posts and run some Docker commands.
I’m a software dev and haven’t touched a single line of code on my NAS. Everything is docker compose and other config files.
You don’t really need to know a specific language to self-host anything. But things like YAML, JSON, Docker, and some networking basics will go a long way.
If I could do anything different though, it would definitely be to write more documentation. Document the steps you take setting things up, log notes when you have to fix something, and archive the webpages and videos you used along the way. I’m doing that myself now after some time self-hosting.
One under-appreciated aspect of Docker is that it forces you to document all your setup steps in your dockerfile and docker-config files.
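To illustrate that, a compose file really does double as setup notes. A rough sketch for Jellyfin (paths are placeholders; check the image’s documentation for anything beyond the basics):

```bash
mkdir -p ~/stacks/jellyfin && cd ~/stacks/jellyfin
cat > docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"             # web UI
    volumes:
      - ./config:/config        # server settings and database
      - ./cache:/cache
      - /mnt/media:/media:ro    # placeholder path to your media library
    restart: unless-stopped
EOF
docker compose up -d
```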
Learning Linux is a great start.
Learning any coding language will help you understand a bit more about how the programs work, however there isn’t much need to actually learn a specific language unless you plan to add custom programs or scripts.
The general advice for email is don’t. It’s very risky to host and it’s a big target for spam. Plus there’s challenges getting the big companies to trust your domain.
However hosting things behind a VPN (or locally on your home network) can let you learn a lot about networking and firewalls without exposing yourself to much risk.
I have no direct experience with Nextcloud, but I understand it can be hosted on Linux; you can buy a Synology NAS and run it on that, or use something like TrueNAS.
Personally, my setup is on one physical server, so I use Proxmox, which lets me run two different Linux servers and TrueNAS on one single computer through virtual machines. I like it because it lets me tinker with different stuff like Home Assistant and it won’t affect, say, my adblocker/VPN/reverse proxy. I also use Docker to run multiple services on one virtual machine without compatibility issues. If I started again, I’d probably have gotten bigger drives or invested in SSDs. My NAS is hard drives because of cost, but it’s definitely hitting a limit when I need to pull a bunch of files. Super happy with wireguard-easy for VPN. I started with a proprietary version of OpenVPN on Oracle Linux and that was a mistake.
Is there a good way to not self host email yet maintain good control? Like storing it on a local device. I know that addresses are portable with a domain, but still.
I personally haven’t explored self hosting mail. This thread is a year old but might give you insight from people who have.
I’ve heard about using mailbox.org to do what you’re talking about. It seems the general consensus is that getting a clean IP, as mentioned in the thread linked above, is the biggest challenge.
Edit: mailbox isn’t what I was thinking of. I’ve definitely heard of services that let you self host half of it and just do the send/receive part.
I feel like objecting to the “General advice about email is don’t” thing but I don’t know if I understand the objections well enough to refute them. I self host email for mspencer.net (meaning all requests including DNS are served from hardware in my living space) and I have literally zero spam and can’t remember the last time I had to intervene on my mail server.
On one hand: My emails are received without issue by major providers (outlook, gmail, etc) and I get nearly zero spam. (Two spam senders were using legitimate email services, I reported them, and got human-seeming replies from administrators saying they would take care of it.) And I get amusing pflogsumm (summarizes postfix logs) emails daily showing like 5 emails delivered, 45 rejected, with all of the things that were tried but didn’t work.
On the other: most of the spam prevention comes from greylisting, making all new senders retry after a few minutes (because generally a legit MTA will retry while a spammer will not), and that delays most emails by a few minutes (the Postfix hook for this is sketched after this comment). And it was a bear to set up. I used an 18-or-so-step walkthrough on linuxbabe dot com, I think, but added some difficulty by storing some user and alias databases in OpenLDAP/slapd instead of in flat files.
But hey, unlimited mail aliases, and I’m thinking of configuring things so emails bounce if they seem to contain just a notification that terms and conditions are updated somewhere. I don’t know, cause some chaos I guess.
And I have no idea if my situation is persuasive for anyone because I don’t know what the general advice means. And I worry it’ll have the unfortunate side effect of making self hosting type nerds like me start forgetting how to run their own email, causing control of email to become more centralized. And I strongly dislike that.
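For anyone curious, the greylisting piece is usually just a policy service bolted onto Postfix. A sketch assuming the Debian/Ubuntu postgrey packaging defaults (port 10023); the restriction list here is deliberately minimal, not a complete anti-spam setup:

```bash
sudo apt install -y postgrey    # listens on 127.0.0.1:10023 by default on Debian/Ubuntu

# Ask Postfix to consult postgrey before accepting mail.
sudo postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, check_policy_service inet:127.0.0.1:10023'

sudo systemctl restart postfix
```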
Lots of people have been talking about products and tools: Docker, Tailscale, Cloudflare, Proxmox, etc. These are important, but will likely come and go on a long enough timescale.
In terms of actual skills, there are two that will dramatically decrease your headaches: documentation and backup planning. The problem with developing those skills is, to my knowledge, they’ve only ever been obtained through suffering. Trying to remember how to rebuild something you built 6 months ago is futile. Trying to recover borked data is brutal. There’s no fail-safe that you haven’t created, and there’s no history that you haven’t written. Fortunately, these are also the most transferable skills.
My advice is, jump in. Don’t hesitate. The chops in docker/linux/networking will come with use and familiarity. If it looks cool, do it. Make mistakes. You will rapidly realise what the problems with your set up are. You will gain knowledge in leaps and bounds from breaking a thing vs learning by rote or lesson. Reframe the headaches as a feature, not a bug - they’re highlighting holes in your understanding. They signpost the way to being a better tech, and a more stable production environment.
The greatest bit about self hosting for me is planning the next great leap forward, making it better, cleaner, more robust. Growing the confidence in your abilities to create a system you can trust. Honing your skills and toolset is the entirety of the exercise, so jump in, and don’t focus on any one thing to master or practice beforehand!
Networking is way more important than pretty much anything else. TCP/IP and http are going to stay for quite a while.
> if you could start again in your self hosting journey, what would you do differently? :)
That’s an excellent question.
If I were to start over, the first thing that I would do is start by learning the basics of networking and set up a VPN! IMO exposing services to the public internet should be considered more of an advanced level task. When you don’t know what you don’t know, it’s risky and frankly unnecessary.
The lowest barrier to entry for a personal VPN, by far, is Tailscale. Automatic internal DNS and clients for nearly any device make finding services on a dedicated machine really, really easy. Look into putting a Tailscale client right into the compose file so you automatically get an internal DNS record for a service rather than a whole machine (that pattern is sketched after this comment).
From there, play around with more ownership (work) with regard to what can touch your network. Switch from Tailscale’s “trusted” login to hosting your own Headscale instance. Add a PiHole or AdGuard exit node and set up your own internal DNS records.
Maybe even scrap the magic (someone else’s logic that may or may not be doing things you need) and go for a plain-Jane Wireguard setup.
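For the “Tailscale client right in the compose file” idea, the sidecar pattern looks roughly like the sketch below. The image options shown (TS_AUTHKEY, TS_STATE_DIR) are from memory of the official tailscale/tailscale image’s docs, so double-check them; the auth key, names, and the nginx stand-in app are placeholders:

```bash
cat > docker-compose.yml <<'EOF'
services:
  ts-notes:                          # Tailscale sidecar; its hostname becomes the MagicDNS name
    image: tailscale/tailscale
    hostname: notes
    environment:
      - TS_AUTHKEY=tskey-auth-PLACEHOLDER
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./ts-state:/var/lib/tailscale
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN

  notes-app:                         # placeholder app, reachable only via the tailnet
    image: nginx
    network_mode: service:ts-notes
    depends_on:
      - ts-notes
EOF
docker compose up -d
```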
For sure use Tailscale for VPN. They have apps for iPhone, Android, macOS, and Linux, so setting up your own personal network will be easy. Hosting on the real internet is definitely advanced and not always necessary.
- Docker: You can practice on your main computer before complicating things with networking.
- How to set up a reverse proxy: DNS, certificates, etc. I recommend Caddy (a Caddyfile sketch follows this list).
- Backups: If you use Docker Volumes, make sure you back those up too and test the backups.
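On the Caddy point, the appeal is how little config a reverse proxy with automatic HTTPS needs. A sketch, assuming DNS for the placeholder domains already points at your box and ports 80/443 are reachable; the upstream ports are examples:

```bash
# Caddy obtains and renews certificates automatically for these hosts.
sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}

nextcloud.example.com {
    reverse_proxy 127.0.0.1:8080    # wherever your Nextcloud container listens
}
EOF
sudo systemctl reload caddy
```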
To self-host, you do not need to know how to code.
> To self-host, you do not need to know how to code.
I agree, but I’d also say that learning enough to be able to write simple bash scripts may be required.
There’s always going to be stuff you want to automate and knowing enough bash to bang out a script that does what you want that you can drop into cron or systemd timers is probably a useful time investment.
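As a concrete example of that kind of script, here is a sketch of a nightly rsync job dropped into cron; the source, destination, and log paths are placeholders:

```bash
#!/usr/bin/env bash
# /usr/local/bin/nightly-backup.sh, a hypothetical example
set -euo pipefail

SRC=/srv/appdata                 # placeholder: what to back up
DEST=/mnt/backup/appdata         # placeholder: where it goes

rsync -a --delete "$SRC/" "$DEST/"
echo "$(date -Is) backup ok" >> /var/log/nightly-backup.log

# Install:   chmod +x /usr/local/bin/nightly-backup.sh
# Crontab:   0 3 * * * /usr/local/bin/nightly-backup.sh   (edit with: crontab -e)
```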
Take the time to properly understand Linux file ownership and permissions. Permissions will be the cause of many issues you will encounter in your self-hosting journey on Linux. Make sure you know the basics of `chmod` (change permissions) and `chown` (change ownership), plus Linux users and groups. This will save you some head-scratching, but don’t worry, you will learn by doing! Remember that, if you set up everything right, especially with Docker, running as root / with `sudo` is not required for any of the services you may want to run.
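If you want something concrete to practice with, these commands cover most of those permission headaches; the user, group, and path names are just examples:

```bash
id                                       # who am I, and which groups am I in?
ls -l /srv/media                         # read the owner/group/permission columns

sudo groupadd media                      # a shared group for media-touching services
sudo usermod -aG media alice             # add a user to it (re-login to take effect)

sudo chown -R alice:media /srv/media     # owner alice, group media, recursively
sudo chmod -R 750 /srv/media             # owner rwx, group r-x, others nothing
```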
Determination, patience, a willingness to learn anything you need to.
If you have those, in time, you will be able to get your lab up and running. I started mine with minimal knowledge of Linux (I could install it from a USB and poke around). Now it’s the center of my family’s digital life.
You’ll get there in time.