Dude, are you living in your company's server room?
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Lol - not quite. It sounds like a lot, but all of this runs on a couple of HP DL360s, a handful of Raspberry Pis, a nettop box, and a couple of consumer NASes.
"i swear it's not a lot"
Goes on to describe an infrastructure setup comparable to most medium-sized businesses
I love this community!
- 116 docker containers
- Running on 25 docker hosts
- 50 of those are the same two on every docker host - Watchtower and Portainer agent
- 38 Proxmox LXCs (19 are docker hosts)
- 8 physical servers
- 7 VLANs
- 5 SSIDs
- 2 NASes
And a partridge in a pear treeeee.
Lol - Merry Christmas, my anonymous friend. 🎅
When I read lists like this, I often wonder, what is this person doing with all these containers and such? Do they actually use all of them regularly?
I've got:
1 Proxmox machine serving:
- Openmediavault with 2 shares (Jellyfin, general SMB shares)
- Homeassistant
- Uptimekuma for monitoring
- Jellyfin
And some misc VMs for trying out things.
1 Pi 4B - pihole
1 Pi 3A+ - tailscale subnet router / exit node
I often look at lists of things I can host and think to myself "do I need this?". This brings me back to huge lists of services like this and my curiosity. Do folks actually interact with all these services regularly? Honest question, no shade intended.
Do folks actually interact with all these services regularly?
In my case, yep. I believe in as much separation between services as possible, so each service essentially resides on its own docker host, whether physical or Linux container.
That said, some of my services are stacks of multiple containers. For example, my DNS service is a pair of Pi-hole DNS servers, each running their own Pi-hole container, but each one also running containers for Cloudflare tunnel and telemetry export to Prometheus.
Immich has a stack of 6 containers, Piped a stack of 5. So, out of the 66 containers (that aren't Portainer agent or Watchtower), it probably condenses down to around half that number (e.g. the 25 docker hosts I have, plus a handful of others).
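For anyone curious what such a Pi-hole stack looks like in practice, here's a rough sketch as plain docker commands. The images are real public ones (pihole/pihole, cloudflare/cloudflared, ekofr/pihole-exporter), but the network name, ports, password, and tunnel token are placeholder assumptions, not the commenter's actual config:

```shell
# Sketch only: one Pi-hole "stack" with DNS, a Cloudflare tunnel,
# and a Prometheus exporter, all sharing a docker network.
docker network create pihole-net

docker run -d --name pihole --network pihole-net \
  -p 53:53/udp -p 53:53/tcp -p 8080:80 \
  -e TZ=Etc/UTC -e WEBPASSWORD=changeme \
  pihole/pihole:latest

# Outbound tunnel for remote access; the token comes from the Cloudflare dashboard.
docker run -d --name cloudflared --network pihole-net \
  cloudflare/cloudflared:latest tunnel run --token "$TUNNEL_TOKEN"

# Community Prometheus exporter that scrapes Pi-hole's API for telemetry.
docker run -d --name pihole-exporter --network pihole-net \
  -p 9617:9617 \
  -e PIHOLE_HOSTNAME=pihole -e PIHOLE_PASSWORD=changeme \
  ekofr/pihole-exporter:latest
```

Point Prometheus at port 9617 on each Pi-hole host and both servers show up in one dashboard.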
each service essentially resides on its own docker host, whether physical or Linux container.
This is the way. Multiple simple dedicated systems is so much easier to maintain than a single "do everything" server.
It's what docker and Proxmox were born to do!
It's not much, but I've got a little LG netbook with an Atom CPU and 2GB RAM running Pi-hole and Syncthing.
My starting point (with this incarnation of my homelab) was my Asrock ION330 nettop box. Then I discovered Raspberry Pis. Then I decided I needed a couple of HP DL360s. RIP my power bill.
One day when I'm all growed up I want to have a better setup. For now I've got what I absolutely need.
Yep - fair enough. Admittedly, my homelab is as much for professional development as it is home use, but pretty much everything gets used all the time.
How do people get to so many Docker containers before moving to Kubernetes? I only have 76 containers across 68 pods and that's far too much for me to manage in Docker.
Honestly, anything not mission critical (network/internet and home automation, mainly) gets auto-updated by Watchtower. I have Watchtower set to pull latest images of everything on a weekly basis, and specific containers that are set to monitor only. Every Saturday morning, I check the Slack channel for notifications of containers that need controlled updating.
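A sketch of that kind of setup using Watchtower's documented environment variables and labels; the schedule, image name, and Slack token path here are placeholder assumptions, not the commenter's actual config:

```shell
# Weekly pull via a 6-field cron expression: 06:00 every Saturday.
# Notifications go out through Watchtower's shoutrrr integration;
# the Slack token path in the URL is a placeholder (see the shoutrrr docs).
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_SCHEDULE="0 0 6 * * 6" \
  -e WATCHTOWER_NOTIFICATIONS=shoutrrr \
  -e WATCHTOWER_NOTIFICATION_URL="slack://token-a/token-b/token-c" \
  containrrr/watchtower

# Mission-critical containers get flagged so Watchtower only reports
# new images instead of auto-updating them:
docker run -d \
  --label com.centurylinklabs.watchtower.monitor-only="true" \
  example/critical-service
```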
Not really doing much docker, but a lot of LXC - everything scripted with Ansible. I define basic container metadata in a YAML file parsed by a custom inventory plugin, and that is sufficient for deploying a container before doing provisioning in it.
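A minimal sketch of what such a metadata file might look like. The schema and plugin name here are hypothetical illustrations, not the commenter's actual setup; the real field names are whatever the custom inventory plugin expects:

```shell
# Hypothetical per-container metadata for a custom Ansible inventory plugin.
# The plugin would turn each entry into an Ansible host, so a single playbook
# run can create the LXC on the right Proxmox node and then provision it.
cat > lxc-containers.yml <<'EOF'
plugin: my.homelab.lxc_inventory
containers:
  - name: pihole
    node: pve1
    template: debian-12-standard
    cores: 1
    memory: 512
  - name: docker-host-01
    node: pve2
    template: debian-12-standard
    cores: 4
    memory: 4096
EOF

# Quick sanity check: two containers defined.
grep -c 'name:' lxc-containers.yml   # prints 2
```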
You've got like a whole DC's worth of stuff. I've downscaled the hardware in my server a lot, but it's still just a single Threadripper 2970WX with 128 GB RAM, 50 TB of ZFS storage, and 50 TB of cloud-based object storage in a midtower case. I have like 20 containers running; one is a Caddy webserver which acts as a reverse proxy for all the others.
I love to do things to excess as much as the next geek, but I could never find a reason to run as much as you have.
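The "one Caddy in front of everything" pattern can be tried out with a single command per service; the domain and port below are placeholders, not the commenter's actual config (for 20 containers you'd normally list each service as a site block in a Caddyfile instead):

```shell
# Hypothetical quick test of Caddy's built-in reverse proxy:
# terminate TLS for one hostname and forward to a container's port.
caddy reverse-proxy --from jellyfin.example.com --to localhost:8096
```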
Honestly, it's because I like to play. I don't need PEAP auth for my wireless network, but I run a RADIUS server providing MAC and user auth anyway.
I hear ya, the answer to "why?" is usually "because I can" 😂
About 8 months ago I had 20x HDDs and 8x NVMe drives in my server, totaling 187 TB across three ZFS pools. I could write to the largest pool (2 RAIDZ1 striped vdevs, 6 drives wide) at 250 MB/sec and read from it at over 1 GB/sec, and that was from spinning rust with NVMe "special devices".
What was I doing with all of this? Pirating movies and TV shows and running a media server for my friends and family.
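For reference, the largest pool described (two striped RAIDZ1 vdevs, six drives wide, plus mirrored NVMe "special devices") would be built roughly like this; pool and device names are placeholders:

```shell
# Sketch only: 2 striped RAIDZ1 vdevs of 6 disks each, plus a mirrored NVMe
# "special" vdev that holds metadata, which is what speeds up reads on
# spinning rust. Real setups should use /dev/disk/by-id paths, not sdX names.
zpool create tank \
  raidz1 sda sdb sdc sdd sde sdf \
  raidz1 sdg sdh sdi sdj sdk sdl \
  special mirror nvme0n1 nvme1n1

# Optionally route small file blocks to the special vdev as well:
zfs set special_small_blocks=64K tank
```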
- 8 Hosts (6 physical/local, 2 VPS/remote)
- 72 Docker containers
- Pi-hole (3 of them, 2 local, 1 on a VPS)
- Orbital-sync (keeps the pi-holes synced up)
- Searxng (search engine)
- Kutt (URL shortener)
- LenPaste (Pastebin-like)
- Ladder (paywall bypass)
- Squoosh (Image converter, runs fully in browser but I like hosting it anyway)
- Paperless-ng (Document management)
- CryptPad (Secure E2EE office collaboration)
- Immich (Google Photos replacement)
- Audiobookplayer (Audiobook player)
- Calibre (Ebook management)
- NextCloud (Don't honestly use this one much these days)
- VaultWarden (Password/2FA/PassKey management)
- Memos (Like Google Keep)
- typehere (A simple scratchpad that stores in browser memory)
- librechat (Kind of like chatgpt except self-hosted and able to use your own models/api keys)
- Stable Diffusion (AI image generator)
- JellyFin (Video streaming)
- Matrix (E2EE Secure Chat provider)
- IRC (oldschool chat service)
- FireFlyIII (finance management)
- ActualBudget (another finance thing)
- TimeTagger (Time tracking/invoicing)
- Firefox Sync (Use my own server to handle syncing between browsers)
- LibreSpeed (A few instances, for speed testing my connection to the servers)
- Probably others I can't think of right now
Most of these I use at least regularly, quite a few I use constantly.
I can't imagine living without Searxng, VaultWarden, Immich, JellyFin, and CryptPad.
I also wouldn't want to go back to using the free ad-supported services out there for things like memos, kutt, and lenpaste.
Also, I think librechat is underappreciated. Even just using it for GPT with an API key is infinitely better for your privacy than using the free ChatGPT service that collects/owns all your data.
But it's also great for using gpt4 to generate an image prompt, sending it through a prompt refiner, and then sending it to Stable Diffusion to generate an image, all via a single self-hosted interface.
- 33 nomad jobs, most being containers
- 12 physical nomad clients
- 3 amd64 poweredge
- 2 pi4
- 6 Nano Pi r5c
- 1 odroid M1
- Ceph: (nomad orchestrated)
- 8 OSD
- 50TB total raw disk
Ah - I've been meaning to look into Nomad. I have plenty of admiration for Hashicorp's products. How are you finding it?
At my day job, we took a look at nomad and now we are planning to run everything in nomad. It's just so simple to understand and a joy to use.
I believe they changed some of their licensing from the fallout of their IPO. Just worth noting for the selfhosting crowd. I know terraform is being forked entirely, but I'm unfamiliar with the specifics beyond that.
I have a NAS and it runs deluge to download torrents, and hosts two very basic websites.
I don't have a homelab (space constraints), but I do have 2 VPSes that I use to host, in total, 13 docker containers, a mail server, and an XMPP server.
Edit: My lemmy server is also hosted on them.
What I'm more interested in is: what is it that you self-host to have so many docker containers?
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
Fewer Letters | More Letters |
---|---|
AP | WiFi Access Point |
DNS | Domain Name Service/System |
ESXi | VMware virtual machine hypervisor |
Git | Popular version control system, primarily for code |
HTTP | Hypertext Transfer Protocol, the Web |
LVM | (Linux) Logical Volume Manager for filesystem mapping |
LXC | Linux Containers |
MQTT | Message Queue Telemetry Transport point-to-point networking |
NAS | Network-Attached Storage |
NUC | Next Unit of Computing brand of Intel small computers |
PSU | Power Supply Unit |
PiHole | Network-wide ad-blocker (DNS sinkhole) |
Plex | Brand of media server package |
PoE | Power over Ethernet |
RAID | Redundant Array of Independent Disks for mass storage |
SSO | Single Sign-On |
Unifi | Ubiquiti WiFi hardware brand |
VPN | Virtual Private Network |
VPS | Virtual Private Server (opposed to shared hosting) |
ZFS | Solaris/Linux filesystem focusing on data integrity |
nginx | Popular HTTP server |
20 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.
[Thread #370 for this sub, first seen 24th Dec 2023, 07:35] [FAQ] [Full list] [Contact] [Source code]
A single SFF desktop setup in a Node306. 2700x, 32 GB RAM, Arc A380, some WD reds.
- Homeassistant & associated packages for esphome and Zwave stuff
- Jellyfin
- *arr suite + transmission
- yacht
- uptimekuma
- paperless
- immich
- authelia with OIDC SSO for containers where possible
- traefik for reverse proxy
- Nextcloud
- valheim server
- boinc in the winter
- syncthing for phone sync
- more services for keeping up the others
Soon a pihole to come.
I want to expand my smart home setup. My project this spring is integrating my smart gas and electric meters into Homeassistant. We are completely stripping the house, so I am wiring up everything with KNX, with a few Z-Wave devices where needed. Greatly expanding the smartish home.
I also have to set up a proper network. Right now I am using my Proximus Internet Box from the ISP which admittedly is pretty customizable.
I've got one headless cheap desktop PC sitting under my desk.
Currently 3 physical boxes, down from 4 and aiming for 2. It pretty well comes down to a hypervisor and a NAS, plus the regular aux gear like a switch and modem. They're big boxes though, with about 35 TB storage, 0.5 TB RAM, and 72 cores between them, so lots of space to make imaginary computers in.
Right now my goal is reducing the power footprint. Kill-a-watt places the whole set at 650 watts today and I should knock about 150 off when I get the other box virtualized.
Nice - have you got anything set up to monitor power consumption? I've got a few of those "smart" plugs running on Tuya (localised through Home Assistant) but I'm not 100% convinced of their accuracy just yet...
Just the kill-a-watt plug that the main power block is attached to. The servers have stats visible via the IDRAC (R730XD & R820) to break out for those, but nothing that shows a dashboard or such.
I've found the HP iLOs to be really unreliable for viewing across the network. Something I've been meaning to look into...
I'm able to get a lot of gear secondhand through my job, so I've got:
One 2U Intel server running Proxmox in a 'cluster' (circa 2013ish; added RAM and upgraded the CPU/storage.)
One Intel NUC with a 7th-gen i7 as the other host in the cluster - only one VM is set to fail over between the two if needed.
VMs:
- Plex
- 2x PiHoles (one of these is the failover VM) (these also have a few docker containers like Uptime Kuma.)
- Windows arr box (I know it's blasphemy but I felt more comfortable doing that stuff in windows)
- anything else I want to mess with because the server really doesn't run that hard.
Network:
- Sonicwall TZ 300 (incl a perpetual VPN license)
- Unifi 24 port switch (it's gigabit and PoE but doesn't output enough power for the...)
- single Unifi AP.
All acquired over the last couple years for the low low price of "it was going into the trash anyway"
Dang, how does your ISP feel about that many machines talking out to the internet? Have they made you pay for business plans yet?
Lol - I'm on unlimited 1Gbps fibre here. So far, they haven't raised any concerns.
That's awesome, best of luck it stays that way!
I have a very modest 7 docker containers on a vm on my gaming rig and I have a raspberry pi for my DNS server. Honestly my setup is quite scuffed (in comparison to yours), but it does what I need it to do
Mine's pretty moderate in comparison to yours lol
- 2 cloud VPSes
- 2 physical locations
- 4 physical servers
- ~20-30 docker containers across the servers
- 3 VMs
- 3 managed switches
- 5 VLANs (2 with internet access)
- 2 SSIDs
Old laptop, Debian with docker running nextcloud, navidrome, jellyfin, gitea, librespeed, wireguard, dnsmasq, and nginx as a reverse proxy.
One laptop, 2 SSDs, 4 Proxmox LXCs, 3 docker containers, 2 routers.
- 3x DL360 G8 ESXi (86 GHz/512 GB RAM)
- 1 DL380G8 TrueNAS
- 1 DL360G7 Veeam
- Dell n5070 Extended PVE Sophos UTM
- 48 Port Catalyst rack switch
- Cisco 2921
- Fibre Channel / iSCSI
50+ VMs and containers:
- VMware ESXi, vCenter, VMware Log Insight, VMware OPS
- DMVPN to remote locations like a desk switch at work and family member houses
- Sophos UTM
- Active Directory for my home computers
- hybrid sync to MS Entra (Azure Active Directory) with Entra Connect
- hybrid Exchange on Premise and Exchange online
- Active Directory for management network
- Security Onion VMs for IDS
- Network monitoring like Elastiflow, PRTG
- Docker, gitlab, OpenSalt / Saltstack
- Trellix ePO for AV
- Nessus vuln scanners
- Team Awareness Kit (TAK) server
- Active Directory Certificate Services
- Home media applications
These things are mostly to maintain familiarity and documentation development. I write off the cost of electricity as continuing education and professional development. More enterprise than some enterprises.
I've pared mine down a lot. The biggest hurdle for me has been storage.
It used to be 5 2u servers running a ceph cluster, but that got to be expensive and unruly.
Now it's mainly a small half depth supermicro for my firewall, a half depth supermicro for home assistant, a 2u Dell for unraid, and a small NAS.
Unraid houses Plex and the *arrs. Along with a handful of other useful services like immich.
I do colo a 1U HP though that houses my PBX, web server, Unifi controller, Jira server, Nextcloud, email, and a bunch of other services that I run.
Now, I've got a lot of spare hardware though: 7 Dell 1U servers, 2 Dell 2U, a Supermicro 3U, an HP 2U, and a bunch of thin clients that I might turn into replacements for my Rokus.
2 Raspberry Pi 4s with a few services running (some directly, some via docker): Pi-hole, PiAlert, GitLab, PlantUML, Munin, restic REST server, a Jupyter instance, and Airsonic-Advanced. And an old Synology NAS which serves as document and media server.