Selfhosted

45966 readers
1284 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues with the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1
 
 

First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router, but it's a computer that sits upstream of my whole network and has two ethernet ports. Any suggestions on this? Ideal amount of RAM? Ideal processor/speed? I have fiber internet, 10 Gbps up and 10 Gbps down, so I'm willing to spend a little more on higher-bandwidth components. I'm assuming I won't need a GPU.

Anyway, has anyone had a chance to look at his guide? It's accompanied by two YouTube videos that are about 7 hours each.

I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that but those are my main interests.

Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWrt seems to be the most frequently recommended option, so I'm looking into that now. Unfortunately, my current router/AP (Asus AX6600) is not supported. I was hoping not to have to replace it; it was kinda pricey, and I got it when I upgraded to fiber since it can do 6.6 Gbps. I'm currently looking into devices I can put upstream of my current hardware, but I might have to bite the bullet and replace it.

Edit 2: This is looking pretty good right now.

2
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
 
 

You can find screenshots on this page: https://docs.endurain.com/gallery/

4
 
 

Unfortunate news for those of us who have been following this podcast; it's been a very entertaining and educational show, and unfortunately it ends in three episodes. Here are the podcast details for those who want to hear about it - the announcement is at the beginning of the episode.


Self-Hosted: 147: The Problem with Game Streaming

Episode webpage: https://selfhosted.show/147

Media file: https://aphid.fireside.fm/d/1437767933/7296e34a-2697-479a-adfb-ad32329dd0b0/431317f3-db02-48b3-a9c6-3cb43108daf9.mp3

5
 
 

In my journey to self-hosting and de-Googling, one thing I've missed is being able to access my phone on a computer. Is there a self-hosted solution that allows syncing text messages to a PC/web interface?

I don't necessarily need sophisticated features like customer management or automation. I just want to access my messages from another device and, of course, have a server-based backup. The ability to reply to messages from the computer is a plus but not necessary. Is there a good option for this?

6
7
 
 

Hey all. I'm hosting a Docmost server for myself and some friends. Now, before everyone shouts "VPN!" at me, I specifically want help with this problem. Think of it as a learning experience.

The problem I have is that the Docmost server is accessible over the internet and everyone can log on and use it; it's working fine. But when I try to access it over the LAN, it won't let me log in, and from what I've read I'm 99% sure it's related to SSL certs over the LAN.

Here's the point I've gotten to with my own reading on this and I'm just stumped now:

I've got an UNRAID server hosted at 192.168.1.80 - on this server, there are a number of services running in docker containers. One of these services is Nginx Proxy Manager, and it handles all my reverse proxying. This is all working correctly.

I could not for the life of me get Docmost working as a docker container on UNRAID, so instead I spun up a VM and installed it there. That's hosted at 192.168.1.85, and NPM points to it when you access docmost.example.com - that's all dandy.

Then, I installed Adguard Home in a docker container on my UNRAID server. I pointed my router at Adguard as a DNS server, and it seems to me that it's working fine. Internet's not broken and Adguard Home is reporting queries and blocks and all that good stuff. So that's all still working as it should, as far as I'm aware.

So, in Adguard Home I make a DNS Rewrite entry. I tell it to point docmost.example.com to 192.168.1.80, where NPM should be listening for traffic and reverse proxy me to the Docmost server... at least I thought that's what should happen, but actually nothing happens. I get a connection timed out error.
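For reference, this is roughly how I've been poking at it from a LAN machine (same domain and IPs as above):

nslookup docmost.example.com
# should return 192.168.1.80 if the AdGuard rewrite is actually being used

curl -vk --resolve docmost.example.com:443:192.168.1.80 https://docmost.example.com
# talks straight to NPM with the right SNI, bypassing DNS entirely

If the first command returns the public IP instead of 192.168.1.80, the client is bypassing AdGuard (a hard-coded DNS server on the device, or DNS-over-HTTPS in the browser, for example), and without hairpin NAT on the router that would look exactly like a connection timeout.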

I'm still pretty new to a lot of this stuff and have tried to figure out a lot of things on my own, but at this point I feel stuck. Does anyone have advice or tips on how I can get this domain to resolve locally with certs?

I can provide more info if needed.

Cheers all!

8
 
 

I've got forgejo configured and running as a custom docker app, but I've noticed there's a community app available now. I like using the community apps when available since I can keep them updated more easily than having to check/update image tags.

Making the switch would mean migrating from SQLite to Postgres, plus some amount of file restructuring. It'll also tie my setup to TrueNAS, which is a platform I like, but after being bitten by TrueCharts I'm nervous about getting too attached to any platform.

Has anyone made a similar migration and can give suggestions? All I know about the Postgres config is where the data is stored, so I'm not even sure how I'd connect to import anything. Is there a better way to get notified about / apply container image updates for custom apps instead?

9
 
 

Synology's telegraphed moves toward a contained ecosystem and seemingly vertical integration are certain to rankle some of its biggest fans, who likely enjoy doing their own system building, shopping, and assembly for the perfect amount of storage. "Pro-sumers," homelab enthusiasts, and those with just a lot of stuff to store at home, or in a small business, previously had a good reason to buy one Synology device every so many years, then stick into them whatever drives they happened to have or acquired at their desired prices. Synology's stated needs for efficient support of drive arrays may be more defensible at the enterprise level, but as it gets closer to the home level, it suggests a different kind of optimization.

10
 
 

Hi guys! What's the status of the Sipeed NanoKVM FOSS image? I was subscribed to the thread, and I even saw Jeff Geerling's comments. Eventually they claimed the whole image was open source, and left it at that. If you go to their GitHub now, the last published image is from February, v1.4.0. But everyone talks about the latest update, 2.2.5? In fact, if I connect my NanoKVM, it does detect that update, but I don't think it's the fully open-sourced version? Is this correct?

Can anyone provide a bit more detail on what's going on? Should I manually flash the v1.4 image that you can download from the repo? And if so... how do I do it?

Thanks!

11
 
 

My current picks are Woodpecker CI and Forgejo runners. Anything else that's lightweight and easy to manage?

12
 
 

I've been listening to the Fedora podcast, and it seems like the OCI images are now getting some serious attention.

Anyone using the Fedora base image to make custom containers to deploy Nextcloud, Caddy and other services? My thought is that Fedora focuses on security so in theory software packaged with it will be secure and properly configured by default. Having Fedora in the middle will also theoretically protect against hostile changes upstream. The downside is that the image is a little big but I think it is manageable.
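For context, a build along these lines is what I have in mind - just a minimal sketch, assuming caddy is available in the Fedora repos (it has been in recent releases):

podman build -t fedora-caddy -f - . <<'EOF'
FROM registry.fedoraproject.org/fedora:latest
RUN dnf install -y caddy && dnf clean all
EXPOSE 80 443
CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile"]
EOF

The Caddyfile path is just the packaged default; in practice you'd bind-mount your own config over it at run time.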

Anyone else use Fedora?

13
 
 

Long story short, my Lidarr instance wasn't creating Album folders and was just dropping all the media into the Artist folder after importing. I noticed this because my Jellyfin instance was improperly displaying Albums as Playlists and not getting metadata as intended.

This is quite unfortunate, as a lot of content was downloaded and not properly organized. I tried going at it manually but quickly realized how much media was just loosely tossed around.

Is there any way to force Lidarr to re-manage media that has already been imported, or even a docker image designed specifically for media management that I could quickly spin up?

Edit: I believe I've already fixed the root cause of my issue above; I just need to figure out a logical way of dealing with the content that is already messed up.

14
 
 

MAZANOKE is a simple image optimizer that runs in your browser, works offline, and keeps your images private without ever leaving your device.

Created for everyday people and designed to be easily shared with family and friends, it serves as an alternative to questionable "free" online tools.

See how you can easily self-host it here:
https://github.com/civilblur/mazanoke


Highlights from v1.1.0 (view full release note)

I'm delighted to present some much-requested features in this release, including support for HEIC file conversion!

  • Added support to convert HEIC, AVIF, JPG, PNG, WebP.
  • Paste image/files from clipboard to start optimization.
  • When setting a file size limit, you can switch between units MB and KB.
  • Remember last-used settings, stored locally in the browser.

The support from the community has been incredibly encouraging, and with over 4,500 Docker pulls, the project is now humbly making its way toward the 500-star milestone.

The project also received its first donation, which I'm incredibly grateful for!

15
 
 

cross-posted from: https://reddthat.com/post/39309359

I've been running Home Assistant for three years. It's port-forwarded on the default port 8123 via a reverse proxy in a dedicated VM, served over HTTPS, and accessible over IPv4 and IPv6. All user accounts have MFA enabled.

I see a notification every time there's a failed login attempt, but every single one is either me or someone in my house. I've never seen a notification for any other attempts from the internet. Not a single one.

Is this normal? Or am I missing something? I expected it to be hammered with random failed logins.
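In case it matters, this is how I've been verifying reachability from outside the LAN (run from a phone hotspot, with placeholders here for my real domain and public IP):

curl -kI https://ha.example.com:8123
# or, against the raw public address:
nmap -Pn -p 8123 203.0.113.10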

16
 
 

Hello folks,

I got my static IP and I'm very happy now; I've been hosting a lot of services since I got it. However, I still have to host a fediverse service, and that's not so easy. I tried to host GoToSocial, but the devs said they don't support Podman, and my server is Podman-only (I tried installing Docker, but it was failing for some reason, so I gave up and used Podman instead).

These are the services I am currently hosting (basically all the easy services you can host with just "docker compose up -d" :p):

Feel free to suggest some other cool services I can host :D

17
 
 

I'm working on a project to back up my family photos from TrueNAS to Blu-ray disks. I have other, more traditional backups based on restic and zfs send/receive, but I don't like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can't be ransomwared and that I can't screw up once created.

The dataset is currently about 2TB, and we're adding about 200GB per year. It's a lot of disks, but manageably so. I've purchased good quality 50GB blank disks and a burner, as well as a nice box and some silica gel packs to keep them cool, dark, dry, and generally protected. I'll be making one big initial backup, and then I'll run incremental backups ~monthly to capture new photos and edits to existing ones, at which time I'll also spot-check a disk or two for read errors using DVDisaster. I'm hoping to get 10 years out of this arrangement, though longer is of course better.

I've got most of the pieces worked out, but the last big question I need to answer is which software I will actually use to create the archive files. I've narrowed it down to two options: dar and bog-standard gnu tar. Both can create multipart, incremental backups, which is the core capability I need.
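For reference (pros and cons below), the tar side of it would look roughly like this - a sketch only, with example paths and the volume size padded down a bit from 50GB:

tar --create --multi-volume --tape-length=44G \
    --listed-incremental=photos-level0.snar \
    --file=photos-full.tar /mnt/photos
# tar pauses at each volume boundary; answer the prompt with
# "n photos-full-02.tar" (and so on), or automate it with -F

# next month: tar updates the snar in place, so copy the level-0 snar
# and feed the copy to the incremental run
cp photos-level0.snar photos-level1.snar
tar --create --multi-volume --tape-length=44G \
    --listed-incremental=photos-level1.snar \
    --file=photos-incr1.tar /mnt/photos

(I think dar's equivalent is -c plus -s for slice size and -A to reference the previous backup's catalogue.)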

Dar Advantages (that I care about):

  • This is exactly what it's designed to do.
  • It can detect and tolerate data corruption. (I'll be adding ECC data to the disks using DVDisaster, but defense in depth is nice.)
  • More robust file change detection; it appears to be hash-based?
  • It allows me to create a database I can use to locate and restore individual files without searching through many disks.

Dar disadvantages:

  • It appears to be a pretty obscure, generally inactive project. The documentation looks straight out of the early 2000s, and the site doesn't even have HTTPS. I worry it will go offline, or that I'll run into some weird bug that ruins the show.
  • Doesn't detect renames. Will back up a whole new copy. (Problematic if I get to reorganizing)
  • I can't find a maintained GUI project for it, and my wife ain't about to learn a CLI. Would be nice if I'm not the only person in the world who could get photos off of these disks.

Tar Advantages (that I care about):

  • battle-tested, reliable, not going anywhere
  • It's already installed on every single Linux and Mac PC, and it's trivial to put on a Windows PC.
  • Correctly detects renames, does not create new copies.
  • There are maintained GUIs available; non-nerds may be able to access it.

Tar disadvantages:

  • I don't see an easy way to locate individual files, beyond grepping through snar metadata files (that aren't really meant for that).
  • The file change detection logic makes me nervous - it appears to be based on modification time and inode numbers. The photos are in a ZFS dataset on TrueNAS, mounted on my local machine via SMB. I don't even know what an inode number is - how can I be sure they won't change somehow? Am I stuck with this exact NAS setup until I'm ready to make a whole new base backup? This many Blu-rays aren't cheap, and burning them will take a while; I don't want to do it unnecessarily.

I'm genuinely conflicted, but I'm leaning towards dar. Does anyone else have any experience with this sort of thing? Is there another option I'm missing? Any input is greatly appreciated!

18
 
 

With the release of the 1.5 series of 42links (first announced here), my own approach to writing a bookmark collector has finally surpassed the functionality of its inspiration, Espial: as you can see in the screenshot, deleting multiple links at the same time right from the index page is now possible. 🎉

I have been using 42links myself almost every day and I think I have now found and fixed the most embarrassing shortcomings. I would still very much welcome more users contributing ideas and/or people contributing code. :-)

19
 
 

I've been using Kopia to back up my Windows work machine, my Linux personal computer, and my wife's MacBook.

Right now, it is just backing up to my NAS, but I would like to have it back up to a cloud solution as well.

I figured I would get some S3 storage somewhere and point Kopia at that to make the backup. I do not need a lot of space; I think 500GB would be enough. I do not want costs to be too high.
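If it helps, the part I've sketched out so far looks like this (bucket, endpoint, and keys are placeholders):

kopia repository create s3 \
  --bucket=my-kopia-backups \
  --endpoint=s3.example.com \
  --access-key=PLACEHOLDER \
  --secret-access-key=PLACEHOLDER

I've also seen that Kopia has a repository sync-to s3 subcommand, which might let me mirror the existing NAS repository into the bucket instead of snapshotting everything twice - I'm not sure yet which approach is better.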

Do I have the right plan, or is there a better option?

Thanks in advance.

20
 
 

Edit: it seems like my explanation turned out to be too confusing. In simple terms, my topology would look something like this:

I would have a reverse proxy hosted in front of multiple git server instances (let's take 5 for now). When a client performs an action, like pulling from or pushing to a repo, it would go through the reverse proxy to one of the 5 instances. The changes would then be synced from that instance to the rest, achieving a highly available architecture.

Basically, I want a highly available git server. Is this possible?
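To give an idea of the sync step I mean, the crude version, assuming plain bare repos reachable over SSH (hostnames and paths here are made up), would be a post-receive hook on whichever instance took the write:

#!/bin/sh
# hooks/post-receive on the bare repo that received the push
for replica in git2.internal git3.internal git4.internal git5.internal; do
    git push --mirror "ssh://git@$replica/srv/git/myrepo.git" \
        || echo "sync to $replica failed" >&2
done

I know this doesn't handle two instances accepting writes for the same repo at once, which I assume is exactly the hard part Spokes solves with its replication layer.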


I have been reading GitHub's blog on Spokes, their distributed system for Git. It's a great idea except I can't find where I can pull and self-host it from.

Any ideas on how I can run a distributed cluster of Git servers? I'd like to run it in 3+ VMs + a VPS in the cloud so if something dies I still have a git server running somewhere to pull from.

Thanks

21
 
 

This is a story about something that happened just now. I'm not sure if this is the right place to post it, so I'm sorry in advance if it's not!

I self-host some services on an old laptop at home. Mainly Jellyfin and Nextcloud, which I use to text with some close friends.

I left this morning to spend some days with my parents, and I jokingly told one of my friends that "I hope nothing bad happens to the server, since I'll be gone for a week and I won't have physical access to it".

I've had problems with power cuts in the past, since I don't have a UPS (and my laptop's battery is dead), but they were mostly due to a faulty power connector that has since been replaced, so I wasn't expecting any weird stuff to happen. My IP is dynamic, but I run a cron script to regularly check it and change the DNS records if it changes. So, I was pretty sure everything would actually be fine.

But if you've read the title of the post, you probably know where this is going.

I've used Let's Encrypt SSL certificates in the past with Nginx Proxy Manager, and it was great! They automatically got renewed, so I didn't really have to pay attention to that. Except after a year or so, they just stopped working. Nginx Proxy Manager gives me a nondescript error when trying to connect to my domain registrar to create a new certificate, and after trying - and failing - to fix it, I decided to just use the SSL certificates my domain registrar provides.

That worked great! The only problem is they don't automatically update anymore; it just takes me 5 minutes to update them and I only have to do it once every 3-4 months, so it's fine...

A couple hours ago, I was trying to send a meme to my friend via nextcloud and... Failed to establish connection

panic.jpg

I try to open sonarr on my web browser. I get an EXPIRED_CERTIFICATE error. Date of today. Oh no.

You'll be thinking "What's the problem?", right? "Just update the certificate again!" Well, the problem is I need access to nginx proxy manager to do that. And I don't have its port forwarded (since I didn't want to expose it to the internet, because I didn't think I needed to).

I thought that was it. I was going to have to wait for a week until I got back home to fix it. But I still had ssh access to the server!

Yes, I know, this is probably a very bad idea; don't expose your services and your SSH to the internet without a VPN like Tailscale. But to be fair, I don't know what I'm doing! At least I use a nonstandard port, and I use cert login instead of a password.

At first I tried replacing the cert files, but I realized that wasn't going to work. So I decided to do some ~~googling~~ web searching, and thankfully I found exactly what I needed: SSH tunneling.

What does that mean? Well, for the people like me that had no idea this was possible: you can use your SSH connection as a tunnel to access the server's local network (kind of like a vpn?). So I used the command:

ssh -NL LOCAL_PORT:DESTINATION:DESTINATION_PORT USER@SSH_SERVER -p SSH_PORT

I typed localhost:DESTINATION_PORT on my web browser... and nothing happened.

"Oops, actually it's localhost:LOCAL_PORT"

And... BAM! There it was, the nginx web interface! I typed my credentials, created a new cert, uploaded the cert files, changed the cert for all the services... and it worked! Crisis averted.
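For anyone wanting to do the same, a concrete version would look something like this (ports here are just examples; NPM's admin UI listens on port 81 by default):

ssh -NL 8181:localhost:81 myuser@my.domain.example -p 2222
# -N: no remote command, just the tunnel
# -L: local port 8181 goes to port 81 as seen from the server

Then http://localhost:8181 in the local browser brings up the NPM login page.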

So, what did I learn from this? Well, that my server is never safe from failing to work lol. But I won this time!

22
 
 

Just found this in a box, and according to some Googling, its TDP is 6.8W!!! Got Debian on there with LXDE, but I don't need another laptop. The big drawback is that it has a 32-bit processor. It has a 100Mbit network port, USB 2.0, 2GB RAM, and WiFi which isn't working but is listed in ip a.

I've used it to add wireless capabilities to my ancient Brother laser printer, but it was extremely slow (15 mins before a text page started printing, PER PAGE).

23
 
 

I am currently using NPM as my reverse proxy. It runs on a Raspberry Pi which also runs Pi-hole. I have a separate server for other, non-internet-critical systems.

So local DNS mappings point a subdomain to the Pi's IP, and then nginx points to the correct device and port.

I am wondering if Traefik works the same way. Can I run Traefik on the Pi, then point my other server at it? (I believe Caddy doesn't allow this.)
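From what I've read, Traefik can do this through its file provider; here's a rough sketch of the dynamic config (the directory has to match whatever providers.file.directory is set to in the static config, and the names and IPs are placeholders):

cat > /etc/traefik/dynamic/other-server.yml <<'EOF'
http:
  routers:
    myapp:
      rule: "Host(`myapp.example.com`)"
      service: myapp
  services:
    myapp:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:8080"
EOF

So the Pi would terminate the request and proxy it to the other server by IP, much like NPM does now - but I'd appreciate confirmation from someone actually running it this way.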

24
 
 

Hello,

I have hosted AzuraCast on my mini PC, and I want to forward the IP of the song requester. Right now it's only seeing one IP, the "podman container IP", so basically AzuraCast thinks that every request is coming from the IP address 10.89.1.1, which is the IP of the interface created by Podman.

57: podman3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:fa:6d:33:b9:39 brd ff:ff:ff:ff:ff:ff
    inet 10.89.1.1/24 brd 10.89.1.255 scope global podman3
       valid_lft forever preferred_lft forever
    inet6 fe80::b876:abff:fede:c3ef/64 scope link
       valid_lft forever preferred_lft forever

Also, I am explicitly forwarding the IP using X-Forwarded-Host:

reverse_proxy http://localhost:4000/ {
    header_up X-Forwarded-Host {host}
}

I don't know how to resolve it, any help would be appreciated :)

Edit: I didn't have to do any of this stuff; what I should have done is just enable the "reverse proxy" option in AzuraCast, since Caddy forwards those headers by default, unlike nginx. Thank you very much for helping me <3

25
 
 

As the title says, conduwuit has been forked as Tuwunnel, which is labelled as the "successor with stable governance".

Love open source! Glad to see real matrix server alternatives keep pushing.

I'll switch to it as soon as it's available. It will, of course, be 100% upgradeable from conduwuit.
