1
187

Due to the large number of reports we've received about recent posts, we've added Rule 7 stating "No low-effort posts. This is subjective and will largely be determined by the community member reports."

In general, we allow a post's fate to be determined by the amount of downvotes it receives. Sometimes, a post is so offensive to the community that removal seems appropriate. This new rule now allows such action to be taken.

We expect to fine-tune this approach as time goes on. Your patience is appreciated.

2
363
submitted 2 years ago* (last edited 2 years ago) by devve@lemmy.world to c/selfhosted@lemmy.world

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
17
4
21
submitted 4 hours ago* (last edited 4 hours ago) by Alfredolin@sopuli.xyz to c/selfhosted@lemmy.world

To the people here that host a synapse server, how do you handle registration?

Do you use the new Matrix Authentication Service? How does that work?

If not, registration works via Element Web, where you can require a captcha to avoid a bot swarm. However, the only captcha method accepted in the Synapse config is reCAPTCHA. Have you read the news? Well, we will have to change the captcha method. I think I read somewhere it was possible to use hCaptcha with Element Web, but the setting does not exist in Synapse, or I did not find it.

How do you all handle this?
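For reference, these are the registration-captcha keys from Synapse's homeserver.yaml. The values below are placeholders; pointing `recaptcha_siteverify_api` at hCaptcha's verify endpoint is an unverified workaround (hCaptcha advertises response-format compatibility with reCAPTCHA's siteverify API), not something I can confirm works end-to-end with Element Web:

```yaml
# homeserver.yaml - registration captcha settings (key names from Synapse's
# documented config; all values here are placeholders)
enable_registration: true
enable_registration_captcha: true
recaptcha_public_key: "YOUR_SITE_KEY"
recaptcha_private_key: "YOUR_SECRET_KEY"
# Synapse lets you override the verification endpoint. Pointing it at
# hCaptcha's siteverify API is an untested workaround:
recaptcha_siteverify_api: "https://hcaptcha.com/siteverify"
```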

5
9
submitted 6 hours ago* (last edited 6 hours ago) by Droopy@programming.dev to c/selfhosted@lemmy.world

GITHUB

hister.org

histerdocker

This is the config I used.


6
69
submitted 1 day ago by tanka@lemmy.ml to c/selfhosted@lemmy.world

So it's my first time setting up a VPS. Is banning 54 IPs over a 12-hour span to be expected? The real question for me is whether this is normal or too much.

$ sudo fail2ban-client status sshd
Status for the jail: sshd
|- Filter
|  |- Currently failed: 3
|  |- Total failed:     586
|  `- Journal matches:  _SYSTEMD_UNIT=ssh.service + _COMM=sshd
`- Actions
   |- Currently banned: 51
   |- Total banned:     54
   `- Banned IP list:   [list of IPs]

fail2ban sshd.conf

$ sudo cat /etc/fail2ban/jail.d/sshd.conf 
[sshd]
enabled = true
mode = aggressive
port = ssh
backend = systemd
maxretry = 3
findtime = 600
bantime = 86400

I have disabled SSH password login and only allow key-based authentication.

$ sudo sshd -T | grep -E -i 'ChallengeResponseAuthentication|PasswordAuthentication|UsePAM|PermitRootLogin'
usepam no
permitrootlogin no
passwordauthentication no
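Those numbers look typical for a public IPv4 SSH port, and with password auth off the bans are mostly noise. One thing worth adding so you never lock yourself out, sketched here as additions to the same jail config (the IP is a placeholder; the `recidive` jail is fail2ban's built-in repeat-offender jail):

```ini
# /etc/fail2ban/jail.d/sshd.conf - possible additions (example values)
[sshd]
# never ban your own static IP / trusted ranges
ignoreip = 127.0.0.1/8 ::1 203.0.113.10

# escalate repeat offenders: a week-long ban after 3 bans within a day
[recidive]
enabled  = true
findtime = 86400
bantime  = 604800
maxretry = 3
```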
7
115
8
86

Seems like it might be time to build my next router before they become unaffordable. I've done some research, but I'd like to get the pulse of the community, since other self-hosters may have a similar use case.

Should I use PFsense or OpenWRT? Should I use purpose built or minipc hardware?

This is for a home network (symmetric gigabit fiber). A few of the devices have 2.5GbE LAN ports and it would be nice to make use of that speed locally. Primary uses include streaming Disney+ and YouTube, web browsing, and self-hosting a few services I connect to via WireGuard. Sometimes I play games, but not competitively, so an extra ms of ping isn't going to throw me into a rage. I do use a remote desktop feature like Steam Link to play games on my home office PC from my bedroom. Ping is currently acceptable according to the system, with occasional slowdowns when my family is slamming the WiFi.

I will need to provide WiFi access. If my existing router(s) have an AP mode, I imagine I can just plug them in via ethernet?

What kind of wireless AP hardware do I need if I want connections to transfer between a basement and attic AP with minimal interruption?

For the router itself, I see people using what look like barebones routers and others using a minipc with dual LAN. What do you use, and what advantages/disadvantages have you experienced as a result?

Can I set up a wireguard VPN server in either pfSense or OpenWRT?
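On the WireGuard question: yes, both support it (pfSense via its WireGuard package, OpenWrt via kernel support plus the luci-proto-wireguard package). As a rough sketch of what the OpenWrt side looks like, with keys, addresses, and the peer name as placeholders:

```
# /etc/config/network - minimal WireGuard server sketch on OpenWrt
# (install the wireguard-tools and luci-proto-wireguard packages first)
config interface 'wg0'
        option proto 'wireguard'
        option private_key '<server-private-key>'
        option listen_port '51820'
        list addresses '10.0.0.1/24'

config wireguard_wg0
        option description 'phone'
        option public_key '<peer-public-key>'
        list allowed_ips '10.0.0.2/32'
```

You'd still need a firewall rule allowing UDP 51820 in from WAN; pfSense exposes the equivalent through its web UI.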

Are there any enshittification risks or open-source purity concerns with either choice?

Is there a significant difference in popularity between pfsense and openwrt?

I will happily accept hardware recommendations for 2.5GbE-capable router hardware for a home network with gigabit fiber. It needs to be able to handle inbound and outbound WireGuard connections. I'm overwhelmed by the many options between all the minipcs and purpose-built hardware. Location is USA.

I appreciate any insight you may have. I'm a Linux guy, but networking has always been my weak point so I'm asking for help.

9
44

I thought self-hosting required, like, paid ownership of a website or something. I don't think I've ever self-hosted before, and I'm lost with its guide.

My primary concern is RustDesk's warning about possibly shutting down its free self-hosting because of bot abuse, despite now requiring GitHub accounts. There seems to be nothing even remotely close to RustDesk, except possibly HopToDesk, which I heard is a fork of an older version or something.

It'd be nice to be able to keep this going just in case. Or are there free, E2EE servers out there that anyone knows of?

10
33
submitted 2 days ago* (last edited 1 day ago) by Imaginary_Stand4909@lemmy.blahaj.zone to c/selfhosted@lemmy.world

So I was trying to download a torrent (while seeding like 5 others) when I noticed my rates just kept gradually falling to 0 B upload/download before spiking back up to 1-2 MB and falling again. I checked the Proxmox SMART tests of my drives, and one disk showed as degraded. When I try to view the overall "Disks" tab in Proxmox, it just times out and shows an error [communication failure (0)].

So I try to do a zpool scrub tank_name, which started Monday May 4 22:02:21 2026....

While scrubbing, the checksum errors on the online, repairing disk (wwn-0x5000c5004d033fc1) just kept climbing... so I took the degraded disk offline. Here's the current status from zpool status tank_name:

root@nova:~# zpool status Orico2tera4
  pool: Orico2tera4
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Mon May  4 22:02:21 2026
        3.53G / 378G scanned at 36.9K/s, 3.47G / 378G issued at 36.3K/s
        9.61M repaired, 0.92% done, no estimated completion time
config:

        NAME                                              STATE     READ WRITE CKSUM
        Orico2tera4                                       DEGRADED     0     0     0
          mirror-0                                        ONLINE       0     0     0
            ata-ST2000NM0011_Z1P2D6SC                     ONLINE       0    13     1
            usb-External_USB3.0_DISK01_20170331000C3-0:1  ONLINE       0     0     3  (repairing)
          mirror-1                                        DEGRADED     0     1     0
            wwn-0x5000c500357c0b91                        OFFLINE      0     0    21
            wwn-0x5000c5004d033fc1                        ONLINE       0     1 2.00K  (repairing)

errors: 49 data errors, use '-v' for a list

I haven't used these disks for super long; my homelab has only actually been in use for about 5 months, and I wasn't doing constant torrenting until February. The disks are refurbished, 2TB each, and they're housed in a USB-connected drive bay. My usage is pretty low, just 432.80 GB of 4TB (11.13%).

I've looked at my snapshots with zfs list -t snapshot, not sure when I should try to restore from a snap, but I've never done it before. I'll make sure to take backups more seriously from now on, don't be me...
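For anyone in the same spot: a rollback is all-or-nothing per dataset, so the gentler option is pulling individual files out of the hidden snapshot directory first. A sketch, where the dataset and snapshot names are placeholders:

```
# List snapshots with creation times to pick a point before the corruption:
zfs list -t snapshot -o name,creation Orico2tera4/mydataset

# Option 1 (non-destructive): copy files out of the hidden .zfs directory
# at the dataset's mountpoint:
cp /Orico2tera4/mydataset/.zfs/snapshot/snap-before/important.file /tmp/

# Option 2 (destructive): roll the whole dataset back, discarding
# EVERYTHING written after that snapshot:
zfs rollback Orico2tera4/mydataset@snap-before
```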

Update:

Turned off the machine and bay, realized it had shit ventilation and that the drives were pretty hot, let it cool and gave everything a quick dust down. Nothing seemed to be bad or visibly fucked up?

After letting it chill out for about 2-3 hours I put the drive bay in a better vented spot and did a scrub, then resilvered the drive, then did another scrub. About to do some SMART tests.

Here's zpool status -v:

zpool status -v Orico2tera4
  pool: Orico2tera4
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 00:56:51 with 0 errors on Wed May  6 23:37:43 2026
config:

        NAME                                              STATE     READ WRITE CKSUM
        Orico2tera4                                       ONLINE       0     0     0
          mirror-0                                        ONLINE       0     0     0
            ata-ST2000NM0011_Z1P2D6SC                     ONLINE       0     0   199
            usb-External_USB3.0_DISK01_20170331000C3-0:1  ONLINE       0     0   125
          mirror-1                                        ONLINE       0     0     0
            wwn-0x5000c500357c0b91                        ONLINE       0     0   100
            wwn-0x5000c5004d033fc1                        ONLINE       0     0   462

errors: No known data errors

And then it again after a clear:

zpool status -v Orico2tera4 
  pool: Orico2tera4
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:57:18 with 0 errors on Thu May  7 01:28:30 2026
config:

        NAME                                              STATE     READ WRITE CKSUM
        Orico2tera4                                       ONLINE       0     0     0
          mirror-0                                        ONLINE       0     0     0
            ata-ST2000NM0011_Z1P2D6SC                     ONLINE       0     0     0
            usb-External_USB3.0_DISK01_20170331000C3-0:1  ONLINE       0     0     0
          mirror-1                                        ONLINE       0     0     0
            wwn-0x5000c500357c0b91                        ONLINE       0     0     0
            wwn-0x5000c5004d033fc1                        ONLINE       0     0     0

errors: No known data errors
root@nova:~# 

What have we learned?

  • Do biweekly scrubs
  • Put your drives in a not shit location
  • Do trims like, once a month maybe
  • Make way more frequent snapshots
  • Backup your shit!!! NOW!!! To literally anywhere else but just do it!!!
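The first three lessons can be automated so you never rely on remembering them. A cron sketch for a Debian/Proxmox host (pool name taken from the post, schedule per the list above):

```
# /etc/cron.d/zfs-maintenance - sketch
# biweekly scrub: 02:00 on the 1st and 15th
0 2 1,15 * *  root  /usr/sbin/zpool scrub Orico2tera4
# monthly trim (only useful on SSDs / drives that support it)
0 3 1 * *     root  /usr/sbin/zpool trim Orico2tera4
```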
11
50
12
19
submitted 2 days ago* (last edited 22 hours ago) by testaccount789@sh.itjust.works to c/selfhosted@lemmy.world

Edit: Even with a 0.0.0.0/0 split tunnel the interface stays active, which can be verified using curl --interface CloudflareWARP ipinfo.io.
So I can just set up policy-based routing and NAT:

iptables -t nat -A POSTROUTING -s VPN_IP_RANGE ! -d VPN_IP_RANGE -j SNAT --to-source 172.16.0.2  # NAT VPN clients out via WARP's address
ip rule add from VPN_IP_RANGE table TABLE_ID            # send VPN client traffic to a dedicated routing table
ip route add default dev CloudflareWARP table TABLE_ID  # that table's default route goes out the WARP interface
ip route add VPN_IP_RANGE dev VPN_INTERFACE             # keep device-to-device traffic on the VPN interface

warp-cli is Cloudflare's program for using their WARP VPN/DNS thingy. Since it only lets you use the closest server, I thought about putting it on my VPS.

So I did. I enabled the connection, and oh, SSH froze. No worries, I'll reconnect.
Unless... Yeah, it blocks incoming connections.
Tailscale comes to the rescue.

But anyway, the warp-cli settings only allow excluding IP ranges for both directions, so 0.0.0.0/0 makes it pointless.
My only current idea is a caveman solution: another VPS (for a static IP) as the first hop, excluding just that IP on the second hop, with the third hop going to WARP.
Sadly, RackNerd has finally removed all the old offers, so no more $10.29/year VPSs.

Oh, and Tailscale will only work over relay when Warp is connected, so that's not an option.

13
63
submitted 3 days ago* (last edited 2 days ago) by stratself@lemdro.id to c/selfhosted@lemmy.world

Technitium DNS Server v15.1.0 has been released with support for OIDC! Now you can use your preferred identity provider to log in to user accounts and manage your DHCP/DNS deployments with appropriately granular permission controls.

I've played around with it, and it's safe to say the SSO integration works well. I've written a guide for setting it up against Kanidm here. There were some OIDC/clustering bugs in prior v15 releases; with v15.1.0 they have been squashed.

The major version 15 release also includes various important changes, with the following highlights:

  • A new API call for Prometheus metrics
  • Query Logs apps can now follow live updates
  • Codebase updated to .NET 10 runtime
  • HTTP tokens are now accepted via the Authorization: Bearer <token> header
  • Many other bugfixes, secfixes, and improvements...
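For anyone wanting to wire the new Prometheus support into an existing monitoring stack, a scrape-job sketch; the metrics path here is a placeholder (check the release notes for the actual endpoint), and the target assumes Technitium's default web console port:

```yaml
# prometheus.yml - scrape job sketch for Technitium (hypothetical values)
scrape_configs:
  - job_name: technitium-dns
    metrics_path: /metrics          # placeholder - use the documented endpoint
    authorization:
      type: Bearer
      credentials: YOUR_API_TOKEN   # HTTP tokens are accepted as Bearer per v15
    static_configs:
      - targets: ["dns.example.lan:5380"]
```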

Technitium is pretty great. Hope everyone enjoys the release :)

14
45
submitted 3 days ago* (last edited 3 days ago) by thanksforallthefish@literature.cafe to c/selfhosted@lemmy.world

Hey all, I did check for an Immich sub first, but you smart people seem to be my only option now that Reddit has banned me for refusing to give them an email address.

Background: I have a Ugreen DH2300 NAS; it runs a cut-down version of Debian. I've got Docker running on it, which is happily hosting Jellyfin. The basic layout of the drive volume: from the root I have a docker tree and a data tree, with Immich & Jellyfin under docker, and movies, pictures, tvshows, books under data. I have pictures indexed by Jellyfin and it works, but it isn't great. I also have a vanilla copy of Immich up and running, and I can upload via web browser one pic at a time. The vanilla config puts those files in ./volume1/docker/immich/library/upload/very-long-random-number-directory

Where volume1 is the mounted displayed nas volume (from the nas host it's /mnt/volume1 if you ssh in)

Problem:

I have a terabyte of pictures under ./volume1/data/Pictures that is not visible in docker

Importing 1 by 1 via web browser is obviously not ideal. It also copies the set of pictures from one directory on the NAS volume to a duplicate under library/upload - not great for space.

I've seen the CLI tool exists and if I ssh into the NAS I can see the /Pictures directory as well as the docker/immich/library etc directory but it also has the downside of duplicating all the photos into the immich directory

Ideally I'd like to just index it in place, like Jellyfin does when you add files to movies or TV shows. I can't even seem to find a way to point the Docker instance at the folder (I modified the .env file but it was ignored, so I obviously got that wrong).

Is this the only way?
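It isn't: Immich's "external library" feature indexes files in place without copying them, but the folder must first be mounted into the container, which is why editing .env alone did nothing. A compose sketch, where the container-side path is my own choice, not a required name:

```yaml
# docker-compose.yml (immich-server service) - sketch; host path per the post
services:
  immich-server:
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload            # existing upload mount
      - /volume1/data/Pictures:/mnt/media/pictures:ro     # external library, read-only
```

After recreating the container, you'd create an External Library in the Immich admin UI pointing at /mnt/media/pictures; the originals stay where they are and nothing is duplicated under library/upload.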

EDIT Thank you all for the quick responses - I somehow managed to break the container altogether, so I'll reinstall from scratch and then add your suggested "external folders" config and see how I go.

Thanks again

15
50
submitted 3 days ago by pimat@feddit.org to c/selfhosted@lemmy.world

I'm fairly new to self-hosting and privacy. I used to be all about Apple. I scanned all my important documents and stored them in iCloud. That worked pretty well, but because I tend to make my life harder than necessary, I switched from an iPhone to a Pixel with GrapheneOS. It's a hassle, but I'm happy with my decision overall. Unfortunately, my files are still in iCloud. As a Mac user, that's not too bad, but not being able to access my files on the go is annoying.

I'm afraid to store all my important files in an LXC on my Proxmox server, even with daily backups.

Should I switch from iCloud to Nextcloud, Proton, or something similar? Or should I create an offsite backup—one encrypted in the cloud and one in my house? How are others handling this? Would an extra backup at a family member's house be a good idea? Is paying for cloud storage common? I'd really appreciate any suggestions or ideas. Right now, I'm feeling overwhelmed by all the possibilities. Also, having 2 TB of iCloud storage made it too easy, since I didn't carefully choose the files to upload. But paying 10 bucks a month feels a little stupid now that I don't have the comfort factor any more.
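One common answer to "how are others handling this" is the 3-2-1 pattern: keep the live copy on your server, an encrypted copy offsite (a paid cloud bucket or, as you suggest, a box at a family member's house), and let a tool like restic handle encryption and retention. A hypothetical cron sketch, with repository URL and paths made up for illustration:

```
# /etc/cron.d/backups - sketch; repo URL, paths, and schedule are examples
# nightly encrypted backup to an offsite repo (e.g. a family member's NAS)
30 2 * * *  root  restic -r sftp:backup@family-nas:/srv/restic --password-file /root/.restic-pw backup /srv/files
# weekly retention policy + prune
0 4 * * 0   root  restic -r sftp:backup@family-nas:/srv/restic --password-file /root/.restic-pw forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```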

16
153

The Coral TPU driver has essentially been abandoned by Google, so if you are running a Linux kernel newer than 6.2, it will not function.

https://github.com/google/gasket-driver is the original driver which was archived on April 18, 2026

You can try the driver https://github.com/feranick/gasket-driver or https://github.com/dude84/gasket-driver-coral or search through the forks of the original gasket-dkms driver https://github.com/google/gasket-driver/forks

So going forward, your options are to pin your kernel to 6.2, upgrade your hardware, hope that someone keeps a gasket-dkms fork updated for newer kernel versions, or make your own fork and do so yourself.
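If you go the pinning route on a Debian-family host, an APT preferences fragment can hold the kernel back; this is a sketch, and the package glob depends on your distro (linux-image-* on Debian, proxmox-kernel-* on Proxmox):

```
# /etc/apt/preferences.d/pin-kernel - sketch for a Debian host
# Prevent upgrades past the 6.2 series (adjust the glob for your distro)
Package: linux-image-*
Pin: version 6.2*
Pin-Priority: 1001
```

The simpler but blunter alternative is `apt-mark hold` on the currently installed kernel package.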

17
112

Security fixes

This release contains security fixes for the following advisories. We strongly advise updating as soon as possible.

  • SSO Login CSRF - GHSA-pfp2-jhgq-6hg5, GHSA-w6h6-8r66-hcv7
  • User/Organization Enumeration - GHSA-hxqh-ff5p-wfr3
  • SSO existing-user binding - GHSA-j4j8-gpvj-7fqr
  • GHSA-6x5c-84vm-5j56
  • SSRF via icon endpoint - GHSA-72vh-x5jq-m82g
  • Some crates updated and other minor security enhancements

These are private for now, pending CVE assignment.

https://github.com/dani-garcia/vaultwarden/releases/tag/1.36.0

Original Reddit discussion: https://www.reddit.com/r/selfhosted/comments/1t2qd26/vaultwarden_1360_patches_vulnerabilities/

18
105
submitted 4 days ago* (last edited 4 days ago) by wardcore@lemmy.world to c/selfhosted@lemmy.world

Just dropped v1.4-beta of ONYX — an open-source, privacy-first messenger with E2E encryption for private chats and self-hostable groups/channels.


This release is mostly about media experience.

Voice Channels — but with a twist. If you're self-hosting an ONYX server on your local network, you now get real-time voice inside your group. No cloud, no third-party servers.

Media Player — vinyl record animation when playing music, seek ±10 seconds, playback speed from 0.25x to 4x with presets, next/previous track, auto-repeat. Audio files now show their filename instead of a generic label.

Audio Settings — microphone and output device selection, so you can actually choose what hardware ONYX uses.


Why ONYX?

https://lemmy.world/post/44633944


ONYX: https://github.com/wardcore-dev/onyx
Self-hosted server: https://github.com/wardcore-dev/onyx-server


Would love feedback, especially from anyone running it on LAN setups.

19
52
submitted 5 days ago by FEIN@lemmy.world to c/selfhosted@lemmy.world

I'm trying to set up my first server (Immich + Navidrome + Nextcloud running on Debian; I'll use a WireGuard VPN for remote access), but my crappy Xfinity router (XB7) just won't port forward to my server machine at all. I've tried so many things to make it work, so the best I can do now is buy my own router and use the Xfinity router as a bridge. Do you have recommendations for a secure, sufficiently customizable, long-range router good for 6 people?

20
35
submitted 5 days ago* (last edited 1 day ago) by CmdrShepard49@sh.itjust.works to c/selfhosted@lemmy.world

I currently have Frigate running on my Proxmox server, in an LXC running Portainer, and it has recently quit detecting anything and sending out notifications when someone walks up to my house.

I sat down to do some troubleshooting last night and first discovered in the logs that it wasn't detecting my Coral TPU any longer, but after reseating the USB cable and rebooting the container, that issue is gone. Now everything 'appears' to be working properly, yet it still won't detect anything, and I'm at a loss on how to proceed.

Checking the logs, no errors are displayed. I can see all my camera feeds in the main view. The config hasn't changed. My masks are still in place. I don't recall the exact names, but the Settings>'Status' (with the bar graphs) window shows the Coral TPU as connected and in use, and the Settings>Debug window shows red boxes around moving objects, but I don't see any detection events, no short clips, or any other indicator that it's working.

Edit: well this is stupid but it was a frigate update that caused the issue. With 16.0, you have to add "enabled: true" under "detect" for each camera to get object detection. I hadn't updated it for some time and didn't read the release notes.

21
35

Quick update for anyone following the project. NutriTrace is a self-hosted nutrition tracker I've been building. Single Docker container, your data stays on your hardware, no external accounts.

This release ships the first native Android app alongside the existing PWA. Signed APK is attached to the GitHub release.

What you get on Android:

  • Standalone, or connect it to a NutriTrace server for sync
  • Health Connect for steps, sleep, heart rate, body weight
  • Native barcode scanning
  • Native notifications for water reminders, meal prompts, weigh-ins, and goal celebrations
  • OIDC SSO via deep link if you run Authentik, Keycloak, Pocket ID, etc.

Release: https://github.com/TraceApps/nutritrace/releases/tag/v1.0.0-rc.14 Repo: https://github.com/TraceApps/nutritrace

Still on the v1.0 release-candidate cadence, so there will be bugs. Please feel free to post issues here or on GitHub.

Thanks to everyone who's tried it, provided suggestions and filed bugs along the way. If you find it useful, a star on the repo or a mention to someone looking for a self-hosted nutrition/fitness alternative helps a lot.

22
313

Hi all, been a while since I posted the degoog beta, been head down working on all my apps.

Some of you may know me for Jotty and Cronmaster on top of degoog, hi friends!

Degoog is a search aggregator meant to be plug and play. The app itself is extremely light (about 50 MB to 70 MB of RAM while idle; the biggest spike I've seen recently was about 100 MB), and there's a very comprehensive extension system where you can create engines, plugins, and transports (transports are what I've named the systems that fetch data: headless browsers, curl alternatives, and so on).

The app has been in beta for a while and today I've released the first stable beta, so I'd love to re-announce it here and get a bit more feedback. The next post I make will be once it's out of beta and fully stable; trying not to spam this too much.

A quick bit of history for anyone who hasn't seen the first post: this was born from my PERSONAL gripes with SearXNG (no shade, the internet is beautiful because it's varied). I can't say my project is better, it's too new to say that, but it works more to my personal preferences, and hopefully it resonates with some of you too.

Let's talk about AI usage like adults please

This is NOT vibecoded, it's not AI driven and it's NOT some slop put together in 5 minutes. Some people here know me, they know I maintain my projects, I code myself and I have been a software engineer for many many years.

I actually have rejected pull requests that were very obviously vibecoded (p.s. it's fucking hilarious if you check the CLAUDE.md file in the repo; the PRs were riddled with the comments I force in there).

That said, I obviously make some use of AI, it's 2026, dunno what you all expect. Open source, however, is my way of escaping how my day job REQUIRES me to use AI, so it's only used for stuff I can't be bothered to do (e.g. repetitive boring tasks, documentation, tests, and heavy debugging). If you have issues with this, please go tell a carpenter to use a manual screwdriver and not a power drill and see if they say yes or throw it at you, thank you <3

23
109
submitted 6 days ago by otter@lemmy.ca to c/selfhosted@lemmy.world

cross-posted from: https://lemmy.ml/post/46701277

I’ve been running my home lab since 2021 and honestly thought my update routine was solid: apt update && apt upgrade, reboot, job done.

Turns out I was wrong. I was checking CVE‑2026‑31431 (Copy Fail) this morning and realised that despite my “successful” updates, I was still running a vulnerable kernel from March.

I’ve had to rethink how I handle host updates. If you’re relying on a standard upgrade and a reboot to keep Proxmox or Debian hosts safe, you might want to check if yours is lying to you as well.

24
43
submitted 5 days ago* (last edited 5 days ago) by perishthethought@piefed.social to c/selfhosted@lemmy.world

... without using any variation of Syncthing.

My phone is usually on the same Wifi network as my PC, so some sort of auto-syncing via wifi would be great. Like how Immich syncs from the phone to my server, in an almost totally hands-off way.

What are the best non-Syncthing FOSS phone and PC file sync options these days?

Thanks!

ETA: Sorry, sorry, I should have explained: I no longer trust any variant of Syncthing. The wild chain of events last year left me completely questioning what was going on with that code base. I struggle with trust issues for FOSS software every so often and once I feel things have gone awry, I can't go back again. Plus, I really want to know about what's new and interesting right now.

Link to one conversation about Syncthing's events, if you are out of the loop:
https://mastodon.pirateparty.be/@surfhosting/115674236291033568

25
65

Like many self-hosters, I've looked upon the recent price hikes for storage in utter disbelief. Faced with paying double the price of what I paid only last year for new hard drives, I dug around my hardware stash and came across about a dozen old 2.5" 320-500 GB drives which I had once saved from the dumpster but never deployed. After all, they were too slow to be used as PC system drives and too small in storage size for any meaningful use in a server. Now seemed like a perfect time to look for a way to put them to good use after all. And I found it in mergerFS.

For anyone not familiar with it: in spite of its name, mergerFS is not a filesystem in the sense that you'd need to reformat any drives to deploy it (although that wouldn't have been a problem for my use case). Instead, you can take a bunch of drives (JBOD) and string them together with no modification to their filesystems, keeping existing data intact. It is agnostic of the filesystems present on the drives, meaning you can even combine volumes formatted with, say, ext4, btrfs, and xfs. All drives show up in your filesystem as a single volume, and - depending on the policies you configure - some data is stored on this drive and some on that one. Since data isn't striped, the drives remain individually readable, i.e. there's no need to rebuild the whole array after a drive fails.

Speaking of drive failure: while mergerFS itself does not provide RAID, you can add SnapRAID to the mix for parity-based redundancy (although it's not real-time RAID; parity is written on a schedule, so it's not for mission-critical data that is frequently updated and rewritten).
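To give a flavor of how little configuration this takes, here's a sketch of the two pieces; all paths, drive names, and thresholds are examples, not my actual setup:

```
# /etc/fstab - pool the small drives into one mount (mfs = most free space)
/mnt/disk*  /mnt/pool  fuse.mergerfs  cat.create=mfs,moveonenospc=true,minfreespace=10G,fsname=pool  0 0

# /etc/snapraid.conf - one parity drive protects the data drives;
# run 'snapraid sync' on a schedule (e.g. nightly cron)
parity  /mnt/parity1/snapraid.parity
content /var/snapraid/content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```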

Combined, these two technologies allow me to have my cake and eat it too:

  • I can put drives to use that would otherwise be rotting in a drawer.
  • I can avoid additional cost - both financial and ecological. (The energy bills won't increase by much, either, because most of the energy comes from solar cells on the roof.)
  • I can always flexibly tack on more drives, regardless of size.
  • I can have the added data security of a RAID, but at the price of very few (if any) of its drawbacks (e.g. no drives of equal size needed).

If this was news to you - maybe you want to give it a shot too. (I don't consider myself a very advanced user and I found it dead simple to deploy.)
If you're already running mergerFS and SnapRAID, feel free to showcase your use case and setup!
If you found any of the above incorrect or misleading, feel free to correct me.


Selfhosted

59013 readers
564 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS