this post was submitted on 17 Jun 2023

Technology


I've seen a lot of people saying things that amount to "those tech nerds need to understand that nobody wants to use the command line!", but I don't actually think that's the hardest part of self-hosting today. I mean, even with a really slick GUI like ASUSTOR NASes provide, getting a reliable, non-NATed connection, with an SSL certificate, some kind of basic DDOS protection, backups, and working outgoing email (ugh), is a huge pain in the ass.

Am I wrong? Would a Sandstorm-like GUI for deploying Docker images solve all of our problems? What can we do to reshape the network such that people can more easily run their own stuff?

top 37 comments
[–] [email protected] 20 points 1 year ago

If you're afraid of the CLI then you probably shouldn't be hosting anything complex yourself. The CLI is one of the least complicated parts of server administration.

[–] [email protected] 18 points 1 year ago* (last edited 1 year ago) (2 children)

The hardest part is doing backups and updates. Repeat after me:

no backup, no pity,

updates neglected, compassion rejected.

[–] [email protected] 6 points 1 year ago

Dear Debian users: please also update your Debian version, not just your packages. Like... once a decade would be an improvement for many poor servers.

[–] [email protected] 2 points 1 year ago

Haha, yeah, I totally have proper backups...

[–] [email protected] 16 points 1 year ago (2 children)

It's not the command line that's hard but the lack of proper documentation and tutorials that makes things hard.

[–] [email protected] 7 points 1 year ago

Especially since it's extremely rare to find documentation that isn't overly verbose. Documentation written bottom-line-up-front is a rarity.

[–] [email protected] 3 points 1 year ago

man is your documentation for the tool itself.

[–] [email protected] 14 points 1 year ago (3 children)

Getting a decent VPS is pretty cheap. Email is the enormous problem. Even if your VPS provider allows outgoing email, your IP address will be flagged and blocked by mail servers everywhere for the crime of not being Google or Microsoft, or of not having a full-time person working 24/7 to satisfy the people in charge of blacklists. You can pay someone else to send your email, but that's going to cost you as much as or more than the VPS you're using to host your entire app.

[–] [email protected] 3 points 1 year ago

It's actually rare these days that mail from my personal server (on a Linode/Akamai IP) is rejected, and I don't even have DMARC set up, only SPF and DKIM. I just use my old gmail address as a backup for those rare situations.
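For anyone trying to replicate that, the moving parts are just a few DNS TXT records. A rough sketch for a hypothetical example.com (IP address, DKIM selector, and key are all placeholders):

```
; SPF: declare which host may send mail for the domain
example.com.                 IN TXT "v=spf1 ip4:203.0.113.10 -all"
; DKIM: public key published under <selector>._domainkey
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
; DMARC (optional, as the comment above notes): tell receivers what to do on failure
_dmarc.example.com.          IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
```

The DKIM key pair itself is generated by your mail server's signing tool (e.g. OpenDKIM), which also signs each outgoing message.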

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Something like Zoho is only $12 a year per hosted email address.

[–] [email protected] 2 points 1 year ago

How many outgoing emails are we talking about? Because there are a lot of free or cheap options for personal use and small businesses.

[–] [email protected] 12 points 1 year ago

Do we actually need people afraid of CLIs to host anything? Sounds like a hassle.

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago)

Look at installing Gentoo, Arch, or Alpine vs. Ubuntu. There's no technical reason we can't make a Gentoo installation GUI. It would just be very, very tedious. Orders of magnitude more tedious.

At the same time Gentoo allows you to customize WAAAAY more things during its install than Ubuntu.

So specifically for Lemmy: yeah, we can probably make some sort of default AWS image where you just select it when spinning up a new VM and you're up and running. But what if you want something slightly different? Maybe you prefer MySQL instead of Postgres, or Apache instead of nginx, or maybe you want images hosted on a different machine. Suddenly it's the install GUI author's responsibility to support installing 10 different databases, or load balancers, or something else, and each one has its own GUI options. Then someone else wants an 11th database added, and it has 10 more custom options… Oh, and now someone else is asking for a DigitalOcean image instead… and now someone's asking for a Docker image… You see where this is going.

[–] [email protected] 7 points 1 year ago

It's not even about the GUI.
If you want to self-host, you get yourself a pile of software of community-level quality (i.e. "it works well until it doesn't" is the best outcome) that you need to care about. This means constantly being involved: updating, maintaining, learning something new, etc. Honestly, it's time-consuming even for experienced sysadmins.

[–] [email protected] 7 points 1 year ago

One-click would definitely lower the barrier to entry, but I have to admit the concept makes me uncomfortable. While it could eliminate those problems, it creates the issue of producing thousands of server administrators who don't really understand the platform they are now responsible for. Infrastructure and security IS hard because it's not just about getting the right syntax; it's understanding the concepts so that not only does it work, it works safely and reliably.

I've seen quite a bit of bad troubleshooting going on as newcomers have sought to set up their instances. It doesn't help that the current docker-compose in the Lemmy repository is outdated and doesn't work out of the box. More than a few "this worked for me" solutions that I've seen may have gotten things working, but broke fundamental security principles that may or may not come back to bite the administrators later.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

For NAT and SSL, you don't need to fiddle with those directly. You can use Wireguard for routing and encryption. For personal use I tend to host my servers as Tor hidden services which gives them routing, encryption, and anonymity. Client side SSL certificates are also something people underestimate here; you can use those for simultaneous encryption and authentication.
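The hidden-service setup mentioned above is genuinely small. A torrc fragment along these lines (paths and ports are illustrative) is all Tor needs to route traffic to a local web service:

```
# Expose a web app listening locally on 8080 as a Tor hidden service.
# Tor generates the .onion address into the HiddenServiceDir.
HiddenServiceDir /var/lib/tor/myapp/
HiddenServicePort 80 127.0.0.1:8080
```

Tor then handles routing, end-to-end encryption, and NAT traversal, so no port forwarding or public certificate is required.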

Outgoing email can be hard, but since you control the sender and the receiver, you don't need to go through the public internet's spam filters. You don't even necessarily need to use SMTP, you can just drop the files in the maildir and sync that across the systems.
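The "skip SMTP, write straight to the maildir" idea can be sketched in a few lines of Python using only the standard library's mailbox module; something like rsync or Syncthing would then replicate the maildir between machines. Paths and addresses here are hypothetical:

```python
import mailbox
import tempfile
from email.message import EmailMessage


def deliver(maildir_path, sender, recipient, subject, body):
    """Write one message straight into a maildir — no SMTP involved."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)

    # create=True builds the cur/new/tmp maildir layout if it doesn't exist.
    box = mailbox.Maildir(maildir_path, create=True)
    try:
        key = box.add(msg)  # lands atomically in new/
    finally:
        box.close()
    return key


# Demo: deliver into a throwaway maildir.
demo_dir = tempfile.mkdtemp()
key = deliver(f"{demo_dir}/Maildir", "me@host-a.example",
              "me@host-b.example", "hello",
              "delivered without touching the public internet")
```

Any IMAP server or mail client pointed at the synced maildir will pick the message up like normal mail.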

[–] [email protected] 5 points 1 year ago (1 children)

The sad truth is that non-techy types will never want to host something themselves unless there’s a reason why doing so is better. I’m not just talking about better the way you and I think of better, either. Nobody really cares about privacy or security or ownership of data. A lot of people like to say those things matter but until it’s as easy to host your own email as signing up for gmail, and doing so provides all the fringe benefits you get with Google, you’re not going to get completely non-technical people self hosting.

You’re right, though. As part of this, there needs to be a way to have an all-in-one package that defaults to enabling the things you’re talking about. There are a lot of plug-n-play methods of self hosting any number of things, but the hard part of hosting is doing it right and securely.

[–] [email protected] 7 points 1 year ago (1 children)

The sad truth is that non-techy types will never want to host something themselves unless there’s a reason why doing so is better.

Not even techy types want it. It's not a coincidence that SaaS offerings are viable in enterprise contexts. Why build a ton of knowledge and drag yourself through the mud of learning tons of different tools when you can just as well pay someone who already has all that knowledge? Then you can use the free mental capacity to solve your actual problems.

The only reasons to self host are "paranoia" (no matter if warranted or not) and - which is the important thing for us self-hosters here - curiosity (or rather the drive to learn shit). We basically do it for the sake of doing it.

[–] [email protected] 2 points 1 year ago (1 children)

That’s true. Though I would sub paranoia with control.

I self host things because I want control. I want to be in control of when it gets updates and goes down. I want to be in control of how to fix it when it breaks. I want to be in control of my account and whether it’s backed up etc.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I thought of that as well, but concluded that this is also some kind of paranoia. The SaaS providers promise you availability, security, etc., but we don't believe them and want that in our own hands. So IMO we only want to be in control because we fear we could suddenly lose access or get betrayed. Which is a specific manifestation of paranoia.

[–] [email protected] 1 points 1 year ago

Fair point.

[–] [email protected] 5 points 1 year ago

Technology is complicated. Period. Anything that "seems" simple is in reality extremely complicated underneath the hood. A GUI is nice as long as it works. But if for some reason it doesn't, you're shit out of luck.

[–] [email protected] 4 points 1 year ago

I would say the hardest part of self-hosting specifically is grokking how SSL works and setting it up right with automatic renewal.

There are often a lot of extra steps involved.

I'd also say understanding how routing works, and why you need a reverse proxy, is the other big one.
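With Let's Encrypt and certbot, the renewal piece at least boils down to a single scheduled command. A crontab fragment along these lines (timing and the reload hook are illustrative; distro packages often ship a systemd timer that does the same thing):

```
# Renew any certificate within 30 days of expiry, then reload the web
# server so it picks up the new files. Runs daily at 03:00.
0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

The initial issuance is the part that requires understanding (DNS pointing at your box, port 80/443 reachable, or a DNS-01 challenge if not); renewal after that is mechanical.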

[–] [email protected] 4 points 1 year ago (3 children)

I’ve been working on getting Matrix Synapse running on my NAS, and the CLI hasn’t been my problem. I’m a programmer, and CLI doesn’t scare me; but the other issues you mention are all new to me, and getting a web service set up so people outside my local network can access it but without leaving me open to bad actors is wicked stressful.

The biggest problems end up being that I need to work with the soup of technologies, and there's no one place to do all the things. I've got TWO routers (because my internet comes through one, and I run my LAN and wifi off one I trust better), which means I'm double-NATed, which is apparently the root of all evil. I can use Cloudflare to tunnel to my NAS, but I can't accept simple (CNAME) redirects from a family member's domain to one of my subdomains without paying Cloudflare $200/month, so that means I'm back to dealing with the double NAT. And then I have to learn to set up TLS, which sounds simple, but it's still yet another thing to screw around with and another thing I could screw up by accident.

I could pay for a VPS, but that to me defeats a lot of the point of “host your own” federation when some company could be subpoenaed for copies of all their hosted accounts or something. (Yes, I could get subpoenaed for my data just as easily, but it takes more work to subpoena a thousand people than one company for a thousand people’s accounts.)

Anyway, I’d love to see things evolve to where it’s easy for newbies to host their own private instances of everything.

Personally, I’d love a drop-in tool that runs more like a temporary server while it’s running, syncing federated data you missed while your device was off; and only serving your data when it’s on. Likely with some kind of redirect service/NAT punchthrough so other clients can find you…

…but I think we’re a long way off from being able to do that.
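For the tunnel piece specifically, a cloudflared configuration is fairly small once the tunnel is created. A sketch (tunnel ID, paths, and hostnames are placeholders, and this sidesteps the double NAT entirely since the tunnel is an outbound connection):

```yaml
# ~/.cloudflared/config.yml — route a public hostname to a local service
# through an outbound Cloudflare Tunnel; no inbound port forwarding needed.
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: matrix.example.com
    service: http://localhost:8008   # e.g. Synapse's client/federation port
  - service: http_status:404         # catch-all for unmatched hostnames
```

Cloudflare terminates TLS at its edge, which removes the certificate-management step, at the cost of trusting Cloudflare with the traffic.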

[–] [email protected] 3 points 1 year ago (1 children)

You could get a VPS only for getting around the double NAT.

Run a reverse proxy on the VPS and forward requests over WireGuard to your NAS. That way you wouldn't actually host any data on the VPS.
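As a sketch of the VPS side (domain, certificate paths, and the 10.0.0.2 tunnel address are placeholders): terminate TLS publicly in nginx, then proxy over the WireGuard interface to the NAS.

```nginx
server {
    listen 443 ssl;
    server_name nas.example.com;
    ssl_certificate     /etc/letsencrypt/live/nas.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nas.example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.2:8080;  # NAS's address on the WireGuard tunnel
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The public internet only ever sees the VPS; the NAS is reachable solely through the tunnel.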

[–] [email protected] 1 points 1 year ago (1 children)

This is an idea I didn’t know about! I’ll have to look more into it. If you feel like it, I’d love to hear a bit more detail; but also I know how to use DuckDuckGo, so no pressure!

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I don't know what specifically you would like to know and what your background is, so I will just elaborate a bit more.

The basic idea is that the VPS, which is not behind a NAT and has a static IP, listens on a port for WireGuard connections. You connect from the NAS to the VPS. On the NAS you configure the WireGuard connection with "PersistentKeepalive = 25". That makes the NAS send keepalive packets every 25 seconds, which should be enough to keep the connection alive, meaning that it keeps a port open in the firewall and keeps the NAT mapping alive. You now have a reliable tunnel between your VPS and your NAS even if your IP address changes at home.

If you can get a second (public) IP address from your provider, you could even give your NAS that IP address on its WireGuard interface. Then your VPS can just route IP packets to the NAS over WireGuard. No reverse proxy needed. You should get IPv6 addresses for free. In fact, your VPS should already have at least a /64 IPv6 network for itself. For an IPv4 address you will have to pay extra. You need the reverse proxy only if you can't give a public IP address to your NAS.
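The NAS side of that setup fits in one small config file. A sketch (keys, tunnel addresses, and the endpoint are placeholders):

```ini
# /etc/wireguard/wg0.conf on the NAS. The NAS dials out to the VPS, and
# PersistentKeepalive holds the NAT mapping open as described above.
[Interface]
PrivateKey = <nas-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0`; the VPS's config mirrors this, with the NAS's public key as its peer and no Endpoint line, since the NAS initiates.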

Edit: If you have any specific questions, feel free to ask.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Tailscale funnel

Tailscale is a mesh VPN built on WireGuard. It has a cool feature called Funnel that exposes a node on your VPN to the internet at a domain under .ts.net.

[–] [email protected] 1 points 1 year ago (1 children)

Wow, yeah, that sounds like a really frustrating situation. I wish you all the luck in figuring it out.

[–] [email protected] 1 points 1 year ago

I got it working! Fortunately, I know a kind professional who took pity on me and showed me the secrets of Cloudflare's free tier, and we worked something out.

I have had to learn SO MUCH in just the last week, though, it’s crazy intense!

[–] [email protected] 3 points 1 year ago (1 children)

So you want non-technical people to set up botnet members?

There's this thing called 'money' which people who can't do a thing give other people who can do the thing well to make them do the thing.

[–] [email protected] 1 points 1 year ago

This seems unnecessarily snarky.

[–] [email protected] 3 points 1 year ago

We need an actual official setup tutorial that is kept up to date. The existing documentation for the Docker setup process is extremely bare-bones, and it doesn't even link to the right config files. There are some unofficial tutorials out there that are better, but they're outdated and they link to the wrong config files too.
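To illustrate the scale of what the docs should cover: the compose file is conceptually just four services. This is a hypothetical minimal sketch, NOT the official file — image tags, config keys, and the required lemmy.hjson contents change between releases, so check the Lemmy repo for the current versions:

```yaml
version: "3.7"
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: lemmy
      POSTGRES_PASSWORD: change-me   # must match the hjson's database section
      POSTGRES_DB: lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
  lemmy:
    image: dessalines/lemmy        # backend API/federation server
    depends_on: [postgres]
    volumes:
      - ./lemmy.hjson:/config/config.hjson
  lemmy-ui:
    image: dessalines/lemmy-ui     # web frontend
    depends_on: [lemmy]
  pictrs:
    image: asonix/pictrs           # image hosting service
    volumes:
      - ./volumes/pictrs:/mnt
```

Even a sketch like this leaves out the part that trips people up most: the reverse proxy in front, which has to route API and federation requests to `lemmy` and everything else to `lemmy-ui`.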

[–] [email protected] 3 points 1 year ago

To be honest, the command line is an important tool that, when you're able to use it correctly, will give you a better understanding of a lot of the inner workings of a machine.

The command line might be intimidating at first, but I personally don't think it's a big enough hurdle to be worth replacing with anything.

Most people I know who were at first afraid of the command line but tried to get into it now don't want to go back. Working with the CLI is so efficient that it's hard to go back to GUIs.

[–] [email protected] 3 points 1 year ago (1 children)

As I can attest after playing with pfsense for years, GUI or not, if you don’t know what you’re doing you’re going to have a bad time.

For me personally, command line gives me a better understanding of what’s really going on. But then again I’m an old Unix nerd. But once I know what’s going on, I prefer the fancy GUI.

[–] [email protected] 3 points 1 year ago

Yep. Agree but kinda the inverse of your takeaway.

I prefer to skip the gui when I know what’s going on. It’s just a waste of resources in many cases and sometimes obfuscates options that otherwise are there.

For example, on my OPNsense box the NUT package doesn't work in the GUI. Never has. But I have set up countless NUT instances with that same UPS. I did it via the CLI and it works, even when the GUI says it's not possible.

[–] [email protected] 2 points 1 year ago

YunoHost is a tool which aims to solve the problem of (relatively small scale) self-hosting for people. I use it to host my Mastodon and Lemmy instances and it was very easy. I haven't dealt with email but that's also something it supports.

It's a pretty great platform, although unfortunately it's currently unable to upgrade Lemmy past 0.16.7, which is a bit of a pain. So it's hard to recommend it for Lemmy right now.
