this post was submitted on 07 Feb 2024
68 points (94.7% liked)

Linux


Edit 2: OK, per feedback I am going to have a dedicated external NAS and a separate home server. The NAS will probably run TrueNAS. The home server will use an immutable OS like Fedora Silverblue. I am doing a dedicated NAS because it can be good at doing one thing: serving files and making backups. Then my home server can be good at doing whatever I want it to do without accidentally torching my data.

I haven't found any good information on which distro to use for the NAS I am building. Sure, there are a few out there, but as far as I can tell, none are immutable, and that seems to be the new thing for long-term durability.

Edit: One requirement is that it will run a media server with hardware transcoding. I'm not quite sure if I can containerize Jellyfin and still easily hardware transcode without a more expensive processor that supports Hyper-V.

all 39 comments
[–] [email protected] 18 points 9 months ago (1 children)

What functionality do you want from your NAS? If it's simple NFS and Samba then I imagine you can choose whatever you want really.

[–] [email protected] 1 points 9 months ago (1 children)

It's mostly for running media servers like jellyfin.

[–] [email protected] 3 points 9 months ago (1 children)

If the software you want to run has a Flatpak, then I imagine you can try out Fedora Silverblue; Jellyfin does have a Flatpak.

Personally I run my Jellyfin on a virtual Debian Bookworm server with transcoding off; my Jellyfin clients don't need the help.
I always clone my Jellyfin server before apt update && apt upgrade so I can roll back.
Oh, and my NAS (network attached storage) isn't on the same machine; my Jellyfin server uses Samba and /mnt/media/libraryfolders, so cloning it is quick and easy.
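One hedged way to reproduce that layout, assuming the NAS exposes the share over Samba/CIFS (the hostname, share name, and credentials path below are placeholders, not the commenter's actual setup):

```
# /etc/fstab entry mounting the NAS media share at boot.
# //nas.local/media -> the NAS share; /mnt/media -> local mount point.
//nas.local/media  /mnt/media  cifs  credentials=/etc/samba/creds,ro,_netdev  0  0
```

With the media mounted read-only like this, the Jellyfin VM holds no irreplaceable data, which is what makes cloning it before upgrades cheap.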

[–] [email protected] 1 points 9 months ago (1 children)

Is there a performance impact on the Jellyfin server from having the NAS on a separate machine? How long does it take to serve a 20 GB Blu-ray rip?

[–] [email protected] 3 points 9 months ago

The network isn't a bottleneck in my system.
I don't have any 20gb bluray rips as I'm satisfied with the quality of a 5-8gb 1080p.
I don't notice a delay when starting it; it's just a data stream without transcoding.

[–] [email protected] 16 points 9 months ago

I use NixOS for this. It works wonderfully.

Immutable means different things to different people, but to me:

  1. Different programs don't conflict with each other.
  2. My entire server config is stored in a versioned Git repo.
  3. I can rollback OS updates trivially and pick which base OS version I want to use.
[–] [email protected] 14 points 9 months ago (4 children)

As I understand it, immutable systems are useful for devices that are more prone to change, like a desktop you actually use to install programs, try things out, and so on.

I do not see much benefit here for a stable server system. If you are worried about stability and uptime, a testing system does a better job here, IMHO.

[–] [email protected] 3 points 9 months ago

Immutable systems are useful for separating the system and application layers and to enable clean and easy rollbacks. On servers the applications are often already separated anyway through the use of container technologies. So having atomic system updates could enable faster and less risky security patching without changing anything about how applications are handled.

[–] [email protected] 2 points 9 months ago

Yeah, that part confuses me too. It's a NAS. Install something simple and whatever services you need, and I can't imagine it breaking any time soon. Shit, as long as someone else has tested the software I'd be more than happy to install something complex... which I have, and it has been running for almost 10 years now with no issues. FreeNAS has been rock solid for me, and it sure as hell ain't minimal.

[–] [email protected] 1 points 9 months ago

As I understand it, immutable systems are useful for devices that are more prone to change, like a desktop...I do not see much benefit here for a stable server system.

This logic is kind of backwards, or rather incomplete. Immutable typically means that the core system doesn't change outside of upgrades. I would prioritize putting an immutable OS on a server over a desktop if I were forced to pick one or the other (nothing wrong with immutable on both), simply because I don't want the server OS to change outside of very controlled and specific circumstances. An immutable server OS helps ensure the stability you speak of, not to mention it can thwart some malware. The consequences of losing a server are typically higher than those of losing a desktop, hence my prioritizing the server.

In a perfect world, you're right, the server remains stable and doesn't need immutability... but then so does the desktop.

[–] [email protected] -3 points 9 months ago* (last edited 9 months ago) (2 children)
[–] [email protected] 10 points 9 months ago (1 children)

Virtual machines also exist. I once got bit by a Proxmox upgrade, so I built a Proxmox VM on that Proxmox host, mirroring my physical setup, that ran a Debian VM inside the paravirtualized Proxmox instance. They were set to canary-upgrade a day before my bare-metal host. If the canary Debian VM didn't ping back to my update script, the script would exit and email me, letting me know that something was about to break in the real upgrade process. Since then, even though I'm no longer using Proxmox, basically all my infrastructure mirrors the same philosophy. All of my containers/pods/workflows canary-build and test themselves before upgrading the real ones I use in my homelab "production". You don't always need a second physical copy of the hardware to have an appropriate testing/canary system.
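The canary-gating idea can be sketched in a few lines of bash. This is a hypothetical reconstruction, not the commenter's actual script: it assumes the canary VM writes a marker file (e.g. over ssh or a shared mount) once its own upgrade and health check succeed, and the real host refuses to upgrade until that marker exists.

```shell
#!/usr/bin/env bash
# Marker file the canary's post-upgrade health check would create
# (path is illustrative).
CANARY_MARKER="${CANARY_MARKER:-/tmp/canary-ok}"

canary_reported_ok() {
    [ -f "$CANARY_MARKER" ]
}

gate_upgrade() {
    if canary_reported_ok; then
        echo "canary healthy: proceeding with host upgrade"
        # The real upgrade would go here, e.g.: apt update && apt upgrade -y
    else
        echo "canary missing/failed: aborting host upgrade"
        # Plus an alert, e.g.: mail -s 'upgrade blocked' admin@example.com
        return 1
    fi
}
```

Run gate_upgrade from cron a day after the canary upgrades; if the marker never appears, the host upgrade simply never fires and you get an email instead.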

[–] [email protected] 2 points 9 months ago (1 children)

I really like this strategy. I currently use Proxmox for my home server needs, but I am curious: what do you use now instead?

[–] [email protected] 4 points 9 months ago* (last edited 9 months ago) (1 children)

I have condensed almost all of my workflows into pure bash scripts that will run on anything from bare metal to a VM to a Docker container (to set up and/or run an environment). My Dockerfiles mostly just run bash scripts to set up environments, and then run functions within the same bash scripts to do whatever they need to do. That process is automated by the bash scripts that built my main host. For the very few workflows I have that aren't quite as appropriate for straight Docker (WireGuard, for example) I use libvirt to automate building and running virtual machines as if they were ephemeral containers. Once the abstraction between container and VM is standardized in bash, the automation doesn't really need to care which is which; it just calls start/stop functions that change based on what the underlying tech is. Because of that, I can have the canary system build and run containers/VMs in a sandbox, run unit tests, and return whether or not they passed. It does that via cron once a week and then supplants all the running containers with the canary versions once unit tests pass.
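That start/stop abstraction might look something like this stripped-down sketch (function names and backends are illustrative, not the commenter's code, and the real commands are only echoed here so the dispatch is easy to see):

```shell
#!/usr/bin/env bash
# One entry point per action; the backend argument decides whether the
# underlying tech is Docker or libvirt. Callers never need to know which.

start_service() {
    local name="$1" backend="$2"
    case "$backend" in
        docker)  echo "docker start $name" ;;   # real call: docker start "$name"
        libvirt) echo "virsh start $name" ;;    # real call: virsh start "$name"
        *) echo "unknown backend: $backend" >&2; return 1 ;;
    esac
}

stop_service() {
    local name="$1" backend="$2"
    case "$backend" in
        docker)  echo "docker stop $name" ;;
        libvirt) echo "virsh shutdown $name" ;;
        *) echo "unknown backend: $backend" >&2; return 1 ;;
    esac
}
```

A canary runner can then loop over a service list calling start_service/stop_service without caring which entries are containers and which are VMs.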

Basically I got sick of reinventing the wheel every time a new technology came out and eventually boiled everything down into bash so that it'll run on anything it needs to. Maybe userland Podman becomes the new hotness next year, or maybe I run a full-fat k8s like I do at work. Pure bash lets me have control over everything, see how everything fits together, and make minor modifications to accommodate anything I need it to.

It sounds more complicated than it really is; it took me like a week of evenings to write, and it's worked flawlessly for almost a year now. I also really really really hate clicking things by hand lol, so I automate anything I can. Since switching off Proxmox, this is the first environment that I have entirely automated, from bare metal to fully running, in a single command.

I'm incredibly lazy; it's one of my best qualities.

[–] [email protected] 1 points 9 months ago (1 children)

That sounds pretty slick. I envy your scripting prowess. You really have to know your system top to bottom to be able to boil it all down like that.

I’m just beginning my journey into this whole space, and it’s really interesting how many different ways people have to deal with the same basic things.

I’m also incredibly lazy, so maybe more scripting is in my future! Thanks for taking the time to write such a detailed reply!

[–] [email protected] 1 points 9 months ago (1 children)

I certainly wasn't just born good at this. Unironically, if you want to learn how something works, try to automate it. By the time it's automated you'll understand basically every part of it, at least at a high level.

[–] [email protected] 1 points 9 months ago

That is incredibly true. I try to automate everything I can. That’s where laziness is a superpower.

[–] [email protected] 1 points 9 months ago

my other server is a cloud tho

[–] [email protected] 9 points 9 months ago (1 children)

I would think that any immutable Linux distribution would be suitable. Just configure it with the services that you want. Is there anything specific you need?

[–] [email protected] 1 points 9 months ago (1 children)

Honestly, I had never built a NAS and installed an OS on it before. I've only ever used the junk that ASUSTOR puts out, and I want to have control over things. So a good part of the reason I asked on here was to see what other people had done and why.

[–] [email protected] 1 points 9 months ago

Just think of the NAS as a desktop that you ssh into. The only difference is that you install the server version of the distro. If you know how to use a desktop Linux box and configure it via the command line, you can do the same with a server. It will be the same, except over ssh.

Hardware wise, normal desktop parts are good enough to build a NAS. You don’t need to buy anything special that is NAS specific. The only exception might be the case. If you want a lot of storage the case should be able to accommodate that. Some desktop cases don’t have 3.5” drive slots anymore.

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago)

Flatcar Linux (this is what I use for my NAS/homeserver) and CoreOS are both good.

edit: openSUSE has MicroOS: https://microos.opensuse.org/

[–] [email protected] 7 points 9 months ago

Just use TrueNAS scale

[–] [email protected] 7 points 9 months ago

MicroOS from openSUSE. The nice thing is that initial config at boot is similar to a Nix config, where you can set everything like network, users, passwords, installed packages, etc. This is done via Ignition and Combustion files. There's a handy file creator to make life easier: https://opensuse.github.io/fuel-ignition/edit

[–] [email protected] 6 points 9 months ago (1 children)

Would TrueNAS fit as immutable? I guess it doesn't stop you from changing things, but doing so might break the next update.

Configuration can be exported. Disaster recovery by doing a fresh install and restoring the configuration has worked for me. No data loss, and even the virtual machines started right back up.

[–] [email protected] 1 points 9 months ago

One could argue that they do "try to stop you"... technically... by disabling the execute bit on software update tools (like apt & dpkg)... but I see that more as a gentle reminder and acknowledgement of your ownership of the machine, as they could have easily just not had those tools present at all.

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago) (1 children)

github.com/secureblue/secureblue

It has a server variant!

I find it easier to use than CoreOS, as I never got around to learning how to use the Ignition thing. They are also hardened, which is important, especially for servers.

[–] [email protected] 3 points 9 months ago (1 children)

Oh I like the look of that.

[–] [email protected] 2 points 9 months ago

It works great; after dealing with lots of the opinionated stuff, adding a userns variant, making Flatpaks work, disabling CUPS instead of removing it, etc., it is now very usable on the desktop.

The server variant should be just as good. Use Podman for containers; installing Docker would weaken the security, I guess.

[–] [email protected] 4 points 9 months ago (2 children)

Containerization is not virtualization, so why would it have any bearing on hardware transcoding?

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago)

I found this guide for setting up GPU access for Unprivileged LXC containers when I googled around:
Giving a LXC guest GPU access allows you to use a GPU in a guest while it is still available for use in the host machine.
https://bookstack.swigg.net/books/linux/page/lxc-gpu-access

Talked about here:
https://old.reddit.com/r/Proxmox/comments/15zbjyl/proxmox_igpu_passthrough_to_multiple_lxc_plex/jxgn7pb/
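For reference, the kind of container config the linked guide walks through looks roughly like this: an unprivileged LXC container is given the host's /dev/dri render nodes so VAAPI transcoding keeps working on both sides. Treat it as a sketch; the character-device major number 226 is the usual one for DRI devices, but paths and numbers can differ per system.

```
# In the container's LXC config (e.g. /etc/pve/lxc/<id>.conf on Proxmox):
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```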

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

Several comments specifically talked about VMs for the various apps. And frankly, I'm not super familiar with the limitations of containerizing apps either. That's part of why I was looking for an immutable OS + Flatpaks/Snaps: it's much more similar to a normal Linux system, just organized in a way that doesn't break shit.

[–] [email protected] 2 points 9 months ago

Use containers (Docker or LXC) instead of VMs.

[–] [email protected] 3 points 9 months ago

I'm using Unraid, which is built on top of Slackware. It has a very nice Docker web UI for apps like Jellyfin. It's not immutable though. I don't know of any NAS-specific OSes that are immutable.

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (1 children)

Typically on a home server you would virtualize services anyway so it really doesn't matter what distro is running on the metal.

And also if you're fully virtualized you can switch out the host distro anytime you want, so you can adopt an immutable one later if you want.

Why do you want an immutable distro anyway?

[–] [email protected] 2 points 9 months ago (1 children)

I want immutability because I come from the Debian world, where everything just works. But I want the benefits of using modern versions of packages.

[–] [email protected] 0 points 9 months ago

If you're running unstable system packages, immutability won't really save your stability.

So don't complicate it: just use Debian with Nix and home-manager. That way you have a stable base, and you can keep a list of bleeding-edge packages that should be installed. In any case it should essentially be only Docker plus whatever can't be dockerised.
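As an illustration of that split, a minimal home-manager fragment might look like this (the package names are placeholders for whatever needs to be fresher than Debian stable; the Debian base itself stays untouched):

```nix
{ pkgs, ... }: {
  # Bleeding-edge packages come from nixpkgs, not from Debian.
  home.packages = with pkgs; [
    jellyfin-ffmpeg   # example: a newer ffmpeg build than Debian stable ships
    docker-compose
  ];
  programs.home-manager.enable = true;
}
```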

[–] [email protected] 3 points 9 months ago

Just use anything and set up a good workflow with snapshots.

Have a "current" snapshot, roll back to it before using the system, and then re-snapshot over it.

Now your system is immutable in practice, but you can still edit /etc to debug.
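With Btrfs, for example, that cycle is only a handful of commands. The sketch below just prints them so the sequence is easy to follow (paths are placeholders, and in practice you'd run this from a parent mount rather than from inside the subvolume being replaced; ZFS or LVM snapshots follow the same pattern):

```shell
#!/usr/bin/env bash
# Print the rollback-then-resnapshot cycle for a root subvolume.
snapshot_cycle() {
    local root="$1" snap="$2"
    echo "btrfs subvolume delete $root"             # discard drifted state
    echo "btrfs subvolume snapshot $snap $root"     # restore from 'current'
    echo "btrfs subvolume delete $snap"             # drop old 'current'
    echo "btrfs subvolume snapshot -r $root $snap"  # re-snapshot over it
}
```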