submitted 4 months ago by [email protected] to c/[email protected]

Hi all!

I have decided to set up and self-host my own private Lemmy instance.

I will be doing it with Docker (Podman, actually).

Should I host at home or use a dedicated VPS?

Is anybody self-hosting their own Lemmy?

Do you guys have any hints for me?
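For what it's worth, a trimmed, hypothetical sketch of what a Podman/compose stack for Lemmy tends to look like — image names, ports, env vars and the config file are assumptions to verify against the official Lemmy deployment docs before use:

```yaml
# Hypothetical sketch for podman-compose; NOT the official compose file.
services:
  postgres:
    image: docker.io/postgres:16-alpine
    environment:
      POSTGRES_USER: lemmy
      POSTGRES_PASSWORD: changeme       # placeholder secret
      POSTGRES_DB: lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data

  lemmy:
    image: docker.io/dessalines/lemmy:latest
    volumes:
      - ./lemmy.hjson:/config/config.hjson   # server config, see Lemmy docs
    depends_on:
      - postgres

  lemmy-ui:
    image: docker.io/dessalines/lemmy-ui:latest
    environment:
      LEMMY_UI_LEMMY_INTERNAL_HOST: lemmy:8536
    ports:
      - "1234:1234"
    depends_on:
      - lemmy
```

A reverse proxy with TLS in front (and a pict-rs container for images) would complete the picture; whether home or VPS, federation needs the instance reachable on a public domain.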

submitted 5 months ago by [email protected] to c/[email protected]

Hi all!

I have a nice setup with some containers (Podman rootless) and bare metal services (anything I can install bare metal usually goes bare metal).

I used Monit, in the past, to keep an eye on my services and automatically restart anything that goes down for whatever reason. I stopped using Monit because it doesn't scale well in a mobile browser and it's frankly clumsy to configure.

I could go back to Monit, I guess, but I am wondering if there is anything better out there to try.

A few requirements (not necessarily mandatory, but preferable):

  • Open source (ideally true open source, not just commercial solutions with dumbed-down free versions)
  • Not limited to, or focused on, containers (no Watchtower and similar)
  • For containers, it can just support "is it working" and "restart it"
  • For containers, if it goes beyond the minimum "works" and "restart", it must support Podman
  • Must support bare metal services (status, start, stop)
  • Must send email or other kinds of notifications (IM notifications are OK, but email is preferred)
  • Should additionally monitor external machines (e.g. other servers on the LAN) or generic IP addresses
  • Should detect if a web service is alive but blocked
  • No need for fancy GUIs or a web GUI (it's a plus, but not required)
  • No need for data reporting, graphs and such amenities. They are a plus, but 100% not required.
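For comparison with whatever gets suggested: a Monit config fragment covering several of these requirements might look like the following — service names, pidfile paths, addresses and the mail server are all placeholders:

```
# Hypothetical monitrc fragment; adapt names/paths to your own setup.
set mailserver smtp.example.com port 587
set alert admin@example.com

# A bare-metal service: status, start, stop, restart on failure
check process nginx with pidfile /var/run/nginx.pid
    start program = "/usr/bin/systemctl start nginx"
    stop program  = "/usr/bin/systemctl stop nginx"
    if failed port 443 for 3 cycles then restart

# A remote machine on the LAN, by plain ping
check host nas with address 192.168.1.20
    if failed ping then alert

# "Alive but blocked": require a real HTTP answer within a timeout,
# not just an open TCP port
check host webapp with address 127.0.0.1
    if failed port 8080 protocol http request "/health"
       with timeout 10 seconds then alert
```

The HTTP protocol test with a timeout is the part plain port checks miss: a hung service keeps the port open but stops answering requests.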

What do you guys use?

submitted 5 months ago by [email protected] to c/[email protected]

I had played a great game called Hades Star for many years. It was available on Steam and on Android.

Amazing game: I loved the pace, the PvP (white stars, 5v5 up to 15v15 clan wars) and most of all the collaborative PvE (red stars).

It had a unique feeling and, while the grind was real (delivering cargo, yuck), it was manageable, to the point where you were competing with other players over who was better at it.

Then the idiot dev decided to kill it for an "upgrade" called Dark Nebula: the gameplay changed a lot, old players were forced onto the new game, but the feeling was totally different.

The player base dropped quickly, corporations (clans) went empty and collapsed, and the general gameplay was not fun anymore.

The dev ignored most complaints and warnings from users to achieve "his" vision of the game.

The game is now dead to me.

Did anybody else play it? What did you guys move to?

Shareisland (shareisland.org)
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]

Shareisland is open to registration.

Mostly Italian and multi language content.

Edit: seems broken at this time, but check back in the coming days, as it will work sooner or later. There has been an official announcement that registrations will open soon for a brief window. No idea precisely when; I supposed they were already open at the time of the announcement.

[-] [email protected] 30 points 6 months ago

Not sure it counts.

For my 30th birthday my father opened a bottle of 1878 Porto that his father had bought.

So it was 130 years old.

It was... unbelievable. Full of taste, very sweet, much more liquorous than regular Porto. We drank it quickly: what was left became fully undrinkable only a few hours later, totally spoiled. But for half an hour after being opened, it was truly the most amazing Porto I ever had.

It had been bottled before cars existed... before electricity became widespread...

Really a once-in-a-lifetime experience.

Now it's gone, but I keep the bottle for future storytelling.

submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]

Hi!

I used to have three RAID1 arrays:

2 x 4TB SSD dedicated to storing personal data

2 x 6TB HDD dedicated to storing "ISOs", the eye-patched ones.

2 x 4TB SSD for backups.

Ext4 everywhere.

I ran this setup for years, maybe even 20 years (with many different disks and sizes over time).

I decided it was time to be more efficient.

I removed the two HDDs, saving quite a lot of power, and switched the four SSDs to RAID5, then put BTRFS on top of that. Please note: I am not using the btrfs RAID feature, but Linux mdadm software RAID (which I have been using rock solid for years) with btrfs on top, as if on a single drive.

I chose MD not only for my past very positive experience, but especially because I love how easy it is to recover and restore from many kinds of issues.

I chose not to try ZFS because I don't feel comfortable using out-of-kernel drivers, and I dislike how RAM-hungry ZFS seems to be.
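For context, the mdadm-plus-btrfs layering described above boils down to a few commands — device names here are examples only, and the create step destroys whatever is on the listed disks:

```shell
# Build a 4-disk RAID5 out of the SSDs with mdadm
# (DESTRUCTIVE: wipes the listed devices; names are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Put btrfs on top of the md device, as if it were a single drive
# (btrfs's own RAID code is not involved)
mkfs.btrfs -L data /dev/md0
mount /dev/md0 /mnt/data

# Periodic scrub catches silent corruption at the filesystem level
btrfs scrub start /mnt/data
```

One nice property of this split: mdadm handles disk failure and rebuild, while btrfs checksums still detect (though, on a single "device", cannot self-heal) bit rot.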

What do you guys think?

Power usage (feddit.it)
submitted 6 months ago by [email protected] to c/[email protected]

I am smarting up my home with lots of Zigbee thingies and decided to add a Zigbee power monitor to my self-hosting setup.

It's a desktop box with a Core i7 9th gen, 48GB RAM, no monitor.

I used to have 4 x 4TB SSD + 2 x 6TB HDD, in three RAID1 arrays for a total of 4+4+6=14TB.

Power was sitting at 50W.

I restructured my storage: one RAID5 with the 4 SSDs (12TB) and removed the 2 HDDs.

Power went down to 38W!

I am amazed.

In the future I will run just one HDD for storing backups and keep it spun down 99% of the time.

PS: the above wattage is during transcoding, so with high CPU and disk usage...
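As for keeping that future backup HDD spun down, the drive can do it by itself; a sketch with hdparm, where the device name and timeout are placeholders to adapt:

```shell
# Ask the drive to spin down after 20 minutes idle
# (-S 240 means 240 * 5 seconds; value and device are examples)
hdparm -S 240 /dev/sdX

# Check the current power state without waking the drive up
hdparm -C /dev/sdX
```

Backup jobs will spin it up on demand, and it drops back to standby afterwards.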

submitted 7 months ago by [email protected] to c/[email protected]

(crossposting, because i think it can be interesting here too)

I like my video collection nicely organized and homogeneous in formats, codecs and resolution.

I know there are already plenty of solutions like Tdarr, but I find them a bit too much for my needs, and also pretty complex to use.

I wrote a simpler approach in a bash script called "Media Fixer"; the URL is at the top of this post.

Feel free to check it out and play with it. I hope it can be useful for others as well as for me.

It's released under the GPLv3.

submitted 7 months ago by [email protected] to c/[email protected]

Hi all, fellow one-eyed, one-legged, rum-guzzling sailors!

I like my ISO collection nicely organized and homogeneous in formats, codecs and resolution.

I know there are already plenty of solutions like Tdarr, but I find them a bit too much for my needs, and also pretty complex to use.

I wrote a simpler approach in a bash script called "Media Fixer"; the URL is at the top of this post.

Feel free to check it out and play with it. I hope it can be useful for others as well as for me.

It's released under the GPLv3.

[-] [email protected] 30 points 7 months ago

Side hustles should be hobbies, done with no need to monetize them.

What the fuck, your job should be enough to support you and let you live, which includes free time to enjoy your life and hobbies.

But I understand, and more than once in my life I have had to look for side hustles.

submitted 7 months ago by [email protected] to c/[email protected]

Well, here's my story; may it be useful to others too.

I have a home server with a 6TB RAID1 (OS on a dedicated NVMe). I was playing with a BIOS update and adding more RAM, and out of the blue, after the last reboot, my RAID was somehow shut down unclean and needed a fix. I probably unplugged the power cord too soon while the system was still shutting down containers.

Well, no biggie, I'll just run fsck and mount it, so there it goes: "mkfs.ext4 /dev/md0"

Then I hit "y" quickly when it said "the partition contains an ext4 signature blah blah". I was in a hurry, so...

Guess what? Now read that command again, carefully.

Too late. I hit Ctrl+C, but it was already too late. I could recover some of the files, but many were corrupted anyway.
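For reference, the command I meant versus the one I typed are one muscle-memory slip apart (assuming ext4 on /dev/md0):

```shell
# What I meant: check and repair the existing filesystem
fsck.ext4 -f /dev/md0

# What I typed: create a NEW filesystem, destroying the old one
# mkfs.ext4 /dev/md0        <-- do NOT run this on a disk with data

# A look-before-you-leap habit that would have saved me:
# print the existing superblock first
dumpe2fs -h /dev/md0 | head
```

The mkfs "contains an ext4 signature" prompt is the last line of defense; answering "y" on reflex defeats it.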

Luckily for me, I was able to recover 85% of everything from my backups (restic+Backrest to the rescue!), recreate another 5% (mostly docker compose files located in odd, non-backed-up folders), and recover the last 10% from the old 4TB drive I had replaced to increase space some time ago. Luckily, that was never-changing old personal stuff that I would have regretted losing, but didn't consider critical enough to back up.

The cold shivers I had before I checked my restic backups, discovering that I hadn't actually postponed the backup of those additional folders...

Today I will add another layer of backup in the form of an external USB drive to store never-changing data like... My ISOs...

This is my backup strategy up to yesterday, I have backrest automating restic:

  • 1 local backup of the important stuff (personal data mostly)
  • 1 second copy of the important stuff on a USB drive connected to an OpenWrt router on the other side of the home
  • 1 third copy of the important stuff on a remote VPS

And since this morning I have added:

  • a few git repos (pushed and backed up with the important stuff) with all docker compose files, keys and such (the 5%)
  • an additional local USB drive where I will back up ALL files, even that 10% which never changes and is not "important", but which I would miss if I lost it.

Tools like restic and Borg are so critical that you will regret not having had them sooner.

Set up your backups, like, yesterday. If you didn't already, do it now.
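The restic side of a setup like the one above, sketched as plain commands — repository paths, host and folder lists are placeholders; a scheduler like Backrest just runs the equivalent of these on a timer:

```shell
# Password comes from a file, never from the command line
export RESTIC_PASSWORD_FILE=/root/.restic-pass

# Local copy: initialize once, then back up
restic -r /srv/backup/restic init
restic -r /srv/backup/restic backup /home /etc /srv/docker

# Remote copy over SFTP (the OpenWrt box or the VPS)
restic -r sftp:backup@vps.example.com:/restic backup /home /etc /srv/docker

# Keep a sane retention and verify repository integrity now and then
restic -r /srv/backup/restic forget --keep-daily 7 --keep-weekly 4 --prune
restic -r /srv/backup/restic check
```

The periodic `check` is the step people skip and regret: a backup you have never verified is a hope, not a backup.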

submitted 7 months ago by [email protected] to c/[email protected]

My server currently has an Intel i7 9th-gen CPU with integrated Intel graphics.

I don't use or need AI or LLM stuff, but we use Jellyfin extensively in the family.

So far Jellyfin has always worked perfectly fine, but I could add (for free) an NVIDIA 2060 or a 1060. Would it be worth it?

And as for power consumption, will the increase be noticeable? Should I do it or pass?
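One thing worth checking before deciding: a 9th-gen i7's iGPU has Intel Quick Sync, which Jellyfin can use through VA-API, and for H.264/HEVC transcodes that is often already enough. A quick way to inspect what the hardware offers (package names vary by distro):

```shell
# List the codec profiles the Intel iGPU exposes via VA-API
vainfo

# See which hardware accelerations the local ffmpeg build supports
# (Jellyfin uses ffmpeg internally)
ffmpeg -hwaccels
```

If vainfo already lists the profiles you actually transcode, a discrete card would mostly add idle power draw rather than capability.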

[-] [email protected] 52 points 9 months ago

Podman, guys... Podman all the way...

[-] [email protected] 71 points 9 months ago* (last edited 9 months ago)

There is no "write and forget" solution. There never has been.

Do you think we have ORIGINALS of Greek or Roman written texts? No, we only have those that have been copied over and over through the centuries. Historians know this too well. And 90% of anything ever written by humans in all of history has been lost, even though it was written on more durable media than ours.

The future will hold only those memories of us that our descendants will take the time to copy over and over. Nothing that we will do today to preserve our media will last 1000 years in any case.

(Will we as a species survive 1000 more years?)

Still, it is our duty to preserve as much as we can for the future. If today's historians are any guide, the most important bits will be those least valuable today: the ones nobody will care to actually preserve.

Citing Alessandro Barbero, a top-notch contemporary Italian historian: he would kill to know what a common peasant had for breakfast in the tenth century. We know nothing about that, while we know a tiny little more about kings.

Web printing (feddit.it)
submitted 9 months ago by [email protected] to c/[email protected]

Hi!

I have set up ScanServJS, which is an awesome web page that accesses your scanner and lets you scan and download the scanned pages from your self-hosted web server. I have the scanner configured via SANE locally on the server, and now I can scan via the web from whatever device (phone, laptop, tablet, whatever) with the same consistent web interface for everyone. No need to configure drivers anywhere else.

I want to do the same with printing. On my server the printer is already configured using CUPS, and I can print from Linux laptops via the shared CUPS printer. But that requires a setup anyway, and while I could make it work for phones and tablets, I want to avoid that.

I would like to set up a nice web page, like the one for the scanner, where users, no matter what device they use, can upload files and print them, without installing or configuring anything on their devices.

Is there anything that I can self-host to this end?

submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]

Hi fellow hosters!

I self-host lots of stuff, from the classical *Arrs all the way to SilverBullet and photo services.

I even have two ISPs at home to manage failover in case one goes down; in fact, I rely on my home services a lot, especially when I am not at home.

The main server is a powerful but older laptop, whose battery I recently replaced because of its age, but my storage is composed of two RAID arrays, which are of course external JBODs with external power supplies.

A few years ago I purchased a cheap UPS, basically this one: EPYC® TETRYS - UPS https://amzn.eu/d/iTYYNsc

It works just fine and can sustain the two RAIDs long enough to ride out any small power outage.

The downside is that the battery itself degrades quickly and needs replacing every one or two years tops, which is not only a cost but also an inconvenience, because I always find out at the worst possible time (during a power outage), of course!

How do you tackle the issue in your setups?

I need to mention that I live in the countryside. Power outages happen maybe once or twice per year, so no big deal, just annoying.

[-] [email protected] 33 points 10 months ago

My guess is you had broken cables or defective connectors, because even on Cat5 (not Cat5e) you should get much more than 7 Mbit. Or did you have coaxial? LoL.

In my experience 90% of the problems are the plugs, especially if you crimped them yourself with cheap Chinese tools.
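A quick way to tell a bad crimp from a genuinely slow cable: check what speed the link actually negotiated, then measure real throughput (interface and host address are placeholders):

```shell
# Negotiated link speed: a marginal crimp often falls back
# to 100 or even 10 Mb/s instead of 1000
ethtool eth0 | grep -i speed

# Actual throughput between two LAN hosts
# (run "iperf3 -s" on the other machine first)
iperf3 -c 192.168.1.10
```

A link that negotiates 1000 Mb/s but measures single-digit Mbit/s with packet loss points at a flaky connector rather than the cable category.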

[-] [email protected] 34 points 10 months ago

Foreword: I only stream my music, from FLAC preferably. I don't own vinyl, and mostly I don't own CDs anymore either.

CD is dead and should stay dead. Rip it and stream it, full stop. There is no need or reason to keep a degrading physical medium when you can just rip it (at full quality, stored as FLAC) and stream it. That's the whole point.

Vinyl instead gives you the experience of listening, with all the associated crap/fun, depending on your POV.

So while there is a case for vinyl today (though I don't share it), there is zero case for CDs. Just download the bits. Don't waste plastic on a polluting and degrading medium that makes no sense today, when downloading a full-quality uncompressed audio file takes seconds.
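The rip-once workflow above, sketched with common CLI tools (run from a directory where the WAVs should land; wrappers like abcde automate these steps plus tagging):

```shell
# Extract every track from the CD to WAV, with error correction
cdparanoia -B

# Compress losslessly to FLAC at maximum compression
# (still bit-identical audio), then drop the WAVs
flac --best *.wav && rm *.wav
```

After that the disc never needs to touch a drive again; the FLACs stream bit-perfect forever.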

[-] [email protected] 31 points 10 months ago

A database is a non-starter for me too.

With plain MD files you can sync and edit everywhere, with any tool.

Sorry to say, but tools come and go... my notes don't. In 20 years' time I will be using different tools, but the same notes.

The same is true for photos, and that's why Immich is also a no-go for me.

[-] [email protected] 31 points 1 year ago

Some notes: don't use the GPU to re-encode; you will lose quality.

Don't worry about long encoding times, especially if the objective is long-term storage.

Power consumption might be significant. I run mine when the sun shines, and my photovoltaic picks up the tab.

And go AV1: it's open source, and the big players seem pretty committed to it. Much more than to H.265.
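For reference, a CPU-only AV1 encode along those lines might look like this — the CRF and preset values are illustrative starting points to tune, not a recommendation:

```shell
# Slow, high-quality software AV1 encode with SVT-AV1;
# lower preset = slower/better, lower CRF = bigger/better.
# Audio is copied untouched.
ffmpeg -i input.mkv -c:v libsvtav1 -preset 5 -crf 30 -c:a copy output.mkv
```

Software encoders like SVT-AV1 are exactly the "slow but better than GPU" trade-off: fine for archival, painful for real-time.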

[-] [email protected] 62 points 1 year ago

Go AV1... In my direct experience the space saving is simply amazing at the same quality.

H.265 doesn't seem to be the future, since all Android devices are required to support AV1 starting from Android 14.

[-] [email protected] 61 points 1 year ago

I'll answer for myself: on Linux, the neat tool called "mediainfo" will print an MKV's metadata, which includes the real ISO title.
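For scripting, mediainfo can also print just the title field instead of the full dump:

```shell
# Full metadata dump for the file
mediainfo movie.mkv

# Just the container title (handy inside shell scripts)
mediainfo --Inform="General;%Title%" movie.mkv
```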

[-] [email protected] 40 points 2 years ago

I wouldn't dream of using any stock Android at this point. I've been on LOS forever, and with each new phone I buy I either check that LOS is available or, in one case (my current phone), I ported LOS to it myself.

[-] [email protected] 36 points 2 years ago

Makes no sense. Just download the ISO (from a torrent if you like "pirate stuff", or from the official Red Hat site after free registration) and install it. Don't sign in, done.

Anyway, what you pay for is support and online resources, not the software.

Also, if you like "new" stuff, use other distros. RHEL is for stability and long-term support.


Shimitar

joined 2 years ago