Selfhosted

39650 readers
299 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS
1
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2
 
 

Evening Lemmy,

I have run into a small hiccup in my self-hosting journey. YouTube on my TV in the living room has ads... and they become more unbearable by the day. To that end, I'd like to set up a Raspberry Pi (or something) to run as a one-stop shop for media. Ideally, I'd like it to have YouTube (or more likely NewPipe/FreeTube), Steam Link and access to my Jellyfin instance. Even better, I'd like this to be controllable with a controller (TV remote, Steam Controller, doesn't matter). The reason for the latter is that I'd rather not create too much trouble for my wife when she uses the TV.

I've done some looking, and I seem to be able to get an Amazon Fire Stick to run NewPipe, Jellyfin, and maybe even Steam Link, but from the stories I've read it's... less than ideal. So, I was hoping there may be an alternative.

The goal is to get all three in one system, with decently user friendly functionality.

Has anyone set up something similar, and could you point me in a direction?

3
20
submitted 22 hours ago* (last edited 12 hours ago) by [email protected] to c/[email protected]
 
 

Hi people. I am running Pi-hole under Podman with its own dedicated system account on my NAS. From the NAS itself I get connection refused on ip.of.the.nas:53, but everywhere else on my network Pi-hole works perfectly. To run Pi-hole as a rootless container, I made it listen on 1053 and added a firewall redirect from 53 to 1053 for both UDP and TCP. Any pointers on where (and how) I can debug this?

Edit: A small clarification about my current setup: ISP router (so I can't really do anything on it) and the NAS runs openSUSE Leap.
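
One thing worth checking: a NAT forward-port rule usually lives in the PREROUTING chain, which only sees packets arriving from other hosts; traffic the NAS generates towards its own address goes through the OUTPUT chain instead, which would explain exactly this symptom. A rough sketch of the extra rule, assuming iptables/firewalld and using ip.of.the.nas as a placeholder (the -d restriction matters, otherwise Pi-hole's own upstream queries would get redirected back to itself):

# redirect locally generated queries to the NAS's own address as well
$ sudo iptables -t nat -A OUTPUT -p udp -d ip.of.the.nas --dport 53 -j REDIRECT --to-ports 1053
$ sudo iptables -t nat -A OUTPUT -p tcp -d ip.of.the.nas --dport 53 -j REDIRECT --to-ports 1053
# roughly the same thing as a firewalld direct rule:
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -p udp -d ip.of.the.nas --dport 53 -j REDIRECT --to-ports 1053
$ sudo firewall-cmd --reload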

4
96
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 
 

Hey guys, version 2.5.3 of Tasks.md just got released!

This release is actually pretty small, as I focused a lot on resolving technical debt, fixing visual inconsistencies and improving "under the hood" stuff, which I will continue to do a little more of before the next release.

For those who don't know, Tasks.md is a self-hosted, Markdown-file-based task management board. It's like a kanban board that uses your filesystem as a database, so you can manipulate all cards within the app or edit them directly in a text editor; changes in one place are reflected in the other.

The latest release includes the following:

  • Feature: Generate an initial color for new tags based on their names
  • Feature: Add new tag name input validation
  • Fix: Use environment variables in Dockerfile ENTRYPOINT
  • Fix: Allow dragging cards when sort is applied
  • Fix: Fix many visual issues

Edit: Updated with the correct link, sorry for the confusion! The fact that someone created another application with the same name I used for the one I made is really annoying.
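
If anyone wants to give it a spin, this is roughly how it runs under Docker - a minimal sketch; the image name, port and volume paths here are from memory, so double-check them against the project's README:

# cards are stored as plain .md files in the mounted tasks directory
$ docker run -d --name tasks.md -p 8080:8080 \
    -v ~/tasks:/tasks \
    -v ~/.config/tasks.md:/config \
    baldissaramatheus/tasks.md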

5
 
 

I have a small homelab that is not open to the internet. I am considering the following setup. Please let me know if there are any glaring issues or if I am over complicating things.

  • I want to set up a reverse proxy in the cloud that will also act as a certificate authority. (I want to limit who can access the server to a small group of people.)

  • I will set up a VPN from a Raspberry Pi in my home to the reverse proxy in the cloud.

  • The traffic will pass from the Raspberry Pi VPN to my homelab.

I am not sure if I need the Raspberry Pi. I like having the reverse proxy in the cloud since I do not have a static IP at home. I would just get a cheap VPS from Hetzner or something like that.
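
For reference, this is roughly what the VPS-to-home leg could look like with WireGuard - just a sketch with placeholder keys, addresses and hostnames, and the same idea works with other VPNs:

# on the home end (the Pi, or the homelab box itself) - it dials out to the VPS,
# so no port forwarding or static IP is needed at home
$ cat <<'EOF' | sudo tee /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.2/24
PrivateKey = <home-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25
EOF
$ sudo systemctl enable --now wg-quick@wg0

# on the VPS, the mirror-image config listens on 51820, and the reverse proxy
# simply forwards to the tunnel address, e.g. proxy_pass http://10.8.0.2; in nginx.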

6
26
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 
 

I have Grafana and InfluxDB set up, but it is fairly complex for what I am doing. I don't want to spend a bunch of time creating dashboards and thinking about the movement of data. I am looking for something simple.

I am looking to mostly monitor uptime and Ansible automations.

Edit:

Found this: gethomepage.dev

7
 
 

I need to record information about what my cat eats and does, as she might have a food allergy and I need to track down what it is.

So I am after some kind of a user friendly locally hosted database (maybe via some kind of app), preferably Linux friendly.

It would be nice if it had relationships similar to the added image, some kind of relational DB that I can fill with data. But essentially I need a bunch of lookup tables that return data specific to different events.

It's a bit of a pain (and takes time) to write an entire webapp from scratch to manage all this, which is why I am looking for some kind of user-friendly GUI way to do it. Surely there must be some kind of relational database management "application" that lets you set up some lookup tables and enter data in a nice and easy GUI way? sqlitebrowser doesn't count, as it doesn't handle linked tables in a nice way (it would also be nice if it's friendly enough for my wife to use) :)

Cheers!

8
 
 

Hi guys! I'm looking to monitor/control the power consumption of some old window-hanging aircon units that don't really mind when the power is literally cut at the wall. I'd like to be able to see how much power they consume, and also to turn them on and off at the socket (the IR doesn't work all that well to begin with). I was looking at the Tapo P110M, but it seems these don't report power consumption offline; you need to register them in the app and they only do it through a Tapo account.

What alternatives do I have?

Important, I guess: as I live in an ex-UK colony, we have UK-like three-pronged sockets, so that's the form factor (Type G, I think?) I'd be needing.

9
33
submitted 2 days ago* (last edited 2 days ago) by [email protected] to c/[email protected]
 
 

Hi there,

I'm thinking about what the options are for a portable media center you can take with you in the car, on the train or wherever.

I imagine that the media center would create its own WiFi, so that devices would be able to connect to it and access the media.

I know you could do something with a Raspberry Pi, but how could this work in practice? What would be an easy way to access the media from an iPad, for example? What software could be used?

As a bonus, it would be pretty cool if the media center could connect to a hotel WiFi and then create a hotspot from that.

Edit: This would be used when on the move. So you would have the media with you on the media center.
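
To sketch the access-point part (assuming a Raspberry Pi and hostapd; the SSID, passphrase and channel below are placeholders, and you'd still need a static IP on wlan0 plus dnsmasq for DHCP):

$ sudo apt install hostapd dnsmasq
$ cat <<'EOF' | sudo tee /etc/hostapd/hostapd.conf
interface=wlan0
driver=nl80211
ssid=TravelMedia
hw_mode=g
channel=7
wpa=2
wpa_passphrase=change-me
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
EOF
$ sudo systemctl unmask hostapd
$ sudo systemctl enable --now hostapd

# Jellyfin (or Kodi, etc.) on the Pi then serves the media library to anything
# that joins the TravelMedia network.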

10
 
 

I'm using Calibre and Audiobookshelf. I'd love a solution where I can search the actual contents of the books - like being able to search for topics inside all of my books.

It would be a cool AI feature - similar to how Immich works.

Does anyone have a solution for that?

11
 
 

This app has really made my life better, so I thought I'd share it. It's a bookmark-everything app like raindrop.io or Pocket, except it is self-hosted. It has Firefox & Chrome extensions as well as iOS and Android mobile apps (so it's available pretty much everywhere).

You have the option to use AI for auto-tagging, or you can leave the feature off if AI bothers you. The AI can be either a locally hosted LLM or the ChatGPT API. I use it with a locally hosted LLM.

I'm not the developer, just a happy user.

https://hoarder.app/

12
 
 

I see a lot of talk of Ollama here, which I personally don't like because:

  • The quantizations they use tend to be suboptimal

  • It abstracts away llama.cpp in a way that, frankly, leaves a lot of performance and quality on the table.

  • It abstracts away things that you should really know for hosting LLMs.

  • I don't like some things about the devs. I won't rant, but I especially don't like the hint they're cooking up something commercial.

So, here's a quick guide to get away from Ollama.

  • First step is to pick your OS. Windows is fine, but if setting up something new, Linux is best. I favor CachyOS in particular for its great Python performance. If you use Windows, be sure to enable hardware-accelerated scheduling and disable shared memory.

  • Ensure the latest version of CUDA (or ROCm, if using AMD) is installed. Linux is great for this, as many distros package them for you.

  • Install Python 3.11.x, 3.12.x, or at least whatever your distro supports, and git. If on Linux, also install your distro's "build tools" package.

Now for actually installing the runtime. There are a great number of inference engines supporting different quantizations, forgive the Reddit link but see: https://old.reddit.com/r/LocalLLaMA/comments/1fg3jgr/a_large_table_of_inference_engines_and_supported/

As far as I am concerned, 3 matter to "home" hosters on consumer GPUs:

  • Exllama (and by extension TabbyAPI), as a very fast, very memory efficient "GPU only" runtime, supports AMD via ROCM and Nvidia via CUDA: https://github.com/theroyallab/tabbyAPI

  • Aphrodite Engine. While not strictly as vram-efficient, it's much faster with parallel API calls, reasonably efficient at very short context, and supports just about every quantization under the sun and more exotic models than exllama. AMD/Nvidia only: https://github.com/PygmalionAI/Aphrodite-engine

  • This fork of kobold.cpp, which supports more fine grained kv cache quantization (we will get to that). It supports CPU offloading and I think Apple Metal: https://github.com/Nexesenex/croco.cpp

Now, there are also reasons I don't like llama.cpp, but one of the big ones is that sometimes its model implementations have... quality degrading issues, or odd bugs. Hence I would generally recommend TabbyAPI if you have enough vram to avoid offloading to CPU, and can figure out how to set it up. So:

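Roughly, the setup looks like this - a sketch of the standard TabbyAPI install; check the project's README if any of these steps have changed:

$ git clone https://github.com/theroyallab/tabbyAPI
$ cd tabbyAPI
$ cp config_sample.yml config.yml   # edit this to point at your model, context size, cache mode
$ ./start.sh                        # creates a venv and pulls in dependencies on first run
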
This can go wrong; if anyone gets stuck, I can help with that.

  • Next, figure out how much VRAM you have.

  • Figure out how much "context" you want, aka how much text the LLM can ingest. If a model has a context length of, say, "8K", that means it can take 8K tokens as input, or somewhat fewer than 8K words. Not all tokenizers are the same: some, like Qwen 2.5's, can fit nearly a word per token, while others are more in the ballpark of half a word per token or less.

  • Keep in mind that the actual context length of many models is an outright lie, see: https://github.com/hsiehjackson/RULER

  • Exllama has a feature called "kv cache quantization" that can dramatically shrink the VRAM the "context" of an LLM takes up. Unlike llama.cpp, its Q4 cache is basically lossless, and on a model like Command-R, an 80K+ context can take up less than 4GB! It's essential to enable Q4 or Q6 cache to squeeze as much LLM as you can into your GPU.

  • With that in mind, you can search huggingface for your desired model. Since we are using tabbyAPI, we want to search for "exl2" quantizations: https://huggingface.co/models?sort=modified&search=exl2

  • There are all sorts of finetunes... and a lot of straight-up garbage. But I will post some general recommendations based on total vram:

  • 4GB: A very small quantization of Qwen 2.5 7B. Or maybe Llama 3B.

  • 6GB: IMO llama 3.1 8B is best here. There are many finetunes of this depending on what you want (horny chat, tool usage, math, whatever). For coding, I would recommend Qwen 7B coder instead: https://huggingface.co/models?sort=trending&search=qwen+7b+exl2

  • 8GB-12GB: Qwen 2.5 14B is king! Unlike its 7B counterpart, I find the 14B version of the model incredible for its size, and it will squeeze into this vram pool (albeit with very short context/tight quantization for the 8GB cards). I would recommend trying Arcee's new distillation in particular: https://huggingface.co/bartowski/SuperNova-Medius-exl2

  • 16GB: Mistral 22B, Mistral Coder 22B, and very tight quantizations of Qwen 2.5 32B are possible. Honorable mention goes to InternLM 2.5 20B, which is alright even at 128K context.

  • 20GB-24GB: Command-R 2024 35B is excellent for "in context" work, like asking questions about long documents, continuing long stories, anything involving working "with" the text you feed to an LLM rather than pulling from its internal knowledge pool. It's also quite good at longer contexts, out to 64K-80K more or less, all of which fits in 24GB. Otherwise, stick to Qwen 2.5 32B, which still has a very respectable 32K native context and a rather mediocre 64K "extended" context via YaRN: https://huggingface.co/DrNicefellow/Qwen2.5-32B-Instruct-4.25bpw-exl2

  • 32GB: same as 24GB, just with a higher-bpw quantization. But this is also the threshold where lower-bpw quantizations of Qwen 2.5 72B (at short context) start to make sense.

  • 48GB: Llama 3.1 70B (for longer context) or Qwen 2.5 72B (for 32K context or less)

Again, browse huggingface and pick an exl2 quantization that will cleanly fill your vram pool + the amount of context you want to specify in TabbyAPI. Many quantizers such as bartowski will list how much space they take up, but you can also just look at the available filesize.

  • Now... you have to download the model. Bartowski has instructions here, but I prefer to use this nifty standalone tool instead: https://github.com/bodaay/HuggingFaceModelDownloader (see the sketch after this list)

  • Put it in your TabbyAPI models folder, and follow the documentation on the wiki.

  • There are a lot of options. Some to keep in mind are chunk_size (higher than 2048 will process long contexts faster but take up lots of vram, less will save a little vram), cache_mode (use Q4 for long context, Q6/Q8 for short context if you have room), max_seq_len (this is your context length), tensor_parallel (for faster inference with 2 identical GPUs), and max_batch_size (parallel processing if you have multiple users hitting the tabbyAPI server, at the cost of more vram).

  • Now... pick your frontend. The tabbyAPI wiki has a good compilation of community projects, but Open Web UI is very popular right now: https://github.com/open-webui/open-webui I personally use exui: https://github.com/turboderp/exui

  • And be careful with your sampling settings when using LLMs. Different models behave differently, but one of the most common mistakes people make is using "old" sampling parameters for new models. In general, keep temperature very low (<0.1, or even zero) and rep penalty low (1.01?) unless you need long, creative responses. If available in your UI, enable DRY sampling to tamp down repetition without "dumbing down" the model with too much temperature or repetition penalty. Always use a MinP of 0.05 or higher and disable other samplers. This is especially important for Chinese models like Qwen, as MinP cuts out "wrong language" answers from the response.

  • Now, once this is all setup and running, I'd recommend throttling your GPU, as it simply doesn't need its full core speed to maximize its inference speed while generating. For my 3090, I use something like sudo nvidia-smi -pl 290, which throttles it down from 420W to 290W.
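
To make the download and config steps concrete, here is a rough sketch. I'm using huggingface-cli rather than the downloader linked above, the repo/branch is just the example from the VRAM list, and the config keys are the options mentioned earlier - the exact nesting can differ between TabbyAPI versions, so compare against config_sample.yml:

# exl2 repos usually keep each bpw on its own branch - check the repo's branch list
$ pip install "huggingface_hub[cli]"
$ huggingface-cli download bartowski/SuperNova-Medius-exl2 --revision 6_5 \
    --local-dir ~/tabbyAPI/models/SuperNova-Medius-exl2-6_5

# then the relevant knobs in tabbyAPI's config.yml look roughly like:
#   model:
#     model_dir: models
#     model_name: SuperNova-Medius-exl2-6_5
#     max_seq_len: 32768      # your context length
#     cache_mode: Q4          # Q4 for long context, Q6/Q8 for short
#     chunk_size: 2048
#     tensor_parallel: false  # true with 2 identical GPUs
#     max_batch_size: 1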

Sorry for the wall of text! I can keep going, discussing kobold.cpp/llama.cpp, Aphrodite, exotic quantization and other niches like that if anyone is interested.

13
 
 

Hello,

Long time lurker, first time poster and eternal newbie in selfhosting.

I have installed a Cloudflare Tunnel in order to allow my Emby installation to be reached externally. (I was previously using Tailscale, but I'm now trying this solution to expand my 'reach' and include my parents' household.)

The tunnel with email OTP works like a charm, but the access seems to be browser-specific, so the Emby app doesn't seem to be able to connect (as it hits the email OTP challenge, I suppose).

Is there a way to combine both?

I actually went down the path of writing a little script that tries to authorize the IP of someone who managed to pass the OTP challenge via browser (I get the user's IP and update the Cloudflare policy via its API).

It seems like overkill; any suggestions?

Thx

14
 
 

I've set up my own federated podcast through Castopod, but I'm unsure how to federate it with Lemmy directly. It's project-focused, centered around FOSS tooling and just enjoying life.

Any suggestions on better integrating it with Lemmy? Thanks all. Posted to Technology community as well.

15
 
 

Instructions here: https://github.com/ghobs91/Self-GPT

If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).

  • Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
  • Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
  • Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider.
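
If you'd rather wire the two pieces up by hand instead of using the script, the stock Docker images of both projects work as well - a minimal sketch with the usual default ports and volume names:

# LLM backend
$ docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
# chat front end, pointed at the Ollama API
$ docker run -d --name open-webui -p 3000:8080 \
    -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
    --add-host=host.docker.internal:host-gateway \
    -v open-webui:/app/backend/data ghcr.io/open-webui/open-webui:main
# pull a model, then chat at http://localhost:3000
$ docker exec -it ollama ollama pull llama3.1:8b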
16
 
 

I've been fighting with some issues on my Unraid server recently and I'm at a point where I need a graphics card to see the actual video output as the machine boots. The PSU in the chassis has no spare connectors I could use to feed additional power to a graphics card.

Can someone point me in the direction of a super cheap graphics card that will be used for console output only, not 3D graphics or gaming?

17
44
submitted 5 days ago* (last edited 4 days ago) by [email protected] to c/[email protected]
 
 

Update: I solved my problem. I got everything working by using this repo, but also by not using LibreWolf - looks like either I'm missing something about its setup, or syncstorage-rs (firefox sync) doesn't handle it well: I noticed that when using it I would get "ua.os.ver":"UNKNOWN" in the logs, so maybe it's related.

I'm trying to host my own Firefox Sync server. I got it running using Docker and the instructions from this GitHub repo. Everything looks fine, I think... I can reach the host and I can reach the __heartbeat__ endpoint, getting this response: {"version":"0.13.6","quota":{"enabled":false,"size":0},"database":"Ok","status":"Ok"}, but nothing seems to sync!

I set it up first in my LibreWolf instance, and the Docker container logs look like this:

Oct 12 10:43:42.840 INFO Starting 1 workers
Oct 12 10:43:42.844 INFO Starting "actix-web-service-0.0.0.0:8000" service on 0.0.0.0:8000
Oct 12 10:43:42.844 INFO Server running on http://0.0.0.0:8000 (mysql) No quota
Oct 12 10:43:59.438 INFO {"ua.os.ver":"NT 10.0","ua.name":"Firefox","ua.browser.family":"Firefox","uri.method":"GET","ua.os.family":"Windows","uri.path":"/__heartbeat__","ua.browser.ver":"130.0","ua":"130.0"}
Oct 12 10:43:59.706 INFO {"ua.os.ver":"NT 10.0","ua":"130.0","ua.browser.ver":"130.0","ua.os.family":"Windows","uri.path":"/favicon.ico","ua.name":"Firefox","ua.browser.family":"Firefox","uri.method":"GET"}
Oct 12 10:44:11.178 INFO {"ua.browser.family":"Firefox","ua.browser.ver":"130.0","uri.method":"GET","uri.path":"/1.0/sync/1.5","ua.os.family":"Linux","token_type":"OAuth","ua.os.ver":"UNKNOWN","ua.name":"Firefox","ua":"130.0"}
Oct 12 10:44:11.540 INFO {"ua.name":"HTTP Library","ua.os.family":"Other","ua.browser.family":"Other","ua":"curl","uri.path":"/__heartbeat__","uri.method":"GET","ua.browser.ver":"curl","ua.os.ver":"UNKNOWN"}
Oct 12 10:44:11.756 INFO {"ua.os.ver":"UNKNOWN","uri.path":"/1.0/sync/1.5","ua":"130.0","uri.method":"GET","token_type":"OAuth","ua.browser.ver":"130.0","ua.browser.family":"Firefox","ua.name":"Firefox","first_seen_at":"1728729851747","metrics_uid":"fcdfa197568a554e5f5b0a2d05d7b674","ua.os.family":"Linux","uid":"fcdfa197568a554e5f5b0a2d05d7b67452c597ab6caf7770a423378f86d1a4c0"}

I set my sync settings to have add-ons, bookmarks and history synced. I installed some add-ons, saved some bookmarks and tried to sync with a new browser profile, then with Firefox on Fedora and Mull on Android, but nothing seems to be moving.

Any idea what more to do to troubleshoot this?

18
19
 
 

I previously asked here about moving to ZFS. So a week on I'm here with an update. TL;DR: Surprisingly simple upgrade.

I decided to buy another HBA that came pre-flashed in IT mode and without an onboard BIOS (so that server bootups would be quicker - I'm not using the HBA attached disks as boot disks). For £30 it seems worth the cost to avoid the hassle of flashing it, plus if it all goes wrong I can revert back.

I read a whole load about Proxmox PCIe passthrough, most of it out of date it would seem. I am running an AMD system and there are many suggestions online to set the grub parameter amd_iommu=on, which, when you read into the kernel parameters for the 6.x kernel Proxmox uses, isn't a valid value. I think I also read that there's no need to set iommu=pt on AMD systems. But it's all very confusing, as most wikis that should know better are very Intel-specific.

I eventually saw a YouTube video of someone running Proxmox 8 on AMD wanting to do the same as I was, and they showed that if IOMMU isn't set up, you get a warning in the web GUI when adding a device. Well, that's interesting - I don't get that warning. I am also lucky that the old HBA is in its own IOMMU group, so it should pass through easily without breaking anything. I hope the new one will be the same.

Worth noting that there are a lot of bad YouTube videos with people giving bad advice on how to configure a VM for ZFS/TrueNAS use - you need the disks passed through properly so the VM's OS has full control of them. That is why an IT-mode HBA is required over an IR one, but that alone doesn't mean you can't set the config up wrong.

I also discovered along the way that my existing file server VM was not set up to handle PCIe passthrough. The default machine type that Proxmox suggests - i440fx - doesn't support it, so that needs changing to q35, and it also has to be set up with UEFI. Well, that's more of a problem, as my VM is using BIOS. At this point it became easier to spin up a new VM with the correct settings and re-do its configuration. Other options to be aware of: memory ballooning needs to be off and the CPU set to host.

At this point I haven't installed the new HBA yet.

I installed a fresh copy of Ubuntu Server 24.04 LTS and it all feels very snappy. Makes me wonder about my old VM - I think it might be an original install of 16.04 that I have upgraded every 2 years and migrated over from my old ESXi R710 server a few years ago. Fair play to it, I have had zero issues with it in all that time. Ubuntu Server is just absolutely rock solid.

Not too much to configure on this VM - SSH, NFS exports, etckeeper, a couple of users and groups. I use etckeeper, so I have a record of the /etc of all my VMs that I can look back to, which has come in handy on several occasions.

Now almost ready to swap the HBA after I run the final restic backup, which only takes 5 mins (I bloody love restic!). Also update the fstabs of the VMs so they don't try to mount the file server, and stop a few from auto-starting on boot, just temporarily.

Turn the server off and get inside to swap the cards over. Quite straightforward other than the SAS ports being in a worse place for ease of access. Power back on. Amazingly it all came up - last time I tried to add an NVME on a PCIe card it killed the system.

Set the PCIe passthrough for the HBA on the new VM. Luckily the new HBA is in its own IOMMU group (maybe that's somehow tied to the PCIe slot?). Make sure to tick the PCIe flag so it's not treated as PCI - remember PCI cards?!

Now the real deal. Boot the VM, SSH in. fdisk -l lists all the disks attached. Well, this is good news! Try to create the zpool: zpool create storage raidz2 /dev/disk/by-id/XXXXXXX ...... Hmmm, can't do that, as it knows they're RAID disks and mdadm has tried to assemble them, so they're in use. Quite a bit of investigation later, with a combination of wipefs -af /dev/sdX, umount /dev/md126, mdadm --stop /dev/md126 and shutdown -r now, the RAIDness of the disks is gone and I can re-run the zpool command. And that worked! Note: I forgot to add ashift=12 to my zpool creation command - I have only just noticed this as I write - but thankfully it was clever enough to pick the correct one.

$ zpool get all | grep ashift
storage  ashift                         0                              default

Hmmm, what's 0?

$ sudo zdb -l /dev/sdb1 | grep ashift
ashift: 12

Phew!!!
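
For anyone repeating this, you can skip the suspense by setting it explicitly at creation time - a one-liner sketch (device paths elided as above):

# ashift=12 = 4K sectors; ZFS usually autodetects, but being explicit avoids surprises
$ sudo zpool create -o ashift=12 storage raidz2 /dev/disk/by-id/XXXXXXX ......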

I have also passed through the USB backup disks, mounted them and started restoring the restic backup. So far it's 1.503TB in after precisely 5 hours, which seems OK.

I'll set up monthly scrub cron jobs tomorrow.
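
Probably something along these lines - a sketch, adjust the schedule to taste:

# /etc/cron.d/zfs-scrub - scrub the pool at 03:00 on the first of every month
0 3 1 * * root /usr/sbin/zpool scrub storage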

P.S. I tried TrueNAS out in a VM with no disks to see what it's all about. It looks very nice, but I don't need any of that fanciness. I've always managed my VMs over SSH, which I've felt is lighter weight and less open to attack.

Thanks for stopping by my Ted Talk.

20
 
 

Vague title I know, but I'm enough of a beginner at this to not really know what I need to ask!

I would like to rent a server, that allows me to spin up different services, including things like Windows to use as a remote desktop. Ideally, I would then be able to just migrate this whole setup to my home server.

I thought it would be as easy as renting a scalable VPS, but apparently if you run something like Proxmox on those, you'll get terrible performance?

My understanding is that I'd need to rent a bare metal server, but then my 'scalability' will suffer - I can't just wind the specs up and down as needed, correct?

My use case: For the next several months, I'm on the road without a proper computer. I may have some work doing CAD drafting, hence Windows. I'd also like to have some containers to run dev tools, databases and web hosting. I'd also like to use the same service to start building my future home server environment - Nextcloud, *arr, etc. Once I'm back home, I'd like to easily migrate this setup to a local machine, then continue to use the server as my own cloud and public entry point. And further down the line, hosting a gaming server for friends. In terms of location, Sydney would be great.

Will a VPS do this? Or do I need bare metal? Is there a single service that will allow me to do both, with one billing? Or am I doing a Dunning-Kruger?

Thanks in advance for your hints.

21
 
 

I'd like to host my own container images centrally on my network so that I can both cache images (in case Docker Hub or similar goes down) and host my own images that I don't want public. Anyone doing this?
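
One common way to do this is the plain registry:2 image: one instance running as a pull-through cache of Docker Hub, and a second for your private images (a proxy registry can't accept pushes, hence the split). A rough sketch - hostnames, ports and paths are placeholders:

# pull-through cache of Docker Hub on :5000
$ docker run -d --name registry-mirror -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    -v /srv/registry-mirror:/var/lib/registry registry:2
# private registry for your own images on :5001
$ docker run -d --name registry-private -p 5001:5000 \
    -v /srv/registry-private:/var/lib/registry registry:2
# then point each Docker daemon at the mirror in /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://nas.lan:5000"] }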

22
 
 

I would like to scale back my hosting costs and migrate one (or a few) sites over to a machine that I host at home.

The bandwidth is more than enough to cover the traffic of these small sites.

The simplicity of IPv6 has attracted me to the idea of exposing that server over IPv6 for hosting, while my daily machines remain on the IPv4 side of the stack.

I don't care if this means that the sites are reachable by fewer visitors, as the traffic has never been huge.

Am I going down a rabbit hole that I will later regret? How would you do this right?

23
 
 

I want to set up ufw on my server, but something is wrong here. Even when I try to block port 22, SSH keeps working and nothing changes. I have ufw enabled, but nothing works.
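
A few things worth checking - a rough sketch, the rule numbers and output will differ on your box:

$ sudo ufw status verbose     # confirm it is actually active and see the default policies
$ sudo ufw status numbered    # rules match in order, so an earlier "allow 22/tcp" wins over a later deny
$ sudo ufw delete <number>    # drop the allow rule (placeholder number), then
$ sudo ufw deny 22/tcp
$ sudo ufw reload
# also note: ports published by Docker bypass ufw entirely, since Docker inserts
# its own iptables rules ahead of ufw's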

24
 
 

I'm looking for a selfhosted app for inventory management with SSO support.

I already looked at Grocy and Homebox. Grocy seems to be a really good app for the purpose. It's overloaded with features, but fortunately you can deactivate the ones you don't need.

The only thing missing is SSO support via OIDC or SAML. Are there any alternatives that do support SSO?

25
 
 

Hello,

Small question to this incredible community.

Does anybody have a good suggestion about a link manager with plug-ins for different browsers?

If it could also support the Samsung browser, that would be an incredible plus.

In my use case I intend to (easily) save some links for reading later, and integration with a mobile browser is fundamental to making things easy.

Thanks in advance!!
