1
303
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2
51
submitted 7 hours ago* (last edited 7 hours ago) by [email protected] to c/[email protected]

Hello Self-Hosters,

What is the best practice for backing up data from docker as a self-hoster looking for ease of maintenance and foolproof backups? (pick only one :D )

Assume directories with user data are mapped to a NAS share via NFS and backups are handled separately.

My bigger concern is how you handle all the other stuff that is stored locally on the server, like caches, databases, etc. The backup target will eventually be the NAS, and from there it'll be double-backed-up to externals.

  1. Is it better to run cp /var/lib/docker/volumes/* /backupLocation every once in a while, or is it preferable to define mountpoints for everything inside /home/user/Containers and then use a script to sync it to wherever you keep backups (there's a sketch of what I mean after this list)? What pros and cons have you seen or experienced with these approaches?

  2. How do you test your backups? I'm thinking about digging up an old PC to test them on. I assume I can just edit the IP addresses in the docker compose, mount my NFS dirs, and fail over to see if it runs.

  3. I started documenting my system in my notes and making a checklist of what I need to back up and where it's stored. Currently trying to figure out if I want to move some directories for consistency. Can I just do docker-compose down, edit the mountpoints in docker-compose.yml, and run docker-compose up to get a working system?
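For concreteness, the sync-script approach I mean in (1) would be something along these lines (just a sketch; paths are placeholders for my real layout):

#!/bin/sh
# Stop each stack, sync the bind mounts to the NAS, bring everything back up.
set -eu

COMPOSE_DIR="/home/user/Containers"      # one subdirectory per compose project
BACKUP_TARGET="/mnt/nas/backups/docker"  # NFS-mounted NAS share

for project in "$COMPOSE_DIR"/*/; do
    (cd "$project" && docker compose down)   # quiesce databases for consistency
done

rsync -a --delete "$COMPOSE_DIR/" "$BACKUP_TARGET/"

for project in "$COMPOSE_DIR"/*/; do
    (cd "$project" && docker compose up -d)
done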

3
216
submitted 21 hours ago* (last edited 21 hours ago) by [email protected] to c/[email protected]

Announcing Linkwarden 2.11

Today, we're excited to announce the release of Linkwarden 2.11! 🥳 This update brings significant improvements and new features to enhance your experience.

For those who are new to Linkwarden, it’s basically a tool for saving and organizing webpages, articles, and documents all in one place. It’s great for bookmarking stuff to read later, and you can also share your resources, create public collections, and collaborate with your team. Linkwarden is available as a Cloud subscription or you can self-host it on your own server.

This release brings a range of updates to make your bookmarking and archiving experience even smoother. Let’s take a look:

What’s new:

✨ Customizable Readable View

You can now configure the font style, font size, line height, and line width for the readable view. This allows you to create a more personalized reading experience that suits your preferences.

This feature essentially gives Linkwarden the kind of reading experience that other read-it-later apps like Pocket offered.

[GIF: customizable readable view]

📝 Add Notes to Highlights

You can now add notes to your highlights in the readable view and view them in the highlights sidebar. This is a great way to jot down your thoughts or insights while reading, making it easier to remember key points later.

[GIF: notes on highlights]

⚙️ Customizable Dashboard

The dashboard has received a major overhaul! You can now customize it to show the information that matters most to you. Choose from various widgets like recent links, pinned links, or your saved collections. This makes it easier to access the content you care about right from the dashboard.

📥 Import from Pocket

Good news for Pocket users! You can now import your saved links from Pocket into Linkwarden. This makes it easy to transition to Linkwarden without losing your existing bookmarks.

🌐 Crowdin translation

We’ve integrated Crowdin for translations, making it easier to contribute translations for Linkwarden. If you’re interested in helping out with translations, check out our Crowdin page.

To start translating a new language, please contact us so we can set it up for you. New languages will be added once they reach at least 50% translation completion.

[Image: Crowdin]

🎨 Improved UI

Thanks to Shadcn UI, the user interface has a more modern and polished look, making Linkwarden easier to use.

✅ And more...

There are also a bunch of smaller improvements and fixes in this release to keep everything running smoothly.

Full Changelog: https://github.com/linkwarden/linkwarden/compare/v2.10.2...v2.11.0

Want to skip the technical setup?

If you’d rather skip server setup and maintenance, our Cloud Plan takes care of everything for you. It’s a great way to access all of Linkwarden’s features—plus future updates—without the technical overhead.


We hope you enjoy these new enhancements, and as always, we'd like to express our sincere thanks to all of our supporters and contributors. Your feedback and contributions have been invaluable in shaping Linkwarden into what it is today. 🚀

4
54
submitted 1 day ago by [email protected] to c/[email protected]
5
583
submitted 1 day ago by [email protected] to c/[email protected]
6
456
Jellyfin over the internet (startrek.website)
submitted 1 day ago by [email protected] to c/[email protected]

What’s your go-to (secure) method for casting over the internet with a Jellyfin server?

I’m wondering what to use; I’m pretty much a beginner at this.

7
44
submitted 1 day ago by [email protected] to c/[email protected]

Hello,

as you can probably guess, I'm here because I need some help: I want to self-host some stuff and I'm pretty new to it. I did a lot of research and came up with a plan. I'll present my thoughts, and maybe some people here can tell me whether I'm on the right track.

First, the hardware.

I did a lot of research and settled on an HP EliteDesk 800 G5 Mini as my home server.

It can hold 2x NVMe SSDs and 1x SATA SSD. It has an Intel Core i5-9500T and is upgradeable to 64 GB of RAM.

I can get one used on eBay for maybe 150-170€. Then I need to upgrade the RAM, because it only comes with 8 GB; I thought maybe 32 GB for now. I'd also buy two NVMe SSDs, both 2 TB (I don't know which brands are cheap and good there). The SATA SSD could hold the operating system; I have a 120 GB one at home and hope that's enough.

One NVMe SSD is for storage, mainly photos, videos, and maybe a small audio collection. The other is to hold a backup of all this (mirrored).

 

Second, the operating system.

I know there are a lot of options out there and people can recommend a lot of stuff, but... I want to keep it as simple as possible for my first home server. I also don't have too much time, with a 2-year-old child. So my thought was Ubuntu Server with Docker and Portainer. Just that.

 

Third, my apps and stuff.

So mainly I want to run the following applications on it:

  • Immich
  • Home Assistant
  • Joplin
  • Audiobookshelf
  • Calibre (ebook reader)
  • A CalDAV app for calendar sync between my phone and my wife's
  • Pi-hole
  • Vaultwarden
  • And Homarr as a dashboard for all of this
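From my research, a compose file for two of these would look roughly like this (just a sketch of what I'm imagining; the image names are the official ones as far as I can tell, but the ports and paths are made up):

services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    ports:
      - "8081:80"          # web vault on host port 8081 (made up)
    volumes:
      - ./vaultwarden:/data
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf:latest
    restart: unless-stopped
    ports:
      - "13378:80"
    volumes:
      - ./audiobooks:/audiobooks
      - ./abs-config:/config
      - ./abs-metadata:/metadata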

Fourth, using all this from my phone.

That's the only part where I didn't have time to research: how to use all of that safely from my phone.

I guess I need some kind of VPN for secure use?

I hope that part is easy.

So now I've shared all of my initial research and thoughts. I hope I didn't make too many mistakes.

And i hope you guys can help me out a little.

Greetings

8
7
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]

Kind of an odd question, and something I think is a long shot, but here goes.

I’ve long known and used GitHub Pages for the odd static site and, Microsoft ownership aside, generally like the service for free hosting of temporary sites.

I was just trying to figure out how to host an instance of something for a popup event and wanted a URL that was mostly readable/recognizable. So my mind jumped to GitHub Pages. I know it’s possible to connect GitHub Pages to a custom domain, I used to host my personal website like this, but is the reverse possible? Can I expose my self-hosted services on

user.github.io

in some way?

9
176
submitted 2 days ago by [email protected] to c/[email protected]

Here's the link to the docker docs

10
58
submitted 2 days ago* (last edited 2 days ago) by [email protected] to c/[email protected]

Today I can share a major development status update of XPipe, a connection hub that allows you to access your entire server infrastructure from your local desktop. It can make your life easier when working with any kind of servers by eliminating all the commonly tedious tasks that come up when interacting with remote systems, either from the terminal or from a graphical interface. XPipe comes with integrations for SSH, docker and other containers, various hypervisors, and more without requiring setup on your remote systems. You can also keep using your favourite text/code editors, terminals, password managers, shells, command-line tools, and more with it.

[Image: connection hub]

Docker compose

This release introduces support for Docker Compose. Containers in Compose projects are grouped together and can be managed all at once via Compose project entries.

Container state information has also been improved: the container state is now always shown in combination with the system information.

[Image: Docker Compose integration]

Batch mode

There is now a batch mode available that allows you to select multiple systems via checkboxes and perform actions for the entire batch. This can include starting/stopping, automatically adding available subconnections, or running scripts on all selected systems.

You can toggle the batch mode in the top left corner.

[Image: batch mode]

Password managers

The password manager integrations have been upgraded:

  • There is now support for KeePassXC
  • All password manager integrations have been reworked to work out of the box without configuration
  • There is now support to use password manager SSH agents more easily
  • You can now unlock the xpipe vault with your password manager

[Image: password manager integration]

Terminals

The terminal integration comes with many new features:

  • There is now built-in support for the terminal multiplexers tmux, zellij, and screen. This is especially useful for terminals without tabbing support.
  • There is also now built-in support for custom prompts with starship, oh-my-posh, and oh-my-zsh.
  • On Windows, you now have the ability to use a WSL distribution as the terminal environment, allowing you to use the new terminal multiplexer integration seamlessly on Windows systems as well.

SSH

Various improvements were made to the SSH implementation:

  • The SSH gateway implementation has been reworked so that you can now use local SSH keys and other identities for connections with gateways
  • The VSCode SSH remote integration has been reworked to allow more connection types to be opened in VSCode. It now supports essentially all simple SSH connections, custom SSH connections, SSH config connections, and VM SSH connections. This support includes gateways
  • There is now built-in support to refresh an SSO openpubkey with the opkssh tool when needed
  • There is now the option to enable verbose ssh output to diagnose connection issues better
  • For VMs, you can now choose to not use the hypervisor host as SSH gateway and instead directly connect to the VM IP

Other

  • Connection names, e.g. VM names, will now automatically update on refresh when they were changed
  • You can now launch custom scripts within XPipe with a command output dialog window without having to open a terminal
  • Various installation types, like the Linux apt/rpm repositories and Homebrew installations, now support automatic updates as well
  • The k8s integration will now automatically add all namespaces for the current context when searching for connections
  • The application window will now hide any unnecessary sidebars when being resized to a small width. This makes it much easier to use XPipe in a tiling window arrangement
  • The webtop has been updated to have terminal multiplexers, proper konsole tab support, disabled kwallet, and more
  • Various error messages and connection creation dialogs now contain a help link to the documentation sections

A note on the open-source model

Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place with limitations on what kind of systems you can connect to in the community edition as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

Outlook

If this project sounds interesting to you, you can check it out on GitHub, visit the Website, or check out the Docs for more information.

Enjoy!

11
62
submitted 2 days ago by [email protected] to c/[email protected]

Hey all. I'm starting to plan out how to build a home camera system. For now I just want to use it to keep an eye on the dogs while I'm out of the house, so all of it indoors and with audio, but with plans to expand in the future. My one hard requirement is that the cameras themselves only communicate locally, with the streams accessible outside my network in a secure manner.

I already have a server running some docker containers, including a reverse proxy*, with a GPU (Arc B580) installed for other video streaming. I also have a Google Coral on its way for future camera-detection fun. Would the B580 be able to cope with, say, 2-4 camera streams (at 1080p) while also streaming a 4K HDR movie? This support page says it might be possible, but could stretch the limits a bit.

My imagined setup is PoE IP cameras with RTSP streaming to my home server running Frigate (I'm open to suggestions) with some Home Assistant on the side.
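For each camera, I'm picturing a Frigate config entry roughly like this (a sketch based on my reading of their docs; the address, credentials, and resolution are made up):

# Sketch of one camera in Frigate's config.yml; the rtsp URL is a placeholder
mqtt:
  enabled: false
cameras:
  living_room_dog_cam:
    ffmpeg:
      inputs:
        - path: rtsp://user:password@192.168.20.11:554/stream1
          roles:
            - detect
            - record
    detect:
      width: 1920
      height: 1080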

For cameras I've seen Dahua and Hikvision recommended. Is RTSP a common feature on IP cameras? None of the cameras I've looked at on Dahua's website explicitly say they support it.

I've been thinking about installing a separate network card in the server just for the cameras, but that might be overkill; maybe it's enough to block them on the router? Either way, I imagine I'll need a special switch for PoE.

Outside of buying cameras, a switch, and cables and then configuring it all, are there any big-ticket items I've missed? Or is my setup kinda meek, and a separate server for the video streams is recommended?

* I know a reverse proxy isn't typically as safe as a VPN tunnel, but it's a balance with ease of use.
12
26
submitted 2 days ago* (last edited 2 days ago) by [email protected] to c/[email protected]

Hi!

I have a Subsonic instance running, but I rarely listen to albums. What I really like are DJ performances, like those from the channel The Moment.

So I thought: why not download and self-host them before Google makes YouTube sign-in only (like Elon and Facebook did)?

That stuff is probably quite hard to organize, and this type of music simply breaks the common services like Jellyfin or Subsonic.

I know of Funkwhale, but I'd like to keep the contents private. I just wanna listen to music at work (so being open to the web is a plus). Funkwhale seems a bit too... "social" for me. I'm a (re)uploader, not a creator.

You got any ideas? Maybe a YouTube cloner with audio-only support? (I know how to download videos already.)
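For reference, the download side I've already got covered with audio-only rips, roughly like this (the channel URL is just a made-up example):

# audio-only rip of a channel with yt-dlp; URL and output layout are examples
yt-dlp -x --audio-format opus \
  --embed-metadata --embed-thumbnail \
  -o "%(uploader)s/%(title)s.%(ext)s" \
  "https://www.youtube.com/@TheMoment"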

Edit: Of course, I'd download the sets legally, e.g. from their Patreon Discord, or whatever. ;)

Also: I know that restricting it to my VPN would be ideal for security and legality reasons. But that's a bit inconvenient. And I want to check my options.

13
111
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]

I've just rediscovered Ollama, and it's come a long way: it has reduced the very difficult task of locally hosting your own LLM (and getting it running on a GPU) to simply installing a deb! It also works on Windows and Mac, so it can help everyone.
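For anyone who wants to try it, the quickstart on Linux is roughly this (the model tag is just an example; any model from their library works):

# Ollama's documented install script, then pull and chat with a model
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.2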

I'd like to see Lemmy become useful for specific technical niches; trying to find the best existing community can be subjective and makes information difficult to find. So I created [email protected] for everyone to discuss, ask questions, and help each other out with Ollama!

So, please, join, subscribe and feel free to post, ask questions, post tips / projects, and help out where you can!

Thanks!

14
85
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]

I made this website to improve the experience of shopping for drives on https://serverpartdeals.com/, as I find their interface not very helpful.

If there are any additional features or changes you want to make feel free to open a pull request https://github.com/Ykrej/ServerPartDealsTable

EDIT: Critiques of my code are also welcome; this is my first time writing Svelte.

15
24
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]

Evening y’all

I’ll try to keep it brief. I need to move my reverse proxy (Traefik) to another machine, and I’m opting to use Docker Swarm for the first time so that I’m not exposing a bunch of ports on my main server over my network. Ideally I’d have almost everything listening on localhost while Traefik does its thing in the background.

Now I gotta ask: is Docker Swarm the best way to go about this? I know very little about Kubernetes, and from what I’ve read/watched it seems like Swarm was designed for this very purpose. However, I could be entirely wrong here.

What are some key changes that differentiate typical Compose files from Swarm stack files? (There's a sketch of my understanding after the compose file below.)

Snippet of my current compose file:

services:
  homepage:
    image: ghcr.io/gethomepage/homepage
    hostname: homepage
    container_name: homepage
    networks:
      main:
        ipv4_address: 172.18.0.2
    environment:
      PUID: 0 # optional, your user id
      PGID: 0 # optional, your group id
      HOMEPAGE_ALLOWED_HOSTS: MY.DOMAIN,*
    ports:
      - '127.0.0.1:80:3000'
    volumes:
      - ./config/homepage:/app/config # Make sure your local config directory exists
      - /var/run/docker.sock:/var/run/docker.sock #:ro # optional, for docker integrations
      - /home/user/Pictures:/app/public/icons
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homepage.rule=Host(`MY.DOMAIN`)"
      - "traefik.http.routers.homepage.entrypoints=https"
      - "traefik.http.routers.homepage.tls=true"
      - "traefik.http.services.homepage.loadbalancer.server.port=3000"
      - "traefik.http.routers.homepage.middlewares=fail2ban@file"
  traefik:
    image: traefik:v3.2
    container_name: traefik
    hostname: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      main:
        ipv4_address: 172.18.0.26
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - target: 80
        published: 55262
        mode: host
      # Listen on port 443, default for HTTPS
      - target: 443
        published: 57442
        mode: host
    environment:
      CF_DNS_API_TOKEN_FILE: /run/secrets/cf_api_token # note using _FILE for docker secrets
      # CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN} # if using .env
      TRAEFIK_DASHBOARD_CREDENTIALS: ${TRAEFIK_DASHBOARD_CREDENTIALS}
    secrets:
      - cf_api_token
    env_file: .env # use .env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config/traefik/traefik.yml:/traefik.yml:ro
      - ./config/traefik/acme.json:/acme.json
      # - ./opt:/opt
      #- ./config/traefik/config.yml:/config.yml:ro
      - ./config/traefik/custom-yml:/custom
      # - ./config/traefik/homebridge.yml:/homebridge.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.MY.DOMAIN`)"
      #- "traefik.http.middlewares.traefik-ipallowlist.ipallowlist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 208.118.140.130, 172.18.0.0/16"
      #- "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik.MY.DOMAIN`)"
      #- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=MY.DOMAIN"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.MY.DOMAIN"
      - "traefik.http.routers.traefik-secure.service=api@internal"
      - "traefik.http.routers.traefik.middlewares=fail2ban@file"

networks:
  main:
    external: true
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1

I censored out my actual domain with MY.DOMAIN, so if that confuses people, I apologize.
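To make the question concrete, here's my rough understanding of how the homepage service above would change as a Swarm stack (a sketch from my reading so far; corrections very welcome):

# Sketch: homepage rewritten for docker stack deploy
services:
  homepage:
    image: ghcr.io/gethomepage/homepage
    # container_name is ignored by Swarm; tasks get generated names
    networks:
      - main             # a static ipv4_address isn't supported for Swarm services
    environment:
      HOMEPAGE_ALLOWED_HOSTS: MY.DOMAIN,*
    volumes:
      - ./config/homepage:/app/config
    # Swarm's ingress mesh can't publish to 127.0.0.1 only, so I'd drop the
    # published port and let Traefik reach it over the shared overlay network
    deploy:
      replicas: 1
      restart_policy:
        condition: any   # replaces restart: unless-stopped
      # Traefik's Swarm provider reads router labels from deploy.labels,
      # not from container-level labels:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.homepage.rule=Host(`MY.DOMAIN`)"
        - "traefik.http.services.homepage.loadbalancer.server.port=3000"

networks:
  main:
    driver: overlay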

16
30
submitted 3 days ago by [email protected] to c/[email protected]

What (lightweight) solution are you using for network monitoring? Routers, switches, APs, firewalls?

17
319
submitted 5 days ago* (last edited 5 days ago) by [email protected] to c/[email protected]
18
43
submitted 4 days ago by [email protected] to c/[email protected]

I'm looking for some kind of File Drop / File Upload service.

I'd like to be able to create a folder and generate a share/upload link for that folder that I can give to a customer to upload their documents.

I've been using Nextcloud, but I don't use it for any other purpose and it's a behemoth, so I'd like to transition to something else.

Some of these requirements are essential (!):

  • no login for customers uploading (!)
  • optional password protection for uploads
  • uploaders can't see/download files already present in the shared folder
19
62
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

Hey,

I'm using Joplin (a Markdown note-taking app) and am thinking about migrating to Logseq for several reasons.

The main problems I have not yet solved:

  1. OSS syncing of Logseq notes between a desktop OS and Android. Logseq does not have an OSS self-hostable sync server like Joplin has...
  2. Making sure to transform my stuff so that Logseq can work with it. Yes, both are Markdown, but images in particular, and how Joplin handles them, seem to be a problem for this migration.

What are your experiences? Have you ever switched between two Markdown note-taking apps?

  • Which ones?
  • How well did it go?

Is it maybe even possible to use one app on a desktop OS and a totally different app on Android simultaneously on the same data? The common standard is Markdown...

20
81
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

Seedit is a self-hosted, peer-to-peer Reddit alternative using IPFS.

It doesn't rely on any servers or instances.

We mainly use 3 technologies, which each have several protocols and specifications:

IPFS (for content-addressed, immutable content, similar to BitTorrent): https://docs.ipfs.tech/ https://specs.ipfs.tech/

IPNS (for mutable content, public-key addressed): https://docs.ipfs.tech/concepts/ipns/

Libp2p Gossipsub (for publishing content and votes P2P): https://docs.libp2p.io/concepts/pubsub/overview/
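To roughly illustrate the immutable/mutable split with the stock IPFS (kubo) CLI (a sketch; the file name and CID are placeholders):

# content addressing: the CID is derived from the bytes themselves
ipfs add post.json                # -> immutable CID
# mutable pointer: an IPNS record signed by your node's key
ipfs name publish /ipfs/<CID>     # point your IPNS name at the latest CID
ipfs name resolve                 # resolve your own IPNS name back to the CID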

The IPFS folks also have a YouTube channel where they cover how most of their tech works: https://www.youtube.com/c/IPFSbot


Seedit is not Nostr

Nostr isn't P2P: the relays can censor you, they can run out of money and shut down, they can get DDoSed, and they earn no money to serve your content.

The people running the relays are probably legally obligated by their jurisdiction to censor you. For example, in the UK you can go to jail for mean tweets; the person running a relay with mean content would probably go to jail if they set foot in the UK.

CP

  • The protocol is text-only. To embed media, you need to host it on the regular (centralized) internet and then link to it, like https://example.com/image.jpg, and that host will stop hosting the image and report your IP.

  • The community creator can assign mods, and mods can remove posts from that community. If a community is badly moderated, users will never see it; it won't be recommended to them. A user can visit bad communities directly, just like you can visit a bad website directly, but since they're not recommended, the app is safe to use.

It's the same as BitTorrent: this P2P tech can't prevent people from sharing stuff. But on Seedit you can't share media; it's text-only, so the liability falls on the centralized provider of the embedded media in the link the user shares as text. Also, being P2P, Seedit is not private, so it can't really be used for illegal activity.

About ActivityPub

The problem with federated social media is that each federated instance is just a regular centralized site. Instances can censor each other, they can get taken down at any moment, and they are hard to run and manage. Whereas with P2P tech like BitTorrent, P2P nodes don't require domains; they just work straight out of the box. On Seedit, you open the app and you're instantly receiving P2P connections, just like a BitTorrent client, no domain or server required. Users connect to your node directly, P2P, and nobody can stop you. P2P also scales infinitely, which is the reverse of centralized websites like federated instances: the more users there are, the faster it gets. And it's impossible to censor at scale.

The code is also fully open source:

https://github.com/plebbit/seedit

21
98
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

https://getoffpocket.com/self_hosted

The link is the view for people who like to self-host. I'm also hoping to guide people who would never self-host toward open-source tech. I'm a big proponent of that myself; I switched to Wallabag quite some time ago.

22
88
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

I am in the EU. I want to help make the Tor network more robust by contributing a relay node. I have one of three hardware options: a Raspberry Pi Zero W, a Raspberry Pi 4B, or a ThinkPad T470s.

In your practical experience, which of these computers would be best for the network? As I understand it, beyond a certain point CPU power doesn't matter unless massive traffic loads go through the node.

P.S.: Not sure if this is relevant, but I currently have a Pi-hole hosted on a separate RPi Zero. I plan to host this at home. I don't have a separate connection line, and my router doesn't support VLANs.
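For context, what I had in mind was just a basic non-exit relay, something like this torrc sketch (all values are placeholders):

# Minimal non-exit relay (placeholder values, not a working identity)
Nickname            MyHomeRelay
ContactInfo         admin@example.com
ORPort              9001
ExitRelay           0
SocksPort           0
RelayBandwidthRate  5 MBytes
RelayBandwidthBurst 10 MBytes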

Add: Thank you for the kind replies. Based on the feedback, I think I'm currently not set up to help the network. I will instead continue with my annual contribution.

I will look into hosting a node on a VPS and just pay a monthly subscription fee or something.

23
31
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

I'm looking for experiences and opinions on kubernetes storage.

I want to create a highly available homelab that spans 3 locations, where the pods have preferred locations but can move if necessary.

I've looked at Linstor, and at SeaweedFS/Garage with JuiceFS, but I'm not sure how well those options perform across the internet or how well they hold up in long-term operation. Is anyone else hosting k3s across the internet in their homelab?

Edit: fixed wording

24
14
submitted 6 days ago* (last edited 5 days ago) by [email protected] to c/[email protected]

So, recently I spun up cAdvisor to provide some metrics for the Grafana dashboard. I created both the docker-compose.yml and prometheus.yml thusly:

prometheus.yml:


scrape_configs:
- job_name: cadvisor
  scrape_interval: 5s
  static_configs:
  - targets:
    - cadvisor:8080

docker-compose.yml


services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
    - 9090:9090
    command:
    - --config.file=/etc/prometheus/prometheus.yml
    volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
    - cadvisor
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
    - 8080:8080
    volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
    - redis
  redis:
    image: redis:latest
    container_name: redis
    ports:
    - 6379:6379

Placed them both in /tmp/cadvisor/ and ran docker compose up. All well and good: got some metrics to feed Grafana, and all seemed jippity jippity.

Next day I notice Prometheus is offline. Hmm, check everything out. Logs complain of a missing prometheus.yml. On a hunch I recreated the above prometheus.yml, placed it back in /tmp/cadvisor/, restarted Prometheus, and it fired right up: no runs, no drips, no errors. Before I uploaded the new prometheus.yml, I noticed there was now an empty directory named prometheus.yml in /tmp/cadvisor/. Deleted it.

Next day, same scenario: missing prometheus.yml, and a directory called prometheus.yml in /tmp/cadvisor/. I thought, well, if it's getting deleted, change the permissions, and continued my daily affairs.

Today, same exact scenario. So, wtf, over? I ran some commands:

stat /tmp/cadvisor/prometheus.yml
sudo lsof /tmp/cadvisor/prometheus.yml
grep "delete" /var/log/syslog

I can see that the file IS being deleted, but I cannot seem to trace down what is deleting it. It's like there's a cron job that fires off every day at a certain time, deletes prometheus.yml, and in its place creates a directory called prometheus.yml, effectively taking Prometheus offline. I have no such cron job tho.
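Next thing I'm planning to try is an audit watch to catch whatever is unlinking it (a sketch, assuming auditd is installed and running):

# watch the directory for writes/attribute changes, tagged for searching
sudo auditctl -w /tmp/cadvisor/ -p wa -k prom-del
# ...wait for the file to vanish again, then see who did it:
sudo ausearch -i -k prom-del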

Any ideas? Suggestions? Ancient wizardry? Any mystical incantations or tomes to consult?

25
14
submitted 1 week ago* (last edited 6 days ago) by [email protected] to c/[email protected]

I started a WebUI container and then started to get this error in the Open WebUI interface:

SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data

  • latest Ollama on windows
  • latest Open WebUI in docker desktop

According to a post online, I should set ENABLE_WEBSOCKET_SUPPORT=True in my docker compose, but I'm not using a reverse proxy. Is ENABLE_WEBSOCKET_SUPPORT=True necessary?

What could a possible solution be for this?

My docker compose

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:cuda 
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./data:/app/backend/data
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  open-webui:

log

2025-06-21 10:43:57 open-webui  | 2025-06-21 00:43:57.601 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.21.0.1:37276 - "GET /_app/version.json HTTP/1.1" 304 - {}
2025-06-21 10:44:58 open-webui  | 2025-06-21 00:44:58.114 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.21.0.1:49064 - "GET /_app/version.json HTTP/1.1" 304 - {}
2025-06-21 10:45:58 open-webui  | 2025-06-21 00:45:58.779 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.21.0.1:55958 - "GET /_app/version.json HTTP/1.1" 304 - {}
2025-06-21 10:46:59 open-webui  | 2025-06-21 00:46:59.179 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.21.0.1:47424 - "GET /_app/version.json HTTP/1.1" 304 - {}

UPDATE:

  • When I open http://localhost:3000/ in another browser, it works perfectly fine. I think the issue is the browser I was using (Firefox with a lot of extensions installed and settings tweaked).

UPDATE 2: The problem is with this plugin: https://addons.mozilla.org/en-US/firefox/addon/chameleon-ext/ (everything works fine with it disabled).

The reason Chameleon breaks Open WebUI is that I changed a setting in it to block all WebSocket connections.

Thank you everyone for your help

