this post was submitted on 23 Feb 2025
60 points (89.5% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I set it to debug at some point and forgot, maybe? Idk, but why the heck is the default config of the official Docker image to keep all logs, forever, in a single file with no rotation?

Feels like log files 101. Anyway, this explains why my storage usage grew slowly but unexpectedly.

top 50 comments
[–] [email protected] 24 points 1 week ago (2 children)

You should always set up logrotate. Yes, the good old Linux logrotate...

[–] [email protected] 35 points 1 week ago (5 children)

We shouldn't each have to configure log rotation for every individual service. That would require identifying what and how each service logs data in the first place, then implementing a logrotate config. Services should include a reasonable default in logrotate.d as part of their install package.
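For reference, a minimal sketch of what such a packaged default could look like (the service name and log path are hypothetical); a file dropped into /etc/logrotate.d/ gets picked up on logrotate's scheduled runs:

  # /etc/logrotate.d/myservice  (hypothetical service and path)
  /var/log/myservice/*.log {
    weekly          # rotate on the weekly logrotate run
    rotate 4        # keep four old logs, delete anything older
    compress        # gzip rotated logs
    missingok       # don't complain if the log is missing
    notifempty      # skip rotation when the log is empty
    copytruncate    # truncate in place so the service keeps its open file handle
  }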

[–] [email protected] 4 points 1 week ago

Docker services should let Docker handle it, and the user can then manage it through Docker or forward to some other logging service (syslog, systemd's journald, etc.). Processes in containers shouldn't touch rotation or anything; just log levels and maybe which types of logs go to stdout vs stderr.
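If you go that route, rotation for Docker's default json-file driver can be set per service in Compose. A minimal sketch, with the service name and limits only as illustrative values:

  services:
    nextcloud:
      image: nextcloud
      logging:
        driver: json-file
        options:
          max-size: "10m"   # rotate each log file at roughly 10 MB
          max-file: "3"     # keep at most 3 files per container

With that, the container's stdout/stderr log is capped at about 30 MB total.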

[–] [email protected] 2 points 1 week ago

Ideally yes, but I've had to do this regularly for many services developed both in-house and out of house.

Solve the problem, and maybe share your work if you like; I think we'd all appreciate it.

[–] [email protected] 28 points 1 week ago (14 children)

I don't disagree that logrotate is a sensible answer here, but making that the responsibility of the user is silly.

[–] [email protected] 15 points 1 week ago (3 children)

Imho it’s because Docker does away with (abstracts?) many years of sane system administration principles (like managing log file rotation) that you're used to when you deploy bare metal on a Debian box. It’s a brave new world.

[–] [email protected] 49 points 1 week ago (3 children)

It's because with Docker you don't need to do log files. Logging should go to stdout, and you let the host, orchestration framework, or whatever is running the container handle logs however it wants. The container should not be writing log files in the first place; containers should be immutable except for core application logic.

[–] [email protected] 4 points 1 week ago

At worst it saves them in the config folder/volume, where persistent stuff should be.

[–] [email protected] 3 points 1 week ago

Good point!

[–] [email protected] 2 points 1 week ago (1 children)

Docker stores that stdout by default in a log file under /var/lib/docker/containers/...

[–] [email protected] 3 points 1 week ago

You can configure the default or override per service. This isn't something containers should be doing.
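For the default, that usually means the daemon config. A sketch of /etc/docker/daemon.json with the json-file driver and rotation limits (the values are just examples, and the daemon needs a restart to pick it up):

  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "10m",
      "max-file": "3"
    }
  }

Note that this only applies to containers created after the change; existing containers keep their old settings.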

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago) (4 children)

Or you can use Podman, which integrates nicely with Systemd and also utilizes all the regular system means to deal with log files and so on.
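A rough sketch of that workflow (the container name and image are just examples): Podman can send container output to the journal, and journalctl filters it back out.

  # run a container with the journald log driver (on some setups this is already the default)
  podman run -d --name web --log-driver=journald docker.io/library/nginx

  # read that container's logs through the regular system journal, following new entries
  journalctl CONTAINER_NAME=web -f

Retention is then governed by journald's own limits (SystemMaxUse and friends in journald.conf) instead of per-application log files.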

[–] [email protected] 2 points 1 week ago (1 children)

Good suggestion, although I do feel it always comes back to this “many ways to do kind of the same thing” pattern that surrounds the Linux ecosystem. Docker, Podman, … some claim it's better, I hear others say it's not 100% compatible all the time. My point being: more fragmentation.

[–] [email protected] 2 points 1 week ago (1 children)

100 ways to configure a static IP.
Why does it need that? There's at least one per distro, controlled by the distro maintainers.

[–] [email protected] 3 points 1 week ago (3 children)

There are basically three types of networking config:

  • direct with the kernel - don't do this
  • some distro-specific abstraction - e.g. /etc/network/interfaces for Debian
  • a network manager - wicked, NetworkManager, etc.

I do the last one because it's distro-agnostic. I use NetworkManager and it works fine.
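As an example of the last option, a static IP with NetworkManager's CLI looks roughly like this (connection name, addresses, and DNS are placeholders for whatever your network actually uses):

  # switch the connection to a manually configured IPv4 address
  nmcli con mod "Wired connection 1" ipv4.method manual \
      ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
  # re-activate the connection so the new settings take effect
  nmcli con up "Wired connection 1"

The distro-specific route (e.g. /etc/network/interfaces on Debian) expresses the same settings, just in a different file.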

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago) (2 children)

I disagree with this; container runtimes are software like any other, where logging needs to be configured. You can do so in the config of the container runtime environment.

Containers actually make this significantly easier because you only need to configure it once and it will be applied to all containers.

[–] [email protected] 3 points 1 week ago (1 children)

You are right, and as others have correctly pointed out, it's Nextcloud not handling logging correctly in a containerized environment. I was ranting more about my dislike of containers in general, even though I use the technology (correctly) myself. It's because I am already old on the scale of technology timelines.

[–] [email protected] 3 points 1 week ago

Or you can forward to your system logger, like syslog or systemd.

But then projects like NextCloud do it all wrong by using a file. Just log to stdout and I'll manage the rest.

[–] [email protected] 11 points 1 week ago (7 children)

Everything I hear about Nextcloud scares me away from messing with it.

[–] [email protected] 3 points 1 week ago

If you only use it for files (the only thing it's good for, imho), it's awesome! :)

[–] [email protected] 2 points 1 week ago (3 children)

Just use the official Docker AIO and it is very, very little trouble. It's by far the easiest way to use Nextcloud and the related services like Collabora and Talk.

[–] [email protected] 7 points 1 week ago

For some helpful config: below is the logging config I have, and logs have never been an issue.

You can even add 'logfile' => '/some/location/nextcloud.log', to get the logs in a different place

  'logtimezone' => 'UTC',
  'logdateformat' => 'Y-m-d H:i:s',
  'loglevel' => 2,               // 2 = warning (0 = debug, 1 = info, 3 = error, 4 = fatal)
  'log_rotate_size' => 52428800, // rotate nextcloud.log once it reaches 50 MiB
[–] [email protected] 3 points 1 week ago (1 children)

Reminds me of when my Jellyfin container kept growing its log because of something Watchtower-related. Think it ended up at 100 GB before I noticed. Not even debug, just failed updates I think. It's been a couple of months.

[–] [email protected] 3 points 1 week ago

Well, that's not Jellyfin's fault but rather Watchtower's...

[–] [email protected] 2 points 1 week ago (1 children)

Wow, thanks for the heads up! I use Nextcloud AIO and backups take a VERY long time. I need to check those logs!

Don't know if I'm just lucky or what, but it's been working really well for me and takes good care of itself for the most part. I'm a little shocked seeing so many complaints in this thread because elsewhere on the Internet that's the go-to method.
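If you want a quick look at where the space went, something like this works (the paths are the common defaults and may differ on your install):

  # biggest files under Docker's per-container log directory
  sudo du -ah /var/lib/docker/containers/ | sort -rh | head -n 10

  # Nextcloud's own application log normally sits in the data directory
  sudo du -h /path/to/nextcloud-data/nextcloud.log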

[–] [email protected] 2 points 1 week ago (1 children)

It can be finicky, especially if you stray from the main instructions. Generally I do think it's okay, but updates break it a bit every now and again.
