Apparently posting it caused enough load to take down my pict-rs server, sorry about that.
Go has a heavy focus on simplicity and ease of use, hiding complexity away behind abstractions, which makes it an excellent language for getting to the minimum-viable-product point. I definitely applaud it for that; it can be a true joy to write an initial implementation in it.
The issue with hiding complexity like that is what happens when you reach the limits of the provided abstractions, something that will inevitably happen once a project reaches a certain size. Many languages (C/C++, Ruby, Python, etc.) give you the option to - at that point - skip the abstractions and code directly against the underlying layers, but Go doesn't really have that option.
One result of this is that many enterprise-sized Go projects have had to - in pure desperation - hire the people who designed Go in the first place, just to get the expertise necessary to continue development.
Here's one example in the form of a blog post, with some examples of where hidden complexity can cause issues in the longer term: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-ride
What's truly bloated is their network-install images, which start with a 14MB kernel and a 65MB initrd, and then proceed to pull a 2.5GB image that gets unpacked into RAM to run the installer.
This is especially egregious when running thin VMs for lots of things, since you now require them to have at least 4GB of RAM simply to be able to launch the installer at all.
Compare this to regular Debian, which uses an 8MB kernel and a 40MB initrd for the entire installer.
Or something larger like AlmaLinux, which has a 13MB kernel and a 98MB initrd, and which also pulls a 900MB image for the installer. (That does mean a 2GB RAM minimum, but it's still almost a third of the size of Ubuntu's.)
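If you want to sanity-check numbers like these on a downloaded netboot pair yourself, something like this works (file names are examples, and it assumes a gzip-compressed initrd):
# compressed sizes as served by the mirror
ls -lh linux initrd.gz
# the initrd gets unpacked into RAM, so the uncompressed size is what actually matters
zcat initrd.gz | wc -c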
If you're going to post release notes for random self-hostable projects on GitHub, could you at least add the project's GitHub About text - or the synopsis from the readme - to the post?
Well, the fact that snap is supposed to be a distro-agnostic packaging method despite only being truly supported on Ubuntu is annoying. The fact that it's locked to the Canonical store is annoying. The fact that it requires a system daemon to function is annoying.
My main gripes with it stem from my job though, since at the university where I work snap has been an absolute travesty:
It overflows the mount table on multi-user systems (see the snippet after this list).
It slows down startup a ridiculous amount even if barely any snaps are installed.
It can't run user applications if your home drive is mounted over NFS with safe mount options.
It has no way to disable automatic updates during change-critical times - like exams.
There's plenty more issues we've had with it, but those are the main ones that keep causing us issues.
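On the mount-table point, the blowup is easy to see for yourself, since every installed snap - times every retained revision - gets its own squashfs loop mount:
# count the snap loop mounts on the current system
mount | grep -c '/snap'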
Notably, Flatpak has none of the listed issues, and it also supports both shared installations and internal repos, where we can put licensed or bulky software for courses - something snap can't support due to its centralized store design.
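Setting up that kind of internal repo on clients is basically a one-liner with Flatpak (the URL and app ID here are made-up examples):
# add the internal repo alongside e.g. Flathub
flatpak remote-add --if-not-exists internal https://flatpak.example.edu/internal.flatpakrepo
# then install licensed/bulky course software from it
flatpak install internal org.example.LicensedApp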
This won't really affect the development of ZLUDA much, since the main developer happens to live in the Netherlands, and clean-room reverse engineering - especially for interoperability purposes - is fully protected by law in the EU.
But NVIDIA really does like to make it as much of a pain as possible to support CUDA software anywhere other than on a single user's personal consumer-grade desktop.
Flatpak already creates executable wrappers for all applications as part of a regular install, though by default they're named after the full application ID.
When Inkscape has been installed into the system-wide Flatpak installation, you could simply symlink it like:
ln -s /var/lib/flatpak/exports/bin/org.inkscape.Inkscape /usr/local/bin/inkscape
For a user-local installation, the exported runnable is in ~/.local/share/flatpak/exports/bin instead.
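So the per-user equivalent of the symlink above would be something like (assuming ~/.local/bin is on your PATH):
ln -s ~/.local/share/flatpak/exports/bin/org.inkscape.Inkscape ~/.local/bin/inkscape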
A lot of that data doesn't actually exist; ostree hardlinks data blobs internally, so the actual size on disk is much smaller than what most disk-usage tools will show.
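GNU du only counts each hardlinked inode once per invocation, so a single run over the whole tree gives a much truer number than summing per-app sizes (paths here assume a system-wide Flatpak install):
# one invocation counts each hardlinked blob once - the real footprint
du -sh /var/lib/flatpak
# separate invocations per app double-count the shared blobs
for d in /var/lib/flatpak/app/*; do du -sh "$d"; done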
The naïve and unoptimized version ran in under 4 seconds for me; that's nowhere near "Time to knuckle down and actually optimize this" territory.
A.k.a. do you have a larger version?
The main benefits of BTRFS over something like ext4 are generally considered to be: subvolume support - which is what's used for snapshotting - granular quotas, reflinks, transparent compression, and the fact that basically all filesystem operations can be performed online.
I'm personally running BTRFS in a couple of places: NAS, laptop, and desktops. Mainly for the snapshot and subvolume support, but I also make heavy use of both reflinks and compression, and I've taken advantage of the online filesystem operations quite a few times.
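For reference, the day-to-day versions of those features look something like this (paths and names are just examples):
# create a subvolume, then take a read-only snapshot of it
btrfs subvolume create /data/projects
btrfs subvolume snapshot -r /data/projects /data/.snapshots/projects-before-upgrade
# reflink copy: instant, and shares extents until either copy is modified
cp --reflink=always big-image.img big-image-copy.img
# transparent compression, enabled per mount
mount -o compress=zstd /dev/sdb1 /data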
One has super cow powers, the other one doesn't.