[-] Samueru_sama@programming.dev 10 points 2 months ago* (last edited 2 months ago)

The points in the screenshot are just false and outdated; the guy even says it doesn't work on every distro lol

This is Signal, built with sharun, working on Ubuntu 12.04, a 14-year-old distro: https://imgur.com/a/1f5S0P7 The distro is so old that the internet no longer worked lol; I had to use a flash drive to transfer the AppImage.

We use a lot less storage than Flatpak, and that comparison is outdated; we have reduced the size of a lot of apps since then.

I actually did a quick test: installing about 10 GUI apps in an Alpine Linux container vs the AppImage equivalents, and the AppImages used less storage. Now, that comparison was flawed, because I later realized the Alpine stable repo was super old; for example, GIMP was pulling the GTK2 version instead of the GTK3 one (which brings all of GTK2 into the system instead of sharing the existing GTK3). But you get the idea of how close we are.

For updates and desktop integration, use AM or soar,

"but they will never replace native implementations."

Eden makes its AppImage using sharun; they built it with PGO optimizations, which gives a ~10% increase in FPS.

Native implementations wouldn't bother doing this. In fact, PCSX2 had to tell people not to use the official Arch Linux package while it existed, because Arch Linux compiled it with generic flags, which was horrible lol

https://www.reddit.com/r/linux_gaming/comments/ikyovw/pcsx2_official_arch_linux_package_not_recommended/

This is also the reason this benchmark showed AppImage performing much better: you are free to optimize your application, while with distro packages, Flatpak, etc. you often have to deal with packaging policies that do not allow this:

https://www.reddit.com/r/linux/comments/u5gr7r/interesting_benchmarks_of_flatpak_vs_snap_vs/

[-] Samueru_sama@programming.dev 21 points 2 months ago

"The problem is that NixOS achieves all of this by breaking assumptions that almost all Linux software relies on. Most Linux binaries assume the Filesystem Hierarchy Standard exists, and they expect interpreters and libraries at fixed global paths."

This is a problem with those applications. We began to make AppImages that do not make those assumptions and work on NixOS directly. (This also means they work in places like Alpine, where a lot of those binaries won't run either.)

People like to throw the FHS around, but the reality is that not a single distro follows it fully; I wouldn't rely on it staying the same in the near future at all.

[-] Samueru_sama@programming.dev 12 points 3 months ago

dec05eba is a totally different person; he is the guy that makes gpu-screen-recorder, which is amazing btw.

He is the person that made this PR, which fixed a massive blunder by metux lol https://github.com/X11Libre/xserver/pull/56

[-] Samueru_sama@programming.dev 11 points 4 months ago* (last edited 4 months ago)

They are not following the spec if this is the solution.

  • data files (like extensions, sessions, etc) need to go in XDG_DATA_HOME.

  • config files (user preferences) need to go in XDG_CONFIG_HOME.

  • cache files need to go in XDG_CACHE_HOME.

  • log files need to go in XDG_STATE_HOME.

Also, hopefully they actually check the variables and don't just hardcode ~/.config, ~/.local/share, etc.
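For reference, the spec's fallback logic is trivial to get right; a minimal sketch in shell (the app name "myapp" is hypothetical):

```shell
# Resolve the XDG base directories, falling back to the spec's
# defaults when the variables are unset or empty.
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"
state_dir="${XDG_STATE_HOME:-$HOME/.local/state}/myapp"
```

The `${VAR:-default}` expansion handles both the unset and empty-string cases, which is exactly what the spec asks for.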

[-] Samueru_sama@programming.dev 11 points 7 months ago
# do an ls to list files in the current working directory
ls .
[-] Samueru_sama@programming.dev 18 points 7 months ago

"This is the single most important aspect of immutable distributions. Because the core of the system is mounted in read-only mode, it cannot be changed. With the core system locked down as read-only, it's not possible to change settings in directories like /etc, /boot, /dev, /proc, or other critical locations. That means if you wound up with malware on your system, it wouldn't be able to alter the contents of those directories."

"Because of this, immutable distributions are more reliable than non-immutable. Even better, if you accidentally break something, it will most likely be fixed during the next reboot."

"Atomic updates are quite different from standard updates. Instead of the OS treating an update on a package-by-package basis, it's an all-or-none situation. In other words, if an update to a single package would break something, the update will not happen and the system rolls back to the previous working state."

You get the same by setting up btrfs snapshots with any regular distro...
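A rough sketch of that snapshot/rollback flow (dry-run: the commands are echoed rather than executed, since they need root and a real Btrfs root; the /.snapshots layout is an assumption, and tools like snapper or Timeshift automate all of this):

```shell
# Echo instead of executing, so this can be read as a recipe.
run() { echo "+ $*"; }

# Take a read-only snapshot of the root subvolume before updating.
pre_update_snapshot() {
    run btrfs subvolume snapshot -r / /.snapshots/pre-update
}

# If the update broke something, make the snapshot the default
# subvolume and reboot into it.
rollback() {
    run btrfs subvolume set-default /.snapshots/pre-update
    run reboot
}

pre_update_snapshot
```

Pair the snapshot with a pacman/apt hook and you get the "all-or-none" behavior on any regular distro.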

"With an immutable system, you are always guaranteed to have a bootable system."

lies

[-] Samueru_sama@programming.dev 7 points 8 months ago

Isn't it the same with Arch?

I once helped troubleshoot an EndeavourOS user.

During the process I discovered that their kernel parameters were being reset with every kernel update; this was because Endeavour was using dracut instead of mkinitcpio...

[-] Samueru_sama@programming.dev 13 points 8 months ago

.mozilla : “Oh but I AM special”

You would think Thunderbird would use ~/.mozilla as well, but nope, it is ~/.thunderbird 🤣

[-] Samueru_sama@programming.dev 21 points 8 months ago

AppImage hasn't depended on libfuse2 (or any libfuse) since the static runtime came out in 2022.

The issue is that some projects haven't updated to it, most notably electron-builder:

https://github.com/electron-userland/electron-builder/issues/8686

"never integrate well on the system, and run unsandboxed."

https://github.com/ivan-hc/AM

With it you get sandboxing and full integration, including adding the binary to PATH.

[-] Samueru_sama@programming.dev 8 points 9 months ago

rip chimera.

[-] Samueru_sama@programming.dev 9 points 11 months ago

"I want full-scale applications that are so big they have to use system libraries to keep their disk size down"

Linux is in such a sad state that dynamic linking is abused to the point that it actually increases storage usage. Just to name a few examples I know of:

Most distros ship a full-blown libLLVM.so. This library is a massive monolith used for a bunch of stuff, including compiling, and here comes the issue: by default, distros build this lib with support for the following targets:

-- Targeting AArch64
-- Targeting AMDGPU
-- Targeting ARM
-- Targeting AVR
-- Targeting BPF
-- Targeting Hexagon
-- Targeting Lanai
-- Targeting LoongArch
-- Targeting Mips
-- Targeting MSP430
-- Targeting NVPTX
-- Targeting PowerPC
-- Targeting RISCV
-- Targeting Sparc
-- Targeting SystemZ
-- Targeting VE
-- Targeting WebAssembly
-- Targeting X86
-- Targeting XCore

Gentoo used to offer you the option to limit the targets and make libLLVM.so much smaller, but Rust applications that link to LLVM had issues with this, which caused them to remove that feature...
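For comparison, upstream LLVM does let you limit the targets at configure time; a hedged sketch of the relevant CMake flag (`LLVM_TARGETS_TO_BUILD` is a real upstream option, but the rest of a distro's build machinery differs):

```shell
# Build libLLVM with only the X86 backend instead of all ~19 targets.
cmake -S llvm -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_TARGETS_TO_BUILD="X86" \
    -DLLVM_LINK_LLVM_DYLIB=ON
```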

Another is libicudata, a 30 MiB lib that all GTK applications end up linking to for nothing, because it is a dependency of libxml2, which distros override to build with ICU support (by default libxml2 does not link to libicudata). What's sadder is that the dependency on libxml2 comes from a transitive dependency on libappstream; yes, that appstream, and I don't even know why most applications would need to link to it.

And then there is Arch Linux, which for some reason builds libopus to be 5 MiB, when most other distros have this lib at <500 KiB.

Sure, dynamic linking makes sense in the case of something like the coreutils, where you are going to have a bunch of small binaries. Except you now have stuff like BusyBox, which is a single static binary that acts as each of the different tools by checking the name of the symlink that launched it; it is very tiny at 1 MiB and provides all your basic Unix tools, including a very good shell.
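The argv[0] trick BusyBox uses is easy to demo; a minimal sketch with a plain shell script standing in for the multi-call binary (the /tmp/mcdemo path and tool names are made up):

```shell
# A tiny multi-call script: it decides what to do based on the name
# it was invoked as, just like BusyBox dispatches on argv[0].
mkdir -p /tmp/mcdemo
cat > /tmp/mcdemo/mybox <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    hello) echo "hello world" ;;
    upper) tr a-z A-Z ;;
    *)     echo "usage: link me as 'hello' or 'upper'" ;;
esac
EOF
chmod +x /tmp/mcdemo/mybox
# One binary, many names:
ln -sf mybox /tmp/mcdemo/hello
ln -sf mybox /tmp/mcdemo/upper
```

After that, `/tmp/mcdemo/hello` and `echo hi | /tmp/mcdemo/upper` behave like two different tools, even though only one file exists on disk.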

Even Linus was surprised by how much dynamic linking is abused today: https://lore.kernel.org/lkml/CAHk-=whs8QZf3YnifdLv57+FhBi5_WeNTG1B-suOES=RcUSmQg@mail.gmail.com/

"To pick how I’m going to install something,"

https://github.com/ivan-hc/AM

I have all these applications using 3.2 GiB of storage, while the Flatpak equivalents actually use 14 GiB 💀: https://i.imgur.com/lvxjkTI.png

Flatpak is actually sold on the idea that shared dependencies are good: you have Flatpak runtimes that different Flatpaks can share. The problem is that those runtimes are huge on their own; the GNOME runtime is like 2.5 GiB, which is very close to all those 57 applications I have as AppImages and static binaries.

"but it doesn’t actually make it easier for me, it just makes it easier for the packager of the software"

Well, I no longer have to worry about the following issues:

  • My applications breaking because of a distro update. I actually now package KDE Connect as an AppImage, because a while ago it was broken for 2 months on Arch Linux. The only app I heavily rely on from my distro now is distrobox.

  • I also get the latest updates and fixes as soon as upstream releases them; with distro packaging you are waiting a week at best to get updates. And I heard horror stories from a dev who was told they had to wait to push an update to their distro package, and the only way to speed it up was if it was a security fix.

  • Not only do you have to make sure the app is available in your distro's packages, you also have to make sure it is not abandoned; I had this issue with Void Linux when I discovered the deadbeef package was insanely out of date.

  • Another issue I have with distro packages in general is that everything needs elevated rights to be installed. I often hear this complaint from Linux newbies, that they need to type sudo for everything, and it doesn't have to be this way: AM itself can be installed as appman, which lets it work in your HOME with all its features. And you can take your HOME, drop it into any other distro, and be ready to go as well.
