this post was submitted on 05 Aug 2023
44 points (92.3% liked)

Linux


I'm doing a bunch of AI stuff that needs compiling to try various unrelated apps. I'm making a mess of config files and extras. I've been using distrobox and conda. How could I do this better? Chroot? Different user logins for extra home directories? Groups? Most of the packages need access to CUDA and localhost. I would like to keep them out of my main home directory.

all 50 comments
[–] [email protected] 19 points 1 year ago* (last edited 1 year ago) (3 children)

I did Linux From Scratch recently and they have a brilliant solution. Here's the full text but it's a long read so I'll briefly explain it. https://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt

Basically you make a new user with the name of the package you want to install. Log in as that user, then compile and install the package.

Now when you search for files owned by the user with the same name as the package you will find every file that package installed.

You can document that somewhere or just use the find command when you are ready to remove all files related to the package.
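A rough sketch of that workflow ("libfoo" is a made-up package name; user creation and installing into system directories need root, and the full hint also sets up an install group with writable directories):

```shell
# One dedicated user per package; "libfoo" is a hypothetical name.
useradd -m libfoo                       # create the package user (as root)
su - libfoo -c 'make && make install'   # build and install as that user
# Every installed file is now owned by "libfoo"; list them:
find / -xdev -user libfoo
# ...and later, remove the whole package in one sweep:
find / -xdev -user libfoo -delete
```

`-xdev` keeps find from descending into other mounted filesystems; the hint linked above covers the permission details this sketch glosses over.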

I didn't actually do this for my own LFS build so I have no further experience on the matter. I think it will eventually lead to dependency hell when two packages want to install the same file.

I guess flatpaks are better about keeping libraries separate but I'm not sure if they leave random files all over your hard drive the way apt remove/apt purge does. (Getting really annoyed about all the crud left in my home dir)

[–] [email protected] 6 points 1 year ago (1 children)

That’s clever. It should work on any system, shouldn’t it?

[–] [email protected] 2 points 1 year ago (1 children)

Any POSIX compliant system as far as I know.

[–] [email protected] 3 points 1 year ago

Thanks. I’ll keep that in mind for next time.

[–] [email protected] 2 points 1 year ago (2 children)

Thanks for the read. This is what I was thinking about trying but hadn't quite fleshed out yet. It is right on the edge of where I'm at in my learning curve. Perfect timing, thanks.

Do you have any advice when the packages are mostly python based instead of makefiles?

[–] [email protected] 5 points 1 year ago

for python, a bunch of venvs should do it
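For example (the paths, tool name, and requirements file are placeholders):

```shell
# One disposable venv per tool; deleting its directory removes everything it installed.
python3 -m venv "$HOME/ai-envs/sometool"   # "sometool" is a placeholder name
. "$HOME/ai-envs/sometool/bin/activate"
pip install -r requirements.txt            # deps land inside the venv only
deactivate
rm -rf "$HOME/ai-envs/sometool"            # clean uninstall, no stray files
```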

[–] [email protected] 2 points 1 year ago (1 children)

This method should work with any command that's installing files on your disk but it's probably not worth the headache when virtual environments exist for python.

[–] [email protected] 2 points 1 year ago (1 children)

Python, in these instances, is being used as the installer script. As far as I can tell it involves all of the same packaging and directory issues as what make is doing. Like, most of the packages have a Python startup script that takes a text file and installs everything from it. This usually includes a pip git+address or two. So far, just getting my feet wet to try out AI has been enough for me to overlook what all is happening behind the curtain. The machine is behind an external whitelist firewall all by itself. I am just starting to get to the point where I want to dial everything in so I know exactly what is happening.

I've noticed a few oddball times during installations pip said something like "package unavailable; reverting to base system." This was while it is inside conda, which itself is inside a distrobox container. I'm not sure what "base system" it might be referring to here or if this is something normal. I am probing for any potential gotchas revolving around python and containers. I imagine it is still just a matter of reading a lot of code in the installation path.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I hope someone who has more info comes along. It might be time for you to make a new post though since we're getting to the heart of the problem now.

Also it will be a lot easier for people to diagnose if you are specific about which programs you are failing to install.

I've only experimented with Python in docker and it gave me a lot of headaches.

That's why I prefer to pip install things inside venvs because I can just tar them myself and have decent portability.

But since you're installing files across the system, I'm not sure what the best solution is.
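The tar-a-venv idea, sketched (paths are examples; venvs hard-code absolute paths, so the archive should be unpacked at the same location on a machine with the same Python version):

```shell
# Pack a venv for rough portability between similar machines.
tar -C "$HOME/ai-envs" -czf sometool-env.tar.gz sometool
# On the other machine (same path, same Python version):
tar -C "$HOME/ai-envs" -xzf sometool-env.tar.gz
. "$HOME/ai-envs/sometool/bin/activate"
```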

[–] [email protected] 12 points 1 year ago (1 children)
[–] [email protected] 4 points 1 year ago (1 children)

NixOS containers could do what OP's asking for, but it'll be trickier with just nix (on another distro). It'll handle build dependencies and such, but you'll still need to keep your home or other directories clean some other way.

[–] [email protected] 5 points 1 year ago (1 children)

OP could use flakes to create these dev environments and clean them up without a trace once done.
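A minimal flake along those lines might look like this (an untested sketch; the system and package names are assumptions):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      # Enter with `nix develop`; leave with `exit` and nothing lingers
      # outside the nix store, which `nix-collect-garbage` reclaims.
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.python311 pkgs.gcc ];
      };
    };
}
```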

[–] [email protected] 0 points 1 year ago (1 children)

Any files created by programs running in the dev environments will remain.

[–] [email protected] 3 points 1 year ago (1 children)
[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Does NOT delete any files that were written to, for example, ~/.local or ~/.config from the dev shell.

One of OP's problems was,

I’m making a mess of config files and extras.

[–] [email protected] 8 points 1 year ago

I use a mixture of systemd-nspawn and different user logins. This is sufficient for experimentation, for actual use I try to package (makepkg) those tools to have them organized by my package manager.

Also LVM thinpools with snapshots are a great tool. You can mount a dedicated LV to each single user home to keep everything separated.

[–] [email protected] 7 points 1 year ago

Qubes: you can install software inside its own disposable VM. Or it can be a persistent VM where only the data in home persists. Or it can be a VM where the root persists. You have a ton of control. And it's really useful to see what's changed in the system.

All the other solutions here work inside the operating system; Qubes does it outside the operating system.

[–] [email protected] 6 points 1 year ago (1 children)

I use Gentoo where builds from source are supported by the package manager. ;)

Overall though, any containerisation option such as Docker / Podman or Singularity is what I would typically do to put things in boxes.

For semi-persistent envs a chroot is fine, and I have a nice Gentoo-specific chroot script that makes my life easier when reproing bugs or testing software.

[–] [email protected] 1 points 1 year ago (1 children)

Wait. Does emerge support building packages natively when they are not from Gentoo?

Most of the stuff I'm messing with is mixed repos with entire projects that include binaries for the LLMs, weights, and such. Most of the "build" is just setting up the python environment with the right dependency versions for each tool. The main issues are the tools and libraries like transformers, pytorch, and anything that interacts with CUDA. These get placed all over the file system for each build.

[–] [email protected] 2 points 1 year ago

Ebuilds (Gentoo packages) are trivial to create for almost anything, so while the answer is "no, the package manager doesn't manage non-PM packages", typically you'll make an ebuild (or two or three) to handle that, because it's (typically) as easy as running make yourself. :)

[–] [email protected] 5 points 1 year ago

For "desktop" stuff (gaming, office etc.) I just install bare-metal, for "server" stuff I basically only look for containerisation in the form of Podman (Docker compatible). If it doesn't exist as a compose file it isn't worth my time.

[–] [email protected] 5 points 1 year ago

I think Podman should do a good job, but I never used it myself. Distrobox is built on it and a lot easier to use, so that's what I would recommend!

[–] [email protected] 4 points 1 year ago (1 children)

Not sure if that's a good idea, but if you use Fedora, your root is also on a BTRFS partition after a default installation. You could use the snapshot features of BTRFS to roll back after testing.

[–] [email protected] 2 points 1 year ago

I need to explore this BTRFS feature, I just don't have a good place or reason to start down that path yet. I've been on Silverblue for years, but decided to try Workstation for now. Someone in the past told me I should have been using BTRFS for FreeCAD saves, but I never got around to trying it.

[–] [email protected] 3 points 1 year ago

Software like stow keeps track of the files an install puts down, and helps you remove them later.
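Roughly like this (the prefix and package name are examples; stow works by symlinking a per-package tree into place):

```shell
# Install into a per-package directory, then let stow symlink it into /usr/local.
./configure --prefix=/usr/local/stow/sometool-1.0
make && sudo make install
cd /usr/local/stow
sudo stow sometool-1.0     # creates symlinks in /usr/local/{bin,lib,share,...}
sudo stow -D sometool-1.0  # later: removes exactly those symlinks
```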

[–] [email protected] 3 points 1 year ago (1 children)

If it does not need a GUI, use docker and log into it. Do the stuff, and when you are done, docker rm and everything disappears.

you can enable cuda inside the container, follow the docs for that.

bonus point, vs code can open itself inside a container.
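A sketch of that workflow (assumes docker plus the NVIDIA container toolkit for `--gpus`; the image tag and container name are examples):

```shell
# Throwaway CUDA container; everything inside vanishes with `docker rm`.
docker run -it --gpus all --name ai-sandbox \
    nvidia/cuda:12.2.0-devel-ubuntu22.04 bash
# ...compile and experiment inside, then exit and:
docker rm ai-sandbox
```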

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

You can use GUI stuff in docker as well, though it can be a bit fiddly to set up.

[–] [email protected] 2 points 1 year ago (1 children)

I've never worried about this but I'd use Flatpak. The whole install goes in a specific directory and the metadata/config/data files go in their own specific directory.

[–] [email protected] 1 points 1 year ago

Those Flatpak configs are not quite as scattered; most are in .config, .var, or .local. Most Flatpaks leave junk behind in these directories. I just deleted a few today. A lot of the problems start happening when you need to compile stuff where each package has the same dependency but a different version of the dep in each one. Then you have a problem and need to track down some related library that is not in the execution path, and suddenly there are 10 copies of a dozen files all related to the stupid thing, scattered all over your system. It becomes nearly impossible to track down which file is related to the container with the problem.

This is only an issue if you find yourself playing with software that is not yet supported directly by any packagers for Linux distros; stuff like FOSS AI right now.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Have an LXC config that enables GLX on X11 in the container; spin one up and throw stuff in there, on a temp ZFS volume.

Lxc-rm when done.

[–] [email protected] 2 points 1 year ago

Give a look at distrobox

[–] [email protected] 2 points 1 year ago
[–] [email protected] 2 points 1 year ago

Haven't tried it (and don't use docker), so a wild shot: https://github.com/jupyterhub/repo2docker

'repo2docker fetches a repository (from GitHub, GitLab, Zenodo, Figshare, Dataverse installations, a Git repository or a local directory) and builds a container image in which the code can be executed. The image build process is based on the configuration files found in the repository.'

That way you can perhaps just delete the docker image and everything is gone. It doesn't seem to depend on jupyter.

[–] [email protected] 2 points 1 year ago (1 children)

You can't completely remove distrobox image and contents later?

[–] [email protected] 1 points 1 year ago (1 children)

By default it is just the packages and dependencies that are removed. Your /home/user/ directory is still mounted just the same, which puts all of your config and dot files in all the normal places. If you install another distro like Arch on a Fedora base, it also installs all of the extra root package locations for Arch, and these get left on the host system after removing the distrobox instance. So yeah, it still makes a big mess.

[–] [email protected] 6 points 1 year ago (1 children)

You can mount any directory you want as the “home” directory of a given container with distrobox, it just defaults to using your home directory.
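For instance (container name, image, and path are examples):

```shell
# Give the container its own home so its dotfiles stay out of ~/
distrobox create --name arch-ai --image archlinux:latest \
    --home "$HOME/containers/arch-ai-home"
distrobox enter arch-ai
```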

[–] [email protected] 1 points 1 year ago

Do you happen to know what distrobox options there are for extra root directories associated with other distro containers, if there is an effective option to separate these, or if this is part of the remote "home" mount setting? I tried installing an Arch container on a fedora base system. Distrobox automatically built various Arch root directories even though the container should have been rootless.

[–] [email protected] 1 points 1 year ago

Chroot would be fine for this and not overly complicated.

[–] [email protected] 1 points 1 year ago

There’s a method using systemd-sysext that would work well for this on any distro without dealing with poking holes in containers. One of the gnome folks blogged about it recently here: https://blogs.gnome.org/alatiera/2023/08/04/developing-gnome-os-systemd-sysext/

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)
export LDFLAGS="-Wl,-rpath=/sw/app/version/lib"   # bake the private lib dir into the binary's search path
./configure --prefix=/sw/app/version              # one versioned prefix per app keeps its files contained
make
sudo make install
unset LDFLAGS