- When I hit the power button, it turns off. It still does its shutdown and all, but it's not an extended negotiation where I find a bunch of programs that are refusing to "let me" do what I want the computer to do, and have to try to make each of them happy. It just turns off.
Double-edged sword. Applications asking if you want to save your stuff aren't designed to annoy you; they're designed to save you from the headache of losing your work.
But I can see why you'd want the power button to be a "stronger signal" than clicking Shut Down in some menu.
I guess now is a good time to knee-jerkily yell "session management!"
Apps and DEs with proper session management in place will still save your work in progress and restore it on next logon.
Until your toddler presses it and the OS just tosses all the work you didn't save yet. It's fine with a safeguard in place, and Windows will eventually force a shutdown after a timeout anyway.
2025 no autosave skill issue
I just flip through all the workspaces, make sure there's nothing going on I care about, and then hit the button.
Computers that teach you not to do that, and instead to just blindly pick "shut down" and assume the computer will protect you against having anything unsaved, but also refuse to shut down if there's some app that isn't cooperating, have zero upside compared to the other way.
There's a line somewhere between "computers that teach you not to do that" and computers that prevent dire consequences when you make a human mistake. The "just don't do that" policy is never enough. If there are no safeguards, at some point the mistake will be made.
Even by highly trained astronauts: https://wehackthemoon.com/people/margaret-hamilton-her-daughters-simulation
Yeah, I can agree with that, I'm just saying the moment of shutdown isn't the time to do that, and often the programs holding up my shutdown are doing it for reasons of their own, not because they're trying to help me by saving my work. Just do autosave and let me shut my stuff down.
I'm a big fan of init 0. My friends say I'm living on the edge but if an application can't handle it, I don't want it.
Firmware update: am I a joke to you?
i KNEW that what it does seemed a little too fast! idc tho cause it hasn't yet caused any trouble lol
What negotiation? I have a hard time following what you mean. Which operating system turns off immediately when shutting down? If it doesn't, then either it's configured to do so (or not to), or there's an issue that needs to be handled and resolved. You don't want your PC to turn off immediately, so it can do the stuff that's needed (such as waiting for all drives to finish writing data), remove temporary files, unmount drives, and so on. Otherwise an instant turn-off is equivalent to a crash (including all background services and running applications, losing data, corrupting drives...).
Pretty sure both windows and macos allow programs to interrupt shutdown, usually if there's any unsaved documents open. I quite like that feature actually, if it's used correctly anyway.
My laptop will send a signal to all programs telling them to shut down, which includes cleaning up their stuff, and then it unmounts the drives, and then it shuts down. It just doesn't wait forever and make me fix the problem if some program is having trouble shutting down. That is the correct behavior.
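For what it's worth, on a systemd-based setup that behavior boils down to a couple of settings; the values below are just illustrative, not a recommendation:
# /etc/systemd/logind.conf -- make the power button trigger a normal, clean shutdown
[Login]
HandlePowerKey=poweroff
# /etc/systemd/system.conf -- don't let a stuck unit hold up shutdown forever
[Manager]
DefaultTimeoutStopSec=15s
# pick up the changes
sudo systemctl restart systemd-logind
sudo systemctl daemon-reexec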
I do get that it's nice to be protected against having your work blown away. As a first step, the idea of checking with every program to make sure it's okay to turn off was good progress, back when it was first invented. The present-day solution to that is autosave. The solution is definitely not to leave all the user's work unsaved for a potentially unlimited amount of time, and then refuse to shut down if there's any terminal that still has an ssh session open, any settings window still open, or any GIMP session with files exported but not saved as .xcf.
Literally two of those three obstacles happen pretty much every time I shut down my Mac, and I have to wander through the open programs resolving problems that have nothing to do with saving my work. It's annoying. I do understand that, with the other way, you have to go around checking that you have no unsaved work before shutting down. But if you're mature enough to do that, then the "init 0" way is objectively better.
rip that document you forgot to save
when u open the update manager in mint and find out it's linux kernel update day >>>>>>
I was straight up SO excited for the kernel update this morning idek why.
windows update could never
That was actually Signal.
Optimization like that is a sign of a good dev, imo.
Alright, I want two apps that depend on two different versions of Python, and each won't work with the other's.
No warning, no notice, just one of the two fails to start. Thank you, package manager.
venv or nix
These are 2014 problems
Tried both, didn't like 'em, using docker now
Solved problem. Python virtual environments. Or install another Python version with your package manager and make sure the Python script calls it in the shebang instead of a generic /usr/bin/env python.
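A minimal sketch of the venv route (app names, paths, and the 3.9/3.12 split are made up for illustration; both interpreters have to be installed already):
# one virtual environment per app, each pinned to its own interpreter
python3.9 -m venv ~/venvs/app-a
~/venvs/app-a/bin/pip install -r app-a/requirements.txt
~/venvs/app-a/bin/python app-a/main.py
python3.12 -m venv ~/venvs/app-b
~/venvs/app-b/bin/python app-b/main.py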
Tried it, but some apps depend on spawning other python processes. Half the time that results in them breaking out of the env cuz they're using the python in the system path
So change the shebang to explicitly reference the venv python.
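Something like this (the path is hypothetical); and if the app spawns plain "python" subprocesses, activating the venv first keeps those inside it too, as long as nothing hardcodes an absolute interpreter path:
# first line of the script, instead of a generic /usr/bin/env python:
#   #!/home/me/venvs/app-a/bin/python
# for apps that spawn "python" from PATH, activate the venv before running
source ~/venvs/app-a/bin/activate && python app-a/main.py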
Ye that's handy, until some script inside a library or something doesn't
So you reported the issue before complaining?
No I threw it in a docker container
Valid XD
yay python31{0..2}
What's wrong with 3.13?
That it's the newest and therefore already installed version, but, in this scenario, also not the correct one.
Or three docker containers
And that's why nix exists.
I tried it, ye. And although I like the concept, I can't say the implementation was to my liking
What didn't you like about it? I am just curious; I finally stepped out of using Debian for everything which I have been doing for approximately 200 years, and tried NixOS, and to me it is incredibly nice the way it solves a lot of these issues.
When I tried it it looked really cool. Up until it just... didn't work. And then looking around I found a bunch of people giving me better snippets of scripts, and it was not helpful
But given I just need docker and nothing more, I did not bother and looked further
Huh.
IDK man, my experience is that Nix solves the problem you originally talked about and a bunch of others, pretty effectively. Among other things if things "just... don't work" you can trivially roll back to an earlier working config, and see what changed between working and not-working, and so what would be a pretty grueling debugging process in some other environment becomes pretty easy to sort out.
But whatever. If for some reason Docker makes you more happy and not less, you're welcome to it and best of luck.
Perhaps it's improved over the last year, I can give it a shot. But yes, for my own packaged applications without shared dependencies, docker is handy. And that's exclusively what I run
I mean if it makes you happy, I won't tell you to do anything different. I think a certain amount of it is just prejudice against Docker on my part. Just in my experience NixOS is the best of both worlds: You can have a single coherent system if everything in that system can play nice with each other, and if not, then things can be containerized completely that way still works too. And then on top it has a couple of other nice features like rolling back configs easily, or source builds that get slotted in in-place as if they were standard packages (which is generally where I abandon Docker installs of things, because making changes to the source seems like it's going to be a big hassle).
I'm not trying to evangelize though, you should in all seriousness just do what you find to be effective.
Hold up, nix added containerization? How did I miss that? I will have another look now!
Also, you're right. For small quick scripts docker can be a hassle. Nowadays though I add building a docker image as part of my project's build/compilation process. The main reason I do this is so that I can work with whatever machine I happen to be on, then just copy paste the app to whatever machine I want it on. No extra config or even a look at the environment required. Just install docker and forget about the rest
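Roughly this, i.e. the whole transfer is just a tarball of the image (names here are made up):
# build the image as the last step of the normal build
docker build -t myapp:1.0 .
# ship it to another machine without a registry
docker save myapp:1.0 | gzip > myapp-1.0.tar.gz
scp myapp-1.0.tar.gz otherbox:
ssh otherbox 'gunzip -c myapp-1.0.tar.gz | docker load && docker run -d myapp:1.0'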
update: installing docker on nixos (on a vm) with a nix package failed, not sure why. Perhaps some dependencies were no longer available?
update: nix is available as a docker image. I'm running it now, we shall see how it goes
Hold up, nix added containerization? How did I miss that? I will have another look now!
Nix is containerization. Here's firing up a temporary little container with a new Python version and then throwing it away once I'm done with it (you can also do this with more complicated setups; this is just showing it with one thing):
[hap@glimmer:/proc/69235/fd]$ python --version
Python 3.12.8
[hap@glimmer:/proc/69235/fd]$ nix-shell -p python39
this path will be fetched (27.46 MiB download, 80.28 MiB unpacked):
/nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21
copying path '/nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21' from 'https://cache.nixos.org/'...
[nix-shell:~]$ python --version
Python 3.9.21
[nix-shell:~]$ exit
exit
[hap@glimmer:/proc/69235/fd]$ python --version
Python 3.12.8
The whole "system" you get when moving from Nix to NixOS is basically just a composition of a whole bunch of individual packages like python39 was, in one big container that is "the system." But you can also fire up temporary containers trivially for particular things. I have a couple of tools with source in ~/src
which, whenever I change the source, nix-os rebuild
will automatically fire up a little container to rebuild them in (with their build dependencies which don't have to be around cluttering up my main system). If it works, it'll deploy the completed product into my main system image for me, but if it doesn't then nothing will have changed (and either way it throws away the container it used to attempt the build in).
Each config change spawns a new container for the main system OS image ("generation"), but you can roll back to one of the earlier generations (which are, from a functional perspective, still around) if you want or if you broke something.
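In practice that's just (assuming a stock NixOS install):
# build and switch to a new generation after editing the config
sudo nixos-rebuild switch
# list the generations that are still around
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
# drop back to the previous generation if the new one broke something
sudo nixos-rebuild switch --rollback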
And so on. It's very nice.
Aw, meh. From what I saw it's more like a jail, there's no imaging the containers
Yes because that is a wrong and clunky way to do it lol.
If you really wanted to, you could use dockerTools.buildImage to create an "imaged" version of the container you made, or you could send around the flake.nix and flake.lock files exactly as someone would send around Dockerfiles. That stuff is usually just not necessary though, because it's replaced with a better approach (for the average-end-user case where you don't need large numbers of Docker containers that you can deploy quickly at scale) that accomplishes the same thing.
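For the "imaged" route, a sketch of what that looks like from the shell (the dockerImage output name is just an assumption about how the flake is set up):
# build the image tarball that dockerTools.buildImage defines in the flake
nix build .#dockerImage
# "result" is now a tarball that docker can load
docker load < result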
I feel like I'm not going to convince you of this though. Have fun with Docker, I guess.
The issue is, nix builds are only guaranteed to be reproducible if the dependencies don't change. Which they shouldn't, but you can't trust the internet to be consistent. Things won't be there to be fetched forever.
Images do. And you can turn one into a container in seconds. I suppose it's a matter of preference. I like a package to be independent
The issue is, nix builds are only guaranteed to be reproducible if the dependencies don’t change.
Dude, this is exactly why Nix is better. Docker builds are only guaranteed to be reproducible if the dependencies don't change. Which they will. The vast majority of real-world Dockerfiles do pip install, wget, and all kinds of basically unlimited nonsense to pull down their dependencies from anywhere on the internet.
Nix builds, on the other hand, are forbidden from the internet, specifically to force them to declare dependencies explicitly and have it within a managed system. You can trust that the Nix repositories aren't going to change (or store them yourself, along with all the source that generated them and will actually produce the same binaries, if you're paranoid). You can send the flake.nix and flake.lock files and it will actually work to reproduce a basically byte-identical container on the receiver's end, which means you don't have to send multi-gigabyte "images" in order to be able to depend on the recipient actually being able to make use of it. This is what I was saying that the whole thing of needing "images" is a non-issue if your workflow isn't allowing arbitrary fuckery on an industrial scale whenever you are trying to spin up a new container.
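Concretely, on the receiving end that's just (assuming flakes are enabled on their Nix install):
# in a directory containing the flake.nix + flake.lock you were sent
nix develop     # drop into the pinned environment
nix build       # or build the default package, with dependencies pinned by flake.lock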
I suspect that making a new container and populating it with something useful is so trivial on Nix, that you're missing the point of what is actually happening, whereas with Docker you can tell something big is happening because it's such a fandango when it happens. And so you assume Docker is "real" and Nix is "fake" or something.
I like a package to be independent
Yes, me too, which is why an affinity for Docker is weird to me.
you can trust the nix repositories aren't going to change
That, I do not. And storing the source and such for every dependency would be bigger than, and result in essentially the same thing as an image.
I think you're trying to achieve something different than what docker is for. Docker is like installing onto an empty computer then shipping the entire machine to the end user. You pretty much guarantee thing will work. (yes this is oversimplified)
And storing the source and such for every dependency would be bigger than, and result in the same thing as an image.
Let's flip that around.
The insanity that would be downloading and storing everything you need, wrapping it all up into a massive tarball and then shipping it to anyone who wants to use the end product, and also by the way assuming that everything you need in order to rebuild it will always be available from every upstream source if you want to make any changes, is precisely what Docker does. And yes, it's silly to trust that everything it's referencing will always be available from whoever's providing it.
(Also, security)
Docker is like installing onto an empty computer then shipping the entire machine to the end user.
Correct. Because it's not capable enough to make actually-reproducible builds.
My point is, you can do that imaging (in a couple of different ways) with Nix, if you really wanted to. No one does, because it would be insane when you have other more effective tools that can accomplish the exact same goal without needing to ship the entire machine to the end user. There are good use cases for Docker, making it easy to scale services up as was the original intent is a really good one. The way people commonly use it today, as a way to make reproducible environments for ease of one-off deployment, is not one. In my opinion.
I've been tempted into a "my favorite technology is better" pissing match, I guess. Anyway, Nix is better.
I might just start bundling my apps inside an environment set up with nix inside docker. A lot of them are similar or identical, so those docker images actually share a lot of layers under the hood.
My apps after compiling and packaging are usually around 50 MB. That's 48 MB of Debian, which is entirely shared between all the images that I build. So the eventual size of my deployed applications isn't nearly as big as it seems from the size of the tarball being sent around. For 10 apps, that's not 500 MB; it's 48 MB of shared base plus roughly 2 MB per app, so about 68 MB.
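If you want to sanity-check that sharing, docker can show it (the tag here is made up):
# per-image breakdown of shared vs. unique layer size
docker system df -v
# the layer list for one image; the debian base layers are the shared part
docker history myapp:1.0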
If anything, the docker hub and registry are a bit of a mess.