I switched from KDE 3.5 (whenever that was current).
Terrifyingly, I think someone is still maintaining KDE 3.5 proper for OpenSUSE. Then there's TDE, which is widely available. (But you probably mean 15-20 years ago.)
What exactly is the point of a stable release? I don't need everything pinned to specific versions (I'm not running a major corporate web service that needs a 99.9999% uptime guarantee), and Internet security is a moving target that requires constant updates.
Security and bug fixes—especially bug fixes, in my experience—are a good enough reason to go rolling-release even if you don't usually need bleeding-edge features in your software.
I think part of what you're missing may be a set of very old assumptions about where the danger is coming from.
Linux was modeled after UNIX, and much of its core software was ported from other UNIX versions, or at least written in imitation of their utilities. UNIX was designed to be installed on large pre-Internet multi-user mainframe+dumb terminal systems in industry or post-secondary education. So there's an underlying assumption that a system is likely to have multiple human users, most of whom are not involved in maintaining the system, some of whom may be hostile to each other or to the owner of the system (think student pranks or disgruntled employees), and they all log in at once. Under those circumstances, users need to be protected from each other, and the system needs to be protected from malicious users. That's where the system of user and root passwords is coming from: it's trying to deal with an internal threat model, although separating some software into its own accounts also allows the system to be deployed against external threats. Over the years, other things have been layered on top of the base model, but if you scratch the paint off, you'll find it there underneath.
Windows, on the other hand, was built for PCs, and more or less assumes that only one user can be logged in to a machine at a time. Windows security is concerned almost entirely with external threats: viruses and other malware, remote access, etc. User-versus-user situations are a very minor concern. It's also a much more recent creation—Windows had essentially no security until the Internet had become well-established and Microsoft's poor early choices about macros and scripts came back to bite them on the buttocks.
So it isn't so much that one is more secure than the other as that they started with different threat models and come from different periods of computing history.
Your problem is that you're starting from the wrong premise: strange as it may sound, the primary goal of most people working on Linux is not to make more people switch to it; it's to create an operating system that they personally want to use. That can mean a lot of different things, depending on the person. So it's inevitable that there are a lot of different distros, and the only reason there aren't even more is that most of the one-man shows that don't attract many users peter out and vanish after a few months or years.
There's an old joke from a couple of decades ago about what operating systems would be like if they were airlines:
Linux Airlines
Disgruntled employees of all the other OS airlines decide to start their own airline. They build the planes, ticket counters, and pave the runways themselves. They charge a small fee to cover the cost of printing the ticket, but you can also download and print the ticket yourself. When you board the plane, you are given a seat, four bolts, a wrench and a copy of the seat-HOWTO.html. Once settled, the fully adjustable seat is very comfortable, the plane leaves and arrives on time without a single problem, the in-flight meal is wonderful. You try to tell customers of the other airlines about the great trip, but all they can say is, “You had to do what with the seat?”
Gentoo is still very much a "You had to do what with the seat?" distro, while most others have retired that concept to varying degrees, at the cost of making unusual seat adjustments harder.
One detail about Rust in the kernel that often gets overlooked: the Linux kernel supports arches to which Rust has never been ported. Most of these are marginal (hppa, alpha, m68k—itanium was also on this list), but there are people out there who still use them and may be concerned about their future. As long as Rust remains in device drivers only this isn't a major issue, but if it penetrates further into the kernel, these arches will have to be desupported.
(Gentoo has a special profile "feature" called "wd40" for these arches, which is how I was aware of their lack of Rust support. It's interesting to look at the number and types of packages it masks. Lotta python there, and it looks like gnome is effectively a no-go.)
I consider bootloader attacks a very low-probability threat, and quite honestly I don't trust the average board vendor to produce anything that's actually secure anyway. If I were in the habit of carrying a laptop back and forth across international borders I might feel differently, but for a desktop stuck in a room in Canada that hardly anyone enters when I'm not present, Secure Boot is a major hassle in return for a small security gain. So I just don't bother.
The Gentoo news post is not about having /bin and /usr/bin as separate directories; that configuration is still supported and continues to work well to this day (I should know, since it's the setup I have).

The cited post is about having /bin and /usr on separate partitions without using an initramfs, which is no longer guaranteed to work and had already been awfully iffy for a while before January. Basically, Gentoo is no longer jumping through hoops to make sure that certain files land outside /usr, because it was an awful lot of work to support a very rare configuration.
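If you're not sure which layout your own machine uses, here's a minimal Python sketch (assuming a POSIX system; the helper name `usr_layout` is my own) that reports whether /bin is a symlink into /usr (the merged-usr layout) or a real directory (the split layout discussed above):

```python
import os

def usr_layout(bin_path="/bin"):
    """Report whether bin_path is a symlink (merged-usr) or a real directory (split)."""
    if os.path.islink(bin_path):
        return f"merged: {bin_path} -> {os.readlink(bin_path)}"
    return f"split: {bin_path} is a real directory"

print(usr_layout())
```

Note that this only tells you about the directory merge; whether /usr lives on a separate partition is a different question (`findmnt /usr` will answer that one).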
Gnome and other desktops need to start working on integrating FOSS
In addition to everything everyone else has already said, why does this have anything to do with desktop environments at all? Remember, most open-source software comes from one or two individual programmers scratching a personal itch—not all of it is part of your DE, nor should it be. If someone writes an open-source LLM-driven program that does something useful to a significant segment of the Linux community, it will get packaged by at least some distros, accrete various front-ends in different toolkits, and so on.
However, I don't think that day is coming soon. Most of the things "Apple Intelligence" seems to be intended to fuel are either useless or downright off-putting to me, and I doubt I'm the only one. (For instance, I don't talk to my computer unless I'm cussing it out, and I'd rather it not understand that.) My guess is that the first desktop-directed offering we'll see on Linux is an image-generator frontend, which I don't need but can see use cases for, even if usage of the generated images is restricted (see below).
Anyway, if this is your particular itch, you can scratch it—by paying someone to write the code for you (or starting a crowdfunding campaign for same), if you don't know how to do it yourself. If this isn't worth money or time to you, why should it be to anyone else? Linux isn't in competition with the proprietary OSs in the way you seem to think.
As for why LLMs are so heavily disliked in the open-source community? There are three reasons:
Item 1 can theoretically be solved by bigger and better AI models, but 2 and 3 can't be. They have to be decided by the courts, and at an international level too; we might even be talking treaty negotiations. I'd be surprised if that took less than ten years. In the meantime, it's very, very dangerous for any open-source project to accept a code patch written with the aid of an LLM: depending on the conclusion the courts come to, it might have to be torn out down the line, along with everything built on top of it. The inability to use LLM output for open-source or commercial purposes without taking a big legal risk kneecaps the value of the applications. Unlike Apple or Microsoft, the Linux community can't bribe enough judges to make the problems disappear.
Dude. I actually have sources for most of my installed packages lying around, because Gentoo. Do you know how much space that source code takes up?
Just under 70GB. And pretty much everything but maybe the 10GB of direct git pulls is compressed, one way or another.
That means that even if your distro is big and has 100 people on development, they would each have to read 1GB or more of decompressed source just to cover the subset of packages installed on my system.
How fast do you read?
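To put a rough number on it: the 70 GB and 100-developer figures come from the post above, while the 2x decompression ratio is my own conservative assumption (source tarballs often expand more than that).

```python
# Back-of-the-envelope estimate of the per-developer audit burden.
compressed_gb = 70   # compressed sources on one Gentoo system (from the post)
developers = 100     # hypothetical large distro team (from the post)
expansion = 2        # assumed decompression ratio; often 3-5x in practice

per_dev_gb = compressed_gb * expansion / developers
print(f"Each developer reads ~{per_dev_gb:.1f} GB of source")  # ~1.4 GB each
```

And that's just the packages on one machine; a full distro tree is far larger.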
sudo is already an optional component (yes, really—I don't have it installed). Don't want its attack surface? You can stick with su and its attack surface instead. Either is going to be smaller than systemd's.
systemd's feature creep is only surpassed by that of emacs.
After 20 years of Gentoo, I don't see myself switching in the next five. Comfortable, capable, flexible.