Jokes aside, vim as PID 1 is just a bad idea.
Emacs on the other hand: https://github.com/emacs-os/el-init
Let’s stop letting perfect get in the way of better.
For the threat models and data harvesting the general consumer (i.e. our moms) will face, macOS does a far better job than Windows, and iOS far better than Android (and no, your mom isn’t actually using a Pixel with GrapheneOS. Maybe she could, but she isn’t. Not really.)
If Apple can’t satisfy your threat model and privacy posture, fine. But don’t assume everyone’s requirements are the same as yours; that’s how we scare people away.
Ok, while most of these don’t have companies with huge revenues behind them, most of the work on these projects is done by paid developers, with money coming from sponsorships, grants, donations and support deals. (Or, in the case of Linux, because device drivers are a prerequisite for anyone buying your hardware.)
Developers getting paid to work on open source is a good thing. These projects may have begun life as small hobby projects, but they aren’t anymore. (And that’s probably good.)
Well, a few issues:
For fun, home use, research or small-time hacking? Sure, buy all the gaming cards you can. If you actually need support and have a commercial use case? Pony up. Either way, benchmark your own workload; don’t trust the marketing numbers.
Is it a scam? Of course, but you can’t avoid it.
For you? No. For most people? Nope, not even close.
However, it mitigates certain threat vectors on both Windows and Linux, especially when paired with a TPM and disk encryption. Basically, you can no longer (terms and conditions apply) physically unscrew the storage, inject malware, and pop it back in. Nor can you just read data off the drive.
The threat vector is basically “our employees keep leaving their laptops unattended in public”.
(Does LUKS with a password mitigate most of this? Yes. But normal people can’t be trusted with passwords and need the TPM to do it for them. And doing that properly basically requires Secure Boot.)
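For the curious: on a systemd-based distro, “the TPM does it for them” is roughly one command. A sketch only; the device path is made up, and PCR 7 is the register that measures the Secure Boot state:

    # Bind an existing LUKS2 volume to the TPM, sealed against
    # the Secure Boot state (PCR 7). Device path is an example.
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3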
This will be so much fun for people with legacy systems
I’ll bite. It’s getting better, but there’s still a long way to go.
But what do I know, I’ve only deployed and managed desktop Linux for a few thousand people. People were screaming about these design flaws back in 2008 when this all started. The criticisms above were known and dismissed as FUD, and here we are. A few architectural changes back then, and we could have done this migration a decade faster. Just imagine: screen sharing during the pandemic!
As an example, see Arcan, a small research project with an impressively large subset of features from both X11 and Wayland (including working screen sharing, network transparency and a functioning security model). I wouldn’t use it in production, but if it were more than one guy in a basement working on it, it would probably become very usable fairly fast, compared to the decade and a half that Red Hat and friends have poured into Wayland thus far. Using a good architecture from the start would have done wonders. And Wayland isn’t even close to a good architecture. It’s just what we have to work with now.
Hopefully Xorg can die at some point, a decade or so from now. I’m just glad I don’t work with desktops anymore, the swap to Wayland will be painful for a lot of organisations.
I have a Mac I use for some specific tasks. I’ll agree that Apple is, ehh, Apple.
But mounting network file shares is dead simple. My SMB share pops right up, authentication just works, and the user interface is fine. If I wanted to use it remotely, I’d just export it over my tailnet.
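(And if Finder isn’t your thing, the terminal version is one line; the user, server and share names here are made up:)

    mkdir -p ~/nas && mount_smbfs //alice@nas.local/media ~/nas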
sshfs is good for brief stints of use, but ultimately it breaks at the protocol level as soon as your socket dies, on any OS.
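The usual band-aid is sshfs’s reconnect options; they paper over short network blips but won’t resurrect a truly dead session (host and paths below are placeholders):

    # reconnect + SSH keepalives: survives brief drops, not real disconnects
    sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
      user@host:/srv/data ~/mnt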
If you have NAND gates, a clock and some wires, you can build anything.
Go visit https://nandgame.com/ to try it out yourself!
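The “anything” bit isn’t hyperbole; NAND is functionally complete. Here’s a little C++ sketch of the first few nandgame levels (function names are mine, and the game makes you wire it with actual gates):

    #include <cassert>

    // Everything below is built purely from NAND.
    bool nand_(bool a, bool b) { return !(a && b); }

    bool not_(bool a)         { return nand_(a, a); }
    bool and_(bool a, bool b) { return not_(nand_(a, b)); }
    bool or_(bool a, bool b)  { return nand_(not_(a), not_(b)); }
    bool xor_(bool a, bool b) { return and_(or_(a, b), nand_(a, b)); }

    int main() {
        assert(xor_(true, false) && !xor_(true, true));
        assert(and_(true, true) && or_(false, true) && not_(false));
    }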
Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (Source: an LTT video with Framework’s CEO.)
IMHO: up until now, using soldered RAM was lazy and cheap bullshit. But I do think we’re at the limit of what’s reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with on-package memory gaining ground. I think we’ll see the same trend in consumer hardware. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI slop, and socketed RAM simply won’t keep up.
It’s sad, but in a few generations I think only the lower-end consumer CPUs will still take socketed RAM. I’m betting the high-performance consumer CPUs will require not just soldered but on-package RAM.
Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg
#define yeet throw
#define let const auto
#define mut &
#define skibidi exit(1)
The future is now!
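And yes, it compiles. A quick sketch, with function and variable names of my own invention:

    #include <cstdlib>
    #include <stdexcept>

    #define yeet throw
    #define let const auto
    #define mut &
    #define skibidi exit(1)

    void bump(int mut x) { x += 1; }       // expands to: void bump(int & x)

    int main() {
        let answer = 42;                   // const auto answer = 42;
        int n = 0;
        bump(n);
        if (n + answer != 43) skibidi;     // exit(1)
        yeet std::runtime_error("yeeted"); // throw ... (terminates, as intended)
    }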
That JavaScript hole was probably caused by a bicycle.