Sounds like zfs with extra steps
But is GPL-compatible, unlike ZFS.
How do you define GPL compatible?
Do your own research; it's a pretty well-discussed topic, particularly where ZFS is concerned.
I'm all over ZFS and I'm not aware of any unresolved "licence issues". ZFS on Linux is like a decade old at this point.
License incompatibility is one big reason OpenZFS is not in-tree for Linux; there is plenty of public discussion about this online.
Like this one, which states there is no issue: https://opensource.stackexchange.com/questions/2094/are-cddl-and-gpl-really-incompatible
Yes, but note that neither the Linux Foundation nor OpenZFS is going to put itself at legal risk on the word of a Stack Exchange comment, no matter who it's from. Even if all their legal teams see no issue, Oracle has a reputation for being litigious, and the fact that they haven't resolved the issue once and for all, despite being able to, suggests they're keeping the possibility of litigation in their back pocket (regardless of whether such a case would have merit).
Canonical has said they don't think there is an issue and put their money where their mouth is, but they are one of very few to do so.
Keen to see how Canonical goes. There are another one or two distros doing the same. Maybe everyone will wake up and realise they've been fighting over nothing.
Your lack of awareness is fine with me.
Okay thanks for your comment?
Not under a license that prohibits also licensing under the GPL, i.e. one that imposes no conditions beyond what the GPL specifies.
Not true
The only condition is that the CDDL and GPL don't apply to the same file. Wifi works just fine and the source code isn't GPL, yet wifi drivers are in the kernel.
https://opensource.stackexchange.com/questions/2094/are-cddl-and-gpl-really-incompatible
...because they are incompatible licenses.
There's no requirement for them to apply to the same file? There are already blobs in the kernel whose source the GPL doesn't apply to.
The question was "How do you define GPL compatible?". The answer to that question has nothing to do with code being split between files. Two licenses are incompatible if they can't both apply at the same time to the same thing.
The two works can live harmoniously together in the same repo; therefore they're not incompatible, by one definition, and the one that matters.
There are already big organisations doing it and they haven't had any issues.
ZFS doesn't support tiered storage at all. Bcachefs can promote and demote files between faster-but-smaller and slower-but-larger storage; it's not just a cache. On ZFS the only real option is multiple zpools. You can sort of approximate it with the persistent L2ARC now, but TBs of L2ARC is super wasteful and your data still has to fully fit in the pool.
Tiered storage is great for VMs, games and other large files. Play a game and it gets promoted to NVMe for fast load times; when you're done playing, it gets moved back to the HDDs.
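The setup is roughly this, going by the documented bcachefs format options (a sketch, untested here; device paths and labels are placeholders):

```
# One filesystem across an NVMe drive and an HDD. Writes land on the
# "ssd" group, hot data gets promoted to it, and the rebalance thread
# demotes cold data to the "hdd" group in the background.
bcachefs format \
    --label=ssd.nvme0 /dev/nvme0n1 \
    --label=hdd.hdd0  /dev/sdb \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd

mount -t bcachefs /dev/nvme0n1:/dev/sdb /mnt
```

Unlike a cache device, both drives contribute usable capacity.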
You're misrepresenting L2ARC, and it's a silly comparison to claim you'd need TBs of L2ARC and then also say you'd copy the game to NVMe just to play it on bcachefs. That's what the ARC does: RAM and SSD caching of the data in use, with tiered heuristics.
I know, that was an example of why it doesn't work on ZFS. That would be the closest you can get with regular ZFS, and as we both pointed out, it makes no sense; it doesn't work. The L2ARC is a cache; you can't store files in it.
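For anyone following along, an L2ARC device is attached as a cache vdev (standard zpool usage; the pool and device names are placeholders):

```
# The cache vdev only ever holds copies of blocks already stored in
# the pool, so it adds no usable capacity.
zpool add tank cache /dev/nvme0n1
```

OpenZFS 2.0 made the L2ARC persistent across reboots, but that doesn't change what it is: a cache.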
The whole point of bcachefs is tiering. You can give it a 4 TB NVMe drive, a 4 TB SATA SSD and an 8 TB HDD and get almost the whole 16 TB of usable space in one big filesystem. It'll shuffle the files around for you to keep the hot data set on the fastest drive, and you can pin data to the storage medium that matches the performance needs of the workload. The roadmap claims they want to analyze usage patterns and automatically store files on the slowest drive that doesn't bottleneck the workload. The point is, unlike regular bcache or the ZFS ARC, it's not just a cache; it's also storage space available to the user.
You wouldn't copy the game to another drive yourself directly. You'd request the filesystem to promote it to the fast drive. It's all the same filesystem, completely transparent.
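Per-file and per-directory options override the filesystem defaults, so if I'm reading the bcachefs manual right, pinning looks something like this (paths are made up; check your bcachefs-tools version for the exact subcommand):

```
# Pin a game directory to the fast tier: with background_target also
# set to ssd, the rebalance thread won't demote it to the HDDs.
bcachefs setattr --promote_target=ssd --background_target=ssd /mnt/games/current

# Done playing: point background_target back at the HDDs and the data
# drifts down on its own.
bcachefs setattr --background_target=hdd /mnt/games/current
```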
Looks dead on arrival to me: so much complexity for "performance", yet the filesystem is outclassed by everything else in existence. If there were any real performance gains from this complexity it could have cool niche use cases, but this is very disappointing. https://www.phoronix.com/review/bcachefs-linux-67/2
Brand new anything will not show up with amazing performance, because the primary focus is correctness, with features secondary.
Premature optimisation could kill a project's maintainability; wait a few years. Even then, despite Kent's optimism, I'm not certain we'll see performance beating a good non-CoW filesystem; XFS and ext4 have been eking out performance gains for many years.
CoW is an excuse for the write performance, though the read performance is awful too currently.
A rather overly simplistic view of filesystem design.
More complex data structures are harder to optimise for pretty much all operations, but I'd suggest that overwhelmingly the most important factor for performance is development time.
At the end of the day, the performance of a performance-oriented filesystem matters. Without performance, it's just complexity.
It has gotten better since November of last year, though; here's a more recent benchmark showing it beating btrfs quite often: https://www.phoronix.com/review/linux-611-filesystems/2
Improvement is nice to see, still not ready for prime time