this post was submitted on 16 Jun 2023
Technology
Ultimately this is a problem that's never going away until we replace URLs. The HTTP approach of finding documents by URL, i.e. server/path, is fundamentally brittle. No matter how careful you are or how much best practice you follow, that URL is going to be dead in a few years. The problem is made worse by DNS, which makes domains expensive and prone to expiry.
There are approaches like IPFS, which uses content-based addressing (i.e. fancy file hashes), but that's not enough either, as it provides no good way to update a resource.
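The limitation of pure content addressing is easy to see in a toy sketch (this is a simplification, not IPFS's actual CID format, which adds multihash and codec prefixes): the address is just a hash of the bytes, so any edit produces an entirely new address.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived purely from the bytes themselves, so anyone
    # holding the data can verify it really matches the address.
    return hashlib.sha256(data).hexdigest()

v1 = content_address(b"hello world")
v2 = content_address(b"hello world, revised")
print(v1 != v2)  # True: editing the content yields a brand-new address,
                 # so existing links can never point at the updated version
```

That verifiability is the strength of content addressing, and the immutability of each address is exactly why it can't express "the latest version of this document" on its own.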
The best™ solution would be some kind of global blockchain thing that keeps a record of what people publish, giving each document a unique id, a hash, and some way to update that resource non-destructively (i.e. the version history is preserved). Hosting itself would still need to be done by other parties, but a global log that lists everything humans have published would make mirroring it much easier and more reliable.
The end result should be "Internet as globally distributed immutable data structure".
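The idea above can be sketched as an append-only log where every publish or update is a new entry linking back to the previous version, so nothing is ever overwritten. This is a minimal illustration of the data structure being described, not any existing system; the class and field names are made up for the example.

```python
import hashlib

class PublishLog:
    """Toy append-only log: each update is a new entry pointing at the
    previous version of the same document, so history is preserved."""

    def __init__(self):
        self.entries = []   # the global, append-only record
        self.latest = {}    # doc_id -> index of the newest entry

    def publish(self, doc_id: str, content: bytes) -> dict:
        entry = {
            "doc_id": doc_id,
            "hash": hashlib.sha256(content).hexdigest(),
            "prev": self.latest.get(doc_id),  # link to prior version, or None
        }
        self.entries.append(entry)
        self.latest[doc_id] = len(self.entries) - 1
        return entry

    def history(self, doc_id: str) -> list:
        # Walk the prev-links from the newest entry back to the first.
        i, out = self.latest.get(doc_id), []
        while i is not None:
            out.append(self.entries[i])
            i = self.entries[i]["prev"]
        return out

log = PublishLog()
log.publish("article-42", b"first draft")
log.publish("article-42", b"second draft")
print(len(log.history("article-42")))  # 2: the update did not destroy v1
```

Mirrors would only need to replay the log and fetch the hashed content from whoever still hosts it, which is the "globally distributed immutable data structure" in miniature.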
Bit frustrating that this whole problem isn't getting the attention it deserves.
No offense, but that solution sounds like a pipe dream that wouldn't work on a technical level. So you want to keep not just the item someone published, but previous versions of it, host mirrors of it, and tie it all up in some sort of blockchain? That sounds vastly more resource-heavy than just hosting the document itself on one instance somewhere. It would be much more reliable, sure, but even companies like Reddit can struggle with all of the traffic, as do smaller open-source projects like Lemmy instances or kbin, and your solution is to increase the amount of data?
zeronet solved this problem years ago and no one cared lol. how it works is it uses public/private key pairs for addresses, and then uses p2p torrent-style filesharing for hosting. that lets the owner of the private key update their content while the sites are still hosted in a decentralized manner. since the public keys are immutable, the addressing never changes.
it also has a federated system for its social media, where the frontend/gui for a site is separate from the data storage, and it aggregates the collective data from the sites you have downloaded/fetched.
it has its problems, but it works remarkably well. unfortunately it's dead since the dev vanished and people lost interest.
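The key-addressing trick ZeroNet relied on can be sketched with nothing but a hash function, using a Lamport one-time signature as a stand-in (real ZeroNet uses Bitcoin-style ECDSA keys, and Lamport keys can only sign once, so this is purely illustrative): the site address is derived from the public key and never changes, while the private key authorizes new content.

```python
import hashlib, os

def keygen():
    # Private key: 256 pairs of random secrets; public key: their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(content: bytes):
    d = hashlib.sha256(content).digest()
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, content: bytes):
    # Reveal one secret per digest bit; only the key owner can do this.
    return [sk[i][bit] for i, bit in enumerate(_bits(content))]

def verify(pk, content: bytes, sig) -> bool:
    # Each revealed secret must hash to the matching public-key entry.
    return all(hashlib.sha256(sig[i]).digest() == pk[i][bit]
               for i, bit in enumerate(_bits(content)))

def address(pk) -> str:
    # The address depends only on the public key, so it stays stable
    # no matter how often the owner publishes new content.
    return hashlib.sha256(b"".join(h for pair in pk for h in pair)).hexdigest()[:16]

sk, pk = keygen()
site = address(pk)
sig = sign(sk, b"version 2 of my site")
print(verify(pk, b"version 2 of my site", sig))  # True
print(verify(pk, b"tampered content", sig))      # False
```

Peers can accept an update for `site` from anyone, as long as the signature verifies against the public key the address was derived from, which is what makes owner-controlled updates compatible with decentralized hosting.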