Programmer Humor
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code there's also Programming Horror.
Rules
- Keep content in English
- No advertisements
- Posts must be related to programming or programmer topics
Is it working? --(Yes)--> Fix DNS
Does anyone else have the thought that maybe it's time to just replace these 30+ year old ancient protocols? Seems like the entire networking stack is held together with string and duct tape and unnecessarily complicated.
A lot of the decisions made some sense in the 80s and 90s, but they seem ridiculous in this day and age lmao
Seems like the entire networking stack is held together with string and duct tape and unnecessarily complicated.
The more you learn about network technology the more you realize how cobbled together it all is. Old, temporary fixes become permanent standards as new fixes are written on top of them. Apache, which was the most widely used web server for a long time, is literally named that because it was "a patchy" server. It's amazing that any of it works at all. It's even more amazing that it's been developed to the point where people with no technical training can use it.
The open nature of IP is what allows such a varied conglomerate of devices to share information with each other, but it also allows for very haphazard connections. The first modems were just an abuse of the existing voice phone network. The internet is a functional example of building the airplane while you're flying it. We try to revise the standards as we go, but we can't shut the whole thing down and rebuild it from scratch. There are no green fields.
It has always been so. It must be so. It will continue to be so.
(the flexibility of it all is really amazing though - in 2009 phreakmonkey was able to connect a laptop to the internet with a 1964 Livermore Data Systems Model A acoustic coupler modem and access Wikipedia!)
Nothing quite as permanent as a temporary fix!
Very cool post, thanks for sharing
Some ancient protocols get replaced gradually though. Look at http3 not using TCP anymore. I mean at least it's something.
HTTP3 uses UDP, which is 6 years younger than TCP.
Nope, it uses a protocol on top of UDP called QUIC. If you count underlying protocols further down the stack, obviously all of them are really old.
Wait till you hear about when ipv6 was first introduced (90s) and how 50% of the internet still doesn't work with it.
Businesses don't want to change shit that "works" so you still have stuff like the original KAME project code floating around from the 90s.
Data Link layer be pretty stable to be fair ^_^
I definitely would love to see a rework of the network stack at large but idk how you'd do it without an insane amount of cooperation among tech giants which seems sort of impossible
I may be waaaay off here, but the internet as it exists is pretty much built on DNS, isn't it? I mean, the whole idea of ARPANET back in the 60s and 70s was to build a robust, redundant, and self-healing network to survive nuclear armageddon, and except when humans f it up (intentionally or otherwise), it generally does what it says on the tin.
Now, there are arguments to be made about securing the protocol, but if you rip and replace the routing protocols, I think you'd have to call it something other than the Internet.
Making a typo in the BGP config is the internet's version of nuclear Armageddon
Unfortunately the same goes for a big chunk of the law on a global scale. Constant progress, new possibilities and technologies, and change in general are really outpacing some dusty and constantly abused solutions. With every second that goes by, any "somehow still holding" relic comes under more pressure. As a species we can have some really great ideas, but long-term planning and future-proofing are still not our strongest suit.
Me last week when my pi-hole was down
Oh dang, that reminds me, I need to rebuild that one as well. Still running on Buster...
Why do Canadians make such good network engineers?
We always make sure to check the Eh Record.
Literally this, literally today.
Same here, quite literally this morning, it was fucking DNS
Those little bastards, so sneaky. I've checked if d(uck)dns is working before my local DNS.
Am I the only one who can't think of a time DNS has caused a production outage on a platform I worked on?
Lots of other problems over the years, but never DNS.
I have a coworker who always forgets TTL is a thing, and never plans ahead. On multiple occasions they've moved a database, updated DNS to reflect the change, and are confused why everything is broken for 10-20 minutes.
I really wish they'd learned the first time, but every once in a while they come to me to troubleshoot the same issue.
How would you prevent that?
While planning your change (or the project requiring it), check the relevant(* see edit) DNS TTLs. Figure out the point in the future when you want to make the actual change (time T), and lower the TTL to 60 seconds at T-(TTL*2) or earlier. Then when it comes time to make your DNS change, the TTL is reasonable and you can verify your change within a few minutes instead of wondering for hours.
Edit: literally check all host names involved. They are all suspect
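The T-(TTL*2) rule above is just arithmetic, so here's a minimal sketch of it in Python (the function name and the 2x safety factor are my own illustration, not a standard tool):

```python
from datetime import datetime, timedelta

def ttl_lowering_deadline(change_time: datetime, current_ttl_seconds: int) -> datetime:
    """Latest moment to lower the TTL before a planned DNS change.

    Any resolver may have cached the record with the old, long TTL just
    before you lowered it, so waiting 2x that TTL gives a safety margin:
    by then every cached copy has expired and picked up the short TTL.
    """
    return change_time - timedelta(seconds=2 * current_ttl_seconds)

# Example: cutover planned for 02:00, record currently has a 1-hour TTL.
# You'd want the TTL lowered by midnight at the latest.
deadline = ttl_lowering_deadline(datetime(2024, 6, 1, 2, 0), 3600)
print(deadline)  # 2024-06-01 00:00:00
```

The same logic works with whatever change-calendar tooling you have; the point is to make the wait explicit instead of eyeballing it.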
This. For example, if you have a DNS entry for your DB and the TTL is set to 1 hour, an hour before you intend to make the changes, just lower the TTL of the record to a minute. This allows all clients to be told to only cache for a minute and to do lookups every minute. Then after an hour, make the necessary changes to the record. Within a minute of the changes, the clients should all be using the new record. Once you've confirmed that everything is good, you can then raise TTL to 1 hour again.
This approach does require some more planning and two or three updates to DNS, but minimizes downtime. The reason you may need to keep TTL high is if you have thousands of clients and you know the DNS won't be updated often. Since most providers charge per thousand or million lookups, that adds up quickly when you have thousands of clients who would be doing unnecessary lookups often. Also a larger TTL would minimize the impact of a loss of DNS servers.
Set it to 5 seconds ??? Profit
??? Is when the underwear gnomes send you a massive bill because you're paying per 1k lookups. They profit, you don't
"yes boss we need another 20 dns servers" "idk why dns traffic is so heavy these days"
For real, I've had problems where I specifically checked if it was DNS, concluded it was not, but it still turned out to be DNS.
The problem is the cache. Always.
Actually while for myself it is sometimes DNS, if I see an internet wide outage it's usually BGP.
I feel like there's some context here I'm missing...
Networking issues are very often caused by DNS, even in cases which don't initially look DNS related at all.
It's a haiku about network issues
Nice painting!
No words describe such 🤌
Not uncommon.
I'm not going to get old at the beach
There's no way it doesn't hold logically
I got old
I have this one hanging up in my cube