[-] [email protected] 2 points 2 hours ago

...oxygen isn't a propellant

[-] [email protected] -1 points 1 day ago

I'm not sure I'd call LEO satellite benefits "already known" and dismiss them like that.

People were calling Starlink an impossible business model for years and years... but it was the first one to actually succeed, and it provides a rather good service.

If you've ever used Starlink because there were no better options, you'd understand just how good it is at filling in the Internet coverage gaps that will always exist to some extent.

[-] [email protected] 1 points 3 days ago

it's not so hard. you can just link previous PRs for comments and re-home them. you can make a PR cross-platform; it just won't necessarily render right in the web UI.

git is stupid powerful. reject web UI, return to mailing list (Linux kernel vibes)
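for example, the whole kernel-style flow is basically just this (the list address is made up):

    # turn the last 3 commits into mailable patch files, with a cover letter
    git format-patch -3 --cover-letter -o outgoing/

    # send them off for review
    git send-email --to="[email protected]" outgoing/*.patch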

[-] [email protected] 0 points 3 days ago

The magic of git is that if something happens, it's trivial to switch. Honestly, I would just stick with GitHub until there's an actual reason to change. You can just do git remote set-url origin NEW_SERVER, do a git push, and bam, your repo is restored with all of its history.

It's so easy to move that it's not worth worrying about, imo.
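For example (the new host URL is a placeholder):

    # point the existing clone at the new host
    git remote set-url origin git@newserver.example:you/repo.git

    # push every branch and tag, full history included
    git push --all origin
    git push --tags origin

(git push --mirror also works if you want every ref copied over wholesale.)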

[-] [email protected] 3 points 4 days ago

I don't disagree, but in this case it isn't true, because that's not what the terms and conditions say.

[-] [email protected] 3 points 4 days ago

that's not the case here though

41
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]

Captured these beauties near Olympic National Park

[-] [email protected] 29 points 4 weeks ago

on the other hand, it is REALLY annoying

[-] [email protected] 27 points 1 month ago

source: https://xkcd.com/932/

(for those who want to read the alt-text)

11
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]

Retrieval of most pictures is currently not working. I am still working to understand and resolve this.

Edit: This has been fixed

27
submitted 6 months ago by [email protected] to c/[email protected]
14
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]

UPDATE DAY 2: The backend has been successfully migrated onto the new dedicated hosting after some pain. There should be no major downtime from here on. Tomorrow I will work on integrating a better backup solution, and then I'll leave it alone for a little while.

UPDATE: I was able to deploy the database onto dedicated server hardware tonight, but I have not finished moving over the other components I wanted to. You may notice some performance degradation due to increased database-backend latency (...or maybe it will just be better anyway, lol).

I will finish off work on this tomorrow!

Lemdro.id has been struggling with some performance issues lately, as you've likely noticed. This is due to changes made by our hosting provider that cause the database to run much slower. Tonight at 10pm PST, I will be putting lemdro.id into maintenance mode to migrate parts of the infrastructure to a new dedicated server.

Thanks for your patience!

40
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Hey all! I've done a lot of database maintenance and other work lately to make lemdro.id better. I wanted to give a quick update on what's up and ask for feedback.

For a while, we were quite a ways behind on lemmy.world federation (along with many other instances) due to a technical limitation in lemmy itself that is being worked on. I ended up writing a custom federation buffer that lets us process activities more consistently, and I'm happy to say that we are fully caught up with LW and will not have that problem again!

Additionally, on the database side of things, I've set up barman in the cluster to allow for point-in-time backups. Basically, we can now restore the database to any arbitrary point in time. This is on top of periodic automatic backups, which also get pulled to storage on both my personal NAS and a Backblaze bucket (both encrypted, of course).
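For the curious, a point-in-time restore with barman looks roughly like this (server name, timestamp, and paths are made up for illustration):

    # sanity-check that barman can reach the server and backups are healthy
    barman check lemdroid-db

    # take a fresh base backup
    barman backup lemdroid-db

    # restore the latest backup, replaying WAL up to an arbitrary moment
    barman recover --target-time "2024-05-01 12:00:00" \
        lemdroid-db latest /var/lib/postgresql/restore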

Today, I deployed a new frontend at https://next.lemdro.id/. It is very early-stage and experimental, but it is being developed by https://lemm.ee/ and seems promising!

If you live outside of the US and experience consistently long load times, I want to hear from you! I am deploying the first read replica node to Europe soon, so if you live in that region you'll soon notice near-instantaneous loading of content. Very exciting!

Finally, looking for feedback. Is there anything you want to see changed? Please let me know!

109
submitted 1 year ago by [email protected] to c/[email protected]

Google today announced a handful of wearable and navigation updates, starting with public transit directions in Google Maps for Wear OS.

71
submitted 2 years ago by [email protected] to c/[email protected]
21
submitted 2 years ago* (last edited 1 year ago) by [email protected] to c/[email protected]

I am rolling out the Photon UI as a replacement for the default lemmy UI right now. Initially, only about 50% of requests will be routed to Photon, determined by a hash of your IP address and user agent (sorry for any inconsistencies...). As I confirm that this configuration is stable, I will slowly increase the percentage until Photon is the new default frontend for lemdro.id.
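For the curious, the bucketing works roughly like this (a hypothetical sketch, not the actual proxy config):

    # derive a stable 0-99 bucket from the client's IP + user agent
    key="203.0.113.7 Mozilla/5.0"
    bucket=$(( 0x$(printf '%s' "$key" | md5sum | cut -c1-8) % 100 ))

    # buckets 0-49 get Photon while the rollout sits at 50%
    if [ "$bucket" -lt 50 ]; then echo "Photon"; else echo "old UI"; fi

The same client always lands in the same bucket, so your UI choice stays consistent as long as your IP and browser don't change.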

If you have any difficulties, please reach out. Additionally, the "old" lemmy frontend will remain available at https://l.lemdro.id/

Edit: I am aware of some problems with l.lemdro.id. It wasn't designed to run on a subdomain, so I'll need to add a proxy layer to redirect requests. A task for tomorrow!

FINAL EDIT: https://l.lemdro.id/ is now fully operational; if you choose to use the old lemmy UI, it is available there.

9
submitted 2 years ago by [email protected] to c/[email protected]

Over the course of the last couple of weeks, I managed to root-cause and fix the problem causing stale sorting on lemdro.id. My apologies!

94
submitted 2 years ago by [email protected] to c/[email protected]

We typically like Pixel phones a lot, but we have some reservations about Google's quality control

61
submitted 2 years ago by [email protected] to c/[email protected]

Google Maps is changing with pretty significant redesigns across key surfaces, including when searching for directions...

14
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Edit: Upgrade went off without much of a hitch! Some little tasks left to do but we are now running 0.19.3. Seems like a lot of the federation issues have been solved by this too.

You will have to log in again, and 2FA has been reset.

This update is scheduled to take place this weekend. No specific day or time, because I am infamously bad at sticking to those. I will try to minimize impact by keeping downtime to lower-traffic periods.

Ideally, there will be no downtime, but if there is, it should last an hour at most. During this time I will put up an "under maintenance" page so you can see what we are up to.

Feel free to join our Matrix space for more information and ongoing updates! My apologies for how long this took - I was in the middle of a big move and a new job.

Additionally, there may be short periods of increased latency or pictures not loading as I perform maintenance on both the backend database and the pictrs server in preparation for this upgrade.

[-] [email protected] 39 points 2 years ago* (last edited 2 years ago)

Basically, the lemmy backend service for some reason marked every instance we federate with as inactive, which caused it to stop outbound federation with basically everyone. I have a few working theories on why, but I'm not fully sure yet.

TL;DR lemmy bug, required manual database intervention to fix

This was a stressful start to a vacation!

For a more detailed working theory...

I've been doing a lot of infrastructure upgrades lately. Lemdro.id runs on a ton of containerized services that scale horizontally for each part of the stack, globally and according to load. It's pretty cool. But my theory is that since the backend schedules the inactivity check for 24 hours after it starts, it was simply being restarted (upgraded) before it ever had a chance to run the check, until it was too late.

theory:

  • scheduled task checks instances every 24 hours

  • I updated (restarted) it more often than every 24 hours

  • it never had a chance to run the check

  • ???

  • failure

This isn't really a great design for inactivity checking, and I'll be submitting a pull request to fix it.
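To illustrate the failure mode (pure pseudocode in shell, not lemmy's actual source):

    # the first activity check only fires 24h after process start, so
    # restarting more often than daily means it never runs at all
    while true; do
      sleep 86400              # wait a full day after startup
      mark_inactive_instances  # placeholder for the real scheduled task
    done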

[-] [email protected] 33 points 2 years ago

this might be an improvement

[-] [email protected] 87 points 2 years ago

Hello! Admin here at lemdro.id. This is the result of several problems in the default reference nginx config. I am working on resolving this right now and should have it fixed within the next 30 minutes!

