00Lemming

joined 1 year ago
[–] [email protected] 8 points 1 year ago* (last edited 1 year ago)

Great work, team. Happy Halloween! πŸ‘»

[–] [email protected] 3 points 1 year ago

This is honestly hilarious 🀣

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Ruud is the MVP of the Fediverse πŸ₯³

[–] [email protected] 2 points 1 year ago

I appreciate the concern, but we are doing this because we enjoy doing it and our skill sets allow us to contribute. These are exciting times, even with the issues we are working through :) No worries.

[–] [email protected] 3 points 1 year ago (1 children)

Yup, you nailed it. For additional context, Ruud is running an almost identical server for his Mastodon.world instance, which has 160k users. Relatively speaking, these are large, performant, and expensive servers. They can absolutely handle the current user influx we are getting from the Reddit exodus. Unfortunately, our hands are tied by software limitations. I can confidently tell you we are constantly in communication about ways we can improve the user experience with the tools we do have access to. For instance, this status page was recently spun up, which you can check any time you think there might be server issues to confirm that what you are seeing is recognized at the server level. Things like that.

All that being said, for users who are looking for a smoother experience right now, I can recommend lemm.ee as a solid home as well. Their admin Sunaurus has been very active and helpful throughout this process and handles his instance very professionally. He is essentially another Ruud (though Ruud is the best! ;)). Just something to keep in mind going forward, as I can't make any promises about the time frames for these issues being resolved. Hopefully once we hear back from the Lemmy devs we can start expediting a resolution. They have a lot on their plates right now though, haha, so we will see. Cheers!

[–] [email protected] 7 points 1 year ago (2 children)

FYI, this is due to a confluence of issues.

  • We are the largest instance with the highest active user count - and by a good margin.
  • We are dealing with a premature software base that was not designed to handle this load. For example, the way the ActivityPub federation queue is handled is not conducive to high-volume requests. A failed message stays in the queue for 60 seconds before it retries once, and if that attempt fails it sits in the queue for one hour before retrying again. These queued messages sit in memory the whole time (see the sketch after this list). It's not great, and there isn't much we can currently do to change this, other than manually defederating from 'dead' servers in order to drop the number of items stuck in the queue that are never going to get a response. Not an elegant solution by any means, and one we will go back and address when future tools are in place, but we have seen significant improvement because of it.
  • We have attempted to contact the Lemmy devs for some insight/assistance with this, but have not heard back yet. Much of this is in their hands.
  • We were able to confirm receipt of our federation messages (from lemmy.world) with instance admins at lemm.ee and discuss.as200950.com. As such, we do know that federation is working at least to some degree, but it obviously still needs work. As mentioned above, we have reached out to the Lemmy devs, who are the instance owners of lemmy.ml, to collaborate. I cannot confirm whether they are receiving our federation messages at this time. Hopefully in coming Lemmy releases this becomes easier to analyze without needing direct server access to both instances' servers.

As you can see, we are juggling several different parameters here to provide the best experience we can with the tools at our disposal. You may consider raising an issue on their GitHub about this to get it more visibility from affected users.
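To make the retry behavior from the second bullet concrete, here is a rough Rust sketch of that back-off pattern. This is not Lemmy's actual code; all names and structures here are made up for illustration, and the constants just mirror the 60-second and one-hour delays described above:

```rust
use std::collections::VecDeque;
use std::thread::sleep;
use std::time::{Duration, Instant};

// Hypothetical stand-in for one queued outbound ActivityPub message.
struct QueuedActivity {
    payload: String,
    next_attempt: Instant,
    attempts: u32,
}

// Placeholder for the real HTTP delivery to a remote instance's inbox.
fn try_send(_msg: &QueuedActivity) -> bool {
    false // a 'dead' server never answers, so delivery always fails here
}

fn main() {
    let mut queue: VecDeque<QueuedActivity> = VecDeque::new();
    queue.push_back(QueuedActivity {
        payload: "Create/Note".into(),
        next_attempt: Instant::now() + Duration::from_secs(60), // first retry after 60 s
        attempts: 0,
    });

    // Every message, delivered or not, occupies memory until it leaves this queue.
    while let Some(mut msg) = queue.pop_front() {
        if Instant::now() < msg.next_attempt {
            queue.push_back(msg); // not due yet; keep holding it in RAM
            sleep(Duration::from_millis(100));
            continue;
        }
        if try_send(&msg) {
            continue; // delivered; the message is finally freed
        }
        msg.attempts += 1;
        if msg.attempts == 1 {
            // First retry failed: back off for a full hour, still in memory.
            msg.next_attempt = Instant::now() + Duration::from_secs(3600);
            queue.push_back(msg);
        } else {
            // Second retry failed too: the message is dropped for good.
            eprintln!("giving up on {} after {} retries", msg.payload, msg.attempts);
        }
    }
}
```

With thousands of messages addressed to unreachable servers, that hour-long in-memory hold adds up quickly, which is why defederating from dead servers helped so much.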

[–] [email protected] 1 points 1 year ago

Just to be clear, are you saying that when you go to your profile page, the sublemmys that you have 'subscription pending' for do not show up there??

[–] [email protected] 1 points 1 year ago

That is good to hear :) We are continuing to analyze the config to see where this failure might be happening.

[–] [email protected] 2 points 1 year ago (3 children)

This is being reviewed by the Admin team. Are you logging in via browser or app?

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (3 children)

Technically speaking, yes, a portion of our issues are due to having the largest user base of any Lemmy instance. So in theory, if half of our users dispersed to other instances, we would likely see some performance improvement here. However, lemmy.world is intended to be an accessible instance for the general population. The server running lemmy.world is spec'd to handle far more than this user load. We are running up against code-level issues that we may or may not be able to work around with our internal configurations. This is just part of developing software in an environment where you go from a few thousand users total to hundreds of thousands in the space of a few weeks. There is no directive to have users create accounts on other instances, though if you are looking for an immediate performance improvement, that may be your best option currently. That is up to you to decide :)

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

Yeah, this is accurate. We wanted to get away from WebSockets ASAP while also maintaining captcha functionality.

[–] [email protected] 8 points 1 year ago (5 children)

Thanks for the kind words! Yeah, there are definite growing pains, and likely will be for some time (just due to the codebase we are working with, understandably). We have a really solid group heading up lemmy.world though, so we will be just fine ;)

 

cross-posted from: https://lemmy.world/post/679532

cross-posted from: https://lemmy.world/post/679531

cross-posted from: https://lemmy.world/post/679471

Not sure the best place to ask this.

I have a DS420j 4-bay NAS, primarily used for my Plex server and data backups (among a few other things). I currently have 8 TB + 6 TB + 6 TB IronWolf NAS drives in a single SHR volume, plus an extra 1 TB SSD as JBOD. I have my Plex app and metadata stored on the SSD due to the increased performance I have seen vs. having it installed on the large pool (7200 RPM cap). I am sitting at about 85% of my available 10.8 TB on the primary volume. As such, I am pre-planning my next storage upgrade and am curious about my options while staying with the current hardware. The eventual plan is a NAS upgrade, but this little beast has been chugging along so perfectly that I want to push it as far as I can.

If I were to remove the 1 TB drive and replace it with another 8 TB IronWolf, I would jump to 20 TB of available storage: https://www.synology.com/en-us/support/RAID_calculator?hdds=6%20TB%7C6%20TB%7C8%20TB%7C8%20TB This increase would last me for quite some time ahead of a full NAS upgrade with more bays. In order to do this, I would obviously need to remove the 1 TB SSD to make room for the new drive. I have an external enclosure for this drive that can connect to the NAS over USB.
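For anyone who wants to sanity-check that math, the simplified rule of thumb for single-redundancy SHR (not Synology's exact accounting, which also deducts filesystem overhead) is: usable space ≈ total capacity minus the largest drive. A quick sketch:

```rust
/// Simplified single-redundancy SHR estimate: total capacity minus the
/// largest drive. A rule of thumb, not Synology's exact accounting.
fn shr_usable_tb(drives: &[f64]) -> f64 {
    let total: f64 = drives.iter().sum();
    let largest = drives.iter().cloned().fold(f64::MIN, f64::max);
    total - largest
}

fn main() {
    // Current pool: 8 + 6 + 6 TB -> 12 TB raw (~10.8 TB after formatting).
    println!("current:  {} TB", shr_usable_tb(&[8.0, 6.0, 6.0]));
    // After swapping in another 8 TB: 8 + 8 + 6 + 6 -> 20 TB raw.
    println!("upgraded: {} TB", shr_usable_tb(&[8.0, 8.0, 6.0, 6.0]));
}
```

That lines up with both my current ~10.8 TB formatted and the calculator's 20 TB figure.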

My question: I am finding somewhat conflicting information on how external drives are intended to be used and what their capabilities are when connected to the USB 3.2 port. It seems the intended functionality is backups (which makes sense). Am I able to use a USB-connected drive and have it function in a similar manner to an internal one? Can you install apps from the Package Center to an external drive? Create volumes? I assume there will be some performance degradation due to the translation from SATA to USB and back to SATA, but I anticipate the SSD will still perform better than moving the app back to the main pool. I just don't know if I am potentially missing something in my evaluation. Those of you with more experience using USB-connected drives with a NAS, I would love to hear about it. Thanks!

 

I am a PSE for a large corporation that most people would not be familiar with (those of you who frequent this sub probably would). However, we supply business-critical software to many of the big companies you definitely do know. This puts me in a position where I work directly with some of the best-paid 'tech execs' you can find, and it has led to many hilarious situations. Those are stories for another day, however. Today is about Reddit - for they have angered me greatly.

I get a ticket this morning around 10 AM. As usual, I get a bunch of helpful information, including an irrelevant screenshot and a one-liner about how the RSS feed they have pulling into one of their widgets wasn't working. On closer inspection, these mf's were hitting the r/sysadmin(!) RSS feed and pulling in new posts. Now, this is strictly business software we are dealing with. So while I can absolutely see why certain groups would value that feed, it was definitely the first time I had ever seen such a thing in any of our environments.

Naturally (I feel), I am immediately floored by the possibilities and started thinking about how I might have to explain to this guy all that has transpired over the last ~week in a business-professional email... I took a minute just to soak that in and let out a small chuckle. Fuck u/spez, I mutter.

Well, since I was given zero actual information about their issue, other than 'no workie', I slid over to my main PC to go check r/sysadmin as I have done many times in the past - like muscle memory. I snap out of that, of course. I am done with Reddit. Then I had an idea. Just for fun I hit up Lemmy, just to see what was there. And lo and behold, we have a fucking post about the massive Reddit outage that went down today. I am all smiles at what has already happened here and hit Downdetector just to confirm. Yup, almost 50k reports at peak. LMFAO. I mean, really? My god, Reddit. What are you doing?

So, given the info I was provided, I let him know that there was an outage and that was likely all the issue was - try again once it has subsided. A few small chuckles, and I thought the story was done.

Now here's where I really lost it. I get word back a bit later and it's once again a one-liner - 'No. Our sad, sad admins have been without r/sysadmin for almost two weeks now :(' I was laughing for a good 5 minutes at the sheer absurdity of it all (this issue obviously doesn't have anything to do with the recent changes, lol), all against the backdrop of what we are seeing with Reddit. It also helped me realize how far-reaching these failures are actually going to be once the end of the month rolls around. Colossal fuck up.

Happy to be here on Lemmy with you boys!
