4
submitted 23 hours ago by [email protected] to c/[email protected]
9
submitted 2 weeks ago by [email protected] to c/[email protected]
[-] [email protected] 19 points 4 weeks ago* (last edited 4 weeks ago)

Looks easy: https://www.ifixit.com/Guide/Steam+Deck+SSD+Replacement/148989

Edit: Is it worth 30-60 minutes of your time, the screwdrivers, maybe the spatula, and reinstalling SteamOS onto the drive?

14
submitted 1 month ago by [email protected] to c/[email protected]

I'm classing this as an exploit because it sounds like Backblaze exploited their shareholders!

We (Reddthat) were going to use them as our object storage provider when we started. Luckily we didn't! It would have made me want to migrate ASAP!

12
submitted 1 month ago by [email protected] to c/[email protected]

A nice write-up on the #TikTok VM

3
submitted 1 month ago by [email protected] to c/[email protected]

We regularly see this on Reddthat's and my own personal services too.

5
submitted 1 month ago by [email protected] to c/[email protected]

I don't usually link to Reddit, but damn... the Entra leak is a big deal

9
submitted 1 month ago by [email protected] to c/[email protected]
24
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]

April is here!

So much has happened since the last update: we've migrated to a new server, failed an update to a new Lemmy version, automated our rollouts, and fought with OVH about contracts. It's been a lot.

Strap in for story time about the upgrade, or skip ahead to the break for the next section.

The good news is that we are now successfully on v0.19.11.

The bad news is that we had an extended downtime.

Recently I had some extra time to completely automate the rollout process, so Reddthat doesn't rely solely on me being on one specific computer that holds all the variables needed for a deployment.
As some people know, I co-manage the lemmy-ansible repository, so it wasn't that hard to end up automating the automation. Now when a new version is announced, I update a file or two, the pipeline performs some checks to make sure everything is okay, and I approve and roll it out. Normally we are back online within 30 seconds, as the Lemmy "backend" containers run checks on start to make sure everything is fine and we are good to go. Unfortunately, this time it never came back up.
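For the curious, the "are we back online" part of that automation is conceptually nothing more than polling the API until the backend answers again. A minimal, hypothetical sketch (the /api/v3/site endpoint is Lemmy's real site endpoint; the timeouts and function names are made up):

```python
# Hypothetical post-deploy health check: poll the Lemmy backend until it
# answers on /api/v3/site, or give up after a timeout.
import json
import time
import urllib.request

def wait_for_lemmy(base_url: str, timeout_s: int = 300, interval_s: int = 5) -> bool:
    """Return True once the backend serves valid site JSON, else False."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/api/v3/site", timeout=10) as resp:
                if "site_view" in json.load(resp):   # sanity check on payload shape
                    return True
        except Exception:
            pass                                      # not up yet, keep polling
        time.sleep(interval_s)
    return False

if __name__ == "__main__":
    ok = wait_for_lemmy("https://reddthat.com")
    print("backend healthy" if ok else "backend never came back, time to roll back")
```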

So I reverted the change, thinking something was wrong with the containers, and the rollout ran again. Still not up :'( All before my morning coffee, while a little groggy after just waking up.

Digging into it, our database was in a deadlock. Two connections were attempting to do the same-but-different work, which resulted in the database being locked up and not processing any queries.

Just like Lemmy World found, when you are "scaling", sometimes bad things can happen (re: https://reddthat.com/post/37908617).

We had the same problem. When rolling out the update, two containers ended up starting at the same time and both tried to run the migrations instead of realising one was already doing them.

After quickly tearing it all down, we started a single container to perform the migration, and once that had finished we started everything else and were back online.
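For anyone wanting to avoid the same race: the "only one container runs the migrations" rule can be enforced with a Postgres advisory lock. This is just a sketch of the general pattern (not Lemmy's actual code; the DSN, lock key and run_migrations() are placeholders):

```python
# Sketch: serialise schema migrations across containers with a Postgres
# advisory lock so only one process runs them at a time.
# Not Lemmy's actual code; DSN, lock key and run_migrations() are placeholders.
import psycopg2

MIGRATION_LOCK_KEY = 421_337  # arbitrary application-wide lock id

def run_migrations(cur) -> None:
    cur.execute("SELECT 1")  # stand-in for the real migration runner

def migrate_once(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:
            # Blocks until no other session holds the lock, so a second
            # container simply waits instead of racing the first.
            cur.execute("SELECT pg_advisory_lock(%s)", (MIGRATION_LOCK_KEY,))
            run_migrations(cur)
            cur.execute("SELECT pg_advisory_unlock(%s)", (MIGRATION_LOCK_KEY,))
    finally:
        conn.close()

if __name__ == "__main__":
    migrate_once("postgresql://lemmy:password@db:5432/lemmy")
```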

Going forward we'll probably have to schedule a brief downtime for every version to ensure we don't get stuck like this again. But we are back up and everything's working.


Now for the regularly scheduled programming.

OVH

OVH scammed me out of the tax on our server renewal last month. When our previous 12-month contract was coming to an end, we re-evaluated our finances and found them wanting. So we scaled down to a more cost-effective server and were able to pay in AUD instead of USD, which lets us stay at a single known price rather than fluctuating month to month.
Unfortunately I couldn't cancel the contract. The OVH system would not let me click terminate. No matter what I did, what buttons I pressed, or how many times I spun my chair around, it wouldn't let me cancel. I didn't want to get billed for another month when we were already paying for the new server, so a week before the contract ended I sent a support ticket to OVH. You can guess how that went. The first 2 responses I got from them, after 4 days, were "use the terminate feature". They didn't even LOOK at the screenshots clearly outlining the steps I had taken and the generic error... So I got billed for another month... and then had to threaten them with legal proceedings. They then reversed the charge. Except for the tax. So I had to pay 10% of the fee to cancel our service. Really unhappy with OVH after this ordeal.

Automated rollouts

I spent some time after our migration ensuring that we have another system set up which will be able to roll out updates, so we are not dependent on just me and my one random computer :P All was going very well until an upgrade with database migrations happened. We'll be working on that soon to make sure we don't have unforeseen downtime.

Final Forms

Now that the dust has settled and we've performed the migrations, starting next month I'll probably go back to our quarterly updates unless something insane happens (i.e. Lemmy drops v1 👀).

We also modified our "Reddthat Community and Support" community to be a Local Only community. The original idea for the community was to have a place where only Reddthat users could chat, but back when we started out that wasn't a thing! So now if you want to voice your opinion to other Reddthat users, please feel free to, knowing other instances won't come in and derail the conversation.

As a reminder, we have many ways to donate if you are able and feel like it! A recurring donation of $5 is worth more to me than a one-off $60 donation, only because it allows me to forecast the year and work out when we need to do donation drives, or relax knowing everything in its current state will be fine.

Cheers,

Tiff

8
submitted 1 month ago by [email protected] to c/[email protected]

Could be worse, I could be using parquet...

33
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]

We just successfully upgraded to the latest Lemmy version, 0.19.10, probably the last before the v1 release.

This addresses some of the PM spam that everyone has been getting. Now when that user is banned and we remove their content, it also removes the PMs. So hopefully you won't see them anymore!

Over the next couple of days we will be planning our migration to our new server, as our current server's contract has ended. I expect the downtime to last for about an hour, if not shorter. You'll be able to follow updates for the migration on our status page at https://status.reddthat.com/

Normally this update would be posted a week in advance and be more nicely formatted, but it turns out the contract ends on the 25th and I don't want to get charged for another month at a higher rate when I've just purchased the new server.

See you on the other side,

Tiff

EDIT:

22 Mar 2025 02:42: I'm going to start the migration in 5 mins (@ 3:00)

22 Mar 2025 03:01: That was the fastest migration I've ever done. Pre-seeding the server and Infrastructure as Code are amazing!

We've turned off our crypto donation p2pool (as no-one was using it), and two of our frontends, alexandrite and next (for the same reason).

Time to celebrate with some highly accurate Australian content:

43
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]

Hello Reddthat! We are back for another update on how we are tracking. It's been a while eh? Probably because it was such smooth sailing!

In the middle of February we updated Lemmy to v0.19.9, which contained some fixes for federating between Mastodon and Lemmy, so hopefully we will see less spam and more interaction from the larger Mastodon community. While that in and of itself is a nice fix, the best fix is the recent thumbnail fix! Thumbnails now have extra logic around generating them and have a higher chance of actually being created! Let us know if you think there has been a change over the past month-ish.

Budget & Upcoming Migration

Reddthat has been lucky to have such a great community that has helped us stay online for over a year, and if you can believe it, in just a few more months it will be 2 years, if we can make it.

Our costs have slowly increased over the years, as you can all see from our transactions on OpenCollective (https://opencollective.com/reddthat). We've managed to reduce some costs in our S3 hosting after it ballooned out, and bring it down to a more manageable level. Unfortunately, the current economic issues have also resulted in the Australian dollar slipping further, and as we pay for everything in USD or EUR it has resulted in slightly higher costs on a month-to-month basis.

Our best opportunity to stay online for the foreseeable future is to downsize our big server from a 32GB RAM instance to a 16GB RAM instance, which will still provide enough memory for us to function as we currently do without affecting us in a meaningful way.

This means we'll need to reassess whether running all our different frontends is useful, or whether we should only keep a few. Currently I am looking to turn off next and alexandrite. If you are a regular user of these frontends and prefer them, please let me know, as from our logs these are the least used while also taking up the most resources. (Next still has bugs regarding caching every single image.)

We can get a VPS for about ~A$60-70 per month which will allow us to still be as fast as we are now while saving 40% off our monthly costs. This will bring us to nearly 90% funded by the community. We'll still be slowly "losing" money from our OpenCollective backlog, but we'll have at least another 6 months under our belt, if not 12 months! (S3 costs and other currency conversion notwithstanding.)

All of this will happen in late March / early April, as we need to make sure we do it before the current contract is up so we don't get billed for the next month. Probably the 29th/30th, if I don't fall asleep too early on those days.
It'll probably take around 45-60 minutes, but if I get unlucky maybe 2 hours.

Age Restriction

Effective immediately, everyone on Reddthat needs to be 18 years or older, and further interaction on the platform confirms you are over the age of 18 and agree with these terms.

If you are under the age of 18 you will need to delete your account under Settings.

This has also been outlined in our signup form, which was updated around the start of February.

Australian & UK Policy Changes

It seems the UK has also created its own Online Safety Act that makes it nearly impossible for any non-corporation to host a website with user generated content (UGC). This is slightly different to the Australian version, which specifically targets social media websites.

Help?

I would also like your help!
To keep Reddthat online, and to help comply with these laws, if you see content or user accounts which appear to be from someone under the age of 18, please report the account/post/content, citing that the user might be under the age of 18.
We will then investigate and take action if required.

Thanks everyone

As always keep being awesome and having constructive conversations!

Cheers,

Tiff!

PS. Like what we are doing? Keep us online for another year by donating via our OpenCollective or other methods (crypto, etc) via here

3
submitted 3 months ago by [email protected] to c/[email protected]
[-] [email protected] 16 points 8 months ago

This is SSO support as the client, so you could use any backend that supports OAuth (I assume, I haven't looked at it yet).

So you could use a Forgejo instance, immediately making your git hosting instance a social platform, if you wanted.
Or use something self-hostable like Hydra.

Or you can use the platforms that already exist, such as Google or Microsoft, allowing faster onboarding to the fediverse while passing the issues that come with user creation onto a bigger player who already does verification. All of these options are up to your instance to decide on.
The best part: if you don't agree with what your instance decides, you can migrate to one that has a policy that coincides with your values.

Hope that gives you an idea of why this feature is warranted.

[-] [email protected] 18 points 8 months ago

We enabled the Cloudflare AI bots and crawlers blocking mode around 0:00 UTC (20 Sept).

This was because we had a huge number of AI scrapers that were attempting to scan the whole lemmyverse.

It successfully blocked them... while also blocking federation 😴

I've disabled the block. Within the next hour we should see federation traffic come through.

Sorry for the unfortunate delay in new posts!

Tiff

[-] [email protected] 36 points 10 months ago* (last edited 10 months ago)
[-] [email protected] 20 points 1 year ago* (last edited 1 year ago)

That's a big decision I won't make without community input as it would affect all of us.

If we purely treated it as just another instance with no history then I believe our stance on it would be to allow them, as we are an allow-first type of instance. While there are plenty of people we might not want to interact with, that doesn't mean we should immediately hit that defederate button.

When taking history into account it becomes a whole different story. One may lean towards just saying no without thought.

All of our content (Lemmy/fediverse) is public by default (at the present time) and searchable by anyone, and even if I were to block all of the robots and crawlers, it wouldn't stop anyone from crawling one of the many other sites where all of that content is shared.

A recent feature being worked on is private/local-only communities. If a new Lemmy instance was created and it only used local-only communities, would we enact the same open-first policy when their communities are closed for us to use? Or would we still allow them because they can still interact, view comments, vote, and generate content for our communities, etc.?

What if someone created instances purely for profit? They create an instance that corners a piece of the "market" and then run ads? Or make their instance subscription-only, where you have to pay per month for access?

What if there are instances right now federating with us and will use the comments and posts you make to create a shit-posting-post or to enhance their classification AI? (Obviously I would be personally annoyed, but we can't stop them)

An analogy for Threads would be to say it is a local-only fediverse instance, like Mastodon, with a block on replies. It restricts federation to their users in the USA, Canada, and Japan; users cannot see when you comment/reply to their posts and will only see votes. They cannot see your posts either, and they only allow other fediverse users to follow Threads users.

With all of that in mind, if we were to continue with our open policy, you would be able to follow Threads users and get information from them, but any comments would stay local to the instance that comments on the post (and wouldn't make it back to Threads).

While writing up to this point I was going to stay impartial... but I think the lack of two-way communication is what tips the scales towards our next instance block. It might be worthwhile for keeping up to date with people who are on Threads, who don't understand what the fediverse is but still enabled the feature because it gives their content a "wider reach", so to speak. But in the context of Reddthat and people expressing views and opinions, having one-sided communication doesn't match with what we are trying to achieve here.

Tiff

Source(s): https://help.instagram.com/169559812696339/about-threads-and-the-fediverse/

PS: As we have started the discussion, I'll leave what I've said up for the next week to allow everyone to reply and see what the rest of the community thinks before acting/blocking them.

Edit 1 (30/Mar) PPS: we are currently not federated with them, as no one has bothered to initiate following a Threads account.

[-] [email protected] 13 points 1 year ago* (last edited 1 year ago)

I managed to streamline the exports and syncs so we performed them concurrently, allowing us to finish in just under 40 minutes! Enjoy the new hardware!

So it begins: (Federation "Queue")
Federation queue showing an upwards trend, then down, then slightly back up again
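"Concurrently" here just means running the independent export and sync steps in parallel instead of one after the other. A purely illustrative sketch (the commands and paths are placeholders, not the actual migration steps):

```python
# Illustrative only: run independent export/sync steps in parallel.
# The commands below are placeholders, not the real migration steps.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TASKS = [
    ["pg_dump", "--format=custom", "--file=/backup/lemmy.dump", "lemmy"],
    ["rsync", "-a", "/var/lib/pictrs/", "newhost:/var/lib/pictrs/"],
]

def run(cmd: list[str]) -> int:
    print("starting:", " ".join(cmd))
    return subprocess.run(cmd, check=True).returncode  # raises if a step fails

with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
    for rc in pool.map(run, TASKS):
        assert rc == 0
```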

[-] [email protected] 13 points 1 year ago

Successfully migrated from Postgres 15 to Postgres 16 without issues.

[-] [email protected] 21 points 2 years ago

It's a sad day when something like this happens. Unfortunately, with how Lemmy's All feed works, it's possible a huge amount of the initial downvotes are from regular people not wanting to see the content, as downvotes are federated. This was part of my original reasoning for disabling downvotes when I started my instance. We had the same gripes people are voicing here, and it probably contributed to a lack in Reddthat's growth potential.

There needs to be work done not only for flairs, which I like the idea of, but for a curated All/Frontpage (per-instance). Too many times I see people unable to find communities or new content that piques their interest. Having to "wade through" All-New to find content might contribute to the current detriment, as instead of a general niche they might want to enjoy, they are bombarded with things they dislike.

Tough problem to solve in a federated space. Hell... we can't even get every instance to update to 0.18.5 so that federated moderation actions happen. If we can't all decide on a common Lemmy instance version, I doubt we can ask our users to be subjected to not using the tools at their disposal (up/down/report).

Keep on Keeping on!

Tiff - A fellow admin.

[-] [email protected] 23 points 2 years ago

Don't forget & in community names and sidebars.

Constantly getting trolled by &

[-] [email protected] 13 points 2 years ago

No worries & Welcome!

That is correct, we have downvotes disabled throughout this instance. There was a big community post on it earlier over here: https://reddthat.com/post/110533
Basically, it boils down to: if we are trying to create a positive community, why would we have a way to be negative?

While that is also a very limited view on the matter, it's one I want to instill into our communities. Sure, downvotes do help with "offtopic" posts and possible spam, but at the time of that post no application, third-party (a mobile app) or first-party (lemmy-ui), had any feature for hiding negatively voted content (i.e. if a post was at -4, don't show it to me).

By default (which is the same now as it was then), "Hot" only takes votes into account as one of many measures of how "hot" a post is for ranking. Up and down votes are only really good for sorting by "Top".
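For a rough idea of why votes are only one input: Lemmy's "Hot" rank is essentially a log-scaled score divided by a time-decay term, something like the sketch below (the constants are approximate and not the exact values of any particular Lemmy version):

```python
# Approximate shape of Lemmy's "Hot" ranking: a log-scaled score term divided
# by a time-decay term. Constants here are illustrative, not exact.
import math
from datetime import datetime, timezone

def hot_rank(score: int, published: datetime, gravity: float = 1.8) -> float:
    hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return 10000 * math.log10(max(1, score + 3)) / ((hours + 2) ** gravity)
```

So even a heavily upvoted post fades as it ages, which is why "Hot" behaves so differently from "Top".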

My biggest concern, then and now, is that because we federate with over 1000 different instances, and by design Lemmy accepts all votes from any instance you federate with, vote manipulation (tens of thousands of accounts) could downvote every post on our instance into oblivion. Or, even more subtle and nefarious, downvote every post until it constantly sits at 0/1. You might assume that the posts are just not doing well, and that nothing is happening.

As Reddthat is basically run by a single person at this point in time and for the foreseeable future (3-6 months), adding downvotes would add extra effort on my part in monitoring and ensuring nothing nefarious is happening. Moderation is still a joke in Lemmy, reports are a crapshoot, and spamming any Lemmy server is still possible.

Until we get bigger, have more mods in our communities, and I can find others who are as invested in Reddthat as I am to become admins, I won't be enabling downvotes (unless the community completely overrules me on the matter, of course).

I hope that answers your question.

Cheers

Tiff

[-] [email protected] 20 points 2 years ago

Updates hiding in the comments again!

We are now using v0.18.3!

There was extended downtime because docker wouldn't cooperate AT ALL.

The nginx proxy container would not resolve DNS. So after rebuilding the containers twice and investigating the Docker network settings, a "simple" reboot of the server fixed it!

  1. Our database on the filesystem went from 33GB to 5GB! They were not kidding about the 80% reduction!
  2. The compressed database backups went from 4GB to ~0.7GB! Even bigger space savings.
  3. The changes to the backend/frontend have resulted in less downtime when performing big queries on the database so far.
  4. The "proxy" container is nginx, and its configuration uses upstream lemmy-ui and upstream lemmy. These are DNS names which are cached for a period of time, so if a new container comes online the proxy doesn't actually find it, because it cached all the IPs that lemmy-ui resolved to at startup. (In this example it would have been only 1, and when we add more containers the proxy would never find them.) There's a small demonstration of this below the list, and you can read more here: http://forum.nginx.org/read.php?2,215830,215832#msg-215832
  5. The good news is that https://serverfault.com/a/593003 is the answer to the question. I'll look at implementing this over the next day(s).
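Here's the small demonstration mentioned in point 4. Inside the compose network, a service name such as lemmy-ui (only resolvable from inside that network, so treat this as a sketch) returns one A record per running container, which is exactly what a one-time cached lookup misses:

```python
# Demonstration of the DNS behaviour from point 4: a Docker service name
# resolves to one A record per running container, so a lookup cached at
# startup never sees containers added later. Run from inside the network.
import socket
import time

def resolve_all(hostname: str, port: int = 1234) -> set[str]:
    return {info[4][0] for info in socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)}

if __name__ == "__main__":
    while True:
        print("lemmy-ui currently resolves to:", sorted(resolve_all("lemmy-ui")))
        time.sleep(30)  # scale the service up/down and watch the set change
```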

I get notified whenever Reddthat goes down; most of the time it coincided with me banning users and removing content, so I didn't look into it much, but honestly the uptime isn't great. (Red is <95% uptime, which means we were down for 1 hour!)

Actually, it is terrible.

With the changes we've made I'll be monitoring it over the next 48 hours to confirm that we no longer have any real issues. Then I'll make a real announcement.

Thanks all for joining our little adventure!
Tiff

[-] [email protected] 27 points 2 years ago

These were because of recent spam bots.

I made some changes today. We now have 4 containers for the UI (we only had 1 before) and 4 for the backend (we only had 2).

It seems that when you delete a user and tell Lemmy to also remove their content (the spam), it tells the database to mark all of that content as deleted.

Kbin.social had about 30 users who posted 20-30 posts each, which I told Lemmy to delete.
This only marks it as deleted for Reddthat users, until the mods mark the posts as deleted and that federates out.

The problem

The UPDATE in the database (marking the spam content as deleted) takes a while and the backend waits(?) for the database to finish.

Even though the backend has 20 different connections to the database it uses 1 connection for the UPDATE, and then waits/gets stuck.

This is unfortunately what is causing the outages, and it's really pissing me off to be honest. I can't remove content or action reports without someone seeing an error.

I don't see any solutions on the 0.18.3 release notes that would solve this.
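One general mitigation for a long-running UPDATE like this (not something Lemmy does, just a common pattern) is to batch it, so each statement commits quickly and other connections get a look-in. A sketch assuming psycopg2 and simplified table/column names:

```python
# General pattern, not Lemmy's code: mark a user's content as deleted in small
# batches so no single UPDATE holds locks for long.
# The comment/creator_id/deleted names are simplified assumptions.
import psycopg2

def soft_delete_comments(dsn: str, creator_id: int, batch: int = 1000) -> None:
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            while True:
                cur.execute(
                    """
                    UPDATE comment SET deleted = true
                    WHERE id IN (
                        SELECT id FROM comment
                        WHERE creator_id = %s AND deleted = false
                        LIMIT %s
                    )
                    """,
                    (creator_id, batch),
                )
                conn.commit()              # release locks between batches
                if cur.rowcount < batch:   # last partial batch means we're done
                    break
    finally:
        conn.close()
```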

Temp Solution

So to combat this a little I've increased our backend processes from 2 to 4 and our front-end from 1 to 4.

My idea is that if 1 of the backend processes gets "locked" up while performing tasks, the other 3 processes should take care of it.

This unfortunately is an assumption, because if the "removal" performs an UPDATE on the database and the /other/ backend processes are aware of this and wait as well... this would count as "locking" up the database, and it won't matter how many processes I scale out to, the applications will lock up and cause us downtime.

Next Steps

  • Upgrade to 0.18.3 as it apparently has some database fixes.
  • Look at the Lemmy API and see if there is a way I can push certain API commands (user removal) off to their own container.
  • Fix up/figure out how to make the nginx proxy container know if a "backend container" is down, and try the other ones instead.

Note: we are kinda doing point #3 already; it does a round-robin (tries each sequentially). But from what I've seen in the logs it can't differentiate between one that is down and one that is up. (From the nginx documentation, that feature is a paid one.)

Cheers, Tiff

